Notes Sepm Unit 1
Unit-01/Lecture-01
Software Engineering: [RGPV/JUNE-2013(7), JUNE-2011(10)]
Software Characteristics
While developing any kind of software product, the first question in any developer's mind
is: what qualities should good software have? Before going into technical characteristics,
it is worth stating the obvious expectations one has from any software. First and
foremost, a software product must meet all the requirements of the customer or end
user. The cost of developing and maintaining the software should be low, and
development should be completed in the specified time frame. These are the obvious
expectations from any project (and software development is a project in itself). Now let's
look at software quality factors. These factors can be explained through the Software
Quality Triangle. The three characteristics of good application software are:
1) Operational Characteristics
2) Transition Characteristics
3) Revision Characteristics
a) Correctness: The software should meet all the specifications stated by the customer.
b) Usability/Learnability: The effort or time required to learn how to use the software
should be low. This makes the software user-friendly even for people with little IT
background.
c) Integrity: Just as medicines can have side effects, software may have side effects,
i.e., it may affect the working of another application. Quality software should not have
such side effects.
d) Reliability: The software product should be free of defects, and it should not fail
during execution.
e) Efficiency: This characteristic relates to the way the software uses available resources.
It should make effective use of storage space and execute commands within the desired
timing requirements.
f) Security: With the increase in security threats nowadays, this factor is gaining
importance. The software should not have ill effects on data or hardware, and proper
measures should be taken to keep data secure from external threats.
g) Safety: The software should not be hazardous to the environment or to life.
Software Crisis:
A software crisis is a mismatch between what software can deliver and the capacities of
computer systems, as well as expectations of their users. This became a growing problem
in the 20th century as computing grew by leaps and bounds and software was unable to
keep pace. As the complexity of systems grows, so do the needs of users, who expect
increasingly more performance from their software. Programmers may struggle to keep
pace, creating a software crisis.
Consumer software typically moves through a slow series of development phases, but
makes up a small portion of the volume of business in the industry. The bulk of software
development is sunk into systems for specific applications, ranging from the programs
that handle missile guidance aboard naval cruisers to internal record-keeping for health
insurance companies. This software generally requires a substantial investment from the
customer, as well as extensive programming from personnel charged with developing,
testing, and maintaining it.
This is not just an issue for the development of new software products. Another concern
can be the need to maintain older software which may have problems related to poor
development or the failure to anticipate growing needs. Programmers could be spending
large amounts of time on keeping legacy software functional so a company can continue
to operate. With high investment in the older software, the company may be reluctant to
order a new program, even if it would better meet their needs, because this could involve
more expense and problems during the changeover.
Agile methods are based on a set of design-and-build principles different from the
principles used before. To understand agile methodologies, however, we first need to
understand why they appeared. That means first understanding the traditional methods
of software development: their features, their pros and cons, and their continued
relevance.
The Capability Maturity Model ranks companies into levels, from Level 1 (operating
under processes that are unmanaged, or ad hoc) to Level 5 (optimizing). It is not our
purpose here to go into detail about what each level means, except to point out that
Level 5 companies must have complied with the tenets of more than 16 software and
systems development process areas, including the ability to (at Level 2) manage their
software development process (configuration management, project monitoring and
control, project planning, requirements management); (at Level 3) define their approach
to software development (decision analysis and resolution, organizational training, risk
management, validation, verification); (at Level 4) quantitatively manage their
development process (measure organizational process performance, reliably measure
productivity, error injection, and other key variables); and (at Level 5) optimize their
process, which involves constantly reviewing the causes of process problems and taking
measures to improve and resolve them.
The nature of the DoD s needs influenced CMMI at the core. The DoD s software
development projects tend to be large and often interact with hardware. Furthermore,
they are government funded, which requires measures of financial transparency that
limit budget flexibility, often requiring projects to provide a fixed cost. Thus, it is not
surprising to see a CMMI model that was slanted towards traditional RUP (the Rational
Unified Process) methodologies, especially in what regards to:
3. Testing after the fact, i.e., after code has been written. This is clearly a quality
control, but it often leads to detecting errors late, when correcting them is more
expensive.
Time (time was more predictable, but the formality significantly increased the
length of software development compared with ad hoc programming);
Functional relevance (a rigid process did not allow applications to adapt easily to
changes in the business environment, so the resulting application was often what
the client asked for but not really what the client needed);
Frustration with cost (the resulting application was not cheap to build, due to the
administrative overhead of all the formality of the process, and the predicted cost
of the application still did not match the initial fixed-price estimate; yes,
estimates were off by a lesser amount than ad hoc estimations, but they were still
significantly off).
Unit-01/Lecture-02
Object Oriented Software Engineering
Object-oriented software engineering (commonly known by acronym OOSE) is an object
modelling language and methodology.
OOSE was developed by Ivar Jacobson in 1992 while at Objectory AB. It is the first object-
oriented design methodology to employ use cases to drive software design. It also uses
other design products similar to those used by Object-modelling technique.
It was documented in the 1992 book Object-Oriented Software Engineering: A Use Case
Driven Approach, ISBN 0-201-54435-0
The tool Objectory was created by the team at Objectory AB to implement the OOSE
methodology. After success in the marketplace, other tool vendors also supported OOSE.
After Rational Software bought Objectory AB, the OOSE notation, methodology, and tools
were superseded.
As one of the primary sources of the Unified Modelling Language (UML), concepts
and notation from OOSE have been incorporated into UML.
The methodology part of OOSE has since evolved into the Rational Unified
Process (RUP).
The OOSE tools have been replaced by tools supporting UML and RUP.
OOSE has been largely replaced by the UML notation and by the RUP
methodology.
2. Process: The foundation for software engineering is the process layer. Process defines a
framework for a set of Key Process Areas (KPAs) that must be established for effective
delivery of software engineering technology. This establishes the context in which
technical methods are applied, work products such as models, documents, data, reports,
forms, etc. are produced, milestones are established, quality is ensured, and change is
properly managed.
3. Methods: Software engineering methods provide the technical how-to's for building
software. Methods will include requirements analysis, design, program construction,
testing, and support. This relies on a set of basic principles that govern each area of the
technology and include modeling activities and other descriptive techniques.
Communication: This activity involves heavy communication with customers and other
stakeholders in order to gather requirements and other related activities.
Planning: A plan is created that describes the technical tasks to be conducted, the risks,
the required resources, the work schedule, and so on.
Modeling: A model will be created to better understand the requirements and design to
achieve these requirements.
Unit-01/Lecture-03
Component Based Software Engineering [RGPV/JUNE-2014, 2013(7)]
Component-based software engineering (CBSE), also known as component-based
development (CBD), is a branch of software engineering that emphasizes the separation
of concerns with respect to the wide-ranging functionality available throughout a given
software system. It is a reuse-based approach to defining, implementing, and composing
loosely coupled independent components into systems. This practice aims to bring an
equally wide-ranging degree of benefit, in both the short and the long term, to the
software itself and to the organizations that sponsor such software.
Software engineering practitioners regard components as part of the starting platform
for service-orientation. Components play this role, for example, in web services, and more
recently, in service-oriented architectures (SOA), whereby a component is converted by
the web service into a service and subsequently inherits further characteristics beyond
that of an ordinary component.
Components can produce or consume events and can be used for event-driven
architectures (EDA).
Characteristics of components
An individual software component is a software package, a web service, a web resource,
or a module that encapsulates a set of related functions (or data).
All system processes are placed into separate components so that all of the data and
functions inside each component are semantically related (just as with the contents of
classes). Because of this principle, it is often said that components are modular and
cohesive.
With regard to system-wide co-ordination, components communicate with each other
via interfaces. When a component offers services to the rest of the system, it adopts
a provided interface that specifies the services that other components can utilize, and
how they can do so. This interface can be seen as a signature of the component – the
client does not need to know about the inner workings of the component
(implementation) in order to make use of it. This principle results in components referred
to as encapsulated. The UML illustrations within this article represent provided interfaces
by a lollipop-symbol attached to the outer edge of the component.
However, when a component needs to use another component in order to function, it
adopts a used interface that specifies the services that it needs. In the UML illustrations in
this article, used interfaces are represented by an open socket symbol attached to the
outer edge of the component.
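The provided/used interface idea above can be sketched in code. The sketch below uses Python abstract base classes; the component and interface names (OrderStorage, BillingComponent, and so on) are invented for illustration and are not from any particular system.

```python
from abc import ABC, abstractmethod

# Provided interface: the services a component offers to the rest of the
# system (the "lollipop" in UML component diagrams). Clients depend on
# this signature, never on the implementation behind it.
class OrderStorage(ABC):
    @abstractmethod
    def save_order(self, order_id: str, data: dict) -> None: ...

    @abstractmethod
    def load_order(self, order_id: str) -> dict: ...

# A component that provides the interface. Its inner workings are
# encapsulated; callers see only the OrderStorage signature.
class InMemoryOrderStorage(OrderStorage):
    def __init__(self):
        self._orders = {}

    def save_order(self, order_id, data):
        self._orders[order_id] = data

    def load_order(self, order_id):
        return self._orders[order_id]

# Used (required) interface: this component declares what it needs
# (the "socket" in UML) and accepts any conforming implementation.
class BillingComponent:
    def __init__(self, storage: OrderStorage):
        self._storage = storage  # dependency on the interface only

    def bill(self, order_id: str) -> float:
        order = self._storage.load_order(order_id)
        return sum(order["line_totals"])

storage = InMemoryOrderStorage()
storage.save_order("A1", {"line_totals": [10.0, 2.5]})
total = BillingComponent(storage).bill("A1")  # 12.5
```

Because BillingComponent holds only the interface, any other OrderStorage implementation (a database-backed one, say) can be plugged into the same socket without changing the billing code.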
The Systems Development Life Cycle (SDLC), or Software Development Life Cycle in
systems engineering, information systems, and software engineering, is the process of
creating or altering systems, and the models and methodologies that people use to
develop these systems.
In software engineering the SDLC concept underpins many kinds of software development
methodologies. These methodologies form the framework for planning and controlling
the creation of an information system: the software development process.
The System Development Life Cycle framework provides a sequence of activities for
system designers and developers to follow. It consists of a set of steps or phases in which
each phase of the SDLC uses the results of the previous one.
A Systems Development Life Cycle (SDLC) adheres to important phases that are essential
for developers, such as planning, analysis, design, and implementation, and are explained
in the section below. A number of system development life cycle (SDLC) models have
been created: waterfall, fountain, spiral, build and fix, rapid prototyping, incremental, and
synchronize and stabilize. The oldest of these, and the best known, is the waterfall model:
a sequence of stages in which the output of each stage becomes the input for the next.
These stages can be characterized and divided up in different ways, including the
following:
Every software development life cycle model has the following six phases:
Requirement gathering and analysis
Design
Implementation or coding
Testing
Deployment
Maintenance
2) Design: In this phase the system and software design is prepared from the
requirement specifications which were studied in the first phase. System Design helps in
specifying hardware and system requirements and also helps in defining overall system
architecture. The system design specifications serve as input for the next phase of the
model.
4) Testing: After the code is developed it is tested against the requirements to make sure
that the product is actually solving the needs addressed and gathered during the
requirements phase. During this phase unit testing, integration testing, system testing,
acceptance testing are done.
6) Maintenance: Once the customers start using the developed system, the actual
problems come up and need to be solved from time to time. This process of taking care
of the developed product is known as maintenance.
This model involves finishing the first phase completely before commencing the next one.
When each phase is completed successfully, it is reviewed to see if the project is on track
and whether it is feasible to continue.
V-Shaped Model:
V-Shaped Model Strengths
Emphasizes planning for verification and validation of the product in the early stages of
product development
Each deliverable must be testable
Project management can track progress by milestones
Easy to use
· Throwaway prototyping: Prototypes that are eventually discarded rather than becoming
a part of the finally delivered software
Incremental prototyping: The final product is built as separate prototypes. At the end the
separate prototypes are merged in an overall design.
Extreme prototyping: used mainly for web applications. Basically, it breaks web
development down into three phases, each one based on the preceding one. The first
phase is a static prototype that consists mainly of HTML pages. In the second phase, the
screens are programmed and made fully functional using a simulated services layer. In
the third phase, the services are implemented.
The usage
· This process can be used with any software development life cycle model, but it is best
suited to systems that need a lot of user interaction. Systems with little or no user
interaction, such as systems that only perform calculations, do not need prototypes.
Adds risk analysis and 4GL/RAD prototyping to the waterfall model
Each cycle involves the same sequence of steps as the waterfall process model
Time spent for evaluating risks too large for small or low-risk projects
Time spent planning, resetting objectives, doing risk analysis and prototyping may
be excessive
The model is complex
Risk assessment expertise is required
Spiral may continue indefinitely
Developers must be reassigned during non-development phase activities
May be hard to define objective, verifiable milestones that indicate readiness to
proceed through the next iteration
It is used in shrink-wrap applications and in large systems that are built in small phases or
segments. It can also be used in systems with separable components, for example an ERP
system, where we can start with the budget module as the first iteration and then move
on to the inventory module, and so forth.
The usage
It can be used with any type of project, but it needs more involvement from the customer
and has to be interactive. It can also be used when the customer needs to have some
functional requirements ready in less than three weeks.
Advantages/Disadvantages
Advantages:
· Decreases the time required to make some system features available.
· Face-to-face communication and continuous input from the customer representative
leave no room for guesswork.
· The end result is high-quality software in the least possible time, and a satisfied
customer.
Disadvantages:
· Scalability.
· Dependence on the skill of the software developers.
· Dependence on the ability of the customer to express user needs.
· Documentation is done at later stages.
· Reduced reusability of components.
· Needs special skills in the team.
1. Just like Jacobson's model, the UA is also a use-case-driven software development
methodology. In the UA approach to modeling, all the models are built around the
use-case model.
2. UA utilizes a standard set of tools for modeling.
3. The OO analysis is done by utilizing use-cases and object modeling.
4. It favors repositories of reusable classes and emphasizes on maximum reuse.
5. UA follows a layered approach to software development.
6. UA is suitable for different development lifecycles such as Incremental development
and prototyping.
7. UA favors continuous testing throughout the development lifecycle.
[Figure: RUP lifecycle — phases are divided into iterations; the end of each iteration is a
minor release, and the last iteration produces the final release.]
1. Develop Iteratively
The software requirements specification (SRS) keeps on evolving throughout the
development process and loops are created to add them without affecting the cost of
development.
2. Manage Requirements
The business requirements documentation and project management requirements need
to be gathered properly from the user in order to reach the targeted goal.
3. Use Components
The components of large project which are already tested and are in use can be
conveniently used in other projects. This reuse of components reduces the production
time.
4. Model Visually
Use of Unified modeling language (UML) facilitates the analysis and design of various
components. Diagrams and models are used to represent various components and their
interactions.
5. Verify Quality
Testing and implementing effective project quality management should be a major part of
each and every phase of the project from initiation to delivery (aka the project
management life cycle).
6. Control Changes
Synchronization of the various parts of the system becomes all the more challenging when
the parts are being developed by different teams working from different geographic
locations on different development platforms. Hence special care should be taken to
control changes in this situation.
Processes are probably the easiest to understand since we all work with them daily, even
when we don't know it. A process is simply a well-defined set of steps and decision
points for executing a specific task. Well-planned and deeply understood processes are
essential to your ability to automate business tasks. This is really where the magic
happens: at the process level. Generally speaking, processes are highly repeatable, and if
a task can be repeated it can be automated (with exceptions or occasional human
intervention, of course).
An example of a process would be how you process payments to vendors you work with
and the steps and decisions involved in doing so. When working through processes and
defining them be EXTREMELY detailed, otherwise attempts to automate will end in
futility. Capture every action and decision and outline sub-processes as necessary.
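The vendor-payment example above can be captured as explicit steps and decision points. In this sketch the threshold, field names, and routing rules are invented for illustration; a real process definition would encode your own business rules in exhaustive detail.

```python
# A toy vendor-payment process: each step is recorded, and each
# decision point routes the invoice down a different path.
def process_vendor_payment(invoice):
    steps = ["received"]
    # Decision point 1: is the invoice matched to a purchase order?
    if not invoice.get("po_number"):
        return steps + ["routed to manual review: no PO"]
    # Decision point 2: large amounts need an extra approval
    # (the 10,000 threshold is made up for this example).
    if invoice["amount"] > 10_000:
        steps.append("sent for manager approval")
    steps.append("scheduled for payment")
    return steps

trail = process_vendor_payment({"po_number": "PO-1", "amount": 250})
# -> ["received", "scheduled for payment"]
```

Writing the process down this way exposes every action and decision, which is exactly the level of detail automation needs.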
Hopefully that clears up the differences between frameworks, methodologies, and
processes. While they appear somewhat hierarchical, and can be, they aren't always
necessarily so, particularly in both directions. Frameworks will typically have
methodologies, but not always. Methodologies will typically have [multiple] processes
embedded, but some processes will be stand-alone.
The real power of combining these things is in developing processes in the context of a
methodology, applying methodologies in the context of a framework, and, most
importantly, utilizing all of those things in the context of YOUR business. The exact same
approach won't work for every business, but every business, no matter how large or
small, can benefit from applying these principles.
Changes are inevitable when software is built. A primary goal of software engineering is
to improve the ease with which changes can be made to software. Configuration
management is all about change control. Every software engineer has to be concerned
with how changes made to work products are tracked and propagated throughout a
project. To ensure that quality is maintained the change process must be audited. A
Software Configuration Management (SCM) Plan defines the strategy to be used for
change management.
Baselines
A work product becomes a baseline only after it is reviewed and approved.
A baseline is a milestone in software development that is marked by the delivery of one
or more software configuration items that have been reviewed and approved.
SCM Content
Problem description
Problem domain information
Emerging system solution
Software process rules and instructions
Project plan, resources, and history
Organizational content information
Version Control
Combines procedures and tools to manage the different versions of configuration
objects created during the software process
Version control systems require the following capabilities
o Project repository – stores all relevant configuration objects
o Version management capability – stores all versions of a configuration object
(enables any version to be built from past versions)
o Make facility – enables collection of all relevant configuration objects and
construct a specific software version
o Issues (bug) tracking capability – enables team to record and track status of
outstanding issues for each configuration object
Uses a system modeling approach (template – includes component hierarchy and
component build order, construction rules, verification rules)
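The version-management capability listed above can be sketched as a toy repository: every check-in stores a new immutable version of a configuration object, so any past version can be rebuilt. The class and method names are illustrative only, not from any real version control tool.

```python
# A toy project repository: each check-in appends a new version,
# and check-out can retrieve the latest or any past version.
class Repository:
    def __init__(self):
        self._versions = {}  # object name -> list of version contents

    def check_in(self, name, content):
        self._versions.setdefault(name, []).append(content)
        return len(self._versions[name])  # 1-based version number

    def check_out(self, name, version=None):
        history = self._versions[name]
        return history[-1] if version is None else history[version - 1]

repo = Repository()
repo.check_in("design.doc", "v1 draft")
repo.check_in("design.doc", "v2 reviewed")
latest = repo.check_out("design.doc")       # "v2 reviewed"
first = repo.check_out("design.doc", 1)     # any past version is preserved
```

Real systems add deltas, branching, and locking on top of this idea, but the core guarantee is the same: no version is ever overwritten.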
Change Control
Change request is submitted and evaluated to assess technical merit and impact on
the other configuration objects and budget
Change report contains the results of the evaluation
Change control authority (CCA) makes the final decision on the status and priority of
the change based on the change report
Engineering change order (ECO) is generated for each change approved (describes
change, lists the constraints, and criteria for review and audit)
Object to be changed is checked-out of the project database subject to access control
parameters for the object
Modified object is subjected to appropriate SQA and testing procedures
Modified object is checked-in to the project database and version control mechanisms
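The change-control flow above (request, report, CCA decision, ECO) can be sketched as a small pipeline. The states, field names, and the merit-versus-cost decision rule here are invented for illustration; a real change control authority applies judgment, not a formula.

```python
# A minimal sketch of the change-control flow described above.
APPROVED, REJECTED = "approved", "rejected"

def evaluate(change_request):
    """Produce a change report: technical merit and impact assessment."""
    return {"request": change_request,
            "impact": len(change_request["affected_objects"]),
            "merit": change_request["benefit"] - change_request["cost"]}

def cca_decision(report):
    """Change control authority: approve when merit outweighs cost."""
    return APPROVED if report["merit"] > 0 else REJECTED

def process_change(change_request):
    report = evaluate(change_request)
    if cca_decision(report) == REJECTED:
        return None
    # An engineering change order (ECO) is generated for each approved
    # change, carrying the constraints and review criteria.
    return {"eco": change_request["id"],
            "constraints": "subject to SQA and testing before check-in"}

eco = process_change({"id": "CR-7", "affected_objects": ["ui.c"],
                      "benefit": 5, "cost": 2})
```

Only after the ECO is issued would the object be checked out, modified, put through SQA, and checked back in under version control.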
If you have not used a software configuration management system or are not that familiar
with the concept, you might wonder whether it is appropriate to use software
configuration management on your project. Test automation is a software development
effort. Every time a test script is created, whether through recording or coding, a file is
generated that contains code. When created, developed, or edited, that code is a valuable
test asset.
A team environment presents the risk of losing functioning code or breaking test scripts
by overwriting files. A software configuration management system offers a way to
overcome this risk. Every time a file changes, a new version is created and the original file
is preserved.
For the team that is new to software configuration management, all of the essential
operations are available directly from within the testing tool.
The ClearCase or Rational Team Concert integration for versioning Functional Tester test
assets is specialized and cannot be duplicated with other tools. For this reason, some
ClearCase operations cannot be performed outside Functional Tester.
When you use Functional Tester, the ClearCase or Rational Team Concert operations
appear to be very simple. But a lot is going on behind the scenes. A Functional Tester
script is a collection of files. The complexity of treating several files as a single entity is
hidden because all actions in the Functional Tester user interface are performed on the
script. You do not see the related files anywhere in the user interface. In addition, some
software configuration management operations, such as merging, are very complex.
There is built-in logic to determine the order in which files are merged, and then different
utilities are employed as needed to complete the merge.
Unit-01/Lecture-07
Software system safety, an element of the total safety and software development
program, cannot be allowed to function independently of the total effort. Both simple
and highly integrated multiple systems are experiencing an extraordinary growth in the
use of computers and software to monitor and/or control safety-critical subsystems or
functions. A software error, design flaw, or the lack of generic safety-critical
requirements can contribute to or cause a system failure or erroneous human decision.
To achieve an acceptable level of safety for software used in critical applications,
software system safety engineering must be given primary emphasis early in the
requirements definition and system conceptual design process.
Once these "functional" software safety analyses are completed the software
engineering team will know where to place safety emphasis and what functional
threads, functional paths, domains and boundaries to focus on when designing in
software safety attributes to ensure correct functionality and to detect malfunctions,
failures, faults and to implement a host of mitigation strategies to control hazards.
Software security and various software protection technologies are similar to software
safety attributes: they are designed in to mitigate various types of threats,
vulnerabilities, and risks. Deterministic software is sought in the design by verifying
correct and predictable behaviour at the system level.
Goals
Safety, consistent with mission requirements, is designed into the software in a
timely, cost-effective manner.
1. An individual who senses the need for a project announces the intent to develop a
project in public.
2. A developer working on a limited but working codebase, releases it to the public as
the first version of an open-source program.
3. The source code of a mature project is released to the public.
4. A well-established open-source project can be forked by an interested outside
party.
Eric Raymond observed in his essay The Cathedral and the Bazaar that announcing the
intent for a project is usually inferior to releasing a working project to the public.
It's a common mistake to start a project when contributing to an existing similar project
would be more effective (NIH syndrome). To start a successful project it is very important
to investigate what's already there. The process starts with a choice between the
adopting of an existing project, and the starting of a new project. If a new project is
started, the process goes to the Initiation phase. If an existing project is adopted, the
process goes directly to the Execution phase.
Risk Assessment:-[RGPV/JUNE-2012(10),JUNE-2014(5)]
The goal of risk assessment is to prioritize the risks so that attention and resources can
be focused on the more risky items. Risk identification is the first step in risk assessment,
which identifies all the different risks for a particular project. These risks are project-
dependent and identifying them is an exercise in envisioning what can go wrong.
Methods that can aid risk identification include checklists of possible risks, surveys,
meetings and brainstorming, and reviews of plans, processes, and work products.
Checklists of frequently occurring risks are probably the most common tool for risk
identification—most organizations prepare a list of commonly occurring risks for
projects, prepared from a survey of previous projects. Such a list can form the starting
point for identifying risks for the current project.
Based on surveys of experienced project managers, Boehm [11] produced a list of
the top 10 risk items likely to compromise the success of a software project. Figure 4.3
shows some of these risks along with the techniques preferred by management for
managing them. The top risks in a commercial software organization can also be found in
Figure 4.3: Top risk items and techniques for managing them
The top-ranked risk item is personnel shortfalls. This involves simply having fewer people
than necessary, or not having people with the specific skills that a project might require.
Some of the ways to manage this risk are to get the best talent possible and to match the
needs of the project with the skills of the available personnel. Adequate training, along
with careful matching of people to tasks, can also help reduce this risk.
The second item, unrealistic schedules and budgets, happens very frequently due to
business and other reasons. It is very common that high-level management imposes a
schedule for a software project that is not based on the characteristics of the project and
is unrealistic. Underestimation may also happen due to inexperience or optimism.
The next few items are related to requirements. Projects run the risk of developing the
wrong software if the requirements analysis is not done properly and development
begins too early. Similarly, an improper user interface is often developed, which requires
extensive rework of the user interface later, or the software's benefits are not obtained
because users are reluctant to use it. Gold plating refers to adding features to the
software that are only marginally useful; it adds unnecessary risk to the project because
it consumes resources and time with little return.
Risk identification merely identifies the undesirable events that might take place during
the project, i.e., enumerates the unforeseen events that might occur. It does not specify
the probabilities of these risks materializing nor the impact on the project if the risks
indeed materialize. Hence, the next tasks are risk analysis and prioritization.
In risk analysis, the probability of occurrence of a risk has to be estimated, along with the
loss that will occur if the risk does materialize. This is often done through discussion, using
experience and understanding of the situation, though structured approaches also exist.
Once the probabilities of risks materializing and losses due to materialization of different
risks have been analyzed, they can be prioritized. One approach for prioritization is
through the concept of risk exposure (RE) [11], which is sometimes called risk impact. RE
is defined by the relationship
RE = Prob(UO) * Loss(UO),
where Prob(UO) is the probability of the risk materializing (i.e., of the undesirable
outcome) and Loss(UO) is the total loss incurred due to that outcome. The loss is not only
the direct financial loss that might be incurred but also any loss of credibility, future
business, property, or life. The RE is the expected value of the loss due to a particular
risk. For risk prioritization using RE, the higher the RE, the higher the priority of the risk
item.
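The RE relationship above can be worked through with a quick sketch. The risk names, probabilities, and loss figures below are invented for illustration only.

```python
# Risk exposure: RE = Prob(UO) * Loss(UO).
# Each tuple is (risk item, probability of occurring, loss if it occurs).
risks = [
    ("Personnel shortfalls",            0.30, 200_000),
    ("Unrealistic schedule and budget", 0.25, 150_000),
    ("Gold plating",                    0.10,  40_000),
]

# Compute RE for each risk and sort so the highest exposure comes first.
exposures = sorted(
    ((name, prob * loss) for name, prob, loss in risks),
    key=lambda item: item[1],
    reverse=True,
)
for name, re_value in exposures:
    print(f"{name}: RE = {re_value:,.0f}")
```

Here personnel shortfalls tops the list (RE = 60,000) even though its loss is only moderately larger, because both probability and loss feed the expected value.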
2. Risk Analysis: There are quite different types of risk analysis that can be used.
Basically, risk analysis is used to identify the high-risk elements of a project in software
engineering. It also provides ways of detailing the impact of risk mitigation strategies.
Risk analysis has also been found to be most important in the software design phase,
where the criticality of the system is evaluated, risks are analyzed, and the necessary
countermeasures are introduced. The main purpose of risk analysis is to understand risks
better and to verify and correct their attributes. A successful risk analysis includes
important elements such as problem definition, problem formulation, and data collection.
3. Risk Assessment: Risk assessment is another important activity that integrates risk
management and risk analysis. There are many risk assessment methodologies that focus
on different types of risks. Risk assessment requires correct descriptions of the target
system and all of its security aspects. It is important that risk referent levels, such as
performance, cost, support, and schedule, be defined properly for risk assessment to be
useful.
Risk Classification:
The key purpose of classifying risk is to get a collective viewpoint on a group of factors.
These are the types of factors that help project managers identify the group that
contributes the maximum risk. The best and most scientific way of approaching risks is to
classify them based on risk attributes. Risk classification is considered an economical way
of analyzing risks and their causes by grouping similar risks together into classes.
Software risks can be classified as internal or external. Risks that arise from factors
within the organization are called internal risks, whereas external risks come from
outside the organization and are difficult to control. Internal risks include project risks,
process risks, and product risks. External risks generally involve the relationship with the
vendor, technical risks, customer satisfaction, political stability, and so on. In general,
there are so many risks in software engineering that it is very difficult or impossible to
identify all of them. Some of the most important risks in a software engineering project
are categorized as software requirement risks, software cost risks, software scheduling
risks, software quality risks, and software business risks. These risks are explained in
detail below.
Unit-01/Lecture-09
Difference between user requirements and specifications
This is a blog post I wrote for the intranet at my previous workplace to help influence a
change in thinking about what constitutes "requirements". Often, the business will hand
down "It must look like this and must work like this", which significantly constrains the
design and is based on a lot of assumptions, competitors' products, and previous
requirements. It also short-circuits any knowledge of technology, trends, and patterns
that the design and development team could have contributed.
USER REQUIREMENTS should detail what is needed by users; they should define the
problem, the space within which a solution can be designed.
There should be a clear and traceable relationship between every element of a design and
the requirements.
The requirements and the specification can be in the same document but it should be
clear that the requirements are defined and accepted first and that they are fixed
whereas the solution specified is just one way that the requirements can be met.
The following words should typically not be used when discussing user requirements as
they refer to a user interface solution and confuse the purpose of a requirements
document, preempt solutions and constrain the design, possibly excluding the optimal
solution:
button, screen, panel, drop-down, list, scroll, click, tap, calendar, flip out, fold, animated,
swipe, heading, label, line, grid, table, row, column, select, drag and drop, window, resize,
right click, menu, sort, tap, link, widget, box, icon, see, hear, touch …
For example, a user requirement might be written as:
"A user can opt to view the tabular data in ascending or descending order on any element
of the data"
And the matching UI specification might be written (or committed straight to wireframes)
as:
In every cell of the header row will be two controls visible at all times, one for sorting the
relevant column in ascending order and one for descending order
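The requirement above deliberately says nothing about buttons or header rows; any implementation that can order the rows on any column in either direction satisfies it. A minimal sketch (the column names and sample rows are invented for illustration):

```python
# Sample tabular data: each row is a dict keyed by column name.
rows = [
    {"name": "Asha",  "age": 34},
    {"name": "Bruno", "age": 29},
    {"name": "Carla", "age": 41},
]

def ordered(rows, column, descending=False):
    """Order the rows on any column, ascending or descending."""
    return sorted(rows, key=lambda row: row[column], reverse=descending)

by_age = ordered(rows, "age")                       # ascending by age
by_name = ordered(rows, "name", descending=True)    # descending by name
```

The two-controls-per-header-cell wireframe is just one UI specification that meets this requirement; a sortable column tap, a menu, or a voice command would meet it equally well.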
REFERENCE
1. Software Engineering — R.S. Pressman
2. Software Engineering — Pankaj Jalote