Software Architecture - Design and Evaluation: by PerOlof Bengtsson
by
PerOlof Bengtsson
Printed in Sweden
Kaserntryckeriet AB
Karlskrona 1999
to Hans and Kristina for everlasting support
This thesis is submitted to the Research Board at University of Karlskrona/Ronneby,
in partial fulfillment of the requirements for the degree of Licentiate of Engineering.
Contact Information:
PerOlof Bengtsson
Department of Software Engineering
and Computer Science
University of Karlskrona/Ronneby
Soft Center
SE-372 25 RONNEBY
SWEDEN
Introduction
1. Software Architecture
2. Software Architecture Design
3. Software Architecture Description
4. Software Architecture Analysis
5. Contributions in this Thesis
6. Further Research
1. Software Architecture
The expression software architecture was used, perhaps for the first time, in a
scientific article as early as 1981 [27], and the concept of dealing
with systems by decomposing the software into modules is not new.
Even earlier, David L. Parnas reported on the problem of increasing software
size in his 1972 article, “On the Criteria To Be Used in Decomposing
Systems into Modules” [22]. In that article, he identified the
need to divide systems into modules by criteria other than the tasks
identified in the flow chart of the system. A reason for this is, according
to Parnas, that “The flow chart was a useful abstraction for systems with
in the order of 5,000-10,000 instructions, but as we move beyond that
it does not appear to be sufficient; something additional is needed”.
Further, Parnas identifies information hiding as a criterion for module
decomposition, i.e. every module in the decomposition is characterized
by its knowledge of a design decision which it hides from all other modules
[22].
Thirteen years later, Parnas, together with P. Clements and D. Weiss,
brought the subject to light again in the article “The Modular Structure
of Complex Systems” [23]. The article shows how the development
of an inherently complex system can be supplemented by a hierarchically
structured module guide. The module guide lets the software
engineers know what the interfacing modules are, and helps them
decide which modules to study.
In [28] the authors also identified that the size and complexity of systems
keep increasing and that the design problem has gone beyond the algorithms
and data structures of the computation. In addition, we now have structural
issues: the organization of the system in the large, control structures,
communication protocols, physical distribution, and selection
among design alternatives. These issues are part of software architecture
design.
In the beginning of the 1990s, software architecture received wider attention
in the software engineering community and later also in industry.
Today, software architecture has become an accepted concept, most evident,
perhaps, in the new role, software architect, appearing in software-developing
organizations. Other evidence includes the growing
number of software architecture courses in software engineering
curricula and attempts to provide certification of software architects.
[Figure: the architect produces the architecture from the requirements (qualities) and the technical environment, guided by the architect’s experience; the system is then built from the architecture.]
Domain analysis
Fundamental to the recursive design method is the domain analysis. A
domain is a separate real or hypothetical world inhabited by a distinct
set of conceptual entities that behave according to rules and policies
characteristic of the domain. The analysis consists of work products that
identify the conceptual entities of a single domain and explain, in detail,
the relationships and interactions between these entities.
Process
The recursive design process defines a linear series of seven operations,
each described in more detail in the following sections. The operations are:
Activities
The architect starts by eliciting the characteristics that should shape the architecture.
Attached to the method is a questionnaire with heuristic questions that
serve as help in the characterization. The questionnaire brings up
fundamental design considerations regarding size, memory usage, etc.
The information source is the application domain and other domains,
but the information is described in the semantics of the system. The
result of this operation is the system characterization report, often
containing numerous tables and drawings.
The conceptual entities and their relationships should be precisely
described. The architect selects the conceptual entities based on the system
characterization and their own expertise and experience, and documents
the results in an object information model. Each object is defined
by its attributes, each of which is an abstraction of a characteristic.
The next step in the process is to precisely specify the theory of operation.
The authors of the method have found that an informal, but
comprehensive, document works well to define the theory of operation,
which is later described in a set of state models.
In the application domain, a set of entities is considered always
present or pre-existing. Collecting instance data for populating the
instance database means finding those entities, typically only a few
items, e.g. processor names, port numbers, etc.
The populator populates the architecture by extracting elements from
the repository containing the application model and then uses the elements
to create additional instances in the architecture instance database.
Logical View
The logical view denotes the partitioning of the functional requirements
onto the logical entities in the architecture. The logical view contains a
set of key abstractions, taken mainly from the problem domain,
expressed as objects and object classes. If an object’s internal behavior
must be defined, state-transition diagrams or state charts are used.
[Figure: example logical view with classes such as Sales Representative, Employee, Production Engineer, and Accountant.]
Process View
The process view specifies the concurrency model used in the architecture.
In this view, for example, performance, system availability, concurrency,
distribution, system integrity and fault-tolerance can be analyzed.
The process view is described at several levels of abstraction, each
addressing an individual concern.
In the process view, the concept of a process is defined as a group of
tasks that form an executable unit. Two kinds of tasks exist: major and
minor. Major tasks are architectural elements, individually and uniquely
addressable. Minor tasks are locally introduced for implementation reasons,
e.g. time-outs, buffering, etc. Processes represent the tactical level
of architecture control. Processes can be replicated to deal with performance
and availability requirements, etc.
The process view uses an expanded version of the Booch process
notation. Several styles are useful in the process view, e.g. pipes & filters
[11,28] and client/server [28].
Physical View
The elements of the physical view are easily identified in the logical,
process and development views; the physical view is concerned with
mapping these elements, e.g. processes, tasks and objects, onto hardware
such as nodes and networks. In the physical view, quality requirements
like availability, reliability (fault-tolerance), performance (throughput)
and scalability can be addressed.
Development View
The development view takes into account internal, or intrinsic, properties/requirements
like reusability, ease of development, testability, and
commonality. This view is the organization of the actual software modules
in the software development environment. It is made up of program
libraries or subsystems. The subsystems are organized in a
hierarchy of layers; it is recommended to define 4-6 layers of subsystems
in the development view. To minimize dependencies, a subsystem may
only depend on subsystems in the same or lower layers.
The development view supports allocation of requirements and work
division among teams, cost evaluation, planning, progress monitoring,
and reasoning about reuse, portability and security.
The notation used is taken from the Booch method, i.e. module/subsystem
graphs. Module and subsystem diagrams that show import
and export relations represent the architecture.
The development view can be completely described only after all the
other views have been completed, i.e. when all the software elements have
been identified. However, rules governing the development view can
be stated early.
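The layering rule above (a subsystem may depend only on subsystems in the same or lower layers) can be checked mechanically. The sketch below is a hypothetical illustration in Python; the subsystem names and layer numbers are invented.

```python
# Hypothetical sketch of the development-view layering rule:
# a subsystem may only depend on subsystems in the same or lower layers.

def violations(layers, dependencies):
    """layers: {subsystem: layer number, 1 = lowest};
    dependencies: [(from_subsystem, to_subsystem), ...].
    Returns the dependencies that point to a *higher* layer."""
    return [(a, b) for a, b in dependencies if layers[b] > layers[a]]

# Invented example: four subsystems in four layers.
layers = {"ui": 4, "services": 3, "domain": 2, "platform": 1}
deps = [("ui", "services"), ("services", "domain"),
        ("domain", "platform"), ("platform", "ui")]  # the last one is illegal
print(violations(layers, deps))  # -> [('platform', 'ui')]
```

Such a check is what the recommendation of minimizing dependencies amounts to in practice: upward dependencies are the ones that make layers hard to replace.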
Scenarios
The fifth view (the +1) is the list of scenarios. Scenarios serve as abstractions
of the most important requirements on the system. Scenarios play
two critical roles, i.e. design driver and validation/illustration. Scenarios
are used to find key abstractions and conceptual entities for the different
views, or to validate the architecture against the predicted usage.
The scenario view should be made up of a small subset of important
scenarios, selected based on criticality and risk.
Each scenario has an associated script, i.e. a sequence of interactions
between objects and between processes [25]. Scripts are used for the validation
of the other views; failure to define a script for a scenario discloses
an insufficient architecture.
Scenarios are described using a notation similar to the logical view,
with the modification of using connectors from the process view to
show interactions and dependencies between elements.
Design Process
The 4+1 View Model consists of ten semi-iterative activities, i.e. not all
activities are repeated in each iteration. These are the activities:
1. Select a few scenarios based on risk and criticality.
2. Create a straw man architecture.
3. Script the scenarios.
4. Decompose the scripts into sequences of pairs (object, operation), captured in message trace diagrams.
5. Organize the elements into the four views.
6. Implement the architecture.
7. Test it.
8. Measure it/evaluate it.
9. Capture lessons learned and iterate by reassessing the risk and extending/revising the scenarios.
10. Try to script the new scenarios in the preliminary architecture, and discover additional architectural elements or changes.
Activities
The activities are not specified in more detail by the author [16], but
some comments are given:
■ Synthesize the scenarios by abstracting several user requirements.
■ After two or three iterations the architecture should become stable.
■ Test the architecture by measurement under load, i.e. the implemented
prototype or system is executed.
■ The architecture evolves into the final version, and even though it
can be used as a prototype before the final version, it is not a
throw-away.
The results from the architectural design are captured in two documents:
the software architecture as the 4+1 views, and a software design
guideline. (Compare to the rationale in the Perry and Wolf definition
[24].)
Process
The process is iterative and meant to be iterated in close cycles. The
process’ activities and their relationships are shown in figure 4.
[Figure 4: the architecture design process. The requirement specification and profiles feed architecture synthesis/recovery; the resulting architecture is evaluated; if it is not good enough, improvement opportunity analysis and architecture improvement transform it and the cycle repeats; otherwise the architecture description is produced.]
Activities
The software architect starts by synthesizing a software architecture
design based only on the functional requirements. The requirement specification
serves as input to this activity. Essentially, the functionality-based
architecture is the first partitioning of the functions into subsystems. At
this stage of the process, it is also recommended that the scenarios for
evaluating the quality requirements be specified. No particular attention
is given to the quality requirements as of yet.
The next step is the evaluation step. Using one of the four types of
evaluation techniques, the software architect decides whether the architecture is
good enough to be implemented. Most likely, several points of
improvement will reveal themselves during the evaluation and the architecture
has to be improved.
Architecture transformation is the operation where the system architect
modifies the architecture using one or more of the five transformation
types. The idea behind the transformation is that the architecture
has exactly the same functions after the transformation as before it; the
only difference is that the quality properties of the architecture have
changed. Transformations will affect more than one quality attribute, e.g.
reusability and performance, and perhaps in opposite directions, i.e.
improving one and degrading the other. The result is a trade-off
between software qualities [3, 18]. After the transformation has been
completed, the software engineer repeats the evaluation operation and
obtains new results. Based on these, either the architecture fulfills the
requirements or the software engineer makes new transformations.
Evaluation Techniques
The first of the four types of evaluation techniques is scenario-based
evaluation, which is a central part of the method. Scenarios make quality
requirements more concrete and meaningful in the context of the
future system by describing events relevant to the quality attribute. The
evaluation is done by executing the scenario on the architecture, similar
to scripting in the 4+1 View Model, and analyzing the result. Provided
that the scenarios defined are representative, this kind of analysis will
lead to relevant results.
Transformation Techniques
The first of the transformation categories is transformation by imposing
an architectural style. This means that the fundamental organization of
the architecture changes.
Second, the architecture can be transformed by imposing an architectural
pattern. The difference from imposing a style is that a pattern does
not change the fundamentals of the architecture, but imposes a rule on
all elements of the architecture, for example, adding a concurrency
mechanism to all elements using the Periodic objects pattern [20].
Third, the architecture can be transformed using a design pattern
[11,13]. The result is a less dramatic change of the architecture.
Fourth, the architecture can be transformed by converting quality
requirements into functionality, for example, increasing fault-tolerance
by introducing exception handling.
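As a hypothetical illustration of this fourth transformation, the sketch below turns a fault-tolerance requirement into explicit code: a flaky sensor read (read_sensor, an invented stand-in for hardware access) is wrapped in bounded retries with a safe fallback.

```python
# Hypothetical sketch: a fault-tolerance requirement converted into
# functionality via exception handling. All names and values are invented.

class SensorError(Exception):
    """Raised when the (simulated) hardware read fails."""
    pass

def read_sensor(attempt, fail_first=2):
    # Stand-in for a hardware read that fails the first `fail_first` times.
    if attempt < fail_first:
        raise SensorError("no response")
    return 37.5

def robust_read(retries=3):
    # The quality requirement (tolerate transient faults) becomes code:
    # retry a bounded number of times, then fall back to a safe signal.
    for attempt in range(retries):
        try:
            return read_sensor(attempt)
        except SensorError:
            continue
    return None  # caller must treat None as "enter safe state"

print(robust_read())  # -> 37.5 (succeeds on the third attempt)
```

The transformation leaves the function of the system unchanged (a value is still read) while changing its quality properties, exactly as the method intends.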
Finally, the quality requirements can be distributed. For example,
instead of putting an availability requirement on the complete system,
the server part of a client/server system could be given higher availability
requirements than the clients.
[Figure: notation legend and example. Components: process components, computational components, active data components, passive data components, concrete classes, abstract classes, and object methods. Connectors: uni/bi-directional control flow, uni/bi-directional data flow, data & control flow, implementation, aggregation, and inheritance. Example: a Shopping Basket component controls inventory through a Stock Broker and an HTML Generator.]
nodes. Nodes can be connected to other nodes using the UML notation;
see the example of a deployment diagram in figure 9.
[Figure 9: example deployment diagram. A Client node running netscape.exe connects over HTTP to an HTTP Server node running the apache server and webstore.cgi.]
Software Maintenance Assessment from Software Architecture
6. Further Research
The goal of my future research is to further challenge the underlying
hypotheses of the software maintenance prediction method. Two
hypotheses remain to be studied, i.e. if the accuracy of architecture
change impact analysis is sufficient and if scenario profiles represent
future changes well enough. Plans for their validation will be described
in the remainder of this section.
When predicting software maintenance using change scenario profiles,
it is important that the change profile accurately represents the future
changes of the system. The problem is that the current state of the art
does not provide any techniques for such validation. The intuitive way
to study this suffers from the same calendar-time problem as the validation
of the method as a whole (see the previous section). It would require
that a number of change profiles be created, for a number of projects, by a
number of persons, followed by studies of the maintenance of these projects and
finally a comparison of the change profiles and the actual changes made
to the system. Instead, the approach we will take is to state
underlying hypotheses for the main hypothesis and study them first.
The first hypothesis in the method proposal that remains to be validated
is that impact analysis on the architecture level is accurate enough
for use in the prediction method. Two steps are required in the validation of this
hypothesis. First, the accuracy required of the impact analysis
for prediction purposes needs to be established. Assuming the need
for a certain accuracy in the prediction, e.g. +/-10%, sensitivity calculations
on the variables of the prediction model can show the accuracy
required from the impact analysis. Second, an experiment will establish
the accuracy of the impact analysis. A preliminary design of the experiment
is to take a sample of software engineers and let them estimate the
impact of change for a controlled set of change scenarios. Then another
sample of software engineers implements the changes described in the
scenarios. Finally, the modification of the implemented changes and the
predicted modification are compared. One possible control group
estimates the change impact without the support of a software architecture;
another possible control group does the impact analysis with support
from design and/or source code.
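The sensitivity argument can be illustrated numerically. The linear prediction model below (effort as a sum of probability times impact times size over the scenario profile) and all numbers are invented for the sketch; the point is only that a systematic error in the impact estimates propagates directly into the prediction.

```python
# Hypothetical sensitivity sketch: how an error in the impact estimates
# propagates to the maintenance prediction in a simple linear model.

def predicted_effort(profile):
    # profile: [(probability of change, impact fraction, system size), ...]
    return sum(p * impact * size for p, impact, size in profile)

# Invented scenario profile for a 1000-unit system.
profile = [(0.5, 0.10, 1000), (0.3, 0.30, 1000), (0.2, 0.05, 1000)]
baseline = predicted_effort(profile)

# Suppose every impact estimate is systematically off by +10%:
skewed = [(p, impact * 1.10, size) for p, impact, size in profile]
error = (predicted_effort(skewed) - baseline) / baseline
print(round(error, 3))  # -> 0.1: in a linear model the error passes through
```

In such a model a +/-10% prediction target therefore tolerates roughly a +/-10% systematic error in the impact analysis; a non-linear model would need the full sensitivity calculation.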
The second hypothesis in the method proposal that remains to be
validated is that scenario profiles are good representations of the changes
References
[16] P.B. Kruchten, ‘The 4+1 View Model of Architecture’, IEEE Software, pp. 42-50, November 1995.
[17] D.C. Luckham et al., ‘Specification and Analysis of System Architecture Using Rapide’, IEEE Transactions on Software Engineering, Special Issue on Software Architecture, 21(4):336-355, April 1995.
[18] J.A. McCall, ‘Quality Factors’, Software Engineering Encyclopedia, Vol. 2, J.J. Marciniak (ed.), Wiley, 1994, pp. 958-971.
[19] N. Medvidovic, R.N. Taylor, ‘A Framework for Classifying and Comparing Architecture Description Languages’, in Proceedings of the Sixth European Software Engineering Conference together with the Fifth ACM SIGSOFT Symposium on the Foundations of Software Engineering, Zurich, Switzerland, September 1997.
[20] P. Molin, L. Ohlsson, ‘Points & Deviations - A Pattern Language for Fire Alarm Systems’, Martin, Riehle, Buschmann (eds.), Pattern Languages of Program Design 3, Addison-Wesley, 1998.
[21] C.R. Morris, C.H. Ferguson, ‘How Architecture Wins Technology Wars’, Harvard Business Review, March-April 1993, pp. 86-96.
[22] D.L. Parnas, ‘On the Criteria To Be Used in Decomposing Systems into Modules’, Communications of the ACM, Vol. 15, No. 12, December 1972, pp. 1053-1058.
[23] D.L. Parnas, P. Clements, D. Weiss, ‘The Modular Structure of Complex Systems’, IEEE Transactions on Software Engineering, Vol. SE-11, No. 3, March 1985.
[24] D.E. Perry, A.L. Wolf, ‘Foundations for the Study of Software Architecture’, Software Engineering Notes, Vol. 17, No. 4, pp. 40-52, October 1992.
[25] K. Rubin, A. Goldberg, ‘Object Behaviour Analysis’, Communications of the ACM, September 1992, pp. 48-62.
[26] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, W. Lorensen, Object-Oriented Modeling and Design, Prentice Hall, 1991.
[27] E. Sandewall, C. Strömberg, H. Sörensen, ‘Software Architecture Based on Communicating Residential Environments’, Fifth International Conference on Software Engineering (ICSE'81), San Diego, CA, IEEE Computer Society Press, March 1981, pp. 144-152.
[28] M. Shaw, D. Garlan, Software Architecture - Perspectives on an Emerging Discipline, Prentice Hall, 1996.
[29] M. Shaw et al., ‘Abstractions for Software Architecture and Tools to Support Them’, IEEE Transactions on Software Engineering, 21(4), April 1995.
[30] S. Shlaer, S.J. Mellor, ‘Recursive Design of an Application-Independent Architecture’, IEEE Software, pp. 61-72, January/February 1997.
Abstract
In this paper we present the experiences and architecture from a research
project conducted in cooperation with two industry partners. The goal
of the project was to reengineer an existing system for haemo dialysis
machines into a domain-specific software architecture [23]. Our main
experiences are: (1) architecture design is an iterative and incremental
process, (2) software qualities require a context, (3) quality attribute
assessment methods are too detailed for use during architectural design,
(4) application domain concepts are not the best abstractions, (5) aesthetics
guides the architect in finding potential weaknesses in the architecture,
(6) it is extremely hard to decide when an architecture design is
ready, and (7) documenting software architectures is an important problem.
We also present the architecture and the design rationale to give a
basis for our experiences. We evaluated the resulting architecture by
implementing a prototype application.
1. Introduction
Software architecture design is an art. Today only a few, sketchy methods
exist for designing software architecture [3,14,15,16]. The challenge
facing the software architect is to find an optimal balance in software
[Figure 1. Schematic of Haemo Dialysis Machine (patient, filter, pumps).]
[Figure 2. Legacy system decomposition, on top of the Hardware API.]
The MMI has the responsibilities of presenting data and alarms to the
user, i.e. a nurse, getting input, i.e. commands or treatment data,
from the user, and setting the protective and control systems in the correct
modes.
The control system is responsible for maintaining the values set by the
user and adjusting them according to the treatment currently selected.
The control system is not a tight-loop process control system; only a few
such loops exist, most of them low-level and implemented in hardware.
2.2 Requirements
The aim during architectural design is to optimize the potential of the
architecture (and the system built based on it) to fulfil the software
quality requirements. For dialysis systems, the driving software quality
requirements are maintainability, reusability, safety, real-timeliness and
demonstrability. Below, these quality requirements are described in the
context of dialysis systems.
Maintainability
Past haemo dialysis machines produced by our partner company have
proven to be hard to maintain. Each release of software with bug corrections
and function extensions has made the software harder and harder
to comprehend and maintain. One of the major requirements for the
software architecture of the new dialysis system family is that maintainability
should be considerably better than in the existing systems, with
respect to corrective but especially adaptive maintenance:
1. Corrective maintenance has been hard in the existing systems since
dependencies between different parts of the software have been
hard to identify and visualize.
2. Adaptive maintenance is initiated by a constant stream of new and
changing requirements. Examples include new mechanical components
such as pumps, heaters and AD/DA converters, but also new
treatments, control algorithms and safety regulations. All these
new requirements need to be introduced in the system as easily as
possible. Changes to the mechanics or hardware of the system
Reusability
The software developed for the dialysis machine should be reusable.
Already today there are different models of haemo dialysis machines, and
market requirements for customization will most probably require a
larger number of haemo dialysis models. Of course, the level of reuse
between the different haemo dialysis machine models should be high.
Safety
Haemo dialysis machines operate as an extension of the patient's blood
flow, and numerous situations could appear that are harmful and possibly
even lethal to the patient. Since the safety of the patient has very
high priority, the system has extremely strict safety requirements. The
haemo dialysis system may not expose the dialysis patient to any hazard,
but should detect the rise of such conditions and return the dialysis
machine and the patient to a state which presents no danger to the
patient, i.e. a safe state. Actions like stopping the dialysis fluid if concentrations
are out of range, and stopping the blood flow if air bubbles
are detected in the extra-corporal system, are protective measures to
achieve a safe state. This requirement has to some extent already been
transformed into functional requirements by the safety requirements
standard for haemo dialysis machines [8], but only as far as defining a
number of hazard situations, corresponding threshold values and the
method to use for achieving the safe state. However, a number of other
criteria affecting safety are not dealt with. For example, if the communication
with a pump fails, the system should be able to determine the
risk and deal with it as necessary, i.e. achieve the safe state and notify the
nurse that a service technician is required.
Real-timeliness
The process of haemo dialysis is, by nature, not a very time-critical process,
in the sense that actions must be taken within a few milli- or
microseconds during normal operation. During a typical treatment,
once the flows, concentrations and temperatures are set, the process
only requires monitoring. However, response time becomes important
when a hazard or fault condition arises. In the case of a detected hazard,
e.g. air detected in the extra-corporal unit, the haemo dialysis machine
must react very quickly to immediately return the system to a safe state.
Timings for these situations are presented in the safety standard for
haemo dialysis machines [8].
Demonstrability
As previously stated, patient safety is very important. To ensure that
the haemo dialysis machines that are sold adhere to the safety regulations,
an independent certification institute must certify each construction.
The certification process is repeated for every (major) new release
of the software, which substantially increases the cost of developing and
maintaining the haemo dialysis machines. One way to reduce the cost
of certification is to make it easy to demonstrate that the software performs
the required safety functions as required. This requirement we
denote demonstrability.
[Figure: the architecture evaluation and improvement cycle: evaluation, improvement opportunity analysis and architecture improvement repeat until the architecture is good enough, producing the architecture description.]
3. Lessons Learned
During the architecture design project, we gathered a number of experiences
that, we believe, have validity in a more general context than the
project itself. In this section, we present the lessons that we learned. In
the next section, the architecture design process leading to them and the
foundations for our experiences are presented.
4. Architecture
The haemo dialysis architecture project started out with a very informal
description of the legacy architecture, conveyed both in text and figures
and via several discussions during our design meetings. For describing
the resulting architecture we use two of the four views from the 4+1
View Model [16], i.e. the Logical View and the Dynamic View. The develop-
Device
The system is modeled as a device hierarchy, starting with the entities
close to the hardware and going up to the complete system. Every device
has zero or more sub-devices and a controlling algorithm. A device is
either a leaf device or a logical device. A leaf device is parameterized with
a controlling algorithm and a normalizer. A logical device is, in addition
to the controlling algorithm and the normalizer, parameterized with one
or more sub-devices.
ControllingAlgorithm
The device archetype stores information about relations and configuration.
Computation is done in a separate archetype, which is used to
parameterize Device components. The ControllingAlgorithm performs
calculations for setting the values of output sub-devices based on the values
it gets from input sub-devices and the control it receives from the
encapsulating device. When the device is a leaf node, the calculation is
normally void.
Normaliser
To deal with different units of measurement, a normalization archetype
is used. The normalizer parameterizes the device components
and is invoked when normalizing from and to the units used by up-hierarchy
devices and by the controlling algorithm of the device.
AlarmDetectorDevice
The AlarmDetectorDevice is a specialization of the Device archetype.
Components of the AlarmDetectorDevice archetype are responsible for
monitoring the sub-devices and making sure the values read from the
sensors are within the alarm thresholds set for the AlarmDetectorDevice.
When threshold limits are crossed, an AlarmHandler component is invoked.
AlarmHandler
The AlarmHandler is the archetype responsible for responding to
alarms by returning the haemo dialysis machine to a safe state or by
addressing the cause of the alarm. AlarmHandler components are used to
parameterize the AlarmDetectorDevice components.
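The archetypes described above can be sketched as follows. This is a minimal, hypothetical rendering in Python assuming the interfaces suggested by the text (getValue/setValue, normalise/denormalise, activate); all concrete names and numbers are invented.

```python
# Minimal sketch of the Device, Normaliser, AlarmDetectorDevice and
# AlarmHandler archetypes; a hypothetical illustration, not the project's code.

class Normaliser:
    """Converts between the units used up-hierarchy and the device's raw units."""
    def __init__(self, factor):
        self.factor = factor
    def normalise(self, raw):
        return raw * self.factor
    def denormalise(self, value):
        return value / self.factor

class Device:
    """Leaf device: holds a raw value, parameterized with a normaliser."""
    def __init__(self, normaliser):
        self.normaliser = normaliser
        self.raw = 0.0
    def get_value(self):
        return self.normaliser.normalise(self.raw)
    def set_value(self, value):
        self.raw = self.normaliser.denormalise(value)

class AlarmHandler:
    """Returns the machine to a safe state when activated."""
    def __init__(self):
        self.safe_state = False
    def activate(self, value):
        self.safe_state = True

class AlarmDetectorDevice:
    """Monitors a sub-device and invokes the alarm handler on a threshold breach."""
    def __init__(self, device, ideal, max_deviation, handler):
        self.device, self.ideal = device, ideal
        self.max_deviation, self.handler = max_deviation, handler
    def tick(self):
        value = self.device.get_value()
        if abs(value - self.ideal) > self.max_deviation:
            self.handler.activate(value)

temp = Device(Normaliser(0.1))   # an invented sensor reporting tenths of a degree
temp.set_value(37.5)
handler = AlarmHandler()
monitor = AlarmDetectorDevice(temp, ideal=37.5, max_deviation=2.0, handler=handler)
monitor.tick()                   # 37.5 is within the threshold: no alarm
temp.set_value(41.0)
monitor.tick()                   # deviation exceeded: safe state entered
print(handler.safe_state)  # -> True
```

The separation means a device can be re-parameterized with a different normalizer or alarm handler without touching the device logic itself, which is the point of the archetype design.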
[Figure: archetype class diagram. A Device (getValue(), setValue()) is parameterized with zero or one ControllingAlgorithm (calculate()) and Normalisers (normalise(), denormalise()). An AlarmDetectorDevice (reset()) performs hazard surveillance and sends alarm events to an AlarmHandler (activate(), reset()).]
Scheduler
The scheduler archetype is responsible for scheduling and invoking the
periodic objects. Only one scheduler element may exist in the application,
and it handles all periodic objects of the architecture. The scheduler
accepts registrations from periodic objects and then distributes the
execution among all the registered periodic objects. This kind of
scheduling is not pre-emptive and requires the use of non-blocking I/O.
Periodic object
A periodic object is responsible for implementing its task using non-blocking
I/O and using only the established time quantum. The tick()
method runs to completion and invokes the necessary methods to
complete its task. Several periodic objects may exist in the application
architecture, and each periodic object is responsible for registering itself
with the scheduler (figure 5).
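The cooperative scheduling described above can be sketched as below. This is a hypothetical illustration, assuming a register/execute interface; the Counter class and all names are invented.

```python
# Minimal sketch of the scheduler/periodic-object archetypes:
# non-pre-emptive, cooperative scheduling over registered objects.

class Scheduler:
    """The single scheduler: periodic objects register; execute() runs rounds."""
    def __init__(self):
        self.periodic_objects = []
    def register(self, obj):
        self.periodic_objects.append(obj)
    def execute(self, rounds=1):
        # Not pre-emptive: each tick() runs to completion in turn,
        # which is why every tick() must use non-blocking I/O.
        for _ in range(rounds):
            for obj in self.periodic_objects:
                obj.tick()

class Counter:
    """A trivial periodic object; it registers itself with the scheduler."""
    def __init__(self, scheduler):
        self.ticks = 0
        scheduler.register(self)
    def tick(self):
        self.ticks += 1  # must finish within its time quantum

scheduler = Scheduler()
a, b = Counter(scheduler), Counter(scheduler)
scheduler.execute(rounds=3)
print(a.ticks, b.ticks)  # -> 3 3: execution is distributed over both objects
```

A blocking call inside any tick() would stall every other periodic object, which is the price of avoiding pre-emption.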
[Figure 5: the Scheduler (execute()) invokes one or more PeriodicObjects (tick()).]
Target
The target holds data that other entities are dependent on. The target is
responsible for notifying the link when its state changes.
Observer
The observer depends on the data, or changes of the data, in the target. It
is updated either on a change or on its own request.
Link
The link maintains the dependencies between the target and its observers. It
also holds information about the type of connection, i.e. push or pull. It
would be possible to extend the connection model with periodic
updates.
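The target-link-observer connection can be sketched as below: a hypothetical illustration assuming push and pull connection styles through pushconnect()/pullconnect(); all class and method names are invented renderings of the text.

```python
# Minimal sketch of the target-link-observer archetypes with push and
# pull connections; a hypothetical illustration, not the project's code.

class Link:
    """Holds the dependencies and whether each connection is push or pull."""
    def __init__(self):
        self.push_observers = []
        self.pull_observers = []
        self.target = None
    def pushconnect(self, observer):
        self.push_observers.append(observer)
    def pullconnect(self, observer):
        self.pull_observers.append(observer)
    def notify(self):
        for obs in self.push_observers:       # push: updated on every change
            obs.update(self.target.state)

class Target:
    """Holds data others depend on; notifies the link when its state changes."""
    def __init__(self, link):
        self.link, self.state = link, 0
    def set_state(self, value):
        self.state = value
        self.link.notify()

class Observer:
    def __init__(self):
        self.seen = None
    def update(self, state):
        self.seen = state
    def request(self, link):                  # pull: updated on own request
        self.seen = link.target.state

link = Link()
target = Target(link)
link.target = target
pusher, puller = Observer(), Observer()
link.pushconnect(pusher)
link.pullconnect(puller)
target.set_state(42)
print(pusher.seen, puller.seen)  # -> 42 None: the pull observer waits
puller.request(link)
print(puller.seen)               # -> 42
```

Because the connection type lives in the link, switching an observer from push to pull (or adding the periodic updates mentioned above) would not change the target or the observer.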
[Figure: the Target (update(), notify()) and its Observers (notify(), update()) are connected through a Link (update(), pushconnect(), pullconnect()).]
conductivity.set(0.2);       // in milliMol
temperature.set(37.5);       // in Celsius
weightloss.set(2000);        // in milliLitre
dialysisFluidFlow.set(200);  // in milliLitre/minute
overHeatAlarm.set(37.5, 5);  // ideal value in Celsius and maximum deviation in percent
wait(180);                   // in minutes
[Figure: example configuration. An HD Treatment runs on the HaemoDialysisMachine; the Protective System contains a ReversedFlowAlarm; a FrequenceToRevolutions normalizer sits at the Control System's Hardware API level.]
Alarm Monitoring
The control system may utilize AlarmDevices to detect problem situa-
tions and the protective system will consist of a more complex configu-
ration of different types of AlarmDevices. These will also be run
periodically and in pseudo parallel. The message sequence of one tick()
for alarm monitoring is shown in figure 9.
Treatment Process
The treatment process is central to the haemo dialysis machine and its
software. The general process of a treatment consists of the following
steps: (1) preparation, (2) self test, (3) priming, (4) connect patient
blood flow, (5) set treatment parameters, (6) start dialysis, (7) treatment
done, (8) indication, (9) nurse feeds back blood to patient, (10) discon-
nect patient from machine, (11) treatment records saved, (12) disinfect-
ing, (13) power down.
The process generally takes several hours, and for the major part of that
time the treatment process is involved in a monitoring and controlling
cycle. The detailed specification of this sub-process is as follows.
1. Start fluid control
2. Start blood control
3. Start measuring treatment parameters, e.g. duration, weight loss,
trans-membrane pressure, etc.
4. Start protective system
5. Control and monitor blood and fluid systems until time-out or until
the accumulated weight loss reaches the desired value
6. Treatment shutdown
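Step 5 above, the monitoring and controlling cycle, can be sketched as a loop that runs until either termination condition is met. All names and parameters below are illustrative assumptions, not taken from the actual implementation.

```python
def monitor_and_control(duration_min, target_weight_loss_ml,
                        read_weight_loss, control_step, clock):
    """Run the control/monitor cycle (step 5) until time-out or until
    the accumulated weight loss reaches the desired value.

    read_weight_loss: callable returning accumulated weight loss in ml.
    control_step: callable performing one control iteration
                  (evaluate state, compute new set values).
    clock: callable returning the current time in seconds.
    """
    start = clock()
    while True:
        elapsed_min = (clock() - start) / 60.0
        if elapsed_min >= duration_min:
            return 'time-out'
        if read_weight_loss() >= target_weight_loss_ml:
            return 'weight-loss reached'
        control_step()
```

In the periodic-objects architecture this loop would of course not block; each control_step() would instead be one tick() of a periodic object.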
4.6 Rationale
During the design of the haemo dialysis architecture we had to make a
number of design decisions. In this section the major design decisions
and their rationale are presented.
The iterations
The architecture design was iterated and evaluated three more times,
each iteration addressing the problems of the previous design and
incorporating more of the full requirement specification.
In the first iteration, we used the Facade design pattern [11] to address
the need to hide details from the treatment specifications.
Spurred by the wonderful pattern we introduced several facades in the
architecture. The result was unnecessary complexity and did not give
the simple specification of a treatment that we desired.
In the second iteration, we reduced the number of facades and
adjusted the abstractions into a device hierarchy. This allowed us to use
sub-devices that communicated with the hardware and dealt with
low-level problems such as normalization and hardware APIs. These
low-level devices were connected as logical inputs and outputs to other
logical devices. The logical devices handle logical entities; e.g. a heater
device and a thermometer device are connected to the logical device
Temperature (figure 7). This allows treatments to be specified using
the vocabulary of the logical devices, adapted from the low-level hard-
ware parameters to the domain vocabulary.
In the third major iteration, the architecture was improved for flexi-
bility and reuse by introducing parameterization for normalization and
control algorithms. Also, the alarm detection device was introduced for
detecting anomalies and hazardous situations.
Concurrency
The control system involves constantly executing control loops that
evaluate the current state of the process and calculate new set values to
keep the process at its optimal parameters. This should be done
simultaneously, i.e. in parallel. However, in its basic version the system
is only equipped with a signal processor, reducing parallelism to
pseudo-parallelism. On a single-processor system we have the following
options: (1) use a third-party real-time kernel supporting multiple
threads and real-time scheduling; (2) design and implement the system
to be semi-concurrent using the periodic objects approach [20], making
sure that the alarm functions are given due priority for swift detection
and elimination of hazards; or (3) take the optimistic approach, i.e.,
design a sequentially executing system and make it fast enough to meet
the tightest response time requirements.
The first option is undesirable for two reasons: resource consumption
and price. The resources, i.e. memory and processor capacity,
consumed by such a real-time kernel are substantial, especially since we
most likely will have to sacrifice resources for services we will not use.
In addition, the price for a certified real-time kernel is high, and the
production and maintenance departments become dependent on
third-party software.
The third option is perhaps the most straightforward and could
probably be completed. The profound problem is that it becomes
non-deterministic, which affects the demonstrability negatively.
Because of the software certification, it is unrealistic to believe that such
an implementation would be allowed in a dialysis machine.
The second option poses limitations on the implementation and
design of the system, i.e. all objects must implement their methods
using non-blocking I/O. However, it is still the most promising solu-
tion. Periodic objects visualize the parallel behavior clearly through the
scheduler and its list of periodic objects, and the approach has been
used successfully in other systems.
Communication
The traditional component communication semantics are that a sender
sends a message to a known receiver. However, this simple message send
4.7 Evaluation
In this section an analysis of the architecture design is presented with
respect to the quality requirements. As stated in section 3.2, the tradi-
tional assessment methods are inadequate for the architecture level and
therefore our evaluation was strongest on maintainability (prototype)
and more subjective for the other quality requirements.
Maintainability
To evaluate the maintainability and feasibility of the architecture the
industrial partner EC-Gruppen developed a prototype of the fluid-sys-
tem. The prototype included controlling fluid pumps and the conduc-
tivity sensors. In total, the source code volume for the prototype was
5.5 kLOC.
The maintainability was validated by an extension of the prototype:
changing the pump process control algorithms, a typical maintenance
task. The change required about seven (7) lines of code to be changed
in two (2) classes, and the prototype was operational again after less
than two days' work by one person. Although this is not scientifically
valid evidence, it indicates that the architecture easily incorporates
planned changes.
Reusability
The reusability of components and applications developed using this
architecture has not been measured, for obvious reasons. However, our
preliminary assessment shows that the sub-quality factors of reusability
[19], i.e. generality, modularity, software system independence, machine
independence and self-descriptiveness, are all reasonably accounted for
in this architecture. First, the architecture supports generality. The
device archetype allows for separation between devices, and most of the
application architecture will be made of devices of different forms.
Second, the modularity is high. The archetypes allow for a clear and
distinguishable separation of features into their own device entities.
Third, the architecture has no excessive dependencies on any other
software system, e.g. a multi-processing kernel. Fourth, the hardware
dependencies have been separated into their own device entities and can
easily be substituted for other brands or models. Finally, the archetypes
provide a comprehensible abstraction for modeling a haemo dialysis
machine. Owing to the architecture, locating, understanding and
modifying existing behavior is an easy and comprehensible task.
Safety
The alarm devices ensure the safety of the patient in a structured and
comprehensible way. Every monitored hazard condition has its own
AlarmDetectorDevice. This makes it easier to demonstrate which safety
precautions from the standard have been implemented.
Real-timeliness
This requirement was not explicitly evaluated during the project.
Instead, our assumption was that the data processing performance would
equal that of a Pentium processor. Given that the prototype would work
on a PC running NT, it would be able to run fast enough with a less
resource-consuming operating system in the haemo dialysis machine.
Demonstrability
Our goal concerning demonstrability was to achieve a design that made
the specification of a treatment and its complex sub-processes very
similar to how domain experts would express the treatment in their own
vocabulary. The source code example in section 4 for specifying a
treatment in the application is very intuitive compared to specifying the
parameters of the complex sub-processes of the treatments. Hence, we
consider that design goal achieved.
5. Conclusions
In this paper, the architectural design of a haemo dialysis system and the
lessons learned from the process leading to the architecture have been
presented. The main experiences from the project are the following.
First, quality requirements are often specified without any context and
this complicates the evaluation of the architecture for these attributes
and the balancing of quality attributes. Second, assessment techniques
developed by the various research communities studying a single quality
attribute, e.g. performance or reusability, are generally intended for later
phases in development and require sometimes excessive effort and data
not available during architecture design. Third, the archetypes used as
the foundation of a software architecture cannot be deduced from the
application domain through domain analysis. Instead, the archetypes
represent chunks of domain functionality optimized for the driving quality
requirements. Fourth, during the design process we learned that design
is inherently iterative, that group design meetings are far more effective
than individual architects and that documenting design decisions is very
important in order to capture the design rationale. Fifth, architecture
designs have an associated aesthetics that, at least, is perceived inter-sub-
jectively and an intuitively appealing design proved to be an excellent
indicator, as well as the lack of appeal. Sixth, it proved to be hard to
decide when one was done with the architectural design due to the nat-
ural tendency of software engineers to perfect solutions and to the
required effort of architecture assessment. Finally, it is very hard to doc-
ument all relevant aspects of a software architecture. The architecture
design presented in the previous section provides some background to
our experiences.
Acknowledgements
We would like to thank our partners in the research project, Althin
Medical AB, Ronneby, and Elektronik Gruppen AB, Helsingborg, espe-
cially, Lars-Olof Sandberg, Anders Kambrin, and Mogens Lundholm,
and our colleagues, Michael Mattsson and Peter Molin, who participated
in the design of the architecture.
References
[1] G. Abowd, L. Bass, P. Clements, R. Kazman, L. Northrop, A. Moormann
Zaremski, Recommend Best Industrial Practice for Software Architecture Evaluation,
CMU/SEI-96-TR-025, 1997.
[2] C. Argyris, R. Putnam, D. Smith, Action Science: Concepts, methods, and skills for
research and intervention, Jossey-Bass, San Francisco, 1985.
[3] P. Bengtsson, J. Bosch, "Scenario-based Software Architecture Reengineering",
Proceedings of the 5th International Conference on Software Reuse (ICSR5), IEEE,
2-5 June, 1998.
[4] B. W. Boehm, “A Spiral Model of Software Development and Enhancement”,
IEEE Computer, 61-72, May 1988
[5] G. Booch, Object-Oriented Analysis and Design with Applications, Benjamin/
Cummings Publishing Company, 1994.
[6] J. Bosch, ‘Design of an Object-Oriented Measurement System Framework’,
Object-Oriented Application Frameworks, M. Fayad, D. Schmidt, R. Johnson
(eds.), John Wiley, (coming)
[7] F. Buschmann, R. Meunier, H. Rohnert, P. Sommerlad, M. Stal, Pattern-Oriented
Software Architecture - A System of Patterns, John Wiley & Sons, 1996.
[8] CEI/IEC 601-2, Safety requirements standard for dialysis machines.
[9] P. C. Clements, ‘A Survey of Architecture Description Languages,’ Eighth
International Workshop on Software Specification and Design, Germany, March
1996.
[10] N.E. Fenton, S.L. Pfleeger, Software Metrics - A Rigorous & Practical Approach
(2nd edition), International Thomson Computer Press, 1996.
[11] E. Gamma, R. Helm, R. Johnson, J. Vlissides, Design Patterns: Elements of
Reusable Object-Oriented Software, Addison-Wesley, 1995.
[12] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std.
610.12-1990.
Abstract
This paper presents a method for reengineering software architectures.
The method explicitly addresses the quality attributes of the software
architecture. Assessment of quality attributes is performed primarily
using scenarios. Design transformations are done to improve quality
attributes that do not satisfy the requirements. Assessment and design
transformation can be performed for several iterations until all require-
ments are met. To illustrate the method, we use the reengineering of a
prototypical measurement system into a domain-specific software archi-
tecture as an example.
1. Introduction
Reengineering of a software system is generally initiated by major
changes in the requirements the system should fulfil. These changes are
often concerned with the software qualities rather than the functional
requirements. For example, due to architecture erosion [20], the main-
tainability of the software system may have deteriorated. To improve
this, the system is reengineered.
To the best of our knowledge, few architecture reengineering meth-
ods have been defined. Traditional system design methods tend to focus
Figure: A beer can inspection measurement system; at the trigger (t=0), the camera samples a beer can on the conveyer belt and the actuator removes damaged cans.
Figure: Classes BeerCan (BeerCan(Camera,Lever), calibrate(), wait(int millis), measure()), which senses damaged cans through the Camera (damagedLine(), calibrate()) and removes them through the Lever (remove()).
3.1 Overview
The input for the architecture reengineering method consists of the
updated requirements specification and the existing software architec-
ture. As output, an improved architectural design is generated. In figure
3, the steps in the method are presented graphically.
1. Incorporate new functional requirements in the architecture.
Although software engineers generally will not design a system
to be less reliable or reusable, the software qualities are not explicitly
addressed at this stage. The result of this step is a first version of the
application architecture design.
2. Software quality assessment. Each quality attribute (QA) is esti-
mated, using primarily scenario-based analysis as the assessment
technique. If all estimations are as good as or better than required,
the architectural design process is finished. Otherwise, the next
step is entered.
3. Architecture transformation. During this stage, QA-optimizing
transformations are used to improve the architecture. Each set of
transformations (one or more) results in a new version of the
architectural design.
4. Software quality assessment. The design is again evaluated and the
process is repeated from 3 until all software quality requirements
are met or until the software engineer decides that no feasible
solution exists.
Figure 3. Steps in the method: functionality-based architecture design (1) takes the requirement specification and produces a software architecture; its quality attributes are assessed (2,4); if the result is not OK, quality-attribute-optimizing architecture transformations (3) are applied and the assessment is repeated; if OK, the process ends.
tectures, are less suitable for systems where performance is a major issue,
although the flexibility of this style is high.
Four different approaches for assessing quality attributes have been
identified, i.e., scenarios, simulation, mathematical modelling and
experience-based reasoning. For each quality attribute, the engineer can
select the most suitable approach for evaluation. In the subsequent sec-
tions, each approach is described in more detail.
Scenario-based evaluation
The assessment of a software quality using scenarios is done in these
steps:
1. Define a representative set of scenarios. A set of scenarios is devel-
oped that concretises the actual meaning of the attribute. For
instance, the maintainability quality attribute may be specified by
scenarios that capture typical changes in requirements, underlying
hardware, etc.
2. Analyse the architecture. Each individual scenario defines a context
for the architecture. The performance of the architecture in that
context for this quality attribute is assessed by analysis. Posing
typical questions [18] for the quality attributes can be helpful (sec-
tion 3.3).
3. Summarise the results. The results from each analysis of the archi-
tecture and scenario are then summarised into an overall result,
e.g., the number of accepted scenarios versus the number not
accepted.
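The three steps above can be sketched as a small scoring routine. The data structures and the example scenario texts are illustrative assumptions; the method itself prescribes only the steps, not a particular representation.

```python
def assess_architecture(scenarios, analyse):
    """Scenario-based assessment: analyse the architecture in the context
    of each scenario (step 2) and summarise the results as the ratio of
    accepted scenarios to the total number of scenarios (step 3)."""
    results = [analyse(scenario) for scenario in scenarios]
    accepted = sum(1 for ok in results if ok)
    return accepted / len(scenarios)

# Step 1: a representative scenario set concretising, e.g., maintainability.
scenarios = [
    'replace the underlying hardware sensor',
    'add a new treatment parameter',
    'port to a new operating system',
]
```

Here `analyse` stands in for the engineer's judgement of whether the architecture handles a scenario acceptably; in practice this is a manual analysis, not a function.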
The usage of scenarios is motivated by the consensus it brings to the
understanding of what a particular software quality really means. Sce-
narios are a good way of synthesising individual interpretations of a soft-
ware quality into a common view. This view is both more concrete than
the general software quality definition [10], and it also incorporates
the uniqueness of the system to be developed, i.e., it is more context
sensitive.
In our experience, scenario-based assessment is particularly useful for
development related software qualities. Software qualities such as main-
tainability can be expressed very naturally through change scenarios. In
Simulation
Simulation of the architecture [30] using an implementation of the
application architecture provides a second approach for estimating qual-
ity attributes. The main components of the architecture are imple-
mented and other components are simulated resulting in an executable
system. The context, in which the system is supposed to execute, could
also be simulated at a suitable abstraction level. This implementation
can then be used for simulating application behaviour under various cir-
cumstances.
Simulation complements the scenario-based approach in that simu-
lation is particularly useful for evaluating operational software qualities,
such as performance or fault-tolerance.
Mathematical modelling
Various research communities, e.g., high-performance computing [28],
reliability [25], real-time systems [16], etc., have developed mathemati-
cal models, or metrics, to evaluate especially operation related software
qualities. Different from the other approaches, the mathematical models
allow for static evaluation of architectural design models.
Mathematical modelling is an alternative to simulation since both
approaches are primarily suitable for assessing operational software qual-
ities.
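As one concrete example of such a model from the real-time community [16], the classical rate-monotonic utilization bound permits a static schedulability check of a periodic task set directly from design-level figures. The check below is the standard Liu & Layland test; the task set itself is invented for illustration.

```python
def rm_schedulable(tasks):
    """Rate-monotonic utilization test (Liu & Layland): a set of n
    periodic tasks, each given as (execution time C, period T), is
    schedulable under rate-monotonic priorities if the total
    utilization does not exceed n * (2**(1/n) - 1)."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Hypothetical control-loop task set: (C, T) in milliseconds.
tasks = [(1, 10), (2, 20), (4, 40)]  # total utilization 0.30
```

A static check like this can be applied to an architectural design model before any code exists, which is exactly what distinguishes mathematical modelling from simulation.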
Experience-based reasoning
A fourth approach to assessing software qualities is reasoning based on
experience and logical argumentation grounded in that experience.
Experienced software engineers often have valuable insights that may
prove extremely helpful in avoiding bad design decisions and finding
issues that need further evaluation. Although these experiences generally
are based on anecdotal evidence, most can often be justified by a logical
line of reasoning. This approach differs from the others: first, the
evaluation process is less explicit and based more on subjective factors
such as intuition and experience; secondly, the technique makes use of
the tacit knowledge of the involved persons.
Distribute requirements
The final type of transformation deals with software quality require-
ments using the divide-and-conquer principle: a software quality require-
ment at the system level is distributed to the subsystems or components
that make up the system. Thus, a software quality requirement X is
distributed over the n components that make up the system by assigning
a software quality requirement xi to each component ci such that
X = x1 + ... + xn. A second approach to distributing requirements is to divide the
software quality requirement into two or more software quality require-
ments. For example, in a distributed system, fault-tolerance can be
divided into fault-tolerant computation and fault-tolerant communica-
tion.
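The first, additive form of requirement distribution can be sketched numerically. The weighting scheme and the failure-intensity budget below are invented for illustration; the method only requires that the component requirements sum back to the system-level requirement X.

```python
def distribute(system_requirement, weights):
    """Distribute a system-level quality requirement X over n components
    in proportion to the given weights, so that the resulting component
    requirements x_1 .. x_n satisfy X = x_1 + ... + x_n."""
    total = sum(weights)
    return [system_requirement * w / total for w in weights]

# e.g. a hypothetical failure-intensity budget of 0.01 failures/hour
# distributed over three components weighted 1:2:2.
budget = distribute(0.01, [1, 2, 2])
```

An additive distribution like this fits budget-style requirements (failure intensity, memory, latency contributions); other qualities, such as the fault-tolerance example in the text, are instead divided into sub-requirements.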
4. Measurement Systems
The domain of measurement systems denotes a class of systems used to
measure the relevant values of a process or product. These systems are
different from the, better known, process control systems in that the
measured values are not directly, i.e., as part of the same system, used to
control the production process that creates the measured product or
process. Industry uses measurement systems for quality control on pro-
duced products, e.g., to separate acceptable from unacceptable products
or to categorise the products in quality grades.
The goal of the reengineering project was to define a DSSA that pro-
vides a reusable and flexible basis for instantiating measurement sys-
tems. Although the software architecture of the beer can inspection
system introduced in section 2 is a rather prototypical instance of a mea-
surement system, we used it as a starting point. This application archi-
tecture, obviously, does not fulfil the software quality requirements of a
DSSA. Consequently, it needs to be re-engineered and transformed to
match the requirements.
The challenge of reengineering projects is to decide when one has
reached the point where the reengineered architecture fulfils its require-
ments. The functional requirements can generally be evaluated relatively
easily by tracing the requirements in the design. Software quality require-
ments such as reusability and robustness, on the other hand, are much
harder to assess. In the next section, we describe our approach to evalu-
ating some of the software quality requirements put on the measure-
ment system DSSA.
Reusability
The reusability quality attribute provides a balance between two
properties: generality and specificity. First, the architecture and its com-
ponents should be general so that they can be applied in other similar
situations. For example, a weight sensor component can be sufficiently
generic to be used in several applications. Secondly, the architecture
should provide concrete functionality that provides considerable benefit
when it is reused.
To evaluate the existing application architecture we use scenarios.
However, reusability is a difficult software property to assess. We evalu-
ate by analysing the architecture with respect to each scenario and
assessing the ratio of components reused as-is to the total number of
components. Note that the scenarios are presented as vignettes [14] for
reasons of space.
R1 Product packaging quality control. For example, sugar packages that
are measured with respect to both intact packaging and weight.
R2 Surface finish quality control where multiple algorithms may be
used to derive a quality figure to form a basis for decisions.
R3 Quality testing of microprocessors where each processor is either
rejected or given a serial number and test data logged in a quality history
database in another system.
R4 Product sorting and labelling, e.g., parts are sorted by tolerance
levels, labelled in several tolerance categories, and sorted into different
storage bins.
R5 Intelligent quality assurance system, e.g., printing quality assurance.
The system detects problems with printed results and rejects occasional
misprints, but several misprints in a sequence might cause rejection and
raising an alarm.
All presented scenarios require behaviour not present in the initial
software architecture. However, the scenarios are realistic and measure-
ment systems exist that require functionality defined by the scenarios.
Maintainability
In software-intensive systems, maintainability is generally considered
important. A measurement system is an embedded software system and
its function is very dependent on its context and environment. Changes
to that environment often inflict changes to the software system. The
goal for maintainability in this context is that the most likely changes in
requirements are incorporated in the software system against minimal
effort.
In addition, maintainability of the DSSA is assessed using scenarios.
For the discussion in this paper, the following scenarios are applied on
the DSSA. Again, the scenarios are presented as vignettes for reasons of
space.
M1 The types of input or output devices used in the system are excluded
from the supplier's assortment and need to be changed. The correspond-
5.2 Transformations
Problem. Although the beer can application is a small set of classes, the
task of changing or introducing a new type of item requires the source
code of most components to be changed. All the specified reuse sce-
narios involve the use of new types of measurement items.
Figure: The Trigger, MeasurementItem, MeasurementValue, Sensor and Actuator classes.
Problem resolved. The trigger no longer needs to know the actual type
of measurement item to be created, but instead requests a new measure-
ment item from the ItemFactory. The use of the Abstract Factory pattern
eliminated the problem.
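The resolution above follows the Abstract Factory pattern [9]. A minimal sketch, in which the factory interface and the concrete class names other than Trigger, BeerCan and ItemFactory are illustrative assumptions:

```python
class MeasurementItem:
    """Abstract product: an item passing through the measurement system."""


class BeerCan(MeasurementItem):
    """Concrete product for the beer can inspection application."""


class ItemFactory:
    """Abstract factory: clients ask the factory for a new measurement
    item and never name the concrete item type themselves."""

    def create_item(self):
        raise NotImplementedError


class BeerCanFactory(ItemFactory):
    def create_item(self):
        return BeerCan()


class Trigger:
    """The trigger depends only on the abstract factory, so new item
    types require a new factory subclass, not changes to the trigger."""

    def __init__(self, factory):
        self._factory = factory

    def on_sample(self):
        return self._factory.create_item()
```

Introducing a new type of measurement item now means adding one product class and one factory class, leaving the trigger and the rest of the architecture untouched.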
Changing to strategies
■
Calculation strategy: The calculation of derived data and decisions
shows that similar methods perform different tasks in different
objects. Different calculation strategies can therefore be defined.
These strategies can be used by sensors, measurement items and
actuators.
The newly introduced components are marked with (3) in figure 5.
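The calculation-strategy transformation applies the Strategy pattern [9]. A minimal sketch; the concrete strategy classes and the Sensor interface shown here are illustrative assumptions:

```python
class CalculationStrategy:
    """Strategy interface: turns raw measurement data into a derived value."""

    def calculate(self, values):
        raise NotImplementedError


class AverageStrategy(CalculationStrategy):
    def calculate(self, values):
        return sum(values) / len(values)


class MaxStrategy(CalculationStrategy):
    def calculate(self, values):
        return max(values)


class Sensor:
    """Sensors (and likewise measurement items and actuators) are
    configured with a calculation strategy instead of hard-coding the
    computation in each class."""

    def __init__(self, strategy):
        self._strategy = strategy

    def derived_value(self, raw_samples):
        return self._strategy.calculate(raw_samples)
```

Swapping the computation then becomes a configuration decision rather than a source-code change in the sensor class itself.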
Figure 5. MeasurementValue and Sensor (1) together with the newly introduced ActuationStrategy and CalculationStrategy (3).
1. The UML notation for inheritance is used to show the equivalence in interfaces
Figure 6. The effect of each transformation: scenario scores (0.00-1.00) for M1-M5 and R1-R5 after each of the transformations 1-6.
6. Related Work
To the best of our knowledge, no architecture reengineering methods
exist to date. Traditional design methods often focus on the system
functionality [2,12,22,24,30] or on a single software quality, e.g., high-
performance [28], real-time [16] and reusable systems [13]. Architec-
ture design methods have been proposed by Shlaer & Mellor [27],
Kruchten [15] and Bosch & Molin [6]. Different from architecture
reengineering, architecture design methods start from the requirement
specification only. Boehm [3] discusses conflicts between, what he calls,
quality requirements, but focuses on identifying these conflicts and
solving them during requirement specification rather than during archi-
tectural design or reengineering.
Architecture evaluation is discussed by Kazman et al. [14]. Their
SAAM method also uses scenarios, but does not discuss other tech-
niques for architecture evaluation. In addition, no relation to architec-
ture (re)design is made. Further, the work described in [1] primarily
uses scenarios for architecture evaluation.
Architecture transformation uses the notions of architectural styles
[26], architectural patterns [7] and design patterns [9]. However, rather
than viewing these concepts as ready designs, we treat styles and pat-
7. Conclusions
This paper presented a method for reengineering software architectures
that provides a practical approach to evaluating and redesigning archi-
tectures. The method uses four techniques for architecture evaluation,
i.e., scenarios, simulation, mathematical modelling and experience-
based reasoning. To improve the architecture, five types of architecture
transformations are available: imposing an architectural style, applying
an architectural pattern, using a design pattern, converting quality
requirements to functionality, and distributing quality requirements.
We illustrated the method using a concrete system from the measure-
ment systems domain, i.e., a beer can inspection system. This system
was reengineered into a domain-specific software architecture for mea-
surement systems. The focus in this paper was on the reusability and
maintainability requirements on the DSSA. Scenarios were defined for
assessing each requirement. Although no predominant approach to
assessing quality attributes of systems exists, this paper shows that sce-
narios provide a powerful means for practitioners. Architecture transfor-
mations provide an objective way to redesign since, with each
transformation, a problem is explicitly addressed, alternative transfor-
mations are investigated, and the rationale for the design decisions is
captured.
Future work includes the extensions of the reengineering method for
more quality requirements and the application of the method in more
industry case studies and projects.
Acknowledgements
The authors wish to thank Will Tracz for his constructive and detailed
comments.
References
[1] G. Abowd, L. Bass, P. Clements, R. Kazman, L. Northrop, A. Moormann
Zaremski, Recommend Best Industrial Practice for Software Architecture Evaluation,
CMU/SEI-96-TR-025, January 1997.
[2] G. Booch, Object-Oriented Analysis and Design with Applications (2nd edition),
Benjamin/Cummings Publishing Company, 1994.
[3] B. Boehm, ‘Aids for Identifying Conflicts Among Quality Requirements,’
International Conference on Requirements Engineering (ICRE96), Colorado, April
1996, and IEEE Software, March 1996.
[4] J. Bosch, ‘Design of an Object-Oriented Measurement System Framework,’
submitted, 1997.
[5] J. Bosch, P. Molin, M. Mattsson, PO Bengtsson, ‘Object-oriented Frameworks:
Problems and Experiences,’ submitted, 1997.
[6] J. Bosch, P. Molin, ‘Software Architecture Design: Evaluation and
Transformation,’ submitted, 1997.
[7] F. Buschmann, C. Jäkel, R. Meunier, H. Rohnert, M. Stal, Pattern-Oriented
Software Architecture - A System of Patterns, John Wiley & Sons, 1996.
[8] N.E. Fenton, S.L. Pfleeger, Software Metrics - A Rigorous & Practical Approach
(2nd edition), International Thomson Computer Press, 1996.
[9] E. Gamma, R. Helm, R. Johnson, J. Vlissides, Design Patterns: Elements of
Reusable Object-Oriented Software, Addison-Wesley, 1995.
[10] IEEE Standard Glossary of Software Engineering Terminology, IEEE Std.
610.12-1990.
[11] R.S. D'Ippolito, Proceedings of the Workshop on Domain-Specific Software
Architectures, CMU/SEI-88-TR-30, Software Engineering Institute, July 1990.
[12] I. Jacobson, M. Christerson, P. Jonsson, G. Övergaard, Object-oriented software
engineering. A use case approach, Addison-Wesley, 1992.
[13] E. Karlsson ed., ‘Software Reuse A Holistic Approach’, Wiley, 1995.
[14] R. Kazman, L. Bass, G. Abowd, M. Webb, ‘SAAM: A Method for Analyzing the
Properties of Software Architectures,’ Proceedings of the 16th International
Conference on Software Engineering, pp. 81-90, 1994.
[15] P. Kruchten, ‘The 4+1 View Model of Architecture,’ IEEE Software, pp. 42-50,
November 1995.
[16] J.W.S. Liu, R. Ha, ‘Efficient Methods of Validating Timing Constraints,’ in
Advances in Real-Time Systems, S.H. Son (ed.), Prentice Hall, pp. 199-223, 1995.
[17] D. C. Luckham, et. al., Specification and Analysis of System Architecture Using
Rapide, IEEE Transactions on Software Engineering, Special Issue on Software
Architecture, 21(4):336-355, April 1995
[18] J.A. McCall, Quality Factors, Software Engineering Encyclopedia, Vol 2, J.J.
Marciniak ed., Wiley, 1994, pp. 958 - 971
[19] P. Molin, L. Ohlsson, ‘Points & Deviations - A pattern language for fire alarm
systems,’ to be published in Pattern Languages of Program Design 3, Addison-
Wesley.
[20] D.E. Perry, A.L.Wolf, ‘Foundations for the Study of Software Architecture,’
Software Engineering Notes, Vol. 17, No. 4, pp. 40-52, October 1992.
[21] J.S. Poulin, ‘Measuring Software Reusability’, Proceedings of the Third
Conference on Software Reuse, Rio de Janeiro, Brazil, November 1994.
[22] The RAISE Development Method, The RAISE Method Group, Prentice Hall,
1995.
[23] D.J. Richardson, A.L. Wolf, ‘Software Testing at the Architectural Level,’
Proceedings of the Second International Software Architecture Workshop, pp. 68-71,
San Francisco, USA, October 1996.
[24] J. Rumbaugh, M. Blaha, W. Premerlani, F. Eddy, W. Lorensen, Object-oriented
modeling and design, Prentice Hall, 1991.
[25] P. Runeson, C. Wohlin, ‘Statistical Usage Testing for Software Reliability
Control’, Informatica, Vol. 19, No. 2, pp. 195-207, 1995.
[26] M. Shaw, D. Garlan, Software Architecture - Perspectives on an Emerging
Discipline, Prentice Hall, 1996.
[27] S. Shlaer, S.J. Mellor, ‘Recursive Design of an Application-Independent
Architecture’, IEEE Software, pp. 61-72, January/February 1997.
[28] C. U. Smith, Performance Engineering of Software Systems, Addison-Wesley, 1990.
[29] W. Tracz, ‘DSSA (Domain-Specific Software Architecture) Pedagogical Example,’
ACM Software Engineering Notes, Vol. 20, No. 3, pp. 49-62, July 1995.
[30] R. Wirfs-Brock, B. Wilkerson, L. Wiener, Designing Object-Oriented Software,
Prentice Hall, 1990.
III
Abstract
A method for the prediction of software maintainability during software
architecture design is presented. The method takes (1) the requirement
specification, (2) the design of the architecture, (3) expertise from soft-
ware engineers and, possibly, (4) historical data as input and generates a
prediction of the average effort for a maintenance task. Scenarios are
used by the method to concretize the maintainability requirements and
to analyze the architecture for the prediction of the maintainability. The
method is formulated based on extensive experience in software archi-
tecture design and detailed design and exemplified using the design of
software architecture for a haemo dialysis machine. Experiments for
evaluation and validation of the method are part of ongoing and future work.
1. Introduction
One of the major issues in software development today is software
quality. Rather than designing and implementing the correct function-
ality in products, the main challenge is to satisfy the software quality
requirements, e.g. performance, reliability, maintainability and flexibil-
ity. The notion of software architecture has emerged during recent
years as the appropriate level at which to deal with software qualities.
This is because it has been recognized [1,2] that the software architecture
sets the boundaries for the software qualities of the resulting system.
Traditional object-oriented software design methods, e.g. [5,14,21]
focus primarily on the software functionality and give no support for
software quality attribute-oriented design, with the exception of reus-
ability and flexibility. Other research communities focus on a single
quality attribute, e.g. performance, fault-tolerance or real-time. How-
ever, real-world software systems are never just a real-time system or a
fault-tolerant system, but generally require a balance of different soft-
ware qualities. For instance, a real-time system that is impossible to
maintain or a high-performance computing system with no reliability is
of little use.
To address these issues, our ongoing research efforts aim at develop-
ing a method for designing software architectures, i.e. the ARCS
method [6]. In short, the method starts with an initial architecture
where little or no attention has been given to the required software qual-
ities. This architecture is evaluated using available techniques and the
result is compared to the requirements. Unless the requirements are
met, the architect transforms the architecture in order to improve the
software quality that was not met. Then the architecture is again evalu-
ated and this process is repeated until all the software quality require-
ments have been met or until it is clear that no economically or
technically feasible solution exists.
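The evaluate-transform loop described above can be sketched in a few lines of Python. The function names, the shape of the evaluation result and the iteration cap below are illustrative assumptions, not part of the ARCS method itself.

```python
# Hypothetical sketch of the iterative design loop described in the text:
# evaluate the architecture, compare against the requirements, and transform
# it until all quality requirements are met or the iteration budget runs out.

def design_architecture(architecture, requirements, evaluate, transform,
                        max_iterations=10):
    for _ in range(max_iterations):
        results = evaluate(architecture)       # one value per quality attribute
        unmet = {quality for quality, value in results.items()
                 if value < requirements[quality]}
        if not unmet:
            return architecture                # all requirements satisfied
        # transform to improve the software qualities that were not met
        architecture = transform(architecture, unmet)
    raise RuntimeError("no economically or technically feasible solution found")
```

For example, with a toy evaluate function that scores an integer-valued architecture and a transform that increments it, the loop terminates as soon as the requirement is reached.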
The evaluation of software architectures plays a central role in archi-
tectural design. However, software architecture evaluation is not well
understood and few methods and techniques exist. Notable exceptions
are the SAAM method discussed in [15] and the approach described in
[10]. In this paper, we propose a method for predicting maintainability
of a software system based on its architecture. The method defines a
maintenance profile, i.e. a set of change scenarios representing perfective
and adaptive maintenance tasks. Using the maintenance profile, the
architecture is evaluated using so-called scenario scripting and the
expected maintenance effort for each change scenario is estimated.
Based on this data, the required maintenance effort for a software sys-
tem can be estimated. The method is based on our experience in archi-
tectural design and its empirical validation is part of ongoing and future
work. The remainder of this paper is organized as follows. In the next
section, the maintenance prediction method is presented in more detail.
The architecture used as an example is discussed in section 3 and the
the patient's blood while it is outside the body. However, these details
are omitted since they are not needed for the discussion in this paper.
[Figure omitted: schematic showing the dialysis fluid concentrate, H2O
supply, heater, sensor and pumps on the fluid side, the filter, and the
extra corporal circuit connecting to the patient.]
Figure 2. Schematic of Haemo Dialysis Machine
3.1 Requirements
The aim during architectural design is to optimize the potential of the
architecture (and the system built based on it) to fulfil the software
quality requirements. For dialysis systems, the driving software quality
requirements are maintainability, reusability, safety, real-timeliness and
demonstrability. Below, we elaborate on the maintainability requirement.
Maintainability. Past haemo dialysis machines produced by our partner
company have proven to be hard to maintain. Each release of software
with bug corrections and function extensions has made the software
[Figure omitted: archetype class diagram showing Device (getValue, setValue)
parameterized by ControllingAlgorithm (calculate), Normaliser (normalise,
denormalise), and AlarmDetectorDevice (activate, reset) performing hazard
surveillance and sending alarm events to AlarmHandler (reset).]
cation architecture when doing the scripting, i.e. change impact analy-
sis.
Device
The system is modeled as a device hierarchy, starting with the entities
close to the hardware as leaves, ending with the complete system as the
root. For every device, there are zero or more sub-devices and a control-
ling algorithm. The device is either a leaf device or a logical device.
ControllingAlgorithm
In the device archetype, information about relations and configuration
is stored. Computation is done in a separate archetype, the Control-
lingAlgorithm, which is used to parameterize Device components.
Normaliser
The normalization archetype is used to convert from and to different
units of measurement.
AlarmDetectorDevice
A specialization of the Device archetype. Components of the Alarm-
DetectorDevice archetype are responsible for monitoring the sub-
devices. When threshold limits are crossed, an AlarmHandler compo-
nent is invoked.
AlarmHandler
The AlarmHandler is the archetype responsible for responding to
alarms by returning the haemo dialysis machine to a safe-state or by
addressing the cause of the alarm.
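The archetypes above can be illustrated with a small class sketch. The class and method names follow the text, while the bodies are placeholder assumptions, not the actual dialysis-machine implementation.

```python
class ControllingAlgorithm:
    """Computation separated from the device; parameterizes Device components."""
    def calculate(self, value):
        return value  # placeholder control computation

class Device:
    """Node in the device hierarchy: a leaf device or a logical device."""
    def __init__(self, algorithm, sub_devices=()):
        self.algorithm = algorithm            # parameterizing algorithm
        self.sub_devices = list(sub_devices)  # zero or more sub-devices
        self._value = 0.0

    def getValue(self):
        return self._value

    def setValue(self, target):
        # delegate the computation to the parameterizing algorithm
        self._value = self.algorithm.calculate(target)

class AlarmHandler:
    """Responds to alarms by returning the machine to a safe state."""
    def __init__(self):
        self.active = False
    def activate(self):
        self.active = True
    def reset(self):
        self.active = False

class AlarmDetectorDevice(Device):
    """Device specialization that monitors sub-devices against a threshold."""
    def __init__(self, algorithm, sub_devices, limit, alarm_handler):
        super().__init__(algorithm, sub_devices)
        self.limit = limit
        self.alarm_handler = alarm_handler

    def check(self):
        for device in self.sub_devices:
            if device.getValue() > self.limit:   # threshold limit crossed
                self.alarm_handler.activate()
```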
Scheduler
The scheduler archetype is responsible for scheduling and invoking the
periodic objects. Only one scheduler element may exist in the applica-
tion, and it handles all periodic objects of the architecture. The sched-
uler accepts registrations from periodic objects and distributes execu-
tion among all registered periodic objects.
Periodic object
A periodic object is responsible for implementing its task using non-
blocking I/O and using only the established time quanta. The tick()
method will run to its completion and invoke the necessary methods to
complete its task.
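A minimal sketch of the scheduler and periodic-object archetypes, assuming a simple round-robin policy (the text does not specify the distribution policy):

```python
class Scheduler:
    """Single scheduler that invokes all registered periodic objects."""
    def __init__(self):
        self._periodic_objects = []

    def register(self, periodic_object):
        # periodic objects register themselves with the one scheduler
        self._periodic_objects.append(periodic_object)

    def run_cycle(self):
        # Each tick() must run to completion within its time quantum,
        # using non-blocking I/O only.
        for obj in self._periodic_objects:
            obj.tick()

class PeriodicObject:
    """Implements its task in a non-blocking tick() that runs to completion."""
    def __init__(self):
        self.ticks = 0

    def tick(self):
        self.ticks += 1  # placeholder for the object's periodic task
```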
[Figure omitted: class diagram showing Target (update, notify), Link
(update, pushconnect, pullconnect, notify) and Observer (update).]
Target
Maintains information that other entities may be dependent on. The
target is responsible for notifying the link when its state changes.
Observer
An observer depends on the data, or changes of data, in the target. It is
updated either when a change occurs or on its own request.
Link
Maintains the dependencies between the target and its observers. Also
holds the information about the type of connection, i.e. push or pull. It
would be possible to extend the connection model with periodic
updates.
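The Target-Link-Observer collaboration described above can be sketched as follows. The method names come from the diagram; the bodies are illustrative assumptions.

```python
class Link:
    """Maintains the dependencies between a target and its observers,
    recording whether each connection is push or pull."""
    def __init__(self, target):
        self.target = target
        self._push, self._pull = [], []

    def pushconnect(self, observer):
        self._push.append(observer)   # observer is notified on change

    def pullconnect(self, observer):
        self._pull.append(observer)   # observer updates on its own request

    def notify(self):
        # called by the target when its state changes
        for observer in self._push:
            observer.update(self.target.state)

class Target:
    """Maintains information that other entities may depend on."""
    def __init__(self):
        self.state = None
        self.link = Link(self)

    def set_state(self, state):
        self.state = state
        self.link.notify()  # target notifies the link when its state changes

class Observer:
    def __init__(self):
        self.seen = None

    def update(self, state):
        self.seen = state

    def pull(self, link):
        self.update(link.target.state)  # own-request (pull) update
```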
conductivity.set(0.2); // in milliMol
temperature.set(37.5); // in Celsius
weightloss.set(2000); // in milliLitre
dialysisFluidFlow.set(200);//in milliLitre/minute
overHeatAlarm.set(37.5,5); // ideal value in
// Celsius and maximum deviation in percent
wait(180); // in minutes
4. Prediction Example
In this section, we will present an example prediction for the architec-
ture presented in section 3. It is presented to illustrate the practical
usage of the method, rather than to give a perfect prediction of this par-
ticular case.
Hardware          C8   Replace blood pumps using revolutions per minute
                       with pumps using actual flow rate (ml/s).          0.087
Com. and I/O      C9   Add function for uploading treatment data to
                       patient's digital journal.                         0.043
Algorithm Change  C10  Change controlling algorithm for concentration
                       of dialysis fluid from PI to PID.                  0.132
                       Sum                                                1.0
In section 2.2, a total of ten scenarios per category was suggested.
For reasons of both space and illustration, however, we will use only a
total of ten scenarios in this example.
Component                 Size (LOC)
HDFTreatment              200
HaemoDialysisMachine      500
ConcentrationDevice       100
TemperatureDevice         100
WeightlossDevice          150
DialysisFluidFlowDevice   150
ConcCtrl                  175
TempCtrl                  30
SetCtrl                   30
AcetatPump                100
ConductivitySensor        100
FluidHeater               100
TempSensor                100
FlowdifferentialPump      100
FluidPrePump              100
FluidPostPump             100
mSTomMol                  20
JouleToPercent            20
PT100toCelsius            40
FrequenceToRevolutions    40
OverHeatAlarm             50
ReversedFlowAlarm         300
FluidAlarmHandler         200
Sum                       2805
mates presented in table 2 are synthesized using the size data from an
early prototype implementation.
Table 3. Impact Analysis per Scenario
C7 see C3 = 350
4.6 Calculation
The prediction is calculated using the formula presented in figure 1:
0.043*60 + 0.043*127.5 + 0.087*350 + 0.174*10 + 0.217*100 +
0.087*190 + 0.087*350 + 0.087*120 + 0.043*290 + 0.132*100 = 145
LOC / Change
Given that we estimate around 20 maintenance tasks for the pre-
dicted period of time, either from first to second release or for the com-
ing year, and assuming that we also have estimated or historical data on
maintenance productivity, we are able to extrapolate the estimate from
this method to a total maintenance effort estimate. We assume a perfec-
tive maintenance productivity similar to the median reported in [12],
i.e. 1.7 LOC/day, which amounts to about 0.2 LOC/hour. Then we get
the following estimate:
20 changes × 145 LOC/change = 2900 LOC
2900 / 0.2 = 14 500 hours of effort
This would represent a medium-sized project of about 6-7 persons
working around 2300 hours per year.
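Reproduced as a short script, the prediction above is the weight of each scenario times its estimated impact in LOC, summed over the profile; the task count and the productivity figure then give the total effort:

```python
# Scenario weights and estimated impacts (LOC) as used in the calculation
# in section 4.6.
weights = [0.043, 0.043, 0.087, 0.174, 0.217,
           0.087, 0.087, 0.087, 0.043, 0.132]
impacts = [60, 127.5, 350, 10, 100, 190, 350, 120, 290, 100]

loc_per_change = sum(w * i for w, i in zip(weights, impacts))
print(round(loc_per_change))     # 145 LOC per change

tasks = 20          # expected maintenance tasks in the predicted period
productivity = 0.2  # LOC per hour, from the median reported in [12]
effort_hours = tasks * loc_per_change / productivity
print(round(effort_hours))       # ~14 500 hours of effort
```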
5. Related work
Architecture assessment is important for achieving the required software
quality attributes. A well-known method is the scenario-based architec-
ture assessment method (SAAM) [15]. The SAAM method of assessing
software architecture is primarily intended for assessing the final version
of the software architecture and involves all stakeholders in the project.
The method we propose differs in that it does not involve all stakehold-
ers, and thus requires fewer resources and less time, but instead provides an
instrument to the software architects that allows them to repeatedly
evaluate architecture during design. We recognize the need for stake-
holder commitment and believe that these two methods should be used
in combination.
In addition, a method based on an ISO standard has been proposed
in [10], which suggests a rigorous metrics approach to the problem of
6. Conclusions
We have presented a method for prediction of maintainability from
software architecture. The method provides a number of benefits: First,
it is practical and has been used during architectural design. Second, its
use provides benefits for more than just the prediction, e.g. improved
requirements understanding. Third, it combines the usage of design
expertise and historical data for validation of scenario profiles. This way
the method more efficiently incorporates the uniqueness of the changes
for the predicted period of time. Fourth, the method is very slim in
terms of effort and produced artifacts. Finally, it is suitable for design
processes that iterate frequently with evaluation in every iteration, e.g.
as in the ARCS method [3].
Weaknesses of the method include its dependency on a representa-
tive maintenance profile and the problem of validating that a profile is
representative. In our future work we aim to address this in a number of
ways. First, we are planning a study investigating how individual knowl-
edge and expertise affects the representativeness of a maintenance pro-
file and thus how the activities concerned with generating maintenance
profiles should be staffed. Second, we will continue to study industrial
maintenance practice and intend to incorporate that knowledge into
the method. Finally, we intend to study the sensitivity
of the method for variation of the input variables, e.g. if the method is
more or less sensitive to the representativeness of the maintenance sce-
nario profile than we currently think, or if the size estimates are more
significant for the results.
References
[1] L. Bass, P. Clements, R. Kazman, Software Architecture in Practice, Addison-
Wesley, 1998.
[2] P. Bengtsson, ‘Towards Maintainability Metrics on Software Architecture: An
Adaptation of Object-Oriented Metrics’, First Nordic Workshop on Software
Architecture (NOSA'98), Ronneby, August 20-21, 1998.
[3] P. Bengtsson, J. Bosch, ‘Scenario Based Software Architecture Reengineering’,
Proceedings of the International Conference on Software Reuse 5 (ICSR5), 1998.
[4] Bohner, S. A, Arnold, R.S., Software Change Impact Analysis, IEEE Computer
Society Press, 1996.
[5] G. Booch, Object-Oriented Analysis and Design with Applications, (2nd edition),
Benjamin/Cummings Publishing Company, 1994.
[6] J. Bosch, P. Molin, ‘Software Architecture Design: Evaluation and
Transformation’, submitted, 1997.
[7] F. Buschmann, R. Meunier, H. Rohnert, M. Stal, Pattern-Oriented Software
Architecture - A System of Patterns, John Wiley & Sons, 1996.
[8] S.R. Chidamber and C.F. Kemerer, ‘Towards a metrics suite for object-oriented
design,’ in proceedings: OOPSLA'91, pp.197-211, 1991.
[9] CEI/IEC 601-2 Safety requirements standard for dialysis machines.
[10] J.C. Dueñas, W.L. de Oliveira, J.A. de la Puente, ‘A Software Architecture
Evaluation Method,’ Proceedings of the Second International ESPRIT ARES
Workshop, Las Palmas, LNCS 1429, Springer Verlag, pp. 148-157, February
1998.
[11] E. Gamma, R. Helm, R. Johnson, J.O. Vlissides, Design Patterns: Elements of
Reusable Object-Oriented Software, Addison-Wesley, 1995.
IV
Abstract
Scenario profiles are used increasingly often for the assessment of quality
attributes during the architectural design of software systems. However,
the definition of scenario profiles is subjective and no data is available
on the effects of individuals on scenario profiles. In this paper we
present the design, analysis and results of a controlled experiment on the
effect of individuals on scenario profiles, so that others can replicate the
experiments on other projects and people. Both scenario profiles created
by individuals and by groups are studied. The findings from the experi-
ment showed that groups with prepared members proved to be the best
method for creating scenario profiles. Unprepared groups did not per-
form better than individuals when creating scenario profiles.
1. Introduction
During recent years, the importance of explicit design of the architec-
ture of software systems has been recognized [2,5,8,13]. The software architec-
ture constrains the quality attributes and the architecture should
support the quality attributes significant for the system. This is impor-
tant since changing the architecture of a system after it has been devel-
oped is generally prohibitively expensive, potentially resulting in a
system that provides the correct functionality, but has unacceptable per-
formance or is very hard to maintain.
Architecture assessment is important to decide the level at which the
software architecture supports various quality attributes. The need for
evaluation and assessment methods has been indicated by [1,2,9,10,
11]. Architecture assessment is not just important to the software archi-
tect, but is relevant for all stakeholders, including the users, the cus-
tomer, project management, external certification institutes, etc.
One can identify three categories of architecture assessment tech-
niques, i.e. scenario-based, simulation and static model-based assess-
ment. However, these techniques all make use of scenario profiles, i.e. a
set of scenarios. For assessing maintainability, for example, a mainte-
nance profile is used, containing a set of change scenarios.
Although some scenario profiles can be defined as ‘complete’, i.e.
covering all scenarios that can possibly occur, most scenario profiles are
‘selected’. Selected scenario profiles contain a representative subset of the
population of all possible scenarios. To use the aforementioned mainte-
nance profile as an example, it is, for most systems, impossible to define
all possible change scenarios, which requires one to define a selection
that should represent the complete population of change scenarios.
Scenario profiles are generally defined by the software architect as
part of architecture assessment. However, defining a selected scenario
profile is subjective and we have no means to completely verify the rep-
resentativeness of the profile. Also, to the best of our knowledge, no
studies have been reported about the effects of individuals on the cre-
ation of scenario profiles, i.e. what is the deviation between profiles cre-
ated by different individuals. The same is the case for groups defining
scenario profiles.
Above, the general justification of the work reported in this paper is
presented. A second reason for conducting this study is that in [5] we
proposed a method for (re)engineering software architectures and archi-
tecture assessment is a key activity. As part of that method, we have
developed a technique for scenario-based assessment of maintainability
[6]. An important part of the technique is the definition of a mainte-
nance scenario profile. Since the accuracy of the assessment technique is
largely dependent on the representativeness of the scenario profile, we
conducted an experiment to determine what the effect of individuals
and groups is on the definition of scenario profiles. Therefore, we will
2. Scenario Profiles
Scenario profiles describe the semantics of software quality factors, e.g.
maintainability or safety, for a particular system. The description is
done in terms of a set of scenarios. Scenarios may be assigned an
associated weight or probability of occurrence within a certain time,
but we do not address that in this paper. To describe, for example, the
maintainability requirement for a system, we list a number of scenarios
that each describe a possible and, preferably, likely change to the system.
The set of scenarios is called a scenario profile. An example of a software
change scenario profile for the software of a haemo dialysis machine is
presented in figure 1.
that the domain experts are aware of, may be overlooked. The latter,
however, remains to be empirically validated and will not directly be
addressed in this experiment.
In the experiment reported in this paper, we address the first situa-
tion, i.e. defining a scenario profile without historical data, since few
prediction methods are available for this situation. Once the source code
of a (similar) system is available, traditional assessment methods exist,
e.g. Li & Henry [14].
tage is the increased cost, at least when compared to the individual case,
but possibly also when compared to the unprepared group alternative.
The experiment reported in this paper studies the difference in the
produced results from these three methods and compares the methods.
3. The Experiment
3.2 Hypotheses
We state the following null-hypotheses:
score(P_i) = Σ_{x=1}^{n_{P_i}} f(m(s_x))
3.6 Operation
The experiment is executed according to the following steps: (schedule
in figure 2)
Ranking Scheme
Some problems exist with this method of ranking. First, the reference
profile will be relative to the profiles since it is based on them. Second,
there might be one single brilliant person that has realized a unique sce-
nario that is really important, but since only one profile included it, its
impact will be strongly reduced.
The first problem might not be a problem, if we accept the assump-
tion that the number of profiles containing a particular scenario is an
acceptable indicator of its relevance. If there are significant differ-
ences between the individually created profiles and the group profiles,
the differences will be normalized in the reference profile. Given that
the individually prepared profiles are more diverse than the profiles pre-
pared by groups, those profiles will render on average lower rank scores,
while the group profiles will render on average higher rank scores. If
the results of the experiment are in favor of the null-hypothesis, we
will not be able to make any distinction between the ranking scores of
the group prepared profiles and the individually prepared profiles.
The second problem can be dealt with in two ways. First, we can
make use of the Delphi method or the wideband Delphi [7]. In that case
we would simply synthesize the reference profile, distribute it and have
another go at the profiles and get more refined versions of the profile.
The second approach is to make use of the weightings of each scenario
and make the assumption that the relevance of a scenario is not only
indicated by the number of profiles that include it, but also the weight it
is assigned in these profiles. The implication of this is that a scenario
that is included in all 20 of the profiles but has a low average weighting,
is relevant but not significant for the outcome. However, a scenario
included in only one or a few profiles is deemed less relevant by the gen-
eral opinion. If it has a high average weighting, those few consider it
very important for the outcome. Now, we can incorporate the average
weighting in the ranking method by defining the ranking score for a
profile as the sum of rank products (the frequency times the average
weighting) of its scenarios. This would decrease the impact of a com-
monly occurring scenario with little impact and strengthen the less fre-
quent scenarios with higher impact.
Our conclusion, however, is that the ranking scheme used in this
paper does not suffer from any major threats to the validity of the con-
clusions that we base on it.
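The rank-product variant discussed above can be sketched as follows; the scenario data below is invented for illustration, not taken from the experiment:

```python
def rank_score(profile, frequency, avg_weight):
    """Sum of rank products (frequency x average weighting) over a profile."""
    return sum(frequency.get(s, 0) * avg_weight.get(s, 0.0) for s in profile)

# Illustrative reference data: how many profiles contained each scenario,
# and the average weighting the scenario received in those profiles.
frequency  = {"new DBMS": 9, "upgrade of OS": 6, "rare scenario": 1}
avg_weight = {"new DBMS": 0.10, "upgrade of OS": 0.05, "rare scenario": 0.40}

# A frequent, lightly weighted scenario contributes 9 * 0.10 = 0.90, while
# a rare but heavily weighted one still contributes 1 * 0.40 = 0.40.
score = rank_score(["new DBMS", "rare scenario"], frequency, avg_weight)
print(round(score, 2))  # 1.3
```

This weakens commonly occurring scenarios with little impact and strengthens less frequent scenarios with higher impact, as argued in the text.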
establish which scenarios are equivalent, i.e. coding the data. The coding
is done by taking the list of scenarios and checking, for every scenario,
whether a previous scenario describes a semantically equivalent situa-
tion. This is done using the database table, and we establish a reference
profile using a frequency for each unique scenario.
The possible threat is that the reference profile reflects the knowledge
of the person coding the scenarios of all the profiles, instead of the con-
sensus among the different profiles. To reduce the impact of this threat,
the coded list has been inspected by an additional person. Any deviating
interpretations have been discussed and the coding has been updated
according to the consensus after that discussion.
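One way to sketch the coding step, under the assumption that semantic equivalence is captured by a mapping from each scenario to a canonical description (the mapping and data here are invented):

```python
from collections import Counter

def reference_profile(profiles, canonical):
    """Frequency, per unique scenario, of the profiles that include it."""
    counts = Counter()
    for profile in profiles:
        # Map each scenario to its canonical description, then deduplicate
        # so a profile counts at most once per unique scenario.
        counts.update({canonical.get(s, s) for s in profile})
    return counts

canonical = {"switch DBMS": "new DBMS", "replace database": "new DBMS"}
profiles = [["switch DBMS", "new OS"], ["replace database"], ["new DBMS"]]
print(reference_profile(profiles, canonical)["new DBMS"])  # 3
```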
4.1 Mortality
The design of the experiment requires the participation of 12 persons
for a full day. For the experiment we had managed to gather in excess of
12 volunteer students, with the promise of a free lunch during the
experiment and a nice à la carte dinner after participating in the experi-
ment. Unfortunately, some students did not show up for the experiment
without prior notification. As a result, only nine persons participated in
the experiment. Instead of aborting the experiment, we chose to keep
the groups of three and to proceed with only three groups, instead of the
planned four. As a consequence, the data from the experiment is less
complete than intended (see figures 5 and 6). Nevertheless, we feel that
the collected data is useful and allows us to validate our hypotheses and
make some interesting observations.
Once the experiment had started, we had no mortality problems, i.e.
all the participants completed their tasks and we collected the data
according to plan.
Description frequency
new DBMS 9
new operating system on server 7
new version of TOR 7
introduction of smart card hardware 5
additional search capabilities 5
pureWeb (cgi) clients 4
support for serials 4
new communication protocol 4
user interface overhaul 4
new java technology 4
Figure 3. Alpha Reference Profile Top 10
Description frequency
remote administration 7
upgrade of database 6
upgrade of OS 6
real-time presentation of values 4
change of System 1000 physical components (3-4 pcs.) 4
rule-based problem-learning system 3
change from metric system to american standard 3
new user levels 3
Figure 4. Beta Reference Profile Top 8
In the table in figure 6, the profile scores for project Beta are pre-
sented. The prepared group, C in this case, achieves a very high score,
but the unprepared groups, A and B, score much lower. This is interest-
ing since the group members are the same for both projects.
tion over the scores and the number of cases is presented. The average
score for prepared groups is substantially higher than the score for
unprepared groups or individuals. Secondly, the standard deviation is
the largest for individuals, i.e. 13, but only 6 for unprepared groups and
3 for prepared groups. Finally, it is interesting to note that the standard
deviation for all profiles is larger than for any of the treatments, which
indicates that the profiles for each type of treatment are more related to
each other than to profiles for other treatment types.
5. Related Work
Architecture assessment is important for achieving the required software
quality attributes. Several authors propose and advocate scenario-based
techniques for architecture assessment. A well-known method is the sce-
nario-based architecture assessment method (SAAM) [12]. SAAM
assesses the architecture after the architecture design and incorporates all
stakeholders of the system. Other methods include the architectural
trade-off analysis method (ATA) [11] that uses scenarios to analyze and
bring out trade off points in the architecture. The 4+1 View method
[13] uses scenarios in its fifth view to verify the resulting architecture.
To this point, no studies have been reported on the creation of scenario
profiles for architecture assessment.
In [3] a framework for experimentation in software engineering is
presented along with a survey of experiments conducted up to 1986. In
our work with the experiment design we have used this framework to
ensure an as robust design as possible.
6. Conclusions
During recent years, the importance of explicit design of the architec-
ture of software systems has been recognized. This is because the software
architecture constrains the quality attributes of the system. Conse-
quently, architecture assessment is important to decide how well the
software architecture supports various quality attributes. One can iden-
tify three categories of architecture assessment techniques, i.e. scenario-,
simulation- and static model-based assessment. However, these tech-
niques all make use of scenario profiles. Although some scenario pro-
files can be defined as ‘complete’, i.e. covering all scenarios that can
possibly occur, most scenario profiles are ‘selected’. Selected scenario pro-
files contain a representative subset of the population of all possible sce-
narios.
Scenario profiles are generally defined as a first step during architec-
ture assessment. However, defining a selected scenario profile is subjec-
tive and we have no means to decide upon the representativeness of the
profile. Also, to the best of our knowledge, no studies are available
about the effects of individuals on the definition of scenario profiles, i.e.
what is the deviation between profiles defined by different individuals.
The same is the case for groups defining scenario profiles.
In this paper we have presented the design and results of an experi-
ment on three methods for creating scenario profiles. The methods, or
treatments, for creating scenario profiles that were examined are (1) an
individual prepares a profile, (2) a group with unprepared members pre-
pares a profile and (3) a group with members that, in advance, created
their individual profiles as preparation.
We have also stated a number of hypotheses with the corresponding
null-hypotheses and, although the experimental data do not allow us to
dismiss each of our null-hypotheses, we find support for the following
hypotheses:
Acknowledgments
We would like to thank the students who participated in the experi-
ment.
References
[1] G. Abowd, L. Bass, P. Clements, R. Kazman, L. Northrop, A. Moormann
Zaremski, Recommended Best Industrial Practice for Software Architecture Evaluation,
CMU/SEI-96-TR-025, 1997.
[2] Basili, V.R., Selby, R.W., Hutchens, D.H., “Experimentation in Software
Engineering”, IEEE Transactions on Software Engineering, vol. se-12, no. 7, July,
1986
[3] L. Bass, P. Clements, R. Kazman, Software Architecture in Practice, Addison
Wesley, 1998.