Software Engineering – Processes and Tools

Introduction
Software engineering traditionally plays an important role among the differ-
ent research directions located in the Software Park Hagenberg, as it provides
the fundamental concepts, methods and tools for producing reliable and high
quality software. Software engineering, a relatively young profession and
engineering discipline, is not limited to the creation of simple software
programs; rather, it encompasses the complex and often costly lifecycle of
software and derived products. Some efforts have been made to
define software engineering as a profession and to outline the boundaries of
this emerging field of research [PP04, Som04]. Several definitions of
the term software engineering have appeared since it was first mentioned at the NATO
Software Engineering Conference1 in 1968. A good and often-cited example of an
early definition is the following:
The practical application of scientific knowledge in the design and construction of
computer programs and the associated documentation required to develop, operate,
and maintain them. [Boe76]
1 Proceedings of the famous 1968 and 1969 NATO Software Engineering Workshops are
available at http://homepages.cs.ncl.ac.uk/brian.randell/NATO/index.html
Figure 1 depicts the four main levels of models involved in software process
engineering and identifies the corresponding software process engineering ac-
tivities associated with model instantiation.
[Figure 1: A software process meta-model is instantiated through process modeling into a software process model; the model is instantiated through process instantiation into a software process; executing the process yields the software product.]
further, less quantifiable benefits, like better teachability of the process and
easier familiarization of new employees with an organization’s practices and
procedures, increased independence of specific persons and the establishment
of a general basis for professionalism and credibility.
Motivated by the overall goal of enhancing performance, improving an or-
ganization’s software process has become a central topic in software process
engineering. Research shows that improving an organization’s process quality
can lead to substantial gains in productivity, early defect detection, time to
market, and quality, which in total add up to significant returns on the invest-
ment in software process improvement. Further identifiable benefits include
improved cost performance, improved estimates and on-schedule deliveries,
and increased customer as well as employee satisfaction [HCR+94].
Besides the process engineering activities built into software best-practice
process models as dedicated processes or activities, two major types of models
for software process evaluation and improvement can be distinguished with
regard to the scale of the intended improvement activity [Kin01]:
• Software process improvement action life cycle models
• Software process improvement program life cycle models.
Software process improvement action life cycle models are primarily meant for
guiding a single improvement action and generally fail to give the necessary
guidelines for a full software process improvement program. As they do not
address improvement program-level issues, they are typically kept relatively
simple. Examples of such models are:
• the Plan-Do-Check-Act (PDCA) model [She31]
• the Process Improvement Paradigm-cycle [Dio93] (see Figure 4).
[Figure 4: Process improvement paradigm cycle. Process stabilization (document, disseminate, institutionalize) feeds projects in which the process is instrumented, measured, analyzed, automated, confirmed, and adjusted, within interlocking loops of process change and process control.]
Models of this type are primarily intended for software process staff, pro-
cess owners, and non-process professionals having a role in a software process
improvement action.
Software process improvement program life cycle models, on the other hand,
put more emphasis on aspects such as the initiation, management, and coordina-
tion of the overall improvement program and in particular on the coordination
of individual process improvement actions. Examples of such models are:
• the IDEAL (Initiating-Diagnosing-Establishing-Acting-Learning) cycle
[McF96] and
• the ISO 15504-7 cycle [ISO98] (see Figure 5).
These models are mainly intended for people who have been entrusted with the
management of a large-scale process initiative. They are important for staging
and managing a successful improvement program and represent a major step
towards an institutionalized software process engineering system.
[Figure 5: The ISO 15504-7 software process improvement cycle, triggered by the organization's needs and software process improvement requests: (1) examine organization's needs, (2) initiate process improvement, (3) prepare and conduct process assessment, (4) analyze results and derive action plan, (5) implement improvements, (6) confirm improvements, (7) sustain improvement gains, (8) monitor performance.]
The goal of the project GDES2-Reuse, which we carried out together with
Siemens Corporate Technology, was the development of an assessment-based
methodology for evaluating an industrial engineering organization's reuse
practices and identifying and exploiting its reuse potential.
While software engineering deals with software only, industrial engineer-
ing has to enable the parallel development of different engineering disciplines,
like mechanical engineering, electrical engineering, and communications and
control system engineering. Industrial engineering projects range from rather
simple and small projects (e.g. semi-automated assembly line) to large and
highly complex projects (e.g. nuclear power plants).
2 Globally Distributed Engineering and Services
[Figure 6: Conceptual model for product engineering and lifecycle management integration, spanning three dimensions. Management (x): software configuration management, software engineering management, software process management, software quality management. Engineering (y): software requirements, software design, software construction, software testing, software maintenance. Concepts (z): shared services, collaboration, workflow support, measurement, version control, traceability.]
Like software engineer-
ing, industrial engineering today has to cope with increasing demands for
more flexible, more reliable, more productive, faster, and cost optimized plan-
ning and realization of industrial solutions. Simultaneously, industrial engi-
neering has to deal with more demanding customer requirements, increased
complexity of solutions and harder competition in a global market. Increasing
reuse has therefore been identified as one key element for increasing quality
and productivity in industrial engineering (see [LBB+ 05]). Reuse is one of the
most basic techniques in industrial engineering and pervades all engineering
phases and all engineering artifacts. Although recognized as a fundamental
and indispensable approach, it is hardly systematized and often only applied
in an ad hoc manner. As a consequence the reuse potential in industrial en-
gineering organizations is rarely exploited and in most cases not even known.
On the other hand, reuse is well understood in the domain of software en-
gineering (see e.g. [JGJ97, Sam01, MMYA01]), including the distinction between
bottom-up reuse concepts like component-oriented reuse and top-down ap-
proaches like copy-and-modify, reuse of prefabricates (e.g. application frame-
works), the application of platforms, or the system-family or software product
line approach.
[Figure: Assessment model structure: base practices, each characterized by a purpose, a reuse result, and input/output artifacts.]
and aggregated towards the phases of the PRM on the one side and towards
the maturity stages of the RMM on the other side.
A major value of the work performed within GDES-Reuse lies in the inte-
gration and systematisation of best practices from a series of reuse approaches
in a single model and in the integration of a “staged” reuse maturity model
with a “continuous” process model. The focus of the work was on providing
a best-practice framework for the strategic design of engineering processes, in
the sense of which paradigm, development approach, or combination thereof
to use. The approach chosen to resolve this problem is compliant with estab-
lished process assessment and improvement approaches like CMMI [CMM06]
or SPICE [ISO03], but much more focused with respect to modelling depth,
and is thus a complement to those models rather than a substitute for them.
Furthermore, we regard the project's results as re-transformable and applicable
to the domain of software engineering, since the various reuse paradigms and
approaches developed in that field served as a starting point for model
development. Moreover, the engineering of control and
communication systems, as one of the core industrial engineering disciplines,
typically includes software engineering as a major sub-discipline.
The methodology for the evaluation of an actual reuse situation has so far
been applied in two real world evaluation projects (see [SPV07]).
Under the umbrella of the project SISB3, together with Siemens Corporate
Technology, we carried out research into methods for the evaluation and de-
velopment of engineering strategies for the industrial solutions business. In
this section we highlight results from this research that are relevant for the
area of process engineering. These main results are:
• an understanding of the role of engineering strategies in the overall strategy
development context of an organization,
• the development of a meta-model for describing engineering strategies,
• the identification of the engineering strategy objects relevant for the in-
dustrial solutions business, and
• the development of a methodology to support the evaluation and develop-
ment of engineering strategies.
In order to understand strategy development at the engineering level we have
to relate engineering strategies to the overall strategy development efforts in
an organization. Typically a distinction is made between the corporate strat-
egy, various division strategies and various functional strategies [VRM03].
While a corporate strategy deals with determining which market segments
should be addressed with which resources, etc., a division strategy refines
the corporate strategy by addressing the major question of how to develop a
long-term unique selling proposition compared to the market competitors
and how to develop a unique product or service. Functional strategies, on the
other hand, define the principles for the functional areas of a division in accor-
dance with the division strategy and therefore refine the division strategy in
the distinct functional areas, like marketing, finance, human resources, engi-
neering, or software development. Depending on the size and structure of a
company there might be no explicit distinction between corporate strategies
and division strategies, but nevertheless they are part of the relevant context
for the development of functional strategies.
Figure 8 depicts the conceptual framework (meta-model) developed for the
description of functional strategies. The core elements of such a strategy are
strategic goals, strategy objects and strategic statements. The strategic goals
formulated in the engineering strategy are refinements of strategic goals on
at the corporate or divisional level, mapped onto the functional area.
A strategy object is a topic (e.g. process management) that refines one or
more strategic goals. As the strategy objects—and therefore also the strate-
gic statements—are targeted towards the functional strategic goals it is also
assured that the divisional or corporate goals are not violated. Although not
necessary on the conceptual level, the grouping of strategy objects facilitates
understanding of strategy objects on a more abstract level and also allows
focusing of the strategy assessment or development process. The approach for
3 Systematic Improvement of the Solutions Business
[Figure 8: Meta-model for describing functional strategies: a strategic goal is refined by one or more strategy objects; each strategy object contributes to a strategic goal and is described by one or more strategic statements; strategy objects are grouped by grouping dimensions such as key area, priority, and target group.]
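To make these relationships concrete, the following is a rough sketch in Java of the meta-model's core elements; the class and field names are hypothetical illustrations introduced here, not part of the SISB project results.

    // Illustrative sketch only: hypothetical Java types mirroring the
    // meta-model of Figure 8.
    import java.util.ArrayList;
    import java.util.List;

    class StrategicGoal {
        final String name;
        final List<StrategyObject> refinedBy = new ArrayList<>(); // 1..* objects
        StrategicGoal(String name) { this.name = name; }
    }

    class StrategyObject {
        final String topic;       // e.g. "process management"
        final String keyArea;     // grouping dimensions: key area,
        final String targetGroup; // target group, ...
        final int priority;       // ... and priority
        final List<StrategicStatement> describedBy = new ArrayList<>(); // 1..*
        StrategyObject(String topic, String keyArea, String targetGroup, int priority) {
            this.topic = topic; this.keyArea = keyArea;
            this.targetGroup = targetGroup; this.priority = priority;
        }
    }

    class StrategicStatement {
        final String text;
        StrategicStatement(String text) { this.text = text; }
    }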
American history. USA Today reported: “FirstEnergy, the Ohio energy com-
pany . . . cited faulty computer software as a key factor in cascading problems
that led up to the massive outage.” (USA Today5, 19 Nov 2003).
These and similar reports are only the tip of the iceberg. A study com-
missioned by the National Institute of Standards and Technology found that
software bugs cost the U.S. economy about $59.5 billion per year [Tas02].
The same study indicates that more than a third of these costs (about $22.2
billion) could be eliminated by improving software testing.
The massive economic impact of software quality makes it a foremost con-
cern for any software development endeavor. Software quality is in the focus
of any software project, from the developer’s perspective as much as from
the customer’s. At the same time, the development of concepts, methods,
and tools for engineering software quality involves new demanding challenges
for researchers.
In this chapter we give an overview of research trends and practical impli-
cations in software quality engineering illustrated with examples from past
and present research results achieved at the SCCH. Since its foundation,
SCCH has been active in engineering of high quality software solutions and
in developing concepts, methods, and tools for quality engineering. A num-
ber of contributions have been made to the following areas, which are further
elaborated in the subsequent subsections.
• Concepts of quality in software engineering and related disciplines.
• Economic perspectives of software quality.
• Development of tool support for software testing.
• Monitoring and predicting software quality.
Software quality has been an issue since the early days of computer program-
ming [WV02]. Accordingly, a large number of definitions of software quality
have emerged. Some of them have been standardized [IEE90]6, but most of
them are perceived as imprecise and overly abstract [Voa08]. To some extent,
this perception stems from the different viewpoints of quality inherent in
5 http://www.usatoday.com/tech/news/2003-11-19-blackout-bug x.htm
6 The IEEE Standard 610.12-1990 defines software quality as “(1) The degree to which
a system, component, or process meets specified requirements. (2) The degree to which a
system, component, or process meets customer or user needs or expectations.”
Quality Models
Quality must be built into a software product during development and main-
tenance. Software quality engineering [Tia05] ensures that the process of in-
corporating quality into the software is done correctly and adequately, and
that the resulting software product meets the defined quality requirements.
The measures applied in engineering of software quality are constructive
or analytical in their nature. Constructive measures are technical (e.g., appli-
cation of adequate programming languages and tool support), organizational
(e.g., enactment of standardized procedures and workflows), and personnel
measures (e.g., selection and training of personnel) to ensure quality a pri-
ori. These measures aim to prevent defects through eliminating the source
of the error or blocking erroneous human actions. Analytical measures are
used to assess the actual quality of a work product by dynamic checks (e.g.,
testing and simulation) and static checks (e.g., inspection and review). These
measures aim to improve quality through fault detection and removal.
Software testing is one of the most important and most widely practiced
measures of software quality engineering [LRFL07] used to validate that cus-
tomers have specified the right software solution and to verify that developers
have built the solution right. It is a natural approach to understand a software
system’s behavior by executing representative scenarios within the intended
context of use with the aim to gather information about the software system.
More specifically, software testing means executing a software system with
defined input and observing the produced output, which is compared with the
expected output to determine pass or fail of the test. Accordingly, the IEEE
Standard 610.12-1990 defines testing as “the process of operating a system or
component under specified conditions, observing or recording the results, and
making an evaluation of some aspect of the system or component” [IEE90].
Compared to other approaches to engineer software quality, testing pro-
vides several advantages, such as the relative ease with which many of the
testing activities can be performed, the possibility to execute the program
in its expected environment, the direct link of failed tests to the underlying
defect, or that testing reduces the risk of failures of the software system. In
contrast, however, software testing is a costly measure due to the large num-
ber of execution scenarios required to gather a representative sample of the
real-world usage of the software system. In fact, the total number of possi-
ble execution scenarios for any non-trivial software system is so high that
Testing tools are frequently associated with tools for automating the execu-
tion of test cases. Test execution, however, is only one activity in the software
testing process, which also involves test planning, test analysis and design,
test implementation, evaluating exit criteria and reporting, plus the parallel
activity of test management. All of these activities are amenable to automa-
tion and benefit from tool support.
In the following we describe a tool-based approach specifically for test
management and present some results from the research project TEMPPO
(Test Execution Managing Planning and rePorting Organizer) conducted by
Siemens Austria and SCCH. The project results are an excellent example
of the sustained benefit that can be achieved by linking science and indus-
try. The project fostered a fruitful knowledge exchange in both directions.
Requirements for managing testing in step with actual practice in large soft-
7 http://www.pse.siemens.at/SiTEMPPO
Test-driven development (TDD) [Bec02] has been one of the outstanding in-
novations of recent years in the field of software testing. In short, the
premise behind TDD is that software is developed in small increments fol-
lowing a test-develop-refactor cycle also known as red-green-refactor pattern
[Bec02].
In the first step (test), tests are implemented that specify the expected
behavior before any code is written. Naturally, as the software to be tested
does not yet exist, these tests fail – often visualized by a red progress bar.
Thereby, however, the tests constitute a set of precisely measurable objectives
for the development of the code in the next step. In the second step (develop),
the goal is to write the code necessary to make the tests pass – visualized
by a green progress bar. Only as much code as necessary to make the bar
turn from red to green should be written and as quickly as possible. Even the
intended design of the software system may be violated if necessary. In the
third step (refactor), any problematic code constructs, design violations, and
duplicate code blocks are refactored. Thereby, the code changes performed
in the course of refactoring are safeguarded by the existing tests. As soon
as a change introduces a defect breaking the achieved behavior, a test will fail
and indicate the defect. After the refactoring has been completed, the cycle
is repeated until all planned requirements have finally been implemented.
Amplified by the paradigm shift towards agile processes and the inception
of extreme programming [BA04], TDD has literally infected the developers
with unit testing [BG00]. This breakthrough is also attributed to the frame-
work JUnit8 , the reference implementation of the xUnit family [Ham04] in
Java. The framework provides the basic functionality to swiftly implement
unit tests in the same programming language as the tested code, to combine
related tests to test suites, and to easily run the tests or test suites from the
development environment including a visualization of the test results.
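To illustrate the cycle, here is a minimal red-green sketch using JUnit 4; the Account example is invented for illustration and is not taken from the projects described in this chapter.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AccountTest {
        // Red: this test is written first and fails until Account exists
        // and deposit() is implemented.
        @Test
        public void depositIncreasesBalance() {
            Account account = new Account();
            account.deposit(100);
            assertEquals(100, account.getBalance());
        }
    }

    // Green: just enough production code to make the bar turn green;
    // problematic constructs would then be cleaned up in the refactor step.
    class Account {
        private int balance;
        public void deposit(int amount) { balance += amount; }
        public int getBalance() { return balance; }
    }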
TDD has been successfully applied in the development of server and desk-
top applications, e.g., business software or Web-based systems. The develop-
ment of embedded software systems would also benefit from TDD [Gre07].
However, it has not been widely used in this domain due to a number of
unique challenges that make automated unit testing of embedded software sys-
tems difficult.
• Typical programming languages employed in embedded software develop-
ment have been designed for runtime and memory efficiency and, thus,
show limited support for writing testable code. Examples are limitations
in error and exception handling, lack of comprehensive meta-information,
rigid binding at compile-time, and little encouragement to clearly separate
interfaces and implementation.
• The limiting factor is usually the underlying hardware with its harsh re-
source and timing constraints, which force developers to design for run-
time and memory efficiency instead of for testability. When the code is tuned
to produce the smallest possible memory footprint, debugging aids as well
as additional interfaces to control and to introspect the state of the soft-
ware system are intentionally removed.
• Cross-platform development with a separation between host development
environments and target execution platforms is a typical approach in build-
ing embedded software systems. The development tools run in a host en-
vironment, usually including a hardware simulator. Larger increments are
cross-compiled and tested on the actual target system once it becomes
available.
• In addition, unit testing is concerned with a number of domain-specific
issues causing defects that demand domain-specific test methods and tool
support. In embedded software development, these specific issues include,
for example, real-time requirements, timing problems, and asynchronous
execution due to multi-threaded code or decentralized systems.
The goal of the project was to tackle these challenges and to introduce the
concept of TDD to the development of embedded software for mobile and
handheld devices. Together with the partner company we developed a frame-
work for automated unit testing with the aim to resemble the design of the
8 http://www.junit.org
[Figure 12: Workflow for unit testing in the host development environment as well as on the target device: tests and code are developed, executed, and their results analyzed on the host; remote test execution on the target environment is triggered via TCP/IP or a serial connection, and the target reports the test results back to the host.]
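As a toy illustration of this round trip, the following sketch assumes a plain TCP connection and a line-based protocol invented for this example; it is not the actual framework developed in the project, and the address, port, and commands are placeholders.

    import java.io.*;
    import java.net.Socket;

    public class HostTestRunner {
        public static void main(String[] args) throws IOException {
            // Trigger remote test execution on the target
            // (IP, port, and command are illustrative only).
            try (Socket target = new Socket("192.168.0.42", 9000);
                 PrintWriter out = new PrintWriter(target.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(target.getInputStream()))) {
                out.println("RUN testsuite");             // trigger remote execution
                String line;
                while ((line = in.readLine()) != null) {  // target reports results
                    System.out.println("target> " + line); // analyze on the host
                }
            }
        }
    }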
reduces the limited budget and, thus, the number of affordable manual tests.
The overly simplistic cost models for automated testing frequently found in
the literature tend to neglect this trade-off and fail to provide the necessary
guidance in selecting an optimally balanced testing strategy taking the value
contribution of testing into account [Ram04].
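For contrast, here is a sketch of the kind of simplistic break-even calculation meant here, with purely illustrative numbers: automation is deemed worthwhile once its fixed setup cost plus per-run cost undercuts repeated manual execution. It treats manual and automated runs as interchangeable, which is exactly the assumption criticized above.

    public class BreakEven {
        public static void main(String[] args) {
            double manualPerRun = 5.0;     // effort per manual execution
            double automationSetup = 60.0; // one-time effort to automate
            double autoPerRun = 0.5;       // effort per automated run

            // Smallest number of runs n with setup + n*auto < n*manual:
            int n = (int) Math.ceil(automationSetup / (manualPerRun - autoPerRun));
            System.out.println("Automation breaks even after " + n + " runs.");
            // What this model ignores: manual and automated tests reveal
            // different kinds of defects, so they cannot simply be traded off.
        }
    }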
The problem is made worse by the fact that manual and automated test-
ing cannot be simply traded against each other based on pure cost consid-
erations. Manual testing and automated testing have largely different defect
detection capabilities in terms of what types of defects they are able to reveal.
Automated testing targets regression problems, i.e. defects in modified but
previously working functionality, while manual testing is suitable for explor-
ing new ways in how to break (new) functionality. Hence, for effective manual
testing detailed knowledge about the tested software system and experience
in exploring a software system with the aim to find defects play an important
role [BR08]. In [RW06] we propose an economic model for balancing manual
and automated software testing and we describe influence factors to facili-
tate comprehension and discussion necessary to define a value-based testing
strategy.
Frequently, technical constraints influence the feasibility of automation ap-
proaches in software testing. In the project Aragon, a visual GUI editor as a
part of an integrated development environment for mobile and multimedia de-
vices has been developed [PPRL07]. Testing the highly interactive graphical
user interface of the editor, which comprises slightly more than 50 percent of
the application’s total code, involved a number challenges inherent in testing
graphical user interfaces such as specifying exactly what the expected results
are, testing of the aesthetic appearance, or coping with frequent changes.
While we found a manual, exploratory approach to be the preferable way of
testing the GUI, we also identified a broad range of different tasks that can
effectively be automated. As a consequence we set up the initiative TestSheets,
utilizing Eclipse cheat sheets for implementing partially automated test plans
embedded directly in the runtime environment of the tested product [PR08].
This integration enabled active elements in test plans to access the product
under test, e.g., for setting up the test environment, and allowed tapping into
the product's log output. Test plans were managed and deployed together
with the product under test.
We found that partial test automation is an effective way to blend manual
and automated testing amplifying the benefit of each approach. It is primar-
ily targeted at cumbersome and error-prone tasks like setting up the test
environment or collecting test results. Thereby, partial automation enhances
the capability of human testers, first, because it reduces the amount of low-
level routine work and, second, because it provides room for exploring the
product under test from various viewpoints including aspects like usability,
attractiveness and responsiveness, which are typically weakly addressed by
automated tests.
Software Cockpits
• A user-centered design that supports the users’ daily activities keeps the
administrative overhead at a minimum and is in line with personal needs
for feedback and transparency.
• A comprehensive overview of all relevant information is presented as a
set of simple graphics on a single screen as the lynchpin of the software
cockpit. It can be personalized in terms of user-specific views and filters.
• The presented information (i.e. in-process metrics from software develop-
ment and quality engineering) is easy to interpret and can be traced back
to the individual activities in software development. Abstract metrics and
high-level indicators have been avoided. This encourages the users to re-
flect on how their work affects the overall performance of the project and
the quality status of the product.
• In addition, the interactive analysis of the measurement data allows drilling
down from aggregated measurements to individual data records and in-
place exploration is supported by mechanisms such as stacked charting of
data along different dimensions, tooltips showing details about the data
points, and filters to zoom in on the most recent information.
Data about the points in time where defects are introduced, reported, and
resolved, i.e. the lifecycle of defects [Ram08], is gathered in the data ware-
house and can be used to construct the history and current state of defective
modules of a software system. The data about the software system’s past
states can also serve as the basis for predicting future states of a software
system, indicating which modules are likely to contain defects in upcoming
versions.
The rationale for identifying defect-prone modules prior to analytical qual-
ity assurance (QA) measures such as inspection or testing has been sum-
marized by Nagappan et al.: “During software production, software quality
assurance consumes a considerable effort. To raise the effectiveness and effi-
ciency of this effort, it is wise to direct it to those which need it most. We
therefore need to identify those pieces of software which are the most likely
to fail—and therefore require most of our attention.” [NBZ06] As the time
and effort for applying software quality assurance measures is usually limited
due to economic constraints and as complete testing is considered impossible
for any non-trivial software system [KFN99], the information about which
modules are defect-prone can be a valuable aid for defining a focused test
and quality engineering strategy.
The feasibility and practical value of defect prediction has been investi-
gated in an empirical study we conducted as part of the research project
Andromeda, where we applied defect prediction for a large industrial soft-
ware system [RWS+ 09]. The studied software system encompasses about 700
KLOC of C++ code in about 160 modules. Before a new version of the system
enters the testing phase, up to almost 60 percent of these modules contain
defects. Our objective was to classify the modules of a new version as poten-
tially defective or defect-free in order to prioritize the modules for testing. We
repeated defect prediction for six consecutive versions of the software system
and compared the prediction results with the actual results obtained from
system and integration testing.
The defect prediction models [KL05] we used in the study have been based
on the data retrieved from previous versions of the software system. For every
module of the software system the data included more than 100 metrics like
the size and complexity of the module, the number of dependencies to other
modules, or the number of changes applied to the module over the last weeks
and months. Data mining techniques such as fuzzy logic-based decision trees,
neural networks, and support vector machines were used to construct the
prediction models. Then, the models were parameterized with the data from
the new versions to predict whether a module is defective or defect-free.
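To make this workflow tangible, the following is a deliberately simplified Java sketch: a single-threshold classifier on one metric stands in for the fuzzy decision trees, neural networks, and support vector machines actually used in the study, and all module names and data are invented.

    import java.util.*;

    class Module {
        String name; double changeCount; boolean defective;
        Module(String name, double changeCount, boolean defective) {
            this.name = name; this.changeCount = changeCount; this.defective = defective;
        }
    }

    public class DefectPrediction {
        // Pick the threshold on one metric that best separates the
        // defective modules of the previous version (a "decision stump").
        static double train(List<Module> previousVersion) {
            double bestThreshold = 0, bestAccuracy = -1;
            for (Module m : previousVersion) {
                double t = m.changeCount;
                long correct = previousVersion.stream()
                        .filter(x -> (x.changeCount >= t) == x.defective).count();
                double acc = (double) correct / previousVersion.size();
                if (acc > bestAccuracy) { bestAccuracy = acc; bestThreshold = t; }
            }
            return bestThreshold;
        }

        public static void main(String[] args) {
            List<Module> previous = Arrays.asList(
                    new Module("parser", 25, true), new Module("ui", 3, false),
                    new Module("net", 17, true), new Module("util", 1, false));
            double threshold = train(previous);

            // New version: classify and test predicted-defective modules first.
            List<Module> current = new ArrayList<>(Arrays.asList(
                    new Module("parser", 30, false), new Module("ui", 2, false),
                    new Module("net", 5, false), new Module("util", 22, false)));
            current.sort(Comparator.comparingDouble((Module m) -> m.changeCount).reversed());
            for (Module m : current)
                System.out.printf("%s -> %s%n", m.name,
                        m.changeCount >= threshold ? "test first" : "defer");
        }
    }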
Preliminary results showed that our predictions achieve an accuracy of
78 percent (highest) to 67 percent (lowest). On average, 72 percent of the modules
were accurately classified. Hence, in case testing has to be stopped early and
some modules have to be left untested, a test strategy prioritizing the mod-
ules based on the predicted defectiveness is up to 43 percent more effective
than a strategy using a random prioritization. Even with the lowest pre-
diction accuracy the gain can be up to 29 percent compared to a random
testing strategy when only 60 percent of all modules are tested. The gain
over time is illustrated in Figure 15. The testing strategy based on average
defect prediction results (blue) is compared to the hypothetical best case—a
strategy ordering the modules to be tested according to their actual defec-
tiveness (green)—and the worst case—a strategy ordering the modules purely
random (red).
The depicted improvements in testing achieved by means of defect pre-
diction are intermediate results from ongoing research. So far, the prediction
models have been based on simple metrics derived from selected data sources.
Combining the data in more sophisticated ways allows including additional
aspects of the software system’s history and, thus, promises to further in-
crease the prediction performance [MGF07]. In the specific context of a project,
the results can be improved even further by tuning the applied data min-
ing methods. In the future, we plan to extend this work to a larger set of
industrial projects of various sizes and from different domains.
[Figure 15: Gain of a test strategy prioritizing modules by predicted defectiveness (average prediction results) compared to the hypothetical best case (ordering by actual defectiveness) and the worst case (random ordering); x-axis: percentage of total modules tested (0–100%).]
Architecture Design
Architecture Implementation
before the system has been built, while it is being built, and after it has
been built [DN02]. Architecture analysis can be performed manually by using
architecture evaluation methods or automatically using architecture analysis
tools.
Architecture evaluation methods like the Software Architecture Analysis
Method (SAAM) [CKK02] or its successor the Architecture Tradeoff Analysis
Method (ATAM) [CKK02] are scenario-based evaluation methods that have
been developed particularly to validate quality attributes, which are usually
difficult to analyze. Architecture evaluation methods are time-consuming and
resource-intensive processes. They are usually used for evaluating the ini-
tial design of a software system with its stakeholders and for assessing the
architecture of an already implemented system. They are not intended for
continuous architecture analysis.
Architecture Description Languages (ADLs) are formal languages to rep-
resent the architecture of a software system [Cle96]. They allow the automatic
analysis of system properties before the system has been built [Cle95]. An ADL de-
scribes a system in terms of components, connectors and their configurations
[MT00]. Usually ADLs have a textual as well as a graphical representation
[Cle96]. A large number of general purpose and domain-specific ADLs ex-
ist [MT00]. Disadvantages of ADLs are lack of tool support [MDT07, MT00],
lack of implementation integration [MT00] and lack of standardization. Some
ADLs allow code generation in the sense of model-driven software develop-
ment, which may lead to problems in synchronizing architecture and code as
mentioned above. While UML is sometimes discussed as a general purpose
ADL, its suitability as an ADL is still subject of study and debate [MDT07].
In most cases, UML is used for architecture documentation as described be-
low.
Architecture Documentation
Since the architecture of a software system is not entirely contained in the
implementation, it must be documented separately [Hof05]. Documenting soft-
ware architecture is quite different from architectural descriptions that are
created for analysis [IMP05]. While the latter require a formal description
that can be processed by tools, architecture descriptions for documentation
purposes are usually informal, using natural language. Researchers
have proposed a view-based approach for describing software architectures
[RW05, Kru95, CBB+ 02, HNS99]. An architectural view is a representation of
a system from the perspective of an identified set of architecture-related con-
cerns [Int08]. Architecture documentations usually consist of multiple views.
The concepts of view-based architecture documentation are defined in the
ISO/IEC 42010 standard: Systems and Software Engineering – Architectural
Description [Int08].
[Figure 16: Layers of the LISA language definition: a core model; a language element model and a basic structure model with language bindings (e.g., Java, C#); and system and component models with technology bindings.]
The lower layers of the LISA language definition shown in Figure 16 can be used for describ-
ing architectural relationships that are defined statically in code. Model ele-
ments at these lower layers can be partly extracted from or mapped to source
code. Examples are the elements of the Language Element Model, which in-
clude concepts like classes and interfaces. These elements can be organized
by structures in the Basic Structure Model. The Basic Structure Model can
be used for defining elements like functional units, subsystems, deployment
units, and layers. Together the elements of the lower layers of the LISA lan-
guage definition enable functionality provided by architecture management
tools. This includes usage and dependency analysis, synchronizing architec-
ture with code, and defining and checking architectural constraints at the
level of programming language concepts. Although the lower layers of LISA
are aligned with concepts found in concrete programming languages they are
still abstract. Bindings to particular programming languages are provided by
Language Binding definitions as shown in Figure 16.
The upper layers of LISA include the definition of abstract models for
describing components, configurations, and whole systems. Again the bind-
ing to specific component technologies and models is provided by Technology
Binding Models. Currently LISA supports bindings for EJB [EJB06], Spring
[Spr08b], OSGi [OSG07], Spring Dynamic Modules for OSGi [Spr08a], and
SCA [SCA07]. Examples for elements at the higher layers of LISA are compo-
nent, contract, port, composite, application, location and tier. These elements
can be used for describing and analyzing architectures of component-based
and distributed service-oriented software systems. In such systems architec-
tural relationships are not defined in code but through late composition and
configuration. Finally, (Quality) Attribute Models as shown in Figure 16 can
be used for attaching semantic attributes and policies to architectural ele-
ments at all levels of abstraction. Such attributes can be used for annotating
and validating non-functional attributes of a software system.
[Figure: Architecture of the LISA toolkit: user interface components for architecture modeling and visualization on top of application logic for model manipulation, model validation, and implementation connection/synchronization, all operating on an integrated architecture model with technology submodels.]
As shown in the figure, the toolkit provides an API for editing a LISA-
based architecture model as well as functional components for validating
architectural constraints and for synchronizing an architecture with a system
implementation. In addition, the toolkit provides user interface components
for architecture modeling and visualization. All UI components are working
on the same architectural model and thus support editing and visualization
of different aspects of a system in a consistent way. Violation of architectural
constraints defined in the model are immediately shown in all graphical and
textual representations of the affected elements.
Examples of available visualizations and modeling tools are shown in Fig-
ures 19 and 20. Figure 19 shows usage and dependency relationships of classes
and interfaces organized in different layers in an object-oriented software sys-
tem. The figure shows layer violations (see (1) and (2) in Figure 19) as an
example for the violation of architectural constraints.
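The kind of constraint violated in (1) and (2) can be illustrated with a toy checker; the layer assignments, dependency data, and rule below are hypothetical and do not reflect the LISA toolkit's actual API.

    import java.util.*;

    public class LayerCheck {
        public static void main(String[] args) {
            // Lower index = higher layer; a class may only depend on classes
            // in the same layer or the next lower one (an assumed rule).
            Map<String, Integer> layerOf = Map.of(
                    "ui.MainView", 0, "logic.OrderService", 1, "data.OrderDao", 2);
            Map<String, List<String>> dependsOn = Map.of(
                    "ui.MainView", List.of("logic.OrderService", "data.OrderDao"),
                    "logic.OrderService", List.of("data.OrderDao"));

            for (var e : dependsOn.entrySet())
                for (String target : e.getValue()) {
                    int from = layerOf.get(e.getKey()), to = layerOf.get(target);
                    if (to - from > 1)  // skipping a layer violates the constraint
                        System.out.println("Layer violation: " + e.getKey()
                                + " -> " + target);
                }
        }
    }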
Figure 20 shows diagrams for representing and editing the architecture of
a service-oriented software system using the Service Component Architecture
(SCA). Instead of classes and interfaces, the main elements at this layer of
abstraction are components and contracts. The Component Decomposition
Diagram provides an overview of the components of the system. In LISA,
components are independent of a particular implementation technology. The
systems both within enterprises (EAI) and between enterprises (B2B) re-
quires standardization. Standardization is a strong trend at all integration
levels. Examples are Web Service standards like SOAP and WSDL as well as
higher-level standards for B2B-integration like ebXML and RosettaNet.
To increase reusability and to flexibly adapt to changing business condi-
tions and processes, enterprise applications are increasingly decomposed into
small reusable and composable elements using standardized interfaces. At
the presentation level such elements are portal components, which can be
composed to web portals and customizable workplaces. At the business logic
layer, the central elements for composition are services. Currently, the term
Service-Oriented Architecture (SOA) is usually used for flexible enterprise
information system architectures based on services using standardized (Web
Service) protocols.
The main result of the Enipa project is a component model for enhanced
integration of portal components in web portals. The model supports not only
the aggregation of components within one web page, but also the composition
of component navigation into a central navigation area, the communication
between local and remote components, and heterogeneous environments. The
approach is based on existing standards like Portlets and WSRP and uses
XML for describing component navigation and communication capabilities.
It is declarative and may also be used for improving integration capabilities
of already existing portal components (see [WZ05] and [WWZ07]).
The results of the IT4S project are an approach for SOA governance and a
versioning approach for service evolution. Notable aspects of the governance
approach are an extensible model for describing service metadata of arbi-
trary service types (not only Web services), support for the process of service
specification and service creation, a service browser for service reuse, and
the support for service evolution through information about service version-
ing, service dependencies and service installations [DW06]. The versioning
Domain-Specific Language
the level of abstraction in the given domain, and brings software specification
closer to the domain experts. In distinction to general-purpose programming
languages, which are universally applicable to many domains, domain-specific
languages are created specifically for problems in the domain and are not in-
tended for problems outside it.
As a formal language, a DSL is defined by its concrete syntax, abstract
syntax, and semantics. The concrete syntax (or notation) specifies the con-
crete appearance of a DSL visible to the user. The notation can be one of
various forms—textual, graphical, tabular, etc.—depending on the problem
domain at hand. The concrete syntax usually is of increased importance for
DSLs, as it—to a great extent—determines acceptance by users.
The goal of the abstract syntax (or meta-model in the context of model-driven
development) is to describe the structural essence of a language including
elements and relationships between elements like containment and references.
Concrete and abstract syntax of textual languages are often defined in a single
source [KRV07, GBU08] or the concrete syntax defines the abstract syntax
implicitly [LJJ07, PP08].
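As a rough illustration of such a meta-model, the following hypothetical Java classes sketch the two relationship kinds named here, containment and (cross-)references, for a small invented UI language; they are not taken from any concrete DSL framework.

    import java.util.ArrayList;
    import java.util.List;

    abstract class UiElement {                            // language element
        final List<UiElement> children = new ArrayList<>(); // containment
    }

    class Window extends UiElement {
        String title;
    }

    class MenuItem extends UiElement {
        String label;
        Command triggers;                                 // reference to another element
    }

    class Command {
        String applicationFeature;                        // e.g. a feature implemented in C++
    }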
Whereas the formal definition of both abstract and concrete syntax is well
elaborated, the language semantics is usually given by the code generators.
Formal definition of language semantics is still an open research field, e.g.
[Sad08].
In general, DSLs are either standalone, embedded into a host language,
or used as domain-specific presentation extensions. A standalone DSL pro-
vides full abstraction of the underlying general-purpose programming lan-
guage used for the solution [KT08]. An embedded DSL is one which extends
an existing general-purpose language (e.g. [AMS08]). A domain-specific pre-
sentation extension of a general-purpose programming language (e.g. [EK07])
facilitates readability and closes the gap between problem and solution do-
main. In the context of domain-specific modeling, the focus is on standalone
DSLs.
Code Generators
DSL program (model or text). For the former, two main approaches are
available [CH03]: visitor-based and template-based.
Domain Framework
A domain framework provides the interface between the generated code and
the underlying target platform. Domain-specific frameworks [FJ99] are not
specific to the DSM approach but a general approach for software reuse to
increase productivity in a specific domain. However, a domain framework can
support a DSM approach by providing the immutable part of a solution not
visible to users which can be customized and configured by DSL programs.
In general, a domain framework is written in a general-purpose programming
language by and for software experts, whereas a DSM solution puts a domain-
specific language on top of a framework.
Tool Support
feedback of a resulting MMI even in the modeling (design) phase for a large
set of different devices and to provide code generators that transform MMI
models into platform-specific code that can be used by the APOXI framework.
Figure 23 shows a screen dump of the tool Aragon which has been developed
on top of the Eclipse platform.
Domain Modeling
their layout and positioning, containment relations, and how the user inter-
face gestures are connected to application behavior. In distinction to other UI
builder tools, Aragon pursues an approach which is target agnostic, i.e., the
tool itself is not dependent on the target implementation but fully configured
by meta-information, which is realized in a target-independent and extensible
form.
As a result, the meta-model, i.e., the abstract syntax of the modeling lan-
guage, comprises the following concepts:
• Meta-information on UI components provided by a framework (e.g. APOXI)
and extensions like windows and menus together with their attributes, and
constraints on their configuration, composition, and layout.
• Meta-information on available applications and features provided by ap-
plications. This information is required to connect MMI elements with
application features implemented in a general-purpose programming lan-
guage, e.g., the C++ programming language.
Code Generation
According to the DSM architecture, code generators transform models con-
forming to a DSL into target code or an intermediate representation which
then is interpreted on the target. Aragon supports both forms of code gen-
eration, i.e., it allows transforming a window layout alternatively to resource
files or to C++ source code. The former is used for easier customization
because resource files may be stored on the flash memory of a mobile device and
easily replaced. The latter is more compact and can be loaded quickly into
memory, which is required for low-end mobile devices due to limited memory
and CPU resources.
Because of the textual output of both forms, the Aragon code generators
follow the transformation technique model-to-text [LJJ07]. For this technique
two main approaches are available [CH03]: visitor-based and template-based.
However, both approaches hinder extensibility by DSM users as required for
Aragon. The reason is that template languages are often complex and visitors
directly operate on the internal representation of a meta-model, which usually
should be hidden from DSM users.
As a consequence, we have combined both techniques into a two-phase code
generator, which can be extended by DSM users more easily:
1. Domain models given in XML are transformed by means of XSLT into
an intermediate model (model-to-model transformation). The XSLT rules
can be extended by DSM users.
2. A non-extensible visitor transforms the intermediate model into resulting
resource files or C++ source code (model-to-text transformation).
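The following is a sketch of what the second, visitor-based phase might look like; the intermediate-model classes and the emitted resource-file syntax are invented for illustration and are not the project's actual generator.

    import java.util.List;

    interface Node { void accept(Visitor v); }

    class WindowNode implements Node {
        String name; List<Node> children;
        WindowNode(String name, List<Node> children) { this.name = name; this.children = children; }
        public void accept(Visitor v) { v.visit(this); }
    }

    class LabelNode implements Node {
        String text;
        LabelNode(String text) { this.text = text; }
        public void accept(Visitor v) { v.visit(this); }
    }

    class Visitor {          // fixed set of node types, hence non-extensible
        final StringBuilder out = new StringBuilder();
        void visit(WindowNode w) {
            out.append("WINDOW ").append(w.name).append(" {\n");
            for (Node child : w.children) child.accept(this);
            out.append("}\n");
        }
        void visit(LabelNode l) { out.append("  LABEL \"").append(l.text).append("\"\n"); }
    }

    public class Phase2Generator {
        public static void main(String[] args) {
            Node model = new WindowNode("main", List.of(new LabelNode("Hello")));
            Visitor v = new Visitor();
            model.accept(v);
            System.out.print(v.out); // emits the invented resource-file text
        }
    }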
• Testers are usually not familiar with the source code of the system under
test (SUT) and are usually not C++ experts.
• Languages like C++ cause many programming errors, most notably er-
rors concerning memory management. Using a high-level testing language
prevents many programming errors and facilitates more robust test cases
that cannot harm the SUT.
• The use of a separate language (and not the language used to program the
SUT) leads to a decoupling of the SUT and test cases. Instead of using the
API of a SUT directly, high-level language constructs defined by a DSL
are more stable with regard to changes of the SUT.
• A DSL also facilitates high-level constructs for testing as well as for the
problem domain.
We defined a textual domain-specific language that includes first-class con-
cepts at the language level for testing of mobile software frameworks. Besides
general-purpose elements for assignment, loops, conditional statements, etc.,
the language provides the following domain-specific elements:
• Statements to verify test results.
• Statements to control test case execution.
• Statements for logging.
• Statements to simulate the protocol stack for both sending and receiving
signals of different communication protocols from a test case.
Figure 24 gives a (simplified) example of a test case intended to test a
single function of the system under test. The instruction in line 2 simulates a
function call to the SUT which in turn sends a signal (MN ATTACH REQ)
to the protocol stack. This event is consumed and verified by the statement
in line 3. To simulate the response back from the protocol stack to the SUT,
an SDL signal is created and initialized (lines 4–6) and sent to the SUT in
line 7. Triggered by this event, the SUT will call a callback function. This call
is consumed and verified by the script in line 8. Test scripts written in the
defined language are compiled into an intermediate code that is interpreted
by the test environment, which can be considered as a domain framework in
the context of a DSM solution.
nodes and their connections in a two dimensional way, as shown in Figure 25.
Nodes representing aggregate actions are placed in horizontal columns result-
ing in a column for each aggregate. The chosen icons, together with the order
of columns corresponding to the aggregate positions on the actual machine,
help domain experts identify the individual actions of aggregates more easily
than general-purpose programming languages or software diagrams do.
Vertically, the actions are placed according to their de-
pendency starting with the very first action of a machine cycle on the top.
The vertical dimension of a single action and, hence, of the entire graph,
corresponds to the time required to execute an action, or the entire machine
cycle, respectively. This technique, which maps the duration of an ac-
tion to a visual dimension, makes it easy to locate time-consuming actions and
to identify actions along the critical path.
The described incorporation of domain aspects (e.g. aggregates and dura-
tion) as elements of a visual domain-specific language is a typical example of
how a DSL can facilitate end-user programming for domain experts.
Acknowledgements
Research and development described in this chapter has been carried out by
Software Competence Center Hagenberg GmbH (SCCH) in close cooperation
with its scientific partners and its partner companies within the frame of the
Austrian Kplus and COMET competence center programs.
References
[ABD+ 04] Alain Abran, Pierre Bourque, Robert Dupuis, James W. Moore, and
Leonard L. Tripp. Guide to the Software Engineering Body of Knowledge
- SWEBOK. IEEE Press, Piscataway, NJ, USA, 2004 version edition, 2004.
[AGM05] Paris Avgeriou, Nicolas Guelfi, and Nenad Medvidovic. Software architecture
description and UML. Pages 23–32, 2005.
[AMS08] Lennart Augustsson, Howard Mansell, and Ganesh Sittampalam. Paradise:
a two-stage dsl embedded in haskell. In ICFP ’08: Proceeding of the 13th
ACM SIGPLAN international conference on Functional programming, pages
225–228, New York, NY, USA, 2008. ACM.
[BA04] Kent Beck and Cynthia Andres. Extreme Programming Explained: Embrace
Change (2nd Edition). Addison-Wesley Professional, 2 edition, November
2004.
[BAB+ 05] Stefan Biffl, Aybüke Aurum, Barry Boehm, Hakan Erdogmus, and Paul
Grünbacher. Value-Based Software Engineering. Springer Verlag, oct 2005.
[Bac97] James Bach. Good enough quality: Beyond the buzzword. Computer, 30(8):96–
98, 1997.
[BCK03] Len Bass, Paul Clements, and Rick Kazman. Software Architecture in Practice,
Second Edition. Addison-Wesley Professional, April 2003.
[BD04] Pierre Bourque and Robert Dupuis, editors. SWEBOK - Guide to the Software
Engineering Body of Knowledge, 2004 Version. IEEE Computer Society, 2004
version edition, 2004.
[Bec02] Kent Beck. Test Driven Development: By Example. Addison-Wesley Profes-
sional, November 2002.
[Bei90] Boris Beizer. Software Testing Techniques 2E. International Thomson Com-
puter Press, 2nd edition, June 1990.
[BG00] Kent Beck and Erich Gamma. More Java Gems, chapter Test-infected: pro-
grammers love writing tests, pages 357–376. Cambridge University Press, 2000.
[Boe76] B. W. Boehm. Software engineering. IEEE Transactions on Computers, C-
25(12):1226–1241, 1976.
[Boe88] B. W. Boehm. A spiral model of software development and enhancement.
Computer, 21(5):61–72, May 1988.
[BR08] Armin Beer and Rudolf Ramler. The role of experience in software testing
practice. In Proceedings of the 34th EUROMICRO Conference on Software
Engineering and Advanced Applications, pages 258–265, Parma, Italy, 2008.
IEEE Computer Society.
[Bru01] H. Bruyninckx. Open robot control software: the OROCOS project. In
Robotics and Automation, 2001. Proceedings 2001 ICRA. IEEE International
Conference on, volume 3, pages 2523–2528 vol.3, 2001.
[BW08] Georg Buchgeher and Rainer Weinreich. Integrated software architecture man-
agement and validation. In Software Engineering Advances, 2008. ICSEA ’08.
The Third International Conference on, pages 427–436, 2008.
[BWK05] Stefan Berner, Roland Weber, and Rudolf K. Keller. Observations and lessons
learned from automated testing. In Proceedings of the 27th international con-
ference on Software engineering, pages 571–579, St. Louis, MO, USA, 2005.
ACM.
[CBB+ 02] Paul Clements, Felix Bachmann, Len Bass, David Garlan, James Ivers, Reed
Little, Robert Nord, and Judith Stafford. Documenting Software Architectures:
Views and Beyond. Addison-Wesley Professional, September 2002.
[CEC00] Krzysztof Czarnecki and Ulrich Eisenecker. Generative Programming: Methods,
Tools, and Applications. Addison-Wesley Professional, June 2000.
[CH03] Krzysztof Czarnecki and Simon Helsen. Classification of model transformation
approaches. In Proceedings of the 2nd OOPSLA Workshop on Generative
Techniques in the Context of MDA, 2003.
[Cha05] R. N. Charette. Why software fails. IEEE Spectrum, 42(9):42–49, September
2005.
[Chr92] Gerhard Chroust. Modelle der Software-Entwicklung. Oldenbourg Verlag,
München/Wien, 1992. In German.
[CJKW07] Steve Cook, Gareth Jones, Stuart Kent, and Alan C. Wills. Domain Specific
Development with Visual Studio DSL Tools (Microsoft .Net Development).
Addison-Wesley Longman, Amsterdam, May 2007.
[CKK02] Paul Clements, Rick Kazman, and Mark Klein. Evaluating Software Archi-
tectures: Methods and Case Studies. Addison-Wesley Professional, January
2002.
[Cle95] Paul Clements. Formal methods in describing architectures. In Monterey
Workshop on Formal Methods and Architecture, September 1995.
[Cle96] Paul C. Clements. A survey of architecture description languages. In IWSSD
’96: Proceedings of the 8th International Workshop on Software Specification
and Design, Washington, DC, USA, 1996. IEEE Computer Society.
[Ham04] Paul Hamill. Unit Test Frameworks. O’Reilly Media, Inc., October 2004.
[HB06] LiGuo Huang and Barry Boehm. How much software quality investment is
enough: A Value-Based approach. IEEE Software, 23(5):88–95, 2006.
[HCR+ 94] James Herbsleb, Anita Carleton, James Rozum, Jane Siegel, and David
Zubrow. Benefits of CMM-based software process improvement: Initial re-
sults. Technical Report CMU/SEI-94-TR-013, Software Engineering Institute,
Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, August 1994.
[Hel07] hello2morrow. SonarJ. http://www.hello2morrow.de, 2007.
[HHK+ 08] Walter Hargassner, Thomas Hofer, Claus Klammer, Josef Pichler, and Gernot
Reisinger. A script-based testbed for mobile software frameworks. In Proceed-
ings of the First International Conference on Software Testing, Verification
and Validation, pages 448–457. IEEE, April 2008.
[HNS99] Christine Hofmeister, Robert Nord, and Dilip Soni. Applied Software Archi-
tecture. Addison-Wesley Professional, November 1999.
[Hof05] Christine Hofmeister. Architecting session report. In WICSA ’05: Proceed-
ings of the 5th Working IEEE/IFIP Conference on Software Architecture
(WICSA’05), pages 209–210, Washington, DC, USA, 2005. IEEE Computer
Society.
[HSSL02] B. Henderson-Sellers, F. Stallinger, and B. Lefever. Bridging the gap from pro-
cess modelling to process assessment: the OOSPICE process specification for
component-based software engineering. In Proceedings of the 28th Euromicro
Conference, pages 324–331. IEEE Computer Society, 2002.
[HT06] Brent Hailpern and Peri Tarr. Model-driven development: The good, the bad,
and the ugly. IBM Systems Journal, 45(3):451–461, July 2006.
[Hum89] W. Humphrey. Managing the Software Process. Addison-Wesley, Reading, MA, 1989.
[Hum95] W. Humphrey. A Discipline for Software Engineering. SEI Series in Software Engineering. Addison-Wesley, 1995.
[IEE90] IEEE Std 610.12-1990: IEEE standard glossary of software engineering termi-
nology, 1990.
[IMP05] P. Inverardi, H. Muccini, and P. Pelliccione. DUALLY: Putting in synergy UML 2.0 and ADLs. In WICSA ’05: Proceedings of the 5th Working IEEE/IFIP Conference on Software Architecture, pages 251–252, Washington, DC, USA, 2005. IEEE Computer Society.
[Int08] International Organization for Standardization (ISO). Systems and software engineering - Architectural description, working draft 3, 2008.
[ISO95] ISO/IEC 12207:1995, Information technology - Software life cycle processes,
1995. Amd.1:2002; Amd.2:2004.
[ISO98] ISO/IEC TR 15504-7:1998(e), Information technology - Software process as-
sessment - Part 7: Guide for use in process improvement, 1998.
[ISO01] ISO/IEC 9126-1:2001, Software engineering - Product quality - Part 1: Quality
model, 2001.
[ISO03] ISO/IEC 15504:2003, Information Technology - Process Assessment, 2003.
[ISO05] ISO/IEC 25000:2005, Software engineering - Software product Quality Re-
quirements and Evaluation (SQuaRE) - Guide to SQuaRE, 2005.
[ISO09] ISO/IEC PDTR 29110:2009, Software Engineering - Lifecycle Profiles for Very
Small Enterprises (VSE), January 2009.
[JBK06] Frédéric Jouault, Jean Bézivin, and Ivan Kurtev. TCS: a DSL for the specification of textual concrete syntaxes in model engineering. In GPCE ’06: Proceedings of the 5th international conference on Generative programming and component engineering, pages 249–254, New York, NY, USA, 2006. ACM.
[JGJ97] I. Jacobson, M. Griss, and P. Jonsson. Software Reuse: Architecture, Process
and Organization for Business Success. Addison-Wesley Professional, 1997.
[Joh99] Ralph E. Johnson. Building Application Frameworks: Object-Oriented Foun-
dations of Framework Design. John Wiley & Sons, 1 edition, September 1999.
[McF96] Bob McFeeley. IDEAL: A user’s guide for software process improvement. Hand-
book CMU/SEI-96-HB-001, Software Engineering Institute, Carnegie Mellon
University, Pittsburgh, Pennsylvania 15213, February 1996.
[MD08] Tom Mens and Serge Demeyer. Software Evolution. Springer Verlag, March
2008.
[MDT07] Nenad Medvidovic, Eric M. Dashofy, and Richard N. Taylor. Moving archi-
tectural description from under the technology lamppost. Information and
Software Technology, 49(1):12–31, January 2007.
[MFF+ 06] Pierre-Alain Muller, Franck Fleurey, Frédéric Fondement, Michel Hassenforder,
Rémi Schneckenburger, Sébastien Gérard, and Jean-Marc Jézéquel. Model-
Driven Analysis and Synthesis of Concrete Syntax. 2006.
[MGF07] T. Menzies, J. Greenwald, and A. Frank. Data mining static code attributes to
learn defect predictors. IEEE Transactions on Software Engineering, 33(1):2–
13, 2007.
[MH04] Jürgen Münch and Jens Heidrich. Software project control centers: concepts
and approaches. Journal of Systems and Software, 70(1-2):3–19, February
2004.
[Mil02] Dave Miller. Fundamental Concepts for the Software Quality Engineer, chap-
ter Choice and Application of a Software Quality Model, pages 17–24. ASQ
Quality Press, 2002.
[MK08] Jennifer McGinn and Nalini Kotamraju. Data-driven persona development. In
CHI ’08: Proceeding of the twenty-sixth annual SIGCHI conference on Human
factors in computing systems, pages 1521–1524, New York, NY, USA, 2008.
ACM.
[MKB06] Brad A. Myers, Andrew J. Ko, and Margaret M. Burnett. Invited research
overview: end-user programming. In CHI ’06: CHI ’06 extended abstracts on
Human factors in computing systems, pages 75–80, New York, NY, USA, 2006.
ACM.
[MKMG97] R. T. Monroe, A. Kompanek, R. Melton, and D. Garlan. Architectural styles, design patterns, and objects. IEEE Software, 14(1):43–52, 1997.
[MMYA01] H. Mili, A. Mili, S. Yacoub, and E. Addy. Reuse-Based Software Engineering:
Techniques, Organizations, and Controls. Wiley-Interscience, 2001.
[MT00] Nenad Medvidovic and Richard N. Taylor. A classification and comparison framework for software architecture description languages. IEEE Transactions on Software Engineering, 26(1):70–93, January 2000.
[NBZ06] Nachiappan Nagappan, Thomas Ball, and Andreas Zeller. Mining metrics to
predict component failures. In Proceedings of the 28th international conference
on Software engineering, pages 452–461, Shanghai, China, 2006. ACM.
[Obj07] Object Management Group. UML Superstructure Specification v2.1.1. OMG Document Number formal/07-02-05, http://www.omg.org/cgi-bin/apps/doc?formal/07-02-05.pdf, 2007.
[Obj08] Object Management Group. Software & systems process engineering meta-
model specification, version 2.0. http://www.omg.org/spec/SPEM/2.0/PDF,
April 2008.
[Ope08] OpenUP - Open Unified Process, 2008. http://epf.eclipse.org/wikis/openup/.
[OSG07] OSGi Service Platform Release 4, 2007.
[PCCW93] Mark C. Paulk, Bill Curtis, Mary Beth Chrissis, and Charles V. Weber. Capa-
bility maturity model for software, version 1.1. Technical Report CMU/SEI-93-
TR-02, Software Engineering Institute, Carnegie Mellon University, February
1993.
[PGP08] F. Pino, F. Garcia, and M. Piattini. Software process improvement in small and medium software enterprises: A systematic review. Software Quality Journal, 16(2), June 2008.
[PHS+ 08a] Herbert Prähofer, Dominik Hurnaus, Roland Schatz, Christian Wirth, and Hanspeter Mössenböck. Monaco: A DSL approach for programming automation systems. In SE 2008 - Software-Engineering-Konferenz 2008, pages 242–256, Munich, Germany, February 2008.
[PHS+ 08b] Herbert Prähofer, Dominik Hurnaus, Roland Schatz, Christian Wirth, and
Hanspeter Mössenböck. Software support for building end-user programming
environments in the automation domain. In WEUSE ’08: Proceedings of the
4th international workshop on End-user software engineering, pages 76–80,
New York, NY, USA, 2008. ACM.
[PP04] Gustav Pomberger and Wolfgang Pree. Software Engineering. Hanser Fach-
buchverlag, October 2004.
[PP08] Michael Pfeiffer and Josef Pichler. A comparison of tool support for textual domain-specific languages. In Proceedings of the 8th OOPSLA Workshop on Domain-Specific Modeling, pages 1–7, October 2008.
[PP09] Michael Pfeiffer and Josef Pichler. A DSM approach for end-user programming in the automation domain. Accepted for publication at the 7th IEEE International Conference on Industrial Informatics (INDIN 2009), 2009.
[PPRL07] Josef Pichler, Herbert Praehofer, Gernot Reisinger, and Gerhard Leonharts-
berger. Aragon: an industrial strength eclipse tool for MMI design for mobile
systems. In Proceedings of the 25th conference on IASTED International
Multi-Conference: Software Engineering, pages 156–163, Innsbruck, Austria,
2007. ACTA Press.
[PR08] Josef Pichler and Rudolf Ramler. How to test the intangible properties of
graphical user interfaces? In Proceedings of the 2008 International Conference
on Software Testing, Verification, and Validation, ICST 08, pages 494–497.
IEEE Computer Society, 2008.
[PRS00] G. Pomberger, M. Rezagholi, and C. Stobbe. Handbuch für Evaluation und
Evaluierungsforschung in der Wirtschaftsinformatik, chapter Evaluation und
Verbesserung wiederverwendungsorientierter Software-Entwicklung. Olden-
bourg Verlag, München/Wien, 2000. in German.
[PRZ09] Guenter Pirklbauer, Rudolf Ramler, and Rene Zeilinger. An integration-oriented model for application lifecycle management. Accepted for the 11th International Conference on Enterprise Information Systems (ICEIS 2009), 2009.
[PSN08] R. Plösch, F. Stallinger, and R. Neumann. SISB - Systematic improvement of the solution business: Engineering strategies for the industrial solutions business, version 1.0. Technical report, Software Competence Center Hagenberg, August 2008. (non-public project deliverable).
[PW92] Dewayne E. Perry and Alexander L. Wolf. Foundations for the study of soft-
ware architecture. SIGSOFT Softw. Eng. Notes, 17(4):40–52, October 1992.
[Ram04] Rudolf Ramler. Decision support for test management in iterative and evolu-
tionary development. In Proceedings of the 19th IEEE international conference
on Automated software engineering, pages 406–409, Linz, Austria, 2004. IEEE
Computer Society.
[Ram08] Rudolf Ramler. The impact of product development on the lifecycle of defects.
In Proceedings of the DEFECTS 2008 Workshop on Defects in Large Software
Systems, pages 21–25, Seattle, Washington, 2008. ACM.
[RBG05] Rudolf Ramler, Stefan Biffl, and Paul Grünbacher. Value-Based Software Engi-
neering, chapter Value-Based Management of Software Testing, pages 225–244.
Springer Verlag, 2005.
[RCS03] Rudolf Ramler, Gerald Czech, and Dietmar Schlosser. Unit testing beyond a bar in green and red. In Proceedings of the 4th International Conference on Extreme Programming and Agile Processes in Software Engineering, XP 2003, pages 10–12, Genova, Italy, 2003. Springer (LNCS).
[Roy70] W. W. Royce. Managing the development of large software systems: Concepts and techniques. In Proc. IEEE WESCON, pages 1–9. IEEE, August 1970.
[RR99] Arthur A. Reyes and Debra J. Richardson. Siddhartha: a method for developing domain-specific test driver generators. In Proc. 14th Int. Conf. on Automated Software Engineering, pages 12–15, 1999.
[RvW07] Ita Richardson and Christiane Gresse von Wangenheim. Why are small soft-
ware organizations different? IEEE Software, 24(1):18–22, January/February
2007.
[RW05] Nick Rozanski and Eóin Woods. Software Systems Architecture: Working
With Stakeholders Using Viewpoints and Perspectives. Addison-Wesley Pro-
fessional, April 2005.
[RW06] Rudolf Ramler and Klaus Wolfmaier. Economic perspectives in test automa-
tion: balancing automated and manual testing with opportunity cost. In Pro-
ceedings of the 2006 international workshop on Automation of software test,
pages 85–91, Shanghai, China, 2006. ACM.
[RW08] Rudolf Ramler and Klaus Wolfmaier. Issues and effort in integrating data
from heterogeneous software repositories and corporate databases. In Proceed-
ings of the Second ACM-IEEE international symposium on Empirical software
engineering and measurement, pages 330–332, Kaiserslautern, Germany, 2008.
ACM.
[RWS+ 09] Rudolf Ramler, Klaus Wolfmaier, Erwin Stauder, Felix Kossak, and Thomas
Natschläger. Key questions in building defect prediction models in practice.
In 10th International Conference on Product Focused Software Development
and Process Improvement, PROFES 2009, Oulu, Finland, 2009.
[RWW+ 02] Rudolf Ramler, Edgar Weippl, Mario Winterer, Wieland Schwinger, and Josef
Altmann. A quality-driven approach to web testing. In Ibero-American Conference on Web Engineering, ICWE 2002, pages 81–95, Argentina, 2002.
[Sad08] Daniel A. Sadilek. Prototyping domain-specific language semantics. In OOP-
SLA Companion ’08: Companion to the 23rd ACM SIGPLAN conference on
Object-oriented programming systems languages and applications, pages 895–
896, New York, NY, USA, 2008. ACM.
[Sam01] J. Sametinger. Software Engineering with Reusable Components. Springer,
2001.
[SB03] Douglas C. Schmidt and Frank Buschmann. Patterns, frameworks, and mid-
dleware: their synergistic relationships. In ICSE ’03: Proceedings of the 25th
International Conference on Software Engineering, pages 694–704, Washing-
ton, DC, USA, 2003. IEEE Computer Society.
[SBPM09] David Steinberg, Frank Budinsky, Marcelo Paternostro, and Ed Merks. EMF: Eclipse Modeling Framework (2nd Edition) (Eclipse). Addison-Wesley Longman, Amsterdam, 2nd revised edition, January 2009.
[SCA07] Service Component Architecture Specifications, 2007.
[Sch06] D. C. Schmidt. Guest editor’s introduction: Model-driven engineering. Com-
puter, 39(2):25–31, 2006.
[SDR+ 02] F. Stallinger, A. Dorling, T. Rout, B. Henderson-Sellers, and B. Lefever. Soft-
ware process improvement for component-based software engineering: an in-
troduction to the OOSPICE project. In Proceedings of the 28th Euromicro
Conference, pages 318–323. IEEE Computer Society, 2002.
[Sha90] M. Shaw. Prospects for an engineering discipline of software. IEEE Software, 7(6):15–24, November 1990.
[She31] Walter A. Shewhart. Economic control of quality of manufactured product. D.
Van Nostrand Company, New York, 1931.
[SJSJ05] Neeraj Sangal, Ev Jordan, Vineet Sinha, and Daniel Jackson. Using depen-
dency models to manage complex software architecture. SIGPLAN Not.,
40(10):167–176, October 2005.
[Sof07] Software Tomography GmbH. Sotoarc. http://www.software-tomography.de/index.html, 2007.
[Som04] Ian Sommerville. Software Engineering. Addison-Wesley, seventh edition, May 2004.
[SPP+ 06] F. Stallinger, R. Plösch, H. Prähofer, S. Prummer, and J. Vollmar. A process reference model for reuse in industrial engineering: Enhancing the ISO/IEC 15504 framework to cope with organizational reuse maturity. In Proc. SPICE 2006, Luxembourg, May 4–5, 2006, pages 49–56.
[SPPV09] Fritz Stallinger, Reinhold Plösch, Gustav Pomberger, and Jan Vollmar. Bridging the gap between ISO/IEC 15504 conformant process assessment and organizational reuse enhancement. Accepted for the SPICE Conference 2009 (Software Process Improvement and Capability Determination), June 2–4, 2009, Turku, Finland.
[Spr08a] Spring Dynamic Modules for OSGi(tm) Service Platforms, 2008.
[Spr08b] The Spring Framework - Reference Documentation, 2008.
[SPV07] F. Stallinger, R. Plösch, and J. Vollmar. A process assessment based approach for improving organizational reuse maturity in multidisciplinary industrial engineering contexts. In Proceedings of ESEPG 2007, Amsterdam, June 2007.
[SRA06] Christoph Steindl, Rudolf Ramler, and Josef Altmann. Web Engineering: The
Discipline of Systematic Development of Web Applications, chapter Testing
Web Applications, pages 133–153. Wiley, 2006.
[SSM03] A. Sinha, C. S. Smidts, and A. Moran. Enhanced testing of domain specific applications by automatic extraction of axioms from functional specifications. In 14th International Symposium on Software Reliability Engineering (ISSRE 2003), pages 181–190, 2003.
[Ste00] David B. Stewart. Designing software components for real-time applications. In Proceedings of the Embedded Systems Conference, page 428, 2000.
[Tas02] Gregory Tassey. The economic impacts of inadequate infrastructure for software testing. NIST Planning Report 02-3, May 2002.
[Tia05] Jeff Tian. Software Quality Engineering: Testing, Quality Assurance, and Quantifiable Improvement. Wiley & Sons, 1st edition, February 2005.
[TK05] Juha-Pekka Tolvanen and Steven Kelly. Defining domain-specific modeling languages to automate product derivation: Collected experiences, pages 198–209, 2005.
[TMD09] Richard N. Taylor, Nenad Medvidovic, and Eric M. Dashofy. Software Architecture: Foundations, Theory, and Practice. John Wiley & Sons, January 2009.
[TvdH07] Richard N. Taylor and Andre van der Hoek. Software design and architecture: The once and future focus of software engineering. In FOSE ’07: 2007 Future of Software Engineering, pages 226–243, Washington, DC, USA, 2007. IEEE Computer Society.
[V-M06] V-Modell XT, Part 1: Fundamentals of the V-Modell XT, Version 1.2.1. Technical report, 2006. http://www.v-modell-xt.de/.
[vDKV00] Arie van Deursen, Paul Klint, and Joost Visser. Domain-specific languages: An annotated bibliography. SIGPLAN Notices, 35(6):26–36, 2000.
[vGB02] Jilles van Gurp and Jan Bosch. Design erosion: problems and causes. Journal
of Systems and Software, 61(2):105–119, March 2002.
[Voa08] Jeffrey Voas. Software quality unpeeled. STSC CrossTalk, pages 27–30, June 2008.
[VRM03] M. Venzin, C. Rasner, and V. Mahnke. Der Strategieprozess - Praxishandbuch
zur Umsetzung im Unternehmen. 2003. in German.
[VS06] Markus Völter and Thomas Stahl. Model-Driven Software Development: Technology, Engineering, Management. John Wiley & Sons, June 2006.
[Was96] A. I. Wasserman. Toward a discipline of software engineering. IEEE Software, 13(6):23–31, November 1996.