eCAADe 2013
Computation and Performance
18-20 September, 2013
Delft University of Technology
Volume 2
Edited by
Rudi Stouffs and Sevil Sariyildiz
eCAADe 2013
Computation and Performance
Volume 2
Copyright © 2013
www.ecaade.org
All rights reserved. Nothing from this publication may be reproduced, stored in a computerised system or published in any form or in any manner, including electronic, mechanical, reprographic or photographic, without prior written permission from the publisher.
http://ecaade2013.bk.tudelft.nl/
Edited by
Rudi Stouffs
Sevil Sariyildiz
This is the second volume of the conference proceedings of the 31st eCAADe confer-
ence, held from 18-20 September 2013 at the Faculty of Architecture of Delft University of
Technology in Delft, the Netherlands. Both volumes together contain 150 papers that were
submitted and accepted to this conference.
The theme of the 31st eCAADe conference is the role of computation in the consideration of
performance in planning and design.
A building has long since ceased to simply shelter human activity from the natural environment. It must not only withstand natural forces and carry its own weight, its occupants and their possessions; it should also functionally facilitate its occupants' activities, be aesthetically pleasing, be economical in building and maintenance costs, provide temperature, humidity, lighting and acoustical comfort, be sustainable with respect to material, energy and other resources, and so forth. Considering all these performance aspects in building design is far from straightforward, and their integration into the design process further increases complexity, interdisciplinarity and the need for computational support.
One of the roles of computation in planning and design is the measurement and prediction
of the performances of buildings and cities, where performance denotes the ability of build-
ings and cities to meet various technical and non-technical requirements (physical as well as
psychological) placed upon them by owners, users and society at large.
This second volume contains 75 papers grouped under eleven subthemes, ranging from Simulation, Prediction and Evaluation, through Models of Computation: Human Factors, to Languages of Design.
Sponsors: Autodesk GmbH and Bentley Systems
With the 31st eCAADe conference held in Delft, eCAADe has finally come full circle. The very first
eCAADe conference, before the actual founding of the eCAADe organization in 1983, was held in Delft
in 1982. 31 years later, we are proud to welcome the eCAADe organization back to its origins.
This Delft conference has been a while in the making. The idea was first raised by Martijn Stelling-
werff in 2006 and a preliminary proposal was presented to the eCAADe council at that time. However,
we encountered some turbulent times with the destruction by a fire of the Faculty of Architecture
building in Delft in 2008 and only in 2010 were we ready to present a definitive proposal for the
conference in Delft. From that time until the publication of these proceedings, many people helped to
make this happen and we hope to mention them all here:
First of all, we would like to thank both deans, Wytze Patijn (in 2010) and Karen Laglas (since 2011), for their endorsement and support, and especially the director of International Affairs at that time, Agnes Wijers, for her immediate support when we approached her with the idea and for her ample support in the early planning of the conference event.
The eCAADe council was supportive throughout the entire process and helped with many aspects
of the organisation. Both presidents, Wolfgang Dokonal (up to 2011) and José Duarte (since 2011),
were very supportive. Bob Martens, as liaison with the conference host, was particularly helpful with
many issues in the process. We also received a great deal of support from Henri Achten as the previous conference organiser. Martin Winchester made sure the OpenConf system was running smoothly and
reliably. Nele de Meyere and Maaike Waterschoot reacted promptly when approached with adminis-
trative questions. Financial support was generously provided by the sponsors Autodesk and Bentley
Systems.
The Call for Extended Abstracts yielded 287 submissions. Fortunately, we were able to count on 135 international reviewers to help us assess all submissions (see the List of Reviewers section).
Each submission was double-blind reviewed by three reviewers. Following the reviewers’ recom-
mendations, 150 papers were finally accepted for publication and presentation. We congratulate the
authors for their accomplishment. Next to the authors, the reviewers, who volunteered valuable time
and effort, the session chairs, who led the presentations, and the students and other volunteers, who
assisted throughout the conference and its preparations, deserve our sincere thanks and acknowl-
edgements.
As conference chairs, we had the support of the organising committee, including Kas Oosterhuis, Joop Paul, Bige Tunçer, Martijn Stellingwerff, Michael Bittermann, Michela Turrin, Paul de Ruiter, Nimish Biloria and Henriette Bier. Joop Paul deserves a special note for securing Gerard Loozekoot, director of UNStudio, as keynote speaker. A special thanks goes to Irem Erbas, who, next to Bige Tunçer, Nimish Biloria and Michela Turrin, assisted in processing part of the proceedings. The secretarial team of the department of Architectural Engineering + Technology assisted on numerous occasions, and Françoise van Puffelen, in particular, assisted in all financial matters. Thijs Welman secured the conference website and Martijn Stellingwerff designed it. From the faculty we furthermore want to thank the FMVG (Facility Management and Real Estate) people who helped with the planning of and preparations for the event.
We are very grateful to have Sean Hanna, Shrikant Sharma and Gerard Loozekoot as keynote speakers at the conference.
Sean Hanna
Sean Hanna is a Lecturer in Space and Adaptive Architectures at University College London, Director of the
MSc/MRes programs in Adaptive Architecture and Computation at the Bartlett School of Graduate Studies,
and Academic Director of UCL’s Doctoral Training Center in Virtual Environments, Imaging and Visualisation.
He is a member of the Space Group, noted as one of the UK’s highest performing research groups in the field
of architecture and the built environment.
Originally from a background of architectural practice, his application of design algorithms includes
major projects with architects Foster + Partners and sculptor Antony Gormley. His research is primarily in
developing computational methods for dealing with complexity in the built environment, including the
modelling of space and its perception, and he is on the advisory boards of two related UCL spin-out compa-
nies. His publications address the fields of spatial modelling, machine intelligence, collaborative creativity,
among others, and his work has been featured in the non-academic press, including the Architects’ Journal
and The Economist.
Shrikant Sharma
Shrikant Sharma leads SMART Solutions – Buro Happold’s specialist service that offers advanced computa-
tional solutions to support architectural design, engineering, construction and operations of buildings and
urban spaces. The team, founded by Shrikant in 2002, is renowned for delivering simple, innovative solutions
for complex engineering problems in the built environment.
Shrikant has a PhD in Engineering and over 15 years of experience in the development and application
of novel modelling and analysis techniques. A firm believer in the power of rapid design optioneering tools
that integrate architectural, functional, engineering, and environmental assessments of buildings and urban
spaces, Shrikant has been driving the development of a suite of intuitive real time software tools that work
within commercial CAD and BIM environments. He has also led the application of such technologies for
integrated modelling and optimisation on a number of projects such as Scunthorpe Sports Academy, Louvre
Abu Dhabi, Sidra Trees Qatar Convention Centre, and London City Airport.
Shrikant also runs SMART Space, Buro Happold's crowd flow modelling and consultancy service. It uses novel analytical and simulation techniques to help architects, planners, developers and regulators understand and optimise space layout, design and management.
Shrikant is actively engaged in advancing computational design and simulation through rigorous ongoing research and development, and has developed innovative software tools such as SMART Form and SMART Move.
Gerard Loozekoot
Gerard Loozekoot is Director and Senior Architect at UNStudio. He earned his Master's degree in Architecture from Delft University of Technology, has worked as an architect at UNStudio since 2000 and became partner at UNStudio in 2008. His great interest lies in typological innovation, as in projects such as the Theater in Lelystad, the UNStudio office tower in Amsterdam and the airport in Georgia. In addition, sustainable innovations are one of the main pillars of his projects. In the Dienst Uitvoering Onderwijs in Groningen and Le Toison d'Or in Brussels, Gerard demonstrates that the added value of sustainable buildings has become the new standard. As director and senior architect, Gerard is actively involved in all phases of construction.
97 Performative Design
99 Architectural Thermal Forms
Isak Worre Foged
107 DaylightGen: From Daylight Intentions to Architectural Solutions
Mohamed-Anis Gallas, Gilles Halin
117 Performance Driven Design and Design Information Exchange
Sina Mostafavi, Mauricio Morales Beltran, Nimish Biloria
127 Performance Based Pavilion Design
Sevil Yazici
Abstract. This research aimed to explore the energy savings achievable through smart control combined with a ceiling fan in an intelligent building. Since air-conditioning (AC) accounts for about 40% of total residential energy consumption, applying a smart control system to the operation of the AC, so as to achieve both comfort and energy savings, should have a positive effect on overall residential energy consumption. This study used the smart control system of an intelligent building lab, together with an indoor temperature sensor, to transmit messages to the AC instructing its next operating step in order to achieve an energy-saving effect.
Keywords. Intelligent building; smart control; energy saving; ZigBee; smart living.
INTRODUCTION
This study focused on the exploration of smart-controlled AC (air conditioner) and ceiling fan operation, using the intelligent building lab, in order to achieve the objective of energy saving. Although each country has a different definition of intelligent building, their basic objectives are much the same. An intelligent building combines structure, systems, services and operation management to create the most optimal combination and process for the construction of highly efficient, well-functioning and comfortable buildings. Therefore, an intelligent building must be able to satisfy users' needs, be easy to control, save energy, improve management effectiveness and clarify information.
This study focused on the role of energy saving in intelligent buildings. Taiwan started promoting its intelligent building mark in 2004; however, over the eight years since, only ten cases have been certified, which is clearly low compared to the 359 green mark buildings certified in the same period. Moreover, although intelligent building has already become government policy and Taiwan's Executive Yuan has also been promoting intelligent building since 2006, using buildings as a medium to integrate ICT and other related communication products and to merge innovation and design application for the construction of new living environments, there are still few successful intelligent buildings. The main reason is not a technological problem, but that intelligent building requires the cooperation of many different fields across platforms. Without proper guidance, architects find it difficult to carry out plans and designs. In view of this, our study used an established intelligent building lab to conduct smart energy-saving control of the available AC and ceiling fan in the space, so as to explore the future development and direction of intelligent building in terms of energy-saving efficiency.

METHOD AND DEVELOPMENT OF INTELLIGENT BUILDING'S ENERGY SAVING
An overview of building space utilization shows that the proportion of electricity used by AC accounts for about 40% of overall energy consumption, while lights and electrical outlets take up about
40% (Taiwan Power, 2006). This study applied smart control of the AC and ceiling fan as its main planning direction and compared its energy-saving effectiveness with the traditional model, as a reference for future intelligent building design.
Smart-controlled AC and home appliances can be adjusted automatically by different approaches, for example: conducting smart control of the living environment through EEG (electroencephalography) (Lin et al., 2010); using a BCI (brain-computer interface) as a biological and electrical monitoring system to achieve the goal of active environment control; applying CPSs (cyber-physical systems) such as Bluetooth, ZigBee RF and infrared to carry out various communication protocols so as to convert a variety of different signals through a smart control box (Bai, 2012); using a pyroelectric infrared sensor-based indoor location aware system (PILAS) as receivers (Kastner et al., 2010) to monitor residents' activities, position, patterns or health condition and provide the best living environment; handling complicated intelligent home equipment by constructing low-cost sensor and control systems based on ZigBee (Blesa et al., 2009); using low-budget stationary sensors to set up an electronic nose for air quality monitoring (Zampolli et al., 2004) to control the AC system for the best air flow; providing different colors of light with photovoltaic lighting systems developed in accordance with human circadian rhythms to meet the human body's different biological needs and thereby enhance living safety and comfort (Fu et al., 2010); setting up multiple ZigBee moisture sensors around the indoor space to adjust the AC operation, improve living conditions and reduce energy consumption through temperature and moisture data collection (Wang et al., 2010); using the BBM (Best Beacon Match) positioning method to smartly control the living environment (Jin et al., 2007), where such a system can control AC and lighting; using a smartphone as an interface to monitor and control living conditions (Zhong et al., 2011; Li et al., 2012) in place of a remote control; and using wireless sensor networks to establish a physical environment for room control that adjusts the use of electrical appliances automatically for an energy-saving effect through data monitoring (Yeh, 2009).

RESEARCH METHODS
This study used ZigBee as the main transmitter to construct an environment and interface able to meet the requirements of smart control (Figure 1), in which the temperature and moisture in the room were detected by sensors; these data were then transferred to the AC through ZigBee signals, which advise the AC of the next step to take by way of intelligent control, so as to achieve the objective of energy saving.
Figure 4: The measured indoor temperatures and energy consumption of the AC automatic mode with the ceiling fan shut down, on May 17.
Most homes' AC temperature control uses a remote control to select the most suitable mode for the next operating step; this research therefore explored how to use the current transmitter to let the AC know that the set temperature has been reached, and what step it should take next, in order to achieve the objectives of energy saving and comfort.

EXPERIMENTAL PROCEDURE AND RESULTS
As the AC used in this lab was a fixed-frequency split air conditioner that had been in use for five years, and in order to confirm whether intelligent control achieved the expected energy efficiency, this study first carried out multi-day tests and records of the AC controlled by remote controller and in automatic mode. Owing to the unstable temperatures of the spring experimental period, which gradually became warmer, and in order to obtain a more objective analysis of the data, this study set the room temperature setpoint at 26 °C, recorded four days of data, and observed the temperatures and energy-saving effects of the two air-conditioning modes mentioned above, then explored the impact of the indoor ceiling fan on AC performance on the basis of the test data. The gateway controller used in our laboratory was able to record the indoor temperature and total electricity consumption every minute. The findings showed that, with the indoor fan shut down and the AC set to 26 °C, the room temperature was only reduced to 28.5 °C and 29.2 °C, respectively (Figures 3 and 4).
Figure 6: The measured indoor temperatures and energy consumption of the AC automatic mode combined with the ceiling fan, on May 14.
Moreover, although the air conditioner's automatic mode reached an energy consumption of 1.995 kW in 2 hours, the indoor temperature could not achieve the desired comfort.
After opening the indoor ceiling fans to help improve indoor air circulation, under the same automatic AC mode at a set temperature of 26 °C, the room temperature showed a significant improvement trend (Figures 5 and 6).
Test results over these few days revealed that the split air conditioner in our study could neither reach the desired temperature within two hours nor reduce the indoor temperature to a reasonable value, although the indoor temperature did show a dropping trend. This indicated that the air temperature sensor was located in a place where it could not detect the room temperature correctly, so the AC failed to deliver the proper cooling effect in accordance with the actual indoor temperature.
When coupled with the ceiling fan under the same settings, the room temperature improved significantly and the indoor temperature was fairly satisfactory, approaching though not quite reaching the set temperature of 26 °C.
Based on the AC automatic mode and its achievable target temperature values, this study then set different smart modes for the controller, to test and record the AC's energy consumption (a sketch of these control strategies follows Figure 8's caption below):
• The temperature was set at 26 °C; the gateway controller would send a message to the AC to change to fan mode when the room temperature reached 26 °C, and to restart the AC compressor when the indoor temperature rose to 27 °C, repeating this cycle. The initial findings showed that, although the temperature was set at 26 °C, the indoor temperature was unable to cool down to 26 °C. Two possible causes were identified: A. the sensor mounted on the AC was too close to the outlet, so that too much cool return air reached it, causing an inconsistency between the temperature detected by the sensor and the actual indoor temperature and preventing the automatic control mode from performing its function; B. a difference between the temperature detected by this study's sensor and the sensor mounted on the AC. After examination, a 1 °C difference was found between the two sensors, which led both operating modes to exert the same effectiveness in the two-hour tests (Figure 7).
• The AC temperature was set at 24 °C. When the room temperature reached 24 °C, the AC would be ordered to change into fan mode, and would restart when the indoor temperature rose to 27 °C. The measured data showed that the indoor temperature dropped from the original 33.5 °C to 29.5 °C within ten minutes, but then only gradually cooled to 27.5 °C in the remaining 110 minutes (Figure 8), failing to reach the expected set temperature. The observation made here was that, in order to avoid excessive use of AC, the units used by the classrooms and the lab had been adjusted and limited. Therefore, no matter how the air temperatures
Figure 8: Demanding the AC to cool down rapidly by setting the temperature at 24 °C, ordering the AC to change to fan mode when the temperature reached the set temperature, and restarting the AC compressor when the temperature rose back to 27 °C.
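The excerpt breaks off above; as a minimal sketch (not the authors' code) of the two hysteresis strategies just described, the loop below uses the 26/24/27 °C thresholds from the text, while the ZigBee gateway API (read_temperature, set_ac_mode) is hypothetical.

# Minimal sketch of the hysteresis strategies described above (not the
# authors' code). Thresholds follow the text; the gateway API is assumed.
import time

def smart_ac_loop(gateway, setpoint=26.0, restart_at=27.0, interval_s=60):
    """Switch the AC to fan-only mode once the room reaches the setpoint,
    and restart the compressor when it warms back up to restart_at."""
    gateway.set_ac_mode("cool")              # start with the compressor on
    cooling = True
    while True:
        temp = gateway.read_temperature()    # ZigBee indoor sensor reading
        if cooling and temp <= setpoint:
            gateway.set_ac_mode("fan")       # compressor off, fan only
            cooling = False
        elif not cooling and temp >= restart_at:
            gateway.set_ac_mode("cool")      # compressor back on
            cooling = True
        time.sleep(interval_s)               # the lab logged data every minute

# smart_ac_loop(gateway, setpoint=26.0)  # first strategy in the text
# smart_ac_loop(gateway, setpoint=24.0)  # second strategy (rapid cool-down)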
main calculation problems is the actual choice of mesh: by decreasing the size of the elements, a more realistic simulation is obtained, although this requires a longer computation time and greater hardware power.
The mathematical model used for solving the analysis of the fire phenomenon and the interface problems among contiguous space elements is the phase field model (Boettinger et al., 2002) which, by means of a specific and infinitesimal mesh, can correct dynamic interface problems.

MULTI-PURPOSE EXHIBITION CENTRE AT TIVOLI - CASE STUDY
The case study was an experimental master's degree thesis project for a multi-purpose exhibition centre with a multi-storey underground parking station in Tivoli, near Rome; the simulation was focused on the compartment of the conference hall (Figure 4).
The fundamental value to be specified was, as previously mentioned, the Heat Release Rate (HRR). This was schematized with a burner, on which a vent was applied, which simulated a fire that released heat but also a specific quantity of particles and gas, based on input data. The levels of HRR developed from the case study are based on information derived from the experimental data and compared with the values identified in the technical literature for those specific activities.
In the case of a multi-purpose hall, the presence of scene panels is assumed (Table 1), which increases the total fire load.
A rapid decrease of the HRR was observed due to the action of the simulated sprinkler system (automatic opening), giving a value of the fire extinction coefficient. This was made possible only through the correct sizing of the sprinkler piping, by indicating the flow of the single sprinkler obtained from the product specifications and the UNI regulations (UNI EN 12845, 2005).
Some fire scenarios of NFPA 101 (Coté and Harrington, 2012) were determined in compliance with the law (D.M., 2007). Relevant critical scenarios representative of the actual conditions were produced
using: a preliminary check of regulations; expected performance levels; and considering, for additional safety, the failure of the sprinkler system or of the devices for automatic door opening. It was also possible to simulate fire scenarios worse than the NFPA 101 code fire scenarios; those could be the most severe fires ever recorded, or the average of the worst fires that have occurred with some regularity. The fire scenarios were examined for a period of 10 minutes, an appropriate time for a preliminary investigation (Figures 5 and 6).
The choice of particulate reaction rate is one of the most critical aspects (Kittle, 1993): for example, in the pre-flashover phase, a modest amount of electrical equipment could spread smoke more dangerously than a common fire with greater reaction rate levels.
All the simulations verified the fire scenarios considered by analysing variations of: gas temperature at human head height and at construction element height (by temperature detectors - THCP); the curve of heat release HRRPUA [kW/m²], derived from the sum of the value of the initial burner plus any contribution of fuel elements; visibility in the room for the egress time (by rate dimming detectors - BEAM); fume, particle and heat flow trends in the compartment; and sprinkler operation.
The main architectural problems were encountered both:
• outside the building, in the relationship with the square (quality of urban space);
• inside the building, in the layout of the conference hall within the overall shape of the building (quality of architectural composition).
As far as the first point is concerned, namely the urban environment, two aspects must be taken into consideration which heavily affect the architectural composition and the cost of the building: the time required for firemen to arrive from the nearest fire station, and the building conformation, which is linked to the constraints imposed by the underground parking safety openings. In the case of the first aspect, the most important features are the non-combustible nature of the materials used and/or the time needed for the bearing structure to resist until the fire fighters arrive. This aspect seems to be mainly dependent on the materials used and to exclude any reference to the 'form'. In actual fact the shape of the building, of its interior and the accesses to it constrain the possible manoeuvres of
Figure 6: Gas temperature shown in a longitudinal section of the multi-purpose conference hall case study, 5 minutes after fire ignition.
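The bookkeeping described above, in which the total heat release is the prescribed HRRPUA over the burner area plus the contribution of any ignited fuel elements, can be sketched roughly as follows; the ramp shape and all numerical values are illustrative assumptions, not data from the case study.

# Rough illustration of the HRR relationship described above: total heat
# release = burner HRRPUA x burner area + contributions of ignited fuel
# elements. The piecewise-linear ramp and all numbers are illustrative only.
def burner_hrrpua(t_s, peak_kw_m2=500.0, ramp_s=120.0):
    """Prescribed heat release rate per unit area, kW/m2 (assumed ramp)."""
    return peak_kw_m2 * min(t_s / ramp_s, 1.0)

def total_hrr(t_s, burner_area_m2, fuel_contributions_kw=()):
    """Total HRR in kW at time t: burner term plus ignited fuel elements."""
    return burner_hrrpua(t_s) * burner_area_m2 + sum(fuel_contributions_kw)

# e.g. a 2 m2 burner with two ignited scene panels contributing 150 kW each
print(total_hrr(300.0, 2.0, (150.0, 150.0)))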
INTRODUCTION
It is generally recognised that architects currently require performance information to guide their decisions almost from the inception of a project. In fact, there is a mentality present of simply trying to collect as much data as possible with the intention of synthesising it into a situated design response. This presents a problem, especially for computational fluid dynamic (CFD) wind simulation, whereby the time required to assess the performance is obstructive to the fast and iterative nature of current parametric design softwares. This is possibly due to the tendency for architectural software tools to originate in engineering fields, without due consideration of speed-accuracy tradeoffs to adjust for the application requirements (Chittka et al., 2009; Lu et al., 1991). In other words, they are typically too accurate and slow for the fast pace of modern conceptual design, massing or form decisions. Developing a method that can give real-time performance feedback about a form allows for intuitive play of the kind we are used to with physical models.
Wind engineering has traditionally been within the remit of engineers or specialists, with numerical simulation (CFD) considered a supportive tool to physical boundary layer wind tunnel (BLWT) testing. For instance, in the computational wind engineering (CWE) literature there is substantial caution around numerical analysis, namely for Reynolds-averaged Navier-Stokes (RANS) and to a lesser extent large-eddy simulations (LES) (Stathopoulos, 1997; Bitsuamlak, 2006; Dagnew et al., 2009; Menicovich et al., 2002). However, architects are increasingly getting involved with analysis, where concerns over accuracy are less paramount since demand is typically for relative scenario comparison or general flow behaviour (Lomax et al., 2001; Malkawi et al., 2005; Chronis et al., 2012).
The tall building typology has been identified as a focal area here for a number of reasons. Firstly, as height increases so too do the wind forces (along with seismic and gravitational), which has consequences on facade panelisation and structural efficiency.
in that the most complex flow types (bluff bodies), and therefore most computationally intensive, need to be modelled in a scenario where fast results are required. The numerical method must be as accurate and fast as possible. In fact, the conclusion is reached that the fastest method has poor accuracy and the slowest the best accuracy (as would be expected, considering the speed-accuracy tradeoffs mentioned earlier). There is general agreement between (Lomax et al., 2001) and (Chronis et al., 2012) that the "level of accuracy of a CFD simulation needs to be compromised with the turnaround time requirements of its application."
Lu et al. (1991) describe the same issue in mechanical engineering, where slow but accurate simulation makes interactive decision making impossible when only quick estimates are desired at early stages. It is only towards the final stages of design, "when the engineer has converged to a small region of decision space, more accurate simulations are needed to make fine distinctions." The problem has therefore been present since the early 90s, but as a solution they propose integration of simulation, optimisation and machine learning.

Inductive Learning in Design
Our approach is supported by Samarasinghe (2007), who identifies the best solution to predicting system behaviour through observational data. This is necessary when there is little or no understanding of the "underlying mechanisms because of complex and non-linear interactions among various aspects of the problem." Extracting these complex relationships is often difficult since the systems are typically natural, and therefore can have randomness, heterogeneity, multiple causes and effects, and noise. Even when they are successfully extracted, they may be beyond our understanding and are held as intractable computational functions or data structures. Hanna (2011) tests the hypothesis that it is unnecessary to have any understanding of this underlying system behaviour, but rather it is possible to make predictions about the system simply by making observations. This is demonstrated by learning the structural behaviour of system components and applying them to larger-scale scenarios.
Graening et al. (2008) propose a method that allows the extraction of comprehensible knowledge from aerodynamic design data (jet-blades) represented by discrete unstructured surface meshes. They use a displacement measure in order to investigate local differences between designs and the resulting performance variation. Knowledge, or rule, extraction from CFD data is primarily used to guide human-centred design by improving understanding of the system's behaviour, whether it is for jet turbine blade optimisation or architectural design. Whilst the connection between local geometric features and surface pressure has been extended and changed here, and used for a different application, this work is a close precedent.

Problem Hypothesis
It is argued here that approximations of CFD simulations can be made with machine learning regression, using geometric shape descriptors as the
learning features. The entire evaluation process can be broadly split into five key work areas: i) procedural geometry generation; ii) batch simulation; iii) shape feature generation; iv) machine learning training; v) prediction and visualisation. Feature generation is essentially the core of the process, since the solution depends heavily on geometric description so as to define surface pressure as a function of it. We hypothesise that surface pressure distribution arising from wind flow around tall buildings can be learnt and predicted with an accuracy appropriate to early stage design (feedback from practice indicates <20% error) using shape feature description. It can be shown that it is possible to combine, with an acceptable error, methods that have the separate contradictory objectives of predictive accuracy and speed.

METHODOLOGY

Data Set Generation: Procedural Modelling
The parametric model was created in Bentley GenerativeComponents. The goal was to create a generalised tower model, with the two properties of minimising the number of parameters used whilst maximising the design representation potential, i.e. the number of possible buildings it could create. This is important when considering optimisation or exploratory design space searches to avoid the curse of dimensionality. This means that as the number of variables increases, the design space increases exponentially by n^D, where n is the number of samples taken per parameter and D is the number of parameters, or dimensionality. There is therefore clearly a compromise to be made between model efficiency and representability.
The geometry for the training set was generated using a procedural tall building model with a select number of key parameters (Figure 2). There are in fact three separate topologies in the procedural model with their own parameters, since it is difficult to incorporate the entire design space with one parametric logic (Park et al., 2004; Samareh, 1999). Using the unstructured triangulated surface mesh from these means we are not limited by a single parametric topology in the learning phase of the method (Graening et al., 2008). Local surface-mesh shape characteristics are used as input features to the learning algorithm instead of the design parameters, avoiding reliance on any one parametric model definition.

Simulation Method
An established solver, ANSYS CFX 13.0, was used throughout to run the RANS steady-state simulations, with a k-ε turbulence model as it is regarded as the most robust. Each simulation, depending on the complexity, requires up to 60 minutes to converge (on a 2.66GHz i7). Solver convergence is reached when residuals fall below a minimum of 10^-6, typically at around 100 to 200 iterations. The number of cells in the tetrahedral meshes varies between 0.8x10^6 and 1.5x10^6 depending on the geometry, with prismatic expansion on surfaces 3 cells deep and a minimum cell size of 0.1m. The wind was applied at an upstream inlet, with a reference speed (Ur) of 1 m s^-1 at a reference height (Zr) of 10m. The most commonly used distribution of mean wind speed with height is the 'power-law' expression:

Ux = Ur (Zx / Zr)^α    (1)

The exponent α is an empirically derived coefficient that is dependent on the stability of the atmosphere.
Table 2: Summary of time (seconds) required for each case, split into Training (one-off back-end time) and Prediction (front-end time). Mean feature generation time is 0.085 s/vertex. *Mean over all test set. †After down-sampling.

Case        | Train Sim. | Train Feat. Gen.† | Train | Predict Feat. Gen.* | Predict*
Orientation |      21600 |              9060 |  2600 |                1540 | < 0.1
Height      |      18000 |              2370 |   720 |                 620 | < 0.1
Topology    |      32400 |              4670 |  1060 |                1750 | < 0.1
Real        |    2160000 |             12000 |   620 |                 720 | < 0.1

Figure 3: (Left) Orientation vs. error σ %; (centre) training set regression, R = 0.99564; (right) prediction error (25°).
Figure 4: (Left) dHeight vs. error σ %; (centre) training set regression, R = 0.9992; (right) prediction error (25m).

Topology
Here the number of edges was varied from 3 to 10, with 0 (circle), diameter 10m and height 50m. Instead of keeping a complete model separate for testing as in the last two cases, here all cases were used but only a fraction of the total data set was used. This is varied in Figure 5 (left), with a training set ranging from 10000 to 50000.

Tall Buildings
In the final case, training data was collected from simulations of 600 procedural tall building models, with a total of over 4x10^6 shape features extracted. This was down-sampled to 10^5 by removing features in close proximity to reduce training time. The test set contains 10 real tall buildings from around the world, selected for their range of unique architectural characteristics.
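The excerpt does not state which regression algorithm was used; purely as an illustrative sketch of work areas iv (machine learning training) and v (prediction), the following uses scikit-learn with a generic regressor, and the file names for the feature and pressure arrays are hypothetical.

# Generic sketch of steps iv-v (training and prediction). The regression
# algorithm, array names and file layout are assumptions for illustration;
# the paper only specifies local shape features as inputs and CFD surface
# pressure as the learning target.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X_train: (n_vertices, n_shape_features) local shape descriptors from the
#          600 procedural towers, down-sampled as described in the text.
# y_train: (n_vertices,) CFD surface pressure at the same mesh vertices.
X_train = np.load("shape_features_train.npy")
y_train = np.load("surface_pressure_train.npy")

model = RandomForestRegressor(n_estimators=200, n_jobs=-1)
model.fit(X_train, y_train)

# Predict pressure for the vertices of an unseen (real) tower mesh and
# report error percentiles; the normalisation below is one plausible
# reading, the paper's exact error metric is not given in this excerpt.
X_test = np.load("shape_features_test.npy")
y_test = np.load("surface_pressure_test.npy")
y_pred = model.predict(X_test)
abs_err = np.abs(y_pred - y_test) / (y_test.max() - y_test.min()) * 100.0
print(np.percentile(abs_err, [75, 90, 95, 99]))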
Figure 6 shows predicted surface pressure distribution in the top row, and the error distribution for the set in the bottom row. The pressure range (-5.5 to 2.0 Pa) was taken over the entire test set, as was the absolute error range (0 to 65.2%). The error distribution is shown in Figure 7 (right), which fits a Gaussian normal distribution. Error percentiles: 99th = 35.7%, 95th = 20.0%, 90th = 13.0%, 75th = 6.1%. That is, 75% of the test features have an error below 6.1%.

Figure 6: (Upper) Predicted pressure, Pa; (lower) error, %. The pressure range is the min. and max. of the entire set for comparison; the error range is the absolute max. error of the set (65.2%). (Left to right) (1) Metlife Building, NYC; (2) The Shard, London; (3) Willis Tower (Sears), Chicago; (4) Euston Tower, London; (5) Taipei 101, Taiwan; (6) Shanghai World Financial Centre; (7) Bank of China; (8) Exchange Place, NYC; (9) Frankfurter Buro Centre, Frankfurt; (10) Washington Street, NYC.

CONCLUSION
The results show that it is possible to achieve a relatively small prediction error (Figure 7 and Table 1) for less time (Table 2), with the methodology and constraints described. These prediction errors are necessary for the compromise in avoiding considerably intensive CFD simulation. Traditionally, every individual CFD simulation can take a minimum of 1 hour, compared to our methodology that has a total front-end prediction time of under 12 minutes (for feature generation and prediction) and a back-end, one-off training set simulation time of 600 hours (for the real case). Once trained, an unlimited number of predictions can then be made.
Whilst these preliminary results are outside the rigorous accuracy necessary for final engineering analysis, they are within the boundaries acceptable for early-stage concept design for tall buildings, where interactive response time is a significant consideration. The prediction accuracy and response times achieved are promising for further work given the well-known complexities of fluid behaviour. The next stages of the work are to consider time-dependent simulations to fully consider the approximation of turbulence, vortex shedding and gusts, as well as interference from complex urban contexts on boundary conditions, and further improvement to the shape feature selection and generation time.

ACKNOWLEDGEMENTS
This research was sponsored by the EPSRC, Bentley Systems and PLP Architects.

REFERENCES
Bitsuamlak, G., 2006. Application of computational wind engineering: A practical perspective. In Third National Conference in Wind Engineering, pp. 1-19.
Burns, J.G., 2005. Impulsive bees forage better: the advantage of quick, sometimes inaccurate foraging decisions. Animal Behaviour, 70(6), pp. 1-5.
Chittka, L., Skorupski, P. & Raine, N.E., 2009. Speed-accuracy tradeoffs in animal decision making. Trends in Ecology & Evolution, 24(7), pp. 400-407.
Chronis, A. et al., 2012. Design Systems, Ecology and Time. In ACADIA.
CTBUH, 2012. Tall Buildings in Numbers: A Tall Building Review, 2012(1).
Dagnew, A.K., Bitsuamlak, G. & Merrick, R., 2009. Computational evaluation of wind pressures on tall buildings. In 11th Americas Conference on Wind Engineering.
Duffy, A.H.B., 1997. The "what" and "how" of learning in design. IEEE Expert, 12(3), pp. 71-76.
Graening, L. et al., 2008. Knowledge Extraction from Aerodynamic Design Data and its Application to 3D Turbine Blade Geometries. JMMA, 7(4), pp. 329-350.
Hanna, S., 2011. Addressing complex design problems through inductive learning.
Irwin, P.A., 2009. Wind engineering challenges of the new generation of super-tall buildings. JWEIA, 97(7-8), pp. 328-334.
Lomax, H., Pulliam, T.H. & Zingg, D.W., 2001. Fundamentals of Computational Fluid Dynamics, Berlin: Springer.
Lu, S.C.-Y., Tcheng, D.K. & Yerramareddy, S., 1991. Integration of Simulation, Learning and Optimization to Support Engineering Design. Annals of the CIRP, 40(1), pp. 143-146.
Malkawi, A.M. et al., 2005. Decision support and design evolution: integrating genetic algorithms, CFD and visualization. AIC, 14(1), pp. 33-44.
Menicovich, D. et al., 2002. Generation and Integration of an Aerodynamic Performance Database within the Concept Design Phase of Tall Buildings.
Mitchell, T.M., 1997. Machine Learning, McGraw-Hill.
Park, S.M. et al., 2004. Tall Building Form Generation by Parametric Design Process. In CTBUH Seoul Conference, pp. 1-7.
Samarasinghe, S., 2007. Neural Networks for Applied Sciences and Engineering: From Fundamentals to Complex Pattern Recognition, Auerbach Publications, NY.
Samareh, J.A., 1999. A Survey of Shape Parameterisation Techniques. In CEAS/AIAA/ICASE/NASA Langley International Forum on Aeroelasticity and Structural Dynamics, pp. 333-343.
Samuel, A.L., 1959. Some Studies in Machine Learning Using the Game of Checkers. IBM JRD, 3(3).
Stathopoulos, T., 1997. Computational wind engineering:
Abstract. The paper describes a novel system to alter and redirect sunlight under large
span roofs with the help of a fluid lens system. Focus lies on the computational design,
testing, measurement and evaluation of the performance of a physical prototype.
Keywords. Daylight; large span roofs; optics.
OBJECTIVE
The general aim of the presented research is to develop a design methodology, with the help of computational tools and prototypes, for designing adaptive daylight systems in large span roofs under consideration of user and functional requirements and the respective daylight performance aspects. Adaptive solutions for vertical facades with relatively small rooms and individual requirements of the inhabitants, in the form of e.g. louvers, are well researched and applied. In the case of large span roofs, where the horizontal part of the envelope exceeds the vertical one for admitting light, the design requirements and parameters for daylighting are different. Here collective lighting requirements and adjustability for a larger number of inhabitants, diversity of functions, larger spatial entities and the geometrical relations between the sun path and the general roof alignment towards the zenith play a major role. However, not only quantities of light according to regulations or qualitative aspects in the form of visual comfort, but also energetic aspects in terms of heat gains are part of the design and research scope. The research on the adaptive fluid lens and sunlight redirection system described here is one of the case studies being developed within the framework of an ongoing PhD research at TU Eindhoven. In this specific case the objective is to find a way to capture and utilize sunlight under a large span, horizontal roof so that it can be adaptively redirected where needed and dynamically treated or altered in such a way that it can fulfill various functional aspects (Figure 1). This horizontal "window" has to continuously mediate between the dynamic yet known path of sunlight in relation to the location and the possible change of interior use or functionality and thus lighting requirements.

PRINCIPLE
The adaptive lens and sunlight redirection system consists of two major components (Figure 2). Firstly
Figure 2: Description of the proposed system.
Step 2: Associative 2-D and 3-D models
Snell's law is most relevant here and was translated into associative 2-D sectional drawings of a light converging/diverging lens and a mirror system, in order to understand the capabilities and performance of the proposal under changing light directions. In this initial step an adaptive lens system was set up which is able to change the radius of an upper lens and the index of refraction of the contained liquid, according to the material properties of existing fluids and Snell's law, in order to focus or diffuse light. Here an array of vectors is refracted within the lens and made visible via a bundle of lines, to serve as a design and early evaluation tool. The change in the altitude angle of the sun leads to a change of the focal point of the refracted light, and it showed that the redirection possibilities of a lens are limited. This means that in order to keep the light e.g. diverged over a constant area size, the membrane's geometry has to be continuously changed by pumping liquid, due to the resulting change of the focal point in relation to the sun's changing altitude. For this reason, and because of the lens' limited possibility of redirecting light into the interior, a secondary system for light redirection is required. Therefore several options, such as trapping light by internal reflections (glass fiber principle), a rotatable prism system or plain mirrors, were evaluated. The system of rotating mirrors was favored, because it proved to be more "straightforward" and promising in terms of light redirection under a greater variety of altitude angles. To reduce the height of the mirror system located on top of the lens, an array of mirrors was chosen instead of one larger mirror. This approach poses several challenges in terms of the mirrors overshadowing each other and being able to redirect sunlight in different quantities according to the sun's altitude angle, and it turned out that there is no universal system which works equally well for every sun altitude. Therefore it is necessary to take a closer look at several design parameters and become specific about the location, the respective available hours of sunshine, the annual and daily sun path, the prevalence of certain ranges of sun altitude angles and the times of occupancy of the building. By matching these parameters it is possible to narrow down the target range of altitude angles where the redirection of sunlight works a hundred percent. By choosing as a design example in Munich a train station which is heavily frequented during the rush hours, lower sun altitude angle ranges, which occur more frequently during mornings and evenings but also during spring, autumn and winter, become more relevant (Table 1). By applying the Galapagos genetic algorithm solver [1] to generate and validate variations of the fixed inclination of the whole mirror array and the distance, sizes and number of the individual mirrors, a whole set of design solutions is produced which redirects sunlight perpendicularly to the ground by 100% within a Δ range of 30° (Figure 3).
It is then a matter of selecting the configuration which is most suitable for the design task at hand. In the example of Munich, a mirror configuration was eventually chosen which operates perfectly between 10-40 degrees. After evaluating the design principles in the earlier steps, associative three-dimensional files were set up to further evaluate the behavior of the lens and mirror system and also to provide a geometrical input for later daylight simulation.

Step 3: Simulation
The simulations were done via Diva/Radiance [2] and the VRay renderer [3], also available for Rhino 3D. The Radiance simulation was initially regarded as important because it is able to show physical values like illuminance in lux or luminance in cd/m². This would make it possible to check the performance against actual conditions and requirements as stated in e.g. building codes. However, the various simulations carried out proved not to be accurate, since Radiance for Windows is not able to calculate optical effects with dielectric material properties properly (Jacobs, 2012). This has to be done in the Linux environment with the help of a photon mapping module developed by the Fraunhofer ISE [4]. This approach for simulating several different and adaptive geometries, and the consequent need to manually input the data in Radiance for Linux, defies the seamless integration of parametric modelling and simulation. Curiously, contemporary render engines such as VRay are able to calculate caustic effects [3] with physically correct material properties and index of refraction, but are not able to display physical values such as illuminance, etc. It was therefore decided to design and manufacture physical prototypes for the performance evaluation, in accordance with the earlier findings from the associative 2-D and 3-D models.

Step 4: Prototype
During the design process for the prototype, research was done into lens diameters, change of volume and weight on the roof for a 1:1 case (Figure 4). In general it can be said that the higher a roof is situated above ground, the less of a shape change in the lens has to occur in order to achieve a desired effect. By studying these parameters it was decided that a lens with a diameter of 1m would be optimal for many applications in terms of weight and required volume change within the lens.
The final prototypes, which serve as proof of concept and are used for daylight performance measurements, were manufactured at the scale 1:10 (Figure 5). The majority of parts, including mechanical parts like the gears and cograil for the sunlight redirection device, are made of white ABS plastic and 3-D printed by a Fused Deposition Modeling (FDM) printer. For the mirrors, 3M™ Solar Mirror Film 1100 is applied on the rotatable ABS fins. The membrane for the lens is a self-cast and baked
Polydimethylsiloxane (PDMS) membrane provided by Michael Debije at the Functional Devices research group of the Department of Chemical Engineering and Chemistry at TU Eindhoven. Water, with an index of refraction of 1,33 (Lide, 2009), is used as the optical liquid. Other liquids, such as colorless and transparent oils, which generally have a higher index of refraction, are also conceivable.

Step 5: Prototype measurements and evaluation
The physical experiments with the 1:10 scale prototype aimed at testing the performance of the system under clear sky with sun (Tests 1-3) and cloudy sky conditions (Test 4) (Figure 6). For the simulation of the clear sunny sky, a Solar Simulator was used, providing directional light, while for the overcast sky an Artificial Sky Simulator was employed to achieve diffuse lighting conditions. Throughout all test series, illumination measurements were made using a Hagner Digital Luxmeter EC1, and luminance pictures were taken with a Canon EOS 60D and further processed in Photolux 3.2 [5].

Clear sky with sun, Test 1 and 2 set up
The first two series of tests (Tests 1 and 2) focused on the performance of the adaptive fluid lens alone under clear sky, supposing an ideal situation of 100% of the incoming light perpendicular to the floor, which would occur if the sun redirection system functioned perfectly. To simulate this, the altitude of the solar simulator was set to a 90° angle.
Figure 5: Aspects of the adaptive fluid lens and sunlight redirecting system prototype at 1:10 scale.
Two differently sized closed boxes (Test 1: 0,5*0,5*0,35m; Test 2: 0,7*0,7*1m), each with a circular opening at the center of its top surface for the 10cm diameter fluid lens to be placed over, were used as room models. Water was pumped in and out of the two syringes connected to the lens to reconfigure its shape from neutral to convex and concave. These tests approximate the performance of a 1m diameter fluid lens in a 3,5m and a 10m high room respectively.

Test 1 and 2 observations and comparison
The tests showed that under clear sunny sky conditions the light is indeed diverged or converged according to the configuration of the membrane, and similarly to the predictions from the Grasshopper models (Figure 7). Comparing the two room scenarios and scaling up the results to 1:1, it is confirmed that a lens of 1m diameter is more efficient over a 10m high room than over a 3,5m room, as previously estimated. To be more specific, in the case of the 1m high box, the removal of 55ml of water from the lens in its neutral state causes a circular lit area on the floor of 0,46m diameter (concave lens), while the addition of 12ml produces a focal point on the floor of 0,01m diameter (convex lens) (Figure 8). At full scale the values would be 10m, 55L, 4,6m, 12L and 0,1m. For the 0,35m high box, when an almost equal water volume (58ml) is removed from the neutral lens, a lit area of 0,22m is produced. Furthermore, in order to achieve focused light on the floor at a point of 0,005m diameter, 26ml of water needs to be added to the neutral lens. At full scale the values become 3,5m, 58L, 2,2m, 26L and 0,05m accordingly.
Figure 7: Differences in the quality of shadows from the concave to the convex mode of the adaptive fluid lens.
In general, in the concave mode the lens acts as a spotlight, spreading out the received sunrays. The light hitting the floor surface and reflected by it causes the formation of soft shadows by the scaled human figures placed at the periphery inside the box.
Considering the focused mode, the flux density at the center of the floor surface (63.000lux for the Test 1 settings) is excessively high in comparison to the density measured at the periphery (36lux for the Test 1 settings). Given that in Test 1 the sun simulator produces a value of 908lux at the floor center, and scaling up the findings, we can assume that on a clear sunny day in summer, when 100.000lux reach the ground (Flesch, 2006), the flux density will be 6.940.000lux at the center and 4.000lux at the periphery. Such a concentration of light is responsible for high contrast ratios in the room that not only exceed the acceptable contrast thresholds for visual comfort but also surpass the 1:1000 ratio which is the range of brightness the human eye can perceive (Green et al, 2008). The most extreme converging mode might therefore not be applicable for daylighting. However, other uses are possible, as described in the outlook section of this paper.

Test 3 set up
Test series no. 3 examined the effectiveness of the sunlight redirection system on a clear sunny day (Figure 9). For these tests, the 0,5*0,5*0,35m box was used as a room model and the solar simulator was set at a 30° sun altitude, where the sunlight redirection system is expected to be 100% efficient
Figure 9: Test 3 - different lens configurations.
according to the Grasshopper/Galapagos models. The system was placed over the fluid lens at the top of the box, and the mirrors were rotated so as to direct the received light perpendicularly to the ground.

Test 1 and 3 observations and comparison
Although the sunlight redirection system manages to redirect the light perpendicularly to the ground, the system in combination with the lens does not succeed in bundling all the rays in one focal point; in fact a linear series of focal points is noticed. This deviation is caused by imperfections in the mechanical system controlling the rotation of the mirrors. It can be derived that even minor deviations of the mirrors from the correct inclination can direct the light in a non-desired direction, and light scattering is further enhanced if the film is not evenly applied over the ABS lamellas. The imperfections in the rotation mechanism are also responsible for the presence of more accentuated shadows on the floor cast by the row of mirrors. Minor shadows are of course expected at the neutral and diverging modes of the lens due to the thickness of the mirrors, but not to the observed extent.

Light redirecting possibilities
The prototype showed that initial expectations in terms of the light redirection capabilities of the system were by far exceeded (Figure 10). In the Test 3 configuration it was possible to redirect light within the sun's azimuth alignment until reaching the wall. In the other direction it was possible to reach 90% of the space (0,5m*0,5m) with the spot. The Test 2 configuration in combination with the redirection device did not fit under the sun simulator. However, it should be noted that the higher the ceiling is located above ground, the more the range and performance are increased in terms of light redirection. Due to the height of the redirection device itself, the area of illumination is reduced the further the light beam strays from the vertical redirection configuration. Furthermore, it will also be interesting to see the interaction and lighting design possibilities of several devices together.

General findings regarding the fluid lens under clear sky conditions
The light and heat absorption by the water volume is another issue worth discussing: during clear sky conditions, the lux value on the floor surface under the opening is reduced in both Test series 1 and 2 by 31% when the lens in its neutral state is placed over the opening. The Beer-Lambert Law describes the logarithmic relationship between the transmission of light through a substance, the thickness of the medium and the wavelength of the light, showing that the intensity of light decreases exponentially with increasing water depth (Ryer, 1997) (see the sketch below). Taking into account that the light absorption coefficient of water for violet light (380nm wavelength)
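The excerpt breaks off above; as a minimal illustration of the Beer-Lambert relation referred to in the last paragraph, the sketch below uses placeholder values for the absorption coefficient and water depth, not figures from the paper.

# Beer-Lambert attenuation through the water volume of the lens:
# I(d) = I0 * exp(-mu * d). The absorption coefficient mu is
# wavelength-dependent; the value below is a placeholder.
import math

def transmitted_lux(i0_lux, depth_m, mu_per_m=0.1):
    """Illuminance after passing through 'depth_m' of water."""
    return i0_lux * math.exp(-mu_per_m * depth_m)

# e.g. the ~31% reduction reported for the neutral lens would correspond
# to exp(-mu * d) of roughly 0.69 for the prototype's effective water depth.
print(transmitted_lux(1000.0, 0.05))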
INTRODUCTION/RATIONALE
During the design process, architects are asked to predict and evaluate future building performances related to a large number of functional, typology-based and organization-based requirements. To support their design decisions, architects usually rely on functional programs, which are interpretations of the requirements of the organization that will occupy and use the building, namely: the main activities of the future building's occupants.
In the past, this interpretation process has been mainly supported by normative methods, regulations and general design rules. Nevertheless, the domination of normative approaches has shown its limits in light of the increasing complexity of building design and typology, and the intrinsic complexity of human-building interaction (Koutamanis and Mitossi, 1996). As a matter of fact, its high level of abstraction is not well-suited to the intrinsic uniqueness and context-dependence of an architectural product. Specifically, basing design decisions on a set of averaged parameters, in the assumption that the building will satisfy future users' needs (Zimmerman, 2003) much like "similar" buildings have done in the past, often fails when real users, who may differ from the "average" user in many ways, finally meet the building.
Architects' ability to predict in which manner their design will be used, and whether it will match the activities of its intended users, is currently only supported by the architects' own expertise and imagination. Sadly, the consequences are clearly recognizable in reality: too often buildings do not perform as expected after their construction, and sometimes they completely fail to support the activities of the organizations that will occupy them.
The observation and analysis of human behavior in built environments is usually considered the best way to understand and evaluate how a building
in Figure 1), a gateway formalizes the necessity of checking the patient's presence and, in case of his/her absence, it adapts the use scenario by directing the doctor to the next patient.

The BPMN ability to encapsulate activities in sub-processes also allows us to manage complex processes and to reuse the same activity structure several times. At the same time, non-structured or intermediate activities (such as "using the restrooms" or "having a walk") (Tabak, 2008) are represented by means of ad-hoc sub-processes that can be invoked during the actual simulation according to probabilistic curves (Figure 2).

In order to make the process representation more flexible and adaptable to different systems (meant as building + activities + users), we also used BPMN messages and signals to stop and restart different sub-processes depending on specific conditions or events. The BPMN system allows us to export (via XML) and execute the represented building use process in external simulation environments and to use it as input for such systems. For the development of the building use scenario in the BPMN environment, we chose to use Bizagi, a freeware business process modelling software [1].

SIMULATING BUILDING USE PROCESSES IN 3D VIRTUAL ENVIRONMENT
The BPMN approach is a valid way to represent and simulate processes composed of one or more flows of activities involving several actors. Nevertheless, the BPMN representational system alone is not sufficient to effectively represent a building use process. In fact, it is not able to take into account, and consequently simulate, how the use process (meant as a set of activities and actors involved) is influenced by the built environment, and how it will actually be carried out in it. In order to provide architects and clients with a reliable prediction of how the building users carry out the defined activities, we chose to integrate the BPMN representation with a 3D simulation environment, where the formalized use process is effectively simulated within the built environment provided by the architect. In this environment (developed by means of the game engine Virtools [2]), the building use process, previously formalized in an abstract way in the BPMN system, is connected to the virtual model of the built environment where its activities are supposed to be performed.

To compute and simulate the use scenario developed in the BPMN model, a specific script has been developed in the Virtools game engine by means of behavioral blocks: visual programming blocks that correspond to the different activities represented by them. In Virtools' scripting environment, we chose to develop a specific programming level for the formalization and computation of the use scenario, its role being to guide and control the execution of the sequences of activities, adapting their performance to the environment and to the status of the users involved. It also enables the control and simulation of serendipitous events, triggered by the physical (actually, geometrical) proximity and location of the actors within the simulated built environment (Figures 3 and 4). Such chance encounters may trigger different performance paths. In addition, we chose to equip activity entities with specific scripts to simulate their performance in order to coordinate actors' actions and cooperation. This is a fundamental difference from previous agent-based models, where the activity simulation is generated by the sum of autonomous actions and decisions of the users, with several limits in terms of manageability and coherence of the output.

To improve the adaptation of the scenario simulation to the built environment and its status, we chose to integrate the scenario script with some agent-based components, intended to control some autonomous aspects of the virtual users' behavior (for instance, path decisions, walking actions, obstacle avoidance, and local interactions with other entities such as doors or other agents). This choice has two main advantages: resemblance to the visual reality of the resulting simulated phenomenon, and improved manageability of the computation system. The first consists of the possibility to reduce the rigidity of a process-driven simulation by including variations related to single actors' behaviors, actions and decisions. In that way, we can simulate serendipitous events generated by the interactions of
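The scheduling idea described above, in which intermediate activities are interleaved with the structured use process according to probabilities, can be sketched in a few lines of code. The following Python fragment is not the authors' Virtools or Bizagi implementation; the activity names and probabilities are invented placeholders standing in for a use process exported from BPMN.

    import random

    # Hypothetical activity sequence, standing in for a use process exported from BPMN (via XML).
    main_process = ["enter_ward", "visit_patient", "update_chart", "visit_patient", "leave_ward"]

    # Intermediate, non-structured activities invoked probabilistically (cf. Tabak, 2008).
    intermediate = {"having_a_walk": 0.10, "using_the_restrooms": 0.05}

    def simulate(process, intermediate, seed=42):
        """Interleave probabilistic intermediate activities with the structured process."""
        rng = random.Random(seed)
        timeline = []
        for activity in process:
            for name, p in intermediate.items():
                if rng.random() < p:          # chance event before the next structured activity
                    timeline.append(name)
            timeline.append(activity)
        return timeline

    print(simulate(main_process, intermediate))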
Figure 4: The simulation of the building use process in the 3D visualization environment of Virtools.
ing in a defined physical environment, thereby introducing the environmental constraints that affect the process and contribute the important element of serendipity. The 3D visualization of how such a use process is effectively performed by future building users is helpful in making the results accessible to the experts who must judge the outcomes of the simulation.

Differently from previous activity-based models, where the use process is entirely computed beforehand and then merely visualized, in the proposed model the use scenario is computed in real time during the simulation, providing a better adaptation of the sequence of activities to the built environment and its occupants and, consequently, a more coherent and reliable simulation output.

By providing a real-time simulation of users' behavior within a defined physical environment, although limited to specific use cases and processes, this simulation model would support:
• Architects and clients in evaluating the functional performances of a design solution before it is actually built, leaving them the possibility to correct errors and solve critical points;
• Process planners, analysts and building managers in testing different workflows and operational procedures, and in testing different configurations of human resources, such as the number of workers, their profile and specialization, and their scheduling.

So far, the research shown in this paper has mainly focused on the simulation of users' behavior in terms of activity performance and operational management. It would be interesting in follow-up research to introduce social and environmental psychology data into the simulation model, in order to provide a more comprehensive and reliable prediction of users' life and activities in buildings.

REFERENCES
Archer, B 1966, Activity Data Method: a method for recording user requirements, Ministry of Public Buildings and Works, London.
Carrara, G, Kalay, YE and Novembri, G 1986, 'KAAD - Knowledge-Based Assistance for Architectural Design', Teaching and Research Experience with CAAD - Proceedings of 4th eCAADe Conference, Rome, Italy, pp. 202-212.
Cohen, U, Allison, D and Witte, J 2010, Critical Issues in Healthcare Environments, Research Report for the Center for Health Design, Concord CA.
Eastman, CM and Siabiris, A 1995, 'A generic building product model incorporating building type information',
INTRODUCTION
A better understanding of urban aerodynamics will positively influence design decisions in architectural and urban projects. The wind flow and dispersion through a city determine environmental air quality, wind pressures on buildings, urban heat islands, pedestrian comfort, and the ambient noise level in the surrounding environment (Boris, 2005; Zaki et al., 2010). However, only a few existing techniques have been developed to deal with the habitability and comfort issues due to strong wind conditions in pedestrian areas (Cochran, 2004). These are mainly applied at the urban planning level or by introducing trees and shrubs as vernacular shelterbelts. Studies have been done on how the aerodynamic characteristics of windbreaks can be used to resolve pedestrian comfort issues (Gandemer, 1981).

This project was conducted as an intensive three-week cross-disciplinary elective in the School of Architecture and Design, RMIT University, offered to architecture, landscape architecture, and engineering students. The outcomes of the explorations include:
• Wind maps of the sites (a major intersection located at the northern axis of Melbourne and the alleyway at the rear entry of the RMIT University Design Hub building), derived from data captured using handheld probes and a weather station
• Analysis and evaluation of the performance of a series of windbreak options designed for
PROJECTS: ANALYSIS, DESIGN AND OUTCOMES
The two main approaches to developing a wind control structure were the exploration of porous patterns as a wind filter and the concept of a shell as a wind deflector. The challenge was to not only design a structure to control the negative effects of wind detected on the site, but also to produce a low impact on the surrounding space. This meant that aesthetics is an important element alongside the functional aspect of the windbreak.

Milk.Crate.Break Project (by Tamara Cher, Xuan Son Nguyen, Romy Peterfreund)
The site for prototype 1, Milk.Crate.Break, is at the rear entry of the RMIT Design Hub (Swanston Street entrance). It is a narrow alleyway close to an alcove with an operable door opening. Through a study of the historic data of the site it was found that there is a prevailing wind blowing through the site, its direction alternating depending on the season (Figure 1). When the door is opened, the sudden pressure difference produces a noticeable wind gust into the building.

The study of the site (Figure 1) was focused on the investigation of the differences in wind conditions between the summer and winter seasons, as well as on analysing their impact in this area.

The first finding of this study showed that the wind has two predominant directions (north and south). This meant that the wind passing through the passage leading to the entrance changes its direction throughout the year. The effect on the entrance is the same, but the design of the windbreak should deal with both wind directions.

Using Vasari as CFD software it was possible to visualise how the gust of wind had a curved movement, producing two separation zones with low-pressure areas. One of these areas coincided with the entrance of the building, producing an inflow of wind when the gate was opened. These simulations were validated with measurements of the wind speed using the anemometer on the low-cost weather station. On the day of data collection it was found that the wind close to the façade had an average speed of 3.7 m/s at 2 m height, but this velocity increased up to 4.4 m/s when it was blowing through the passage (Figure 2).

The first approach was to use a shell or canopy to deflect the wind in order to maintain the low-pressure area in front of the entrance. Later the design process progressed to the design of a skin with some kind of porosity to control the wind speed. This porosity concept evolved from a surface with simple patterns of holes to different patterns with a variable density of porosity (Figure 3), following the indications of the studies by Gandemer (1981) on windbreaks.

As part of the project the students were required to construct a section of their design at full size and install it on site. The Milk.Crate.Break team decided on the front section of the design to be constructed
at the full scale. This had the implication that a new structural system was required for the design. Through further exploration the students decided on using milk crates as the main building material; factors that influenced their decision included: structure (modular, self-supporting, easy to assemble and disassemble), sourcing (free and readily available), and cultural significance (Melbourne's laneways). The inherent structure and porosity of the milk crates also offered the students additional design possibilities: the design was adapted to take advantage of the crate volume to produce a design with a double skin. The students explored several options of patterns for an adaptable second skin through CFD analysis. The first option had a pattern of Venturi funnels working as a diffuser to decrease the wind speed on the outflow side of the wall. The second version was a pattern of triangular petals with a simpler system of petal apertures for density control. The final design was a pattern of triangular flaps based on shark skin. This was the option chosen because the parallel triangular surfaces deflected the wind more efficiently, producing an upward air flow through the wall. The experiments compared these different patterns with 20%, 40% and 60% porosity. In each case, the pattern moves depending on the pressure of the wind, deflecting the wind upwards as well as absorbing a fraction of the wind energy (Figures 4 and 5).

The final concept was a structure with a double layer of porous skin. The first porosity layer was a regular graph design which moderated the wind speed. The second, shark-skin pattern layer reduced the wind flow close to zero. Between both layers, the internal chamber in the structure was designed to deflect the wind vertically. In this way the pressure on the structure was reduced to maintain structural stability.

The students' final design was a windbreak that spans the full width of the alleyway, 2.5 meters in height, offering protection for a large area in the proximity of the door. The design form is symmetrical to work with both wind directions. There is one opening at each end to allow access through the
Figure 3: From the left to right: the first version of shell, the second version of a porous shell, the study of porosity density.
site and into the building. The porosity system is designed for the surface of the windbreak, with a lower density at the lower part of the design to offer more protection (Figure 6).

Further testing (both simulation and tests in the physical mini wind tunnel) was conducted to confirm the functionality of the adapted design. Measurements were taken to evaluate the design.

Lyrebird Project (by Mikhail Kochev, Sara Metanios, Daisy Leung, Rico Shuyuan Zhang)
The site of prototype 2, Lyrebird, is located close to a street level entrance to RMIT University Building 14 (Swanston Street). The onsite data measurements revealed a predominant wind direction from the south to the north. The students noticed a strong and
Figure 5: From the left to right: porosity variations and effect of wind passing through the structure, final installation.
turbulent wind close to the building façades on the side of the pedestrian sidewalk. Although a number of trees are present on the street near the site, it was evident that the wind conditions were not improved for pedestrians. An erosion test of the site model was conducted in the mini wind tunnel to reproduce the most relevant phenomena. The phenomena observed were: the wind blowing along the street, the wind effect around the corner of the building, and the effect that could be produced by placing structures over the entrance (Figure 7). These simulations demonstrated a channelling effect occurring in this area, with a high level of turbulence from the friction with the buildings' walls. This was identified as an issue for the building entrances and other points around the pedestrian circulation routes. The design objective was to protect the street level entrance from the prevailing and local wind conditions.

The students drew their design inspiration from bird feathers, in particular the fan-like tail of the Australian Lyrebird. The idea was to mimic the natural curvature of the feather to form a curved shell for the entrance (Figure 8). Design iteration of the form and dimensions of the windbreak was mainly conducted through physical wind tunnel tests and CFD experiments. These digital tests were focused on finding the most efficient curvature for this roof to deflect the wind. This exploration found that a double curvature offered much better protection than a single curve (Figure 9).

In the wind tunnel test, a simple curved canopy showed a good performance in deflecting the wind without significant load on the structure: the arched shape was able to deflect and guide the wind over the building opening. This final version was chosen as the structure shown to be the more effective windbreak. These tests demonstrated that the structure does not produce strong lateral gusts and that the protected area was large enough to provide shelter against strong winds around the entrance of the
Figure 7: From the left to right: mini wind tunnel and erosion test of the proposed windbreak design, wind map of the site.
Figure 9: From the left to right: CFD tests for different curvatures, first project design and wind test of curve profiles.
Figure 10: From the left to right: CFD test and wind tunnel test for the final design.
on the final installation due to time constraints. Many factors, such as site permit restrictions and fabrication constraints, dictated that only approximately 2 meters by 2 meters by 2 meters of the design was constructed and evaluated on site. The students applied bright paint to observe how colours can intensify the visual aesthetic of the structure in the public area (Figure 12).

GENERAL EVALUATION
One positive outcome of this academic project was the opportunity to share knowledge from different fields that conduct work on the city and its current problems. The use of technology helps us to understand dynamic phenomena like wind in cities, through collecting data (both in the physical and the virtual realm, on site and through simulations) and making sense of that information. This was the intention of this project: to teach students to work with methods and technological tools to study complex phenomena.

This first part of the objective was fully achieved by the students. In the short three weeks, through the study of the literature available in the field of wind engineering and tools such as Vasari CFD and a low-tech mini wind tunnel, the students were able to gain a good understanding of the basic aerodynamic effects and to begin their own design exploration in this field. The potential of these tools for pedagogic purposes was evident. The visual interactive feedback helped the students to grasp these complex phenomena, and served as a platform for discussing the design performance. The visual documentation of the design testing process (both still images and videos) formed a large part of the presentation material for the students
Figure 12: From the left to right: prototype, Arduino platform and installation.
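The on-site validation described above compares CFD results with anemometer readings logged by the low-cost, Arduino-based weather station. Purely as an illustration (the sample values below are invented so that their averages echo the reported 3.7 m/s near the façade and 4.4 m/s in the passage; this is not code from the project), the speed-up caused by the channelling effect can be computed as:

    # Hypothetical anemometer samples (m/s) at 2 m height.
    facade_samples = [3.5, 3.9, 3.6, 3.8]
    passage_samples = [4.2, 4.6, 4.3, 4.5]

    def mean(values):
        return sum(values) / len(values)

    facade_avg = mean(facade_samples)
    passage_avg = mean(passage_samples)
    speed_up = passage_avg / facade_avg   # amplification caused by the channelling effect

    print(f"facade {facade_avg:.1f} m/s, passage {passage_avg:.1f} m/s, speed-up x{speed_up:.2f}")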
Abstract. It has been argued that traditional building simulation methods can be a slow
process, which often fails to integrate into the decision making process of non-technical
designers, such as architects, at the early design stages. Furthermore, studies have
shown that predicted energy consumption of buildings during design is often lower than
monitored energy consumption during operation.
In view of this, this paper outlines research to create a user friendly design tool that
predicts energy consumption in real-time as early design and briefing parameters are
altered interactively. As a test case, the research focuses on school design in England.
Artificial neural networks (ANNs) were trained to predict the energy consumption of
school designs by linking actual heating and electrical energy consumption data from the
existing building stock to a range of design and briefing parameters.
Keywords. Environmental design tool; energy prediction; artificial neural networks;
building operational performance; schools.
INTRODUCTION
There are many environmental 'design aids' available, with the objective of helping designers make sustainable design decisions. These design aids can largely be grouped into the following categories (Morbitzer, 2003):
• Design guidelines / rules of thumb
• Steady state calculation methods
• Correlation based methods
• Physical modelling
• Building simulation
Given that environmental design problems tend to be 'wicked' (Rittel and Webber, 1973), and thus distinctly novel and unique, rules of thumb, basic calculations and correlation methods are often inadequate techniques (Morbitzer, 2003; Pratt and Bosworth, 2011) and physical modelling has the disad-
tion from the online map software Digimap [5], Bing Maps [6] and Google Earth [7].

The building height was derived by multiplying the average number of storeys by 3.62m, the average floor-to-floor height of schools in the UK (Steadman et al., 2000). The building volume was then derived by multiplying the building height by the building footprint area, measured from Digimap [5]. Glazing percentages were measured from Bing Maps [6] images using bespoke code developed in the Processing programming environment [8].

The construction years of the buildings were collected from each school's website where available; otherwise they were derived from historical digital map software [5]. Data on schools of varying ages were collected to increase the size of the database, giving the neural network more data to learn from. A proportion of the differences in, for example, fabric quality and building systems between newer schools and older schools are likely to be picked up in the construction year neuron. Therefore, this neuron will exist within the trained network in the final design tool but will be fixed to the most recent date.

ANN ARCHITECTURE
All ANNs were constructed in Matlab [9]. The aim of the ANN method is to predict the energy consumption outputs (Table 2) based on a set of inputs (Table 1). A multilayer perceptron network was used for the study; Figure 1 shows the conceptual structure of this ANN. The hidden layer enables the system to generate nonlinear and complex relationships by intervening between the input and output neurons (Haykin, 1999). Each neuron in the input and output layer took continuous, categorical or binary values as outlined in Table 1 and Table 2. Prior to the training of the network, all continuous inputs were normalised to values between -1 and 1 to generalise the calculation process. Two ANN models were constructed, one with heating energy consumption as an output and one with electrical energy consumption as an output; both ANN models included all of the input parameters (Table 1).

A Levenberg-Marquardt backpropagation supervised training technique was used to train the feedforward network to recognise the patterns that exist in the dataset. The prediction performance of the ANN was assessed by validating the ANN against 10% of the gathered database on which the ANN had not been trained (the testing dataset). 10% of the gathered database was used to stop the training process before overlearning occurred (Demuth et al., 2008) and the remaining 80% of the database was used to train the network. The number of neu-
rons in the hidden layer were altered between 2, 4, 8, 16 and 32 neurons. Each network configuration was trained five hundred times and the ANN with the lowest mean squared error (1) was selected for further analysis. Further analysis consisted of calculating the coefficient of determination (R²) and the performance indicators below, (2) and (3):

Mean squared error (MSE): $\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2$ (same units as output) (1)

Root-mean squared error (RMSE): $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (Y_i - \hat{Y}_i)^2}$ (same unit as output) (2)

Mean absolute percentage error (MAPE): $\mathrm{MAPE} = \frac{100}{n}\sum_{i=1}^{n} \left| \frac{Y_i - \hat{Y}_i}{Y_i} \right|$ (%) (3)

where $Y_i$ and $\hat{Y}_i$ are the target and predicted outputs respectively for the training, testing or stopping configuration $i$, and $n$ is the total number of configurations in the training, testing or stopping datasets.

USER INTERFACE
Figure 2 shows a representation of the tool's user interface. The tool is currently being developed in the Processing programming environment [8]. The ANN algorithms are integrated into this environment with MATLAB Builder JA [10].

The tool allows the user to sketch the footprint of the building by clicking and dropping vertices in an input window; these vertices can later be dragged or deleted. All other inputs are entered via sliders (continuous inputs) and tick boxes (categorical/binary inputs), thereby encouraging the user to 'play' and test different options, encouraging exploration of 'in-between' solutions in the design space. The ability to gain feedback in real time results in the user being able to 'animate' the results and learn the relationships between the design inputs and energy outputs through the acceleration of change in the results as the design space is explored.

RESULTS AND DISCUSSION
ANN configurations with two and eight neurons in the hidden layer were found to produce the least prediction errors for heating and electricity energy consumption respectively. Table 3 summarises the results of the errors for the best performing ANN configurations. The electricity output was predicted with a mean absolute percentage error (MAPE) of 19.3%, while the heating output was predicted with a MAPE of 20.5%. These errors are an improvement
of 10.0% and 6.7% for heating and electricity energy consumption respectively, when compared to using the Chartered Institution of Building Services Engineers (CIBSE) Technical Memorandum 46 (TM46) Energy Benchmarks as energy performance indicators (Table 4). As mentioned in the introduction, Hawkins et al. (2012) used an ANN method to assess the energy determinants in higher education buildings in London, UK. The ANN method by Hawkins et al. produced MAPEs of 25.1% and 34.8% for heating and electricity fuel use respectively; the results from the research in this paper better these errors by 4.6% and 15.5% for heating and electricity respectively.

Table 3: Prediction errors of the ANNs, calculated from the ANN testing dataset.
ANN Output                       RMSE (kWh/m²/annum)   MAPE (%)
Heating Energy Consumption       30.5                  20.5
Electricity Energy Consumption   10.8                  19.3

Table 4: Prediction errors of the CIBSE TM46 Benchmarks, calculated from the ANN testing dataset.
TM46 Benchmark                   RMSE (kWh/m²/annum)   MAPE (%)
Heating Energy Consumption       41.3                  30.5
Electricity Energy Consumption   16.7                  26.0

Figure 3 shows scatter plots of the ANN predictions vs actual annual heating and electricity energy consumption from the testing dataset. The coefficient of determination (R²) shows that the 23 design and briefing parameters (ANN inputs) explain 39% and 41% of the variation in annual heating and electricity energy consumption of the schools respectively.

From this initial study it appears that the ANN
method is viable for predicting energy consumption in existing school buildings. Nevertheless, further research is planned to improve the performance of this method and ensure it is viable for new school designs, as outlined in the further work section.

CONCLUSION
This paper outlines research to create a user-friendly design tool that predicts energy consumption in real time as early design and briefing parameters are altered interactively. As a test case, the research focused on school design in England. Artificial neural networks (ANNs) were trained to predict the energy consumption of school designs by linking actual heating and electrical energy consumption data from the existing building stock to a range of design and briefing parameters. The initial design of the user interface was introduced in this paper.

For the energy consumption predictions, the ANN mean absolute percentage error (MAPE) was 20.5% for heating and 19.3% for electricity. The coefficient of determination (R²) was 39% and 41% for heating and electricity energy consumption respectively. The aforementioned errors were compared with another method and study and produced lower errors, as outlined in the previous section. Nevertheless, it is desirable to reduce these errors further and improve the R² values. In order to improve both the performance of the ANN method and increase the relevance of the tool, further design inputs are likely to be required. The nature of this desktop study was to collect as many design and briefing inputs as are freely available. Acquiring further inputs, such as building services and fabric data, may require direct communication with individual schools or local authorities. This process is likely to be time consuming; however, it is being pursued. Further actions to improve the ANN performance, as well as to ensure the tool is relevant to the design process and applicable to new school designs, are outlined in the following section.

It should be noted that the development of this tool does not have the objective of replacing traditional building simulation; instead it aims to act as a user-friendly sanity check for non-technical designers, such as architects, at the early design stages.

FURTHER WORK
There are a number of developments underway in order to make the method of prediction in this re-
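The indicators reported in Tables 3 and 4 can be reproduced with a few lines of code. The following minimal Python sketch is not the authors' Matlab implementation; the target and prediction arrays are invented placeholders, and the formulas follow equations (1)-(3) together with the usual definition of R².

    # Minimal sketch of the performance indicators in equations (1)-(3) plus R².
    actual    = [120.0, 95.0, 140.0, 110.0]   # e.g. kWh/m²/annum (invented values)
    predicted = [131.0, 90.0, 128.0, 115.0]

    n = len(actual)
    mse  = sum((y - p) ** 2 for y, p in zip(actual, predicted)) / n
    rmse = mse ** 0.5
    mape = 100.0 / n * sum(abs((y - p) / y) for y, p in zip(actual, predicted))

    mean_y = sum(actual) / n
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    ss_tot = sum((y - mean_y) ** 2 for y in actual)
    r2 = 1.0 - ss_res / ss_tot

    print(f"RMSE = {rmse:.1f} kWh/m²/annum, MAPE = {mape:.1f}%, R² = {r2:.2f}")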
INTRODUCTION
The development of DesignScript is intended to be a response to the following trends in contemporary design practice (Figure 1). These trends can be described in terms of the following dimensions.

Scalable to different computational skills
It is generally recognised that there are advantages in making design computation more accessible to a wider audience of designers. As software developers, we can interpret this not just as the need to make computational design tools easier to use by non-experts. In addition, there is an assumption, or indeed an expectation, that once designers have successfully accomplished some initial tasks, they will be interested in progressing from an exploratory approach to more formal programming methods. Therefore, we need to step back from the immediate requirement (to make computational design tools more accessible) and consider a more general requirement, which is to support a progression in the development of
Figure 1: The three dimensions describing critical aspects of contemporary design practice.
Figure 2: Starting with a conventional graph node diagram…
Figure 4: To turn the "nodes into code"…
On the one hand there are advantages in having design applications support more domain specific functionality: not in isolation, but rather integrated into a single common application framework and capable of acting as a computational intermediary between the different members of a multi-disciplinary design team.

On the other hand there are advantages if design applications could step back from being too domain specific and support a 'return to first principles' by exposing both computational and geometric abstractions directly to the designer.

Scalable to projects of different size and complexity
Design concepts often start as disarmingly elegant and simple ideas which can easily be explored with lightweight models or scripts. But to be realised as a physical building, these ideas necessarily have to be developed into complex building models with hundreds of thousands of individually detailed components.

In some cases the 'concept' is the overall form of the building, but increasingly the design concept may be an engineering principle or an objective to achieve a particular combination of performance metrics, or the use of a particular generative algorithm distributed within the individual components. In these latter approaches to the design, the architectural 'form' may emerge as a consequence of the design process rather than imposed 'top-down'. In addition, the resulting design representation may not be a traditional 'building model' primarily intended to support conventional drawing extraction. Rather it may be a series of 'geometric normalisations' intended to be the minimum information required for a direct digital fabrication process or robotic construction. Indeed conventional workflows (or data flows) are being supplemented by innovative project specific processes. Because of the essential 'open-endedness' of this new form of design, it is increasingly important that design applications are similarly 'open-ended'.
variable might be created (say as a curve) and then 'modified' by being trimmed, projected, extended, transformed or translated. Without the concept of modifiers, each state or modelling operation would need to be a separate variable, and this would force the user to make up names for all these intermediate variables. Modifiers avoid imposing this naming process on the user.

Combining dependencies, replication and modifiers
Dependencies, replication and modifiers can all be combined to represent the typical modelling operations found in architecture and construction. Buildings are composed of collections of components. Typically these collections are the product of a series of standard operations applied across all members. On the other hand, within such collections there may be special conditions where different or additional modelling operations are required to be applied to a sub-collection of members. Modifiers enable these special conditions to be identified and the additional modelling operation applied.

To give an example, imagine that a building façade is based on a set of polygons. The polygons will be the 'support' geometry for the façade panels. However, in this example those polygons which are 'out of plane' by some critical dimension require a special modification before being used as the support for the corresponding façade panels (Figure 6). The designer may want to apply a special operation but only to the 'out of plane' polygons, and the application of this operation should not alter the particular polygon's membership of the collection of polygons. In this context all the polygons have a common defining operation, but some polygons have an additional 'modifier' operation applied.

Having applied the special modifier operation to the specific polygons, the designer can use the whole collection of polygons to generate the collec-
Abstract. The paper presents a developed method and algorithm to create environmentally sustainable optimised forms based on the solar energy received, in relation to receiving, containing and distributing energy. Different studies are created based upon this approach, in which forms are evaluated against conventional building geometries. The work shows a significant improvement in several aspects of environmental performance. Lastly, the work presents an idea of maximum structures, rather than minimum structures, as a path for future research work.
Keywords. Sustainable environmental architecture; performative generative algorithms; simulation; material distribution.
INTRODUCTION
The overall form of a building describes not only its architectural language, but also to a large degree its capacity to become environmentally sustainable in relation to its local climatic environment. The movement of the sun in relation to the Earth, its surface orientation and its mass constitute the weather conditions locally and globally, as they regulate air flow, heat accumulation, heat transfer etc. (Oke, 1987). These relations, solar geometry and mass, form the environment for life and its rhythms. The solar environment is thus the single most important factor in relation to climatic environmental architecture.

The creation of architectural building forms as derivatives of the solar environment thus has the potential of improving the context-specific reception and containment of energy towards environmentally sustainable architecture. If the position of the sun in relation to the Earth's surface orientation and its mass properties can determine local environments, as briefly described above, then the same could be activated as a strategy for the generation of sustainable environmental architectural forms and their mass properties. The argument that form and distribution of material have the ability to lower the energy used in a building by up to 80% (Petersen, 2012), combined with the urgent need for environmentally adapted architecture, motivates the research.

Previous work
A recent trend in sustainable buildings in northern climates has largely been to design so as to minimise surface-to-volume relations while increasing energy uptake, as seen in the Lighthouse project from 2009 by Christensen & Co Architects, which uses its circular form as an optimum sustainable geometry [1]. While this is currently an advanced example of sustainable architectural form in the built environment, the design of architectural forms remains largely based on 'simple' design principles.

Digital techniques in simulation and generation of architectural form have evolved rapidly over the
longer distance is between the elements, effectively extending the combined surface oriented towards the energy source. The progressive formation creates elements (a). Following a similar method, but with a fixed distance between a centre point and a created element, a circular form with an optimum surface-to-volume relation is formed. The progressive formation creates elements (b). The formation located between elements (a) and (b), denoted elements (c), serves as an equilibrated formation. In the studies performed in this paper, (c) is always at half the distance between (a) and (b). From (c), solid matter is distributed according to the description below.

Constructor: Matter distribution, creating mass algorithm
Following the creation of form, material properties are applied to the generative model through three different aspects: 1) u-value, the heat transition coefficient, 2) g-value, the solar gain coefficient, and 3) thermal mass. These are selected based on their direct reference to established architectural terminology, and from a sensitivity analysis of the most influential passive factors for sustainable architecture (Petersen, 2012).

The factors are plotted into a scheme, Figure 2, in relation to solar irradiance from above. The scheme is related to the northern hemisphere, in which southern orientations are affected by higher solar gain. In cases of high irradiance, materials that allow high solar gain can be low, etc. The scheme could be reconfigured for other locations and paired with the above form creation algorithm, generating other results than those presented below.

Merging the above into one model, we have the following algorithmic procedure:
1. Distribute test elements around the center (small inner circle around the center point)
2. Calculate the element angle vector from the center to the distributed element
3. Calculate the sun vector
4. Calculate the angle between the element (a) vector and the sun vector
5. Calculate the radiation energy on each element based on Lambert's law (a minimal sketch of this step follows the list)
6. Distribute elements from the center based on the quantity of solar energy at each test
7. Distribute element (b) from the center with the same radius, creating a circle
8. Calculate the distance between elements (a) and
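A minimal sketch of step 5 above, assuming Lambert's cosine law (the irradiance received by an element scales with the cosine of the angle between the element's direction vector and the sun vector). The vectors and the irradiance value below are invented for illustration and are not data from the study.

    import math

    def radiation_on_element(element_vec, sun_vec, direct_normal_irradiance):
        """Lambert's cosine law: E = E_dni * max(cos(theta), 0), theta between the two vectors."""
        dot = sum(a * b for a, b in zip(element_vec, sun_vec))
        norm = math.sqrt(sum(a * a for a in element_vec)) * math.sqrt(sum(b * b for b in sun_vec))
        cos_theta = dot / norm
        return direct_normal_irradiance * max(cos_theta, 0.0)

    # Hypothetical values: a horizontal element vector, a mid-morning sun vector, 800 W/m².
    print(radiation_on_element((1.0, 0.0, 0.0), (0.6, 0.3, 0.74), 800.0))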
Abstract. This paper addresses the integration of the daylight effect during the early stages of the architectural design process. The first part presents a design assistance method that helps designers to characterize their daylight intentions and materialize them in architectural solutions. In this part, we describe the implementation of this method in a design tool, denoted DaylightGen, the implemented process and the different system components. The second part of this paper focuses on the investigation of the potential of the proposed method in the design process. It was evaluated in an educational design case study. This part presents the experimentation process and its results.
Keywords. Computer aided architectural design; intentions oriented design; generative and parametric design tool; daylight simulation tool; design tool experimentation.
Figure 2: DaylightGen implementation tools.
generation step is organized as an iterative process of generation, evaluation, mutation and selection activities. The fourth step of the DaylightGen method process is the "assessing (4)" step, where the generated solutions are visualized and presented to the designer as a result of the "generate solutions" step. The "assessing" step integrates five activities where the designer visualizes and navigates in the collection of generated solutions, evaluates and compares them and finally selects the best ones. The method process ends with the "personalize solution (5)" step, where the designer can modify the generated solution features and transform them to integrate new ideas. The modified solutions are visualized and evaluated in an iterative manner. The "personalize solution" step accompanies the designer and ends when he is satisfied.

THE DAYLIGHTGEN TOOL
This method is implemented in a design assistance tool denoted DaylightGen. This prototype is composed of three tools: Day@mbiance, DaylightBox and DaylightViewer. Day@mbiance is used to implement the "declare intentions" step activities, DaylightBox materializes the "characterize intentions" and the "generate solutions" activities, and finally the DaylightViewer integrates the "assessing" and the "personalize solutions" step's activities (Figure 2).

Day@mbiance
Day@mbiance is a navigation tool in a reference image base proposed by Salma Chaabouni (Chaabouni et al, 2008). The image base is structured as a MySQL® database and managed by Mamp®. The navigation in the image base is realized by a PHP® application with a Flex® interface. A web browser (Firefox®) is used to visualize the Day@mbiance functions and results (Figure 3).

Figure 3: Day@mbiance implementation modules and environment.

Day@mbiance is used to identify the designer's daylight intentions. Its process starts with a first
mosaic of images representing daylight effects. The designer chooses images that represent his intentions, refuses those that are the opposite, and leaves the others neutral to finally generate a new mosaic that takes his choices into account (Figure 4). All images are indexed by keywords that describe the visualized architectural configurations and daylight effects. This process is repeated until the designer finds a collection of relevant images that corresponds to his intentions.

All images used by Day@mbiance are indexed using a keyword collection structured in a thesaurus. The thesaurus is divided into five facets that describe all image features: the daylight effect type, the quality and quantity of daylight, the space surface aspects and the space function. The indexation process is realized with Image (software developed by Pascal Humbert from MAP-CRAI) (Figure 5).

The user's choice is then characterized by a set of relevant keywords. The keywords used to index the images have a pertinence weight that varies between -1 (not relevant) and 1 (relevant) (Halin, Créhange and Kerekes, 1990). When the designer chooses an image, the pertinence weight of the keywords used to index it increases; when the image is refused, the weight decreases; and the weight stays the same if the image is left neutral. The pertinence weight of the keywords is used when Day@mbiance generates a new mosaic to take the designer's preferences into account.

DaylightBox
The DaylightBox tool is implemented in the Rhinoceros® modeler environment and its graphical algorithm editor Grasshopper® (Tedeschi, 2011). DaylightBox is a Grasshopper® definition that integrates six modules: a referenced images base (Day@mbiance images base), a daylight effects knowledge base (knowledge base), a parametric model (geometry), a generative algorithm (Galapagos®), a daylight simulation tool (simulation) and a solutions database (solutions storage) (Figure 6).

The first module, "Day@mbiance images base", is a cluster that integrates a plug-in to connect Grasshopper® to the picture base used by Day@mbiance. This module selects the most significant keywords
that characterize the represented daylight effects in order to highlight the designer's intentions. After that, the designer selects one of the identified daylight effects to start a solutions generation process. The second module, "knowledge base", is used to identify and characterize the designer's intentions. This knowledge base contains the quantitative and qualitative features of different recognized daylight effects. These features are integrated in a fitness function attached to each known daylight effect. It is composed of variables that characterize solar gains and their spatial distributions.

The third module, "geometry", is a parametric model of parallelepiped shape defined by thirteen parameters (Table 1). These parameters define all the spatial features that influence the daylight behavior. The model parameters are implemented as sliders that determine their data types and their variation ranges.

The fourth module is a generative algorithm (Galapagos®) that controls the parametric model features to generate solutions satisfying the fitness value. The genetic algorithm uses the fitness function and its objective value (fitness value) to optimize
Figure 6: DaylightBox modules and software environment.
the solution generation process.

The fifth module, "simulation", integrates the plug-in Diva-for-Rhino® (Jakubiec and Reinhart, 2011) to connect Grasshopper® to the Radiance® simulation software. The system process iterates on a cycle composed of three main steps (a minimal sketch of this loop is given after the DaylightViewer description below):
1. The genetic algorithm Galapagos® finds architectural parameter values using the selection, crossover and mutation operators. It optimizes the generated solution behavior and tries to reach the fitness value.
2. The parametric model "geometry" generates architectural models defined by the parameter values provided by the genetic algorithm.
3. The simulation module analyzes the daylight features of the geometry generated by the parametric model. The simulation results are used to compute the fitness value.
The generating process ends after a fixed number of generations. All the generated solutions' features (parameters and fitness values) are stored in a MySQL® database using the sixth module, "solution storage". This module creates a link between Grasshopper® and MySQL® using the Slingshot® [1] plugin.

Figure 7: DaylightViewer modules and software environment.

DaylightViewer
The DaylightViewer tool is implemented in the Rhinoceros® modeler environment and its graphical algorithm editor Grasshopper®. DaylightViewer is a Grasshopper® definition that integrates three modules: a visualization interface defined by the "visualization" module, a simulation module "simulation_eva" and the parametric model "geometry_per" (Figure 7).

The first module, "visualization", imports the best solutions according to their fitness value (the solutions with the lowest fitness value). The selected solutions are visualized and organized in a colored grid from the best to the worst one. The user selects the number of solutions to visualize and navigates through the visualized solutions (using Rhinoceros® visualization windows) to select those corresponding to his intentions. The second module integrates a simulation tool that performs realistic and quantitative simulations in order to verify that the chosen solutions produce the described daylight effect. The third module is composed of geometrical operators that can be used by the designer to transform the proposed solutions. The module "geometry" presents the features of the selected solution and the list of sliders to modify the parameter values. The transformed solution can be evaluated (realistic and quantitative simulations) and exported as 3D geometrical objects (baked from Grasshopper® to Rhinoceros®) (Figure 8).
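To make the DaylightBox generation cycle described above more concrete, here is a minimal, hypothetical Python sketch of the loop. It only stands in for the Galapagos®/Grasshopper®/Diva-for-Rhino® pipeline: the parameter ranges, the fitness target and the simulate_daylight placeholder are invented, not part of the tool.

    import random

    # Hypothetical parameter ranges standing in for the sliders of the parametric model.
    RANGES = {"room_depth": (3.0, 10.0), "window_width": (0.5, 4.0), "window_height": (0.5, 2.5)}

    def random_solution(rng):
        return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

    def simulate_daylight(params):
        # Placeholder for the Radiance/Diva-for-Rhino simulation of the generated geometry.
        return params["window_width"] * params["window_height"] / params["room_depth"]

    def fitness(params, target=0.8):
        # Distance between the simulated indicator and the intended daylight effect.
        return abs(simulate_daylight(params) - target)

    def mutate(params, rng):
        name = rng.choice(list(RANGES))
        lo, hi = RANGES[name]
        return {**params, name: rng.uniform(lo, hi)}

    rng = random.Random(1)
    population = [random_solution(rng) for _ in range(10)]
    archive = []                                  # plays the role of the solutions database
    for generation in range(20):                  # the process ends after a fixed number of generations
        population.sort(key=fitness)
        archive.extend(population)
        parents = population[:5]
        population = [mutate(rng.choice(parents), rng) for _ in range(10)]

    best = min(archive, key=fitness)
    print(best, fitness(best))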
sign sessions videos. We identify the major design activities that participate in the design process and their chaining. All design activities and the design supports used are transcribed in timeline diagrams (Figure 10). The analysis of the diagrams reveals three design approaches (Figure 11). The first one (used by groups 1 and 2) starts by formulating the design problem, implanting the project, formulating and declaring daylight effect intentions using Day@mbiance, identifying and selecting a daylight effect, generating a solution and proposing a spatial configuration for the project. The second approach (used by groups 4-5-6-7) starts by formulating the design problem, implanting the project, proposing a spatial configuration, formulating and declaring daylight effect intentions using Day@mbiance, identifying and selecting a daylight effect, and generating a solution. The third one (used by group 3) starts by formulating the design prob-
Figure 10: Example of design activities diagram.
Figure 11: Design approaches.
lem, formulating and declaring daylight effect intentions using Day@mbiance, identifying and selecting a daylight effect, generating a solution, implanting the project and proposing a spatial configuration for the project. All design approaches end in the same way: the designers evaluate and personalize the generated solutions, reshape the proposed project, finalize the project and debrief the experimentation session.

The participants in the experimentation use different design supports to formulate design intentions and materialize them in architectural solutions. They associate the mosaic navigation activity, manual sketches and oral expression to make precise the daylight effect intentions that correspond to the project constraints. The generated solutions grid helps students to explore and define new design issues. The navigation in the solutions grid helps students to locate and identify interesting solutions that could be implemented in their projects (Figure 12). Students express their interest through manual gestures and oral expressions like "this is small, it concentrates daylight", to identify and describe a solution with one small aperture in the corner, or "this is a jail effect", to describe a solution with two long and fine vertical apertures on the right and left sides of the aperture face. The students use the evaluation function to visualize the daylight effect generated by the selected solutions and verify whether they correspond to the described intentions. They run different simulations at different times and for different sky conditions.

The navigation on the solution grid and the evaluation of the selected solutions give the participants new ideas, which were integrated using the modification and personalization functions proposed by the DaylightViewer tool. They used these functions to combine different configurations and exceed the parametric model limits. Group 3 used the modification and evaluation functions and Photoshop® to create a new architectural solution with apertures on three faces, which could not be realized by the generative model (apertures on only one face) (Figure 13).

The best generated and personalized solutions were integrated and implemented in the project by analogy. The final daylight effect generated was
Figure 13: Combine generated and personalized solutions.
evaluated using the shadow visualization functions of Sketchup® and the daylight simulation plug-in (Diva-For-Rhino®) of Rhinoceros® (Figure 14).

2. Questionnaire answers analysis
The questionnaire answers were used to identify what users think about the design assistance method and about the functions proposed by the different tools participating in the process. Most participants declare that the use of images to identify the daylight effect intentions is well adapted to the conceptual design steps. They say that images constitute a first level of the implementation process of design intentions. The students are satisfied by the generated solutions, which verify the described intentions at different levels of accuracy. The students' answers reveal that the parametric model used for the generation activities needs to integrate more aperture types and more precise functions (multiple apertures with different sizes and shapes, apertures on different surfaces, integration of personal architectural configurations in the generation process).

The majority of participants consider the evaluation and personalization functions very useful because they allow users to reshape proposed solutions and to integrate new ideas. Students consider that the evaluation of these new solutions helps the designer to create an iterative process that makes the project design progress.

CONCLUSION
This paper presents the implementation process of the DaylightGen method, the choice of the software environment, and the modules and components used to create the design assistance tool. It also presents the different steps, devices and results of the experimentation process used to validate the DaylightGen method and tool targets. The experimentation results validate the capacity of the proposed method and tool to assist daylight integration during the early design steps. These results confirm the possibility of using design intentions as basic information in design assistance tools. The experimentation results reveal some limits that concern the number of identified daylight effects, the fitness function precision and the parametric model possibilities, which could be developed in future work. These results show that the proposed method could be improved and adapted to a professional design context.

ACKNOWLEDGEMENT
The authors are grateful to Alstan Jakubiec and Jeff Niemasz for their help with the Diva-For-Rhino plug-in integration. This research work is funded by La Région Lorraine, France.

REFERENCES
Chaabouni, S, Bignon, JC and Halin, G 2008, 'Supporting ambience design with visual references', in Architecture in Computro, presented at the Education and Research in Computer Aided Architectural Design in Europe, Antwerpen (Belgium).
Gallas, M, Halin, G and Bur, D 2011, 'A "green design" method to integrate daylight in the early phase of the design process: The use of intentions knowledge base to generate solutions', in Respecting Fragile Places, presented at the Education and Research in Computer Aided Architectural Design in Europe, University of Ljubljana, Faculty of Architecture (Slovenia).
Halin, G, Créhange, M and Kerekes, P 1990, 'Machine learning and vectorial matching for an image retrieval mod-
INTRODUCTION
One of the challenges in performance-driven design methodologies is the way that designers can effectively integrate simulation and optimization techniques with parametric design and generative procedures (Oxman, 2008). This challenge can also be attributed to the lack of tools to support effective knowledge integration in Computer Aided Design (CAD) techniques and methods (Cavieres et al., 2011). In design practice, this gap is theoretically bridged via simultaneous consultations with engineers and specialists. However, for many design problems this concurrency might not be achievable
• The decision(s) or input(s) of each sub-procedure are used as common inputs for more than one of the sub-procedures, whenever and wherever needed.
• The final translated output of each of the sub-procedures would automatically or semi-automatically be processed as the input of the next sub-procedure(s) (see the sketch after this list).
• After each single measurement or evaluation module there is either a visualization for alerting or a feedback loop to the previous stages.
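As a minimal illustration of this chaining, the following Python sketch passes each sub-procedure's result to the next one through a shared text file, with a simple alert check in between. The function names, file names and parameter values are invented placeholders; they do not reproduce the Grasshopper and Matlab implementation described below.

    # Hypothetical sketch of chaining sub-procedures through shared text files,
    # with an alert check before the hand-off; names and values are invented.
    def define_design_domain(path="domain.txt", resolution=40):
        if not (10 <= resolution <= 200):           # pre-defined acceptable range
            raise ValueError("alert: mesh resolution outside the accepted range")
        with open(path, "w") as f:
            f.write(f"resolution={resolution}\nspan=6.0\nheight=2.0\n")
        return path

    def material_distribution(domain_path, path="densities.txt", remaining=0.5):
        with open(domain_path) as f:
            domain = dict(line.strip().split("=") for line in f)
        # Placeholder for the topology optimization step; here we only echo the inputs.
        with open(path, "w") as f:
            f.write(f"remaining={remaining}\nresolution={domain['resolution']}\n")
        return path

    densities = material_distribution(define_design_domain())
    print("output passed to the next sub-procedure:", densities)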
The detailed descriptions of each of the sub-procedures are as follows:

Definition, design domain, discretization and load condition [A]
In this phase, the designer defines the geometrical properties on which the supports and loads can be parametrically added and modified. These properties are, so far, the span and the height of a cantilever or a beam, with either upper or lower distributed or point loads on the sides. However, the process in this stage and the other stages is designed in such a way that more irregular initial shapes are also possible to implement, by simply removing some portions of the initial planar design domain. The main inputs in this sub-procedure are the dimensions, the magnitude and coordinates of the loads and supports, and the mesh resolution (Figure 3a). Since this mesh resolution is indeed the discretization of the design domain for the following FEA, the acceptable resolution is a variable depending on the available computation time and power and the desired refinement. The output is a two-dimensional matrix or data list in .txt format that contains the relative dimensions of the geometry based on the discretization resolution, and the magnitude, relative coordinates and calculated DOFs of each load position based on the defined resolution. This step is done through visual programming using Grasshopper in Rhinoceros. The generated geometry attributes and alert messages (if either the geometry or the resolution is not within some pre-defined range) are simultaneously visualized (Figures 3b and 3c).

Material distribution (MD); topology optimization [B]
In this stage, the goal is to find the optimal material distribution of the discretized generated design. This step is done in Matlab and is based on the implementation and development of a topology optimization code, originally written by Sigmund (2001), with the purpose of solving linear compliance minimization using an optimizer and a finite element subroutine. Modifications to the code are set up with the objective of making it compatible with the input data files and of supporting interoperability of the output with the next sub-procedures.

The geometrical properties, DOFs and loads are automatically called in the code, and what has to be defined by the designer is the percentage of total remaining material. Consequently, two parallel results are the outputs of this phase: one, a set of images that in real time illustrates the results of the material distribution simulation, and the other, a set of
excel spreadsheets, in which numerical values rang- (Figures 5a and 5b). Although in the initial visualized
ing from zero to one are stored. In the tested cases, topology the lines are detectable with the eyes of
four spreadsheets, respectively, with 30, 40, 50 and the designer, they are not automatically distinguish-
60 percentage of remaining material have been the able for the CAD platform. So one of the main crucial
final outputs. In order to make this process more challenges here was to extract the nodes and define
semi-automatic, further modifications can also be the bars by using and developing appropriate algo-
done in the code to pre-define the range for remain- rithms in a way that the topologies do not change.
ing material in previous sub-procedures (Figure 4). This implies that if in a resultant image we see nine
white polygons in the resultant vector geometry we
Typology, defining the type of structure [C] should have also the same condition. Finally, the
The goal in this sub-procedure is to translate dis- output is a matrix as .txt file with the required infor-
crete or pixelated geometry, which is the result in- mation of nodes, bars and load conditions in the de-
formation from the topology optimization to a vec- sired format (Figure 5c).
tor-based geometric system with nodes and lines Figure 6 illustrates the applied and developed
Figure 5
a: the converted spreadsheet
to discrete geometry, b:
extracted nodes and bars
of same design domain, c:
output set containing infor-
mation on nodes, connectivity
and load condition.
methods for extracting the nodes from the result- gos (evolutionary solver) for finding the shapes with
ant discrete geometry. After reading the float values optimum areas. By having the straight lines of the
on the spreadsheets and re-visualizing the results positive shapes (Figure 6f ), it would also be possible
through using visual programming in Grasshopper, to develop and apply a skeletonization technique
and tagging each cells with its corresponding zero based on Voronoi algorithms (Aurenhammer and
to one value, a filter separates the cells into two lists Klein, 2000) to get axial curves with similar original
of data. The reason for having this buffer is to let the topology (Figure 6g). Then by means of a Boolean
designer find the appropriate continuous topology gate the generated points through skeletonization
similar to the image result but this time composed algorithm can be achievable in a separate point
of surfaces with the size of defined resolution. For cloud list (Figure 6h). After connecting the points
instance, in the Figure 6 this filter value is 0.3, which to their neighboring, the nodes are those which has
means that all values less than this would be within three or more connections. Therefore, another algo-
a list to create the negative shape and those cells rithm is developed to automatically detect nodes
with values equal or more than this threshold will based on the numbers of connected neighbors
create the positive shape (Figures 6a-c). (Figure 6i and 6j). Subsequent to this step another
In the next step, after joining the negative optional procedure is also developed in which the
shapes and retrieving the outer boundary curves, detected nodes would be anchor points of physical
the goal is to transform the jagged edges of these spring systems and other points will be stretched
shapes into straight lines extract polygons. This is while having the fixed nodes as their supports.
done through minimizing the difference between Therefore, with this method the poly-lines, which
the areas of shapes with straight lines from the are not geometrically straight lines, will be stretched
original one with jagged edges. (Figure 6d and 6e). to form the bars.
This part is mainly done through visual and script Using this sub-procedure for all cases would al-
based programming in Grasshopper, and Galapa- low us to have a persistent method to retrieve four
set of nodes and bards for each of the volume frac- further visualization and profile assignation in 3D
tions for any parametrically defined design domain design environment. Further information for evalua-
with distinct load and support conditions in the first tion and comparisons for different input parameters
sub-procedure. After having the nodes and bars the and topologies like total volume, maximum and
structural determinacy of the each vector-based to- minimum length of the elements can be extracted
pologies will also be measured in advance through from the optimum result depending on the design
putting the numbers of the bars nodes and supports requirements.
conditions in static equilibrium. The fitness criteria in the search process are al-
lowable stress of the bars and global displacement.
Analysis, structural behavior and search The search process finds the minimum required
for optimal solution [D] cross sectional area from the defined input sets for
This stage starts with reading the input file in Mat- each of the bar elements and simultaneously check-
lab with the information on nodes, bars and load ing the allowable global displacement. This part of
conditions from the previous step coming from the the process is mainly done implementing a code in
Rhino/Grasshopper. By having this information set Matlab for cross sectional optimization. Moreover,
for each of the four topologies, a static structural in order to check the reliability of the process, some
analysis will be run for obtaining local stresses and results have been compared with the results in the
global displacement of the truss with the initial load GSA suite. Figure 8 represents an overview for a can-
and support conditions. Other variables such as ma- tilever that has started from the discrete geometry
terial properties and available profiles can also be to vector geometry with nodes and lines in which
parametrically defined or extracted form a data set the cross sectional optimization results are directly
in this stage. Figure 7 presents an overview of this used as input data for tubular profile assignment, re-
sub-procedure for a beam case. Here, the gener- sults in differentiation in the size of the each profile.
ated data list store the results that will be used for
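The determinacy check and the cross-section search described above can be summarized in a small sketch. This is not the authors' Matlab code; it is an illustrative Python fragment that applies the planar-truss counting rule m + r = 2n and picks, for each bar, the smallest area from a discrete candidate set that keeps the axial stress below an allowable value. The global displacement check is omitted, and all numbers are placeholders.

```python
# Illustrative sketch (not the authors' implementation): planar-truss
# determinacy check and minimum-area selection per bar from a discrete set.

def determinacy(n_bars, n_nodes, n_reactions):
    """m + r - 2n for a planar truss: 0 = determinate, >0 = indeterminate,
    <0 = a mechanism (kinematically unstable)."""
    return n_bars + n_reactions - 2 * n_nodes

def size_bars(axial_forces, areas, sigma_allow):
    """axial_forces in N, candidate areas in m^2, sigma_allow in Pa."""
    sections = []
    for force in axial_forces:
        required = abs(force) / sigma_allow
        feasible = [a for a in sorted(areas) if a >= required]
        if not feasible:
            raise ValueError(f"No section large enough for force {force} N")
        sections.append(feasible[0])   # minimum adequate area
    return sections

print(determinacy(n_bars=5, n_nodes=4, n_reactions=3))          # 0 -> determinate
print(size_bars([12e3, -8e3, 20e3], [1e-4, 2e-4, 4e-4], 235e6))  # placeholder data
```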
TESTS AND CASES
In addition to the separate examinations conducted inside each of the sub-procedures to improve and test the functionality and generalizability of the applied methods, two A-to-Z cases have been tested, which are briefly reported and shortly discussed here. The first one is a cantilever case with one point load at its end (Figure 9). As illustrated here, the results of the optimization based on the initial design domain and load conditions are translated to a set of optimized truss structures. In this case, and for any of its variations, besides the topological difference between the final topologies, the corresponding information sets pertaining to the structural performance and geometric properties of the elements are also available for further evaluation and comparison.
The second case is a beam, this time within a real-world background design scenario for further validation of the developed methods. This exercise builds upon a featured connecting-bridges project by Steven Holl Architects (Figure 10).
One of the benefits in this case is that there are similar design problems but with different sizes and proportions. This means that parametrically defining the initial design domain while having concurrent performance measurements would add to the efficiency of the design process itself. Additionally, as illustrated in Figure 10, for each design domain with different load conditions we have four optimized topologies in vector format with nodes and bars that can be translated to steel, wood or any other profiles. Moreover, based on the cross sectional optimization we will have a differentiation in profile properties, which might be a source of new performance driven design ideas for designers. In other words, in addition to automatic evaluations and comparisons based on the generated and stored quantitative information, the developed design system might also suggest some implicit hints based on the visualized information and rough performance estimation. For instance, in this case the architects might decide to have just one support for the roof of the bridge at a specific coordinate and have lateral beams to support the walking deck at
Figure 9: An overview of tests on a cantilever case.
Abstract. This paper investigates the design process of a performance based pavilion
from concept towards construction phases, by challenging conventional form and
fabrication techniques. The proposed project is considered as a temporary structure,
located in Antalya, Turkey. A free-form structure and a parametrically defined cladding
are designed to serve as an installation unit, a shading element and urban furniture.
The pavilion geometry, performance assessments and proposed fabrication schemes are
clearly described in the paper. The method integrates form, performance, material and
fabrication constraints and exposes how environmental and structural performances,
including Solar Access Analysis and Static Structural Analysis, may inform the design
project.
Keywords. Parametric design; performance; architectural geometry; material;
fabrication.
INTRODUCTION
The design process consists of various phases from conceptualization to construction, including structuring of the problem, preliminary design, refinement and detailing (Goel 1992). Towards the manufacturing of architectural form, different parties are involved in the design process, including the design and consultant teams. Architectural geometry needs to incorporate many requirements covering aesthetic, programmatic, functional, technical and environmental aspects (Holzer and Downing, 2008). However, performance simulation of buildings is mostly undertaken in a later stage and cannot be integrated into design decision making (Schlueter and Thesseling, 2009). This issue radically reduces the efficiency of the design process.
There are methods and tools developed for the architectural design process which investigate form, performance and material aspects of design. The rationalization process of free-form surfaces towards fabrication is widely investigated along with panelization tools, in which architectural geometry is subdivided into smaller components. The number of unsolved tasks is enlarged by the number of different materials being used, because their performance and manufacturing technology have to enter the panel layout computation (Pottman et al. 2008). The statics-aware initialization procedure for the layout of planar quadrilateral meshes approximates a given free-form surface by obtaining the mechanical properties of the initial mesh (Schiftner and Balzer, 2010). A recent study aims to explore fundamental principles of a system in which a perfor-
Performance Assessments through Static Structural Analysis and Solar Access Analysis
The proposed pavilion is located in Antalya, Turkey. Antalya has a typical Mediterranean climate, which is characterized by warm to hot, dry summers and mild to cold, wet winters. For this reason, the pavilion has the task of being used as a shading element. Although the pavilion does not have a specific site in the city, it is considered to be installed on a north-south or northwest-southeast axis to eliminate the undesirable effect of sun radiation through the design of its cladding.
Following the generation of the initial form, Solar Access Analysis is operated with Autodesk Ecotect in order to evaluate the total radiation values affecting the geometry. The solar access analysis indicates the incident solar radiation on the surface. The radiation calculations use direct or diffuse radiation data from the weather file of the city of Antalya, specified for the time period from September to October. The geometry is converted into a mesh, which consists of 1500 objects, in order to undertake the calculations. The orientation and tilt angles of the individual objects are identified. Based on the relationship between the positioning of one piece and the angle of the sun light, the radiation value of that particular piece is calculated. Thus, the relationship between the objects and the sunlight can be established. The total radiation values range from 102992.422 to 871177.938 Wh/m2 (watt hours per square meter) (Table 1). Through the solar access analysis, it is identified that the surface pieces which are almost vertical, and which work as walls, obtain higher radiation values compared to the other pieces. Reducing the area of surfaces closer to verticality is an important parameter in the design of the pavilion. The cross section rail of the geometry is slightly adjusted through its control points to accommodate a better solution for the solar access analysis. Therefore, the free-form geometry obtains the advantage of responding better to the performance issues related to the solar access analysis, as well as to the design issues, by working both as urban furniture and as a semi-enclosed space.
In order to assess the structural performance of the geometry, Static Structural Analysis by FEM is undertaken with the Scan & Solve plug-in for Rhino. The plug-in works with NURBS surfaces without the need for converting the geometry into a mesh, unlike many other software packages used for FEM analysis. Following the assignment of material to the geometry, boundary conditions and loadings are imposed. Numerical stress values and total deformations on the geometry can be identified by using different materials and altering the geometry. By running the simulation for various scenarios, the geometry can be adjusted based on performance requirements.
Comprehensive boundary conditions, such as considering the impact of earthquakes in the region, would have a significant impact on the simulation. However, because the pavilion is designed to be a temporary and preferably lightweight structure, only the self-weight of the structure is taken into consideration for the analysis, in order to reduce the computation time and simplify the simulation. Through the FEM analysis, the problem areas on the geometry are identified. Although using different materials such as stainless steel or aluminum would influence the numerical stress values of the simulation, because the boundary conditions are less complicated, with the geometry rigidly fixed in three positions to the ground, the geometrical properties play the most important role in accommodating the structural performance requirements. Especially the larger curvature which defines the semi-enclosed space plays the critical role for the overall form, with its height exceeding 3.00 m and a span of approximately 5.40 m.
In order to generate the semi-enclosed space, a symmetrical geometry, which can be constructed by two rail curves, is selected for this study. If there is no symmetry, the structure would be unstable, require comprehensive structural solutions, and additional structural issues may need to be addressed in the design process, which are considered against the design intent. If the section of the semi-enclosed space is closer to a circle, then the stresses on the geometry are equally distributed. By modifying the cross section rail of the geometry through its control points, the numerical stress values can be adjusted. Altering the geometry also has a significant impact on the panel layouts of the cladding, in terms of the panel sizes and numbers.
One of the proposed structural elements obtains varied thicknesses in profile, ranging from 0.07 m to 0.40 m, in order to increase the strength of the structure in the necessary parts, such as the larger curvature which defines the semi-enclosed space. Additionally, in order to obtain a lightweight structure, the sections of these structural elements are enhanced by introducing holes in them (Figure 2).

Fabrication Scheme of the Pavilion
Because the pavilion is a temporary structure, the pavilion pieces are designed to be easily demountable and light. The geometrical description of the form is clearly identified for fabrication purposes. In terms of the constraints related to transportation and assembly, the structural elements are considered to be fabricated in pieces. Because of this, two material systems are tested for the structure; as steel
tion of individual pieces by UV parameter, the surfaces are exploded. The extracted points are evaluated and used for the regeneration of the panels. The panels should be unrolled to be fabricated in 2D by laser cutting or a CNC machine. The panel sizes range from 23 cm to 52 cm and the total number of panels is 2000, based on the design intent. A relatively denser pattern increases the number of panels through alteration of the U-V curves of the surface. Although various panel shapes, such as quadrilateral, triangular or curvilinear panels, can be created via the definition, curvilinear panels are selected and developed for the pavilion, due to the possibility of creating better design solutions for the opaque and transparent areas of the cladding. The panels should have flexible connections to allow movements. The connection elements are introduced on the four sides of each panel. Because the panels are considered as non-structural parts, they are designed to be mounted on feasible positions of the actual pavilion structure (Figure 3).

RESULTS AND EVALUATION
The proposed method of performance based pavilion design exposes how different performance requirements can influence a design project. The solar access analysis has shown that there is a direct relationship between the geometry and the solar radiation. For the given free-form surface, the vertical surfaces gain most of the sun radiation for the time period from September to October for Antalya. Therefore, the geometry needs to be slightly altered to reduce those affected areas. Working with a free-form NURBS surface makes it possible to modify the geometry through its control points while maintaining its coherence and continuity. In addition to the solar access analysis, static structural analysis by FEM
INTRODUCTION
Since the requirements on the actual performance of buildings are becoming ever tighter, accurate data regarding the performance of buildings is becoming increasingly important in the early phases of design. This paper tackles the role of digital modelling and engineering performance simulations in the conceptual phase of architectural design. The first part of the paper focuses on a theoretical framework for performance oriented parametric design, in which the design process is decomposed into and related to the design knowledge available during the design conception and its parameterization process; moreover, this part describes some general case studies. The second part of the paper grounds and exemplifies the framework by discussing one specific case study on numerically assessed design alternatives for achieving indoor thermal comfort. The analysis of alternative design solutions is presented by showing the learning process of the designer through a comparative study. One chosen alternative is then presented in detail, by undertaking the integration of parametric modeling and performance simulations during the design process. The parameterization process of the design concept is discussed based on the analysis previously illustrated; focusing on design innovation, emphasis is given to the importance of extracting knowledge from the numeric analysis.
analyses also showed that it is possible to reduce thermal discomfort by means of passive strategies, both in summer and in winter. Specific sub-goals were identified. Considering the local climate, calibrating the design first based on the cold winter period was recommended. This clearly included increasing the insulation, air tightness and solar gain of the building as much as possible. However, this challenged summer thermal comfort. As also confirmed in the preliminary analyses, thermal mass and summer ventilation positively impacted summer comfort. Among these factors, the work illustrated in the following sections focuses on the distribution of thermal mass, natural ventilation and shading, since these factors highly depend (also) on the geometry of the overall spatial configuration of the atrium. Specifically, the investigations on thermal mass were taken as the starting point for the next part of the strategy-definition phase, in which the parameterization strategy was more specifically addressed.

THERMAL MASS AS DESIGN DRIVER
The principles described above were investigated as design drivers, by making use of digital simulations to study their thermal behavior in conjunction with the design exploration of a large range of design possibilities. Especially when considering the dimensions of the atrium and its value as a representative space for the new office building, conceiving such a thermal system with emphasis on its iconic value (in addition to its technical thermal function) was proposed as beneficial for the project. A relevant part of the strategy definition phase focused on thermal mass. The following sections summarize its main aspects.

Additional analysis on thermal mass
A set of additional analyses was carried out regarding the effects of the quantity and distribution of thermal mass within the atrium. The effect of different distributions of additional thermal mass was analyzed for four vertical (virtual) thermal zones of the atrium, with and without natural ventilation and shading. Among the analyzed options, the one with external shading, diurnal and nocturnal ventilation (10 ac/h), and a higher concentration of thermal mass on the top part of the atrium showed the best performance for summer thermal comfort. The results are visible in Table 1 and clearly show the accumulation of heat in the thermal mass and the cooling effect of ventilation, as well as the reduction of overheating through the addition of external shading on the glazed roof. Additional tests were run accentuating the uneven distribution of thermal mass across the levels. These analyses showed that additional thermal mass on the top level leads to beneficial effects, while changes on the bottom level had minor effects on the thermal performance. Since minimizing the use of additional material and structural load is generally desirable, the option of reducing the additional thermal mass on the bottom level and distributing it more on the top level was used for further investigations. External shading further reduced the maximum temperatures, as can be seen from Table 2.
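The qualitative effect at stake here, namely that more thermal mass damps the diurnal indoor temperature swing while shading and ventilation limit solar overheating, can be illustrated with a toy single-zone model. This is not the simulation engine used in the study; it is a hedged lumped-capacitance sketch with made-up parameter values, included only to make the reasoning tangible.

```python
# Toy single-zone sketch (illustrative only, parameters are placeholders):
# a lumped thermal mass exchanging heat with outdoor air through ventilation
# and receiving (partly shaded) solar gains, integrated with forward Euler.
import math

def simulate(C=5e7, ach=10.0, volume=3000.0, gain_peak=20e3, shading=0.5,
             t_out_mean=28.0, t_out_amp=6.0, hours=72, dt=60.0):
    """C: zone heat capacity [J/K]; ach: air changes per hour;
    gain_peak: peak solar gain [W]; shading: fraction of gain admitted."""
    rho_cp_air = 1.2 * 1005.0                       # J/(m3 K)
    vent = ach * volume / 3600.0 * rho_cp_air       # ventilation conductance [W/K]
    t_zone, temps = t_out_mean, []
    for k in range(int(hours * 3600 / dt)):
        hour = k * dt / 3600.0
        t_out = t_out_mean + t_out_amp * math.sin(2 * math.pi * (hour - 9) / 24)
        solar = max(0.0, math.sin(2 * math.pi * (hour - 6) / 24)) * gain_peak
        q = vent * (t_out - t_zone) + shading * solar
        t_zone += q * dt / C
        temps.append(t_zone)
    return max(temps) - min(temps)                  # diurnal swing [K]

# A larger heat capacity C (more thermal mass) gives a smaller indoor swing.
print(round(simulate(C=5e7), 2), round(simulate(C=2e8), 2))
```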
Based on the preliminary analyses, geometric properties were extracted for the aspects having a positive impact on the design goals; for different primary generators, the attributes of these geometric properties were parameterized in order to investigate geometric alternatives. Examples are provided in the following section.

Primary generator and parameterization process
Focusing on the satisfaction of the primary goal of the design at hand (namely the improvement of the thermal performance of the atrium), the numeric analyses described above enabled the quantification of a suitable distribution of thermal mass across the vertical levels of the atrium. This information allowed a first numeric rule to be identified, based on which geometric options were to be designed. Various primary generators and related parameterization processes were developed to explore different design directions responding to this rule. Within the boundaries of this rule, additional aspects were considered in order to enhance the thermal benefits and to include other criteria, such as structural performance and daylight. The primary generators were developed considering the thermal benefits of exposing the mass to winter solar radiation and protecting it from the summer one. Additionally, they were developed considering that the heat accumulated during the winter days from the atrium should be released toward the surrounding areas (back areas), which is where the thermal benefits are especially required. Based on a shadow analysis in Ecotect (Autodesk), the areas irradiated in summer were distributed along all the levels of the atrium on its north, east and west sides, while the areas of the atrium irradiated in winter were located on the north side of the top level of the atrium only. These latter areas were therefore chosen for distributing the thermal mass. The other criteria were addressed within the subdomains of this design space (detailed arrangement, form, material and construction of the system), based on the absence of significant degrees of conflict with the main objective (thermal performance). Among the explored directions, one is exemplified in the following, in which a set of sliding panels was proposed for the atrium; this resulted in a set of vertical panels in concrete, anchored along the north side and the top part of the south side. In this design option, the effect of thermal mass was focused on the diurnal fluctuations, leading to an active thickness of 10 to 15 cm for concrete. Considering that at the back of a 5 cm thick concrete panel the fluctuation is 72% of the fluctuation at the front, and at the back of a 10 cm panel it is 51%, the need to release heat toward the back areas had to be addressed. Instead of rotating the heavy panels, fixed panels were combined with sliding thermal insulation to prevent the nocturnal release of accumulated heat toward the atrium and to favor the thermal behavior at the back of the panels. Figure 1 illustrates the principle.
Given the suitable distribution of thermal mass across the vertical levels, the general layout of the panels was treated as a layout problem, in which the requirements for mass distribution may correspond to several panel layout solutions. A parametric
to all goals, as well as according to aesthetic preference. Figure 3 exemplifies the panels.

CONCLUSIONS
The paper presented the studies for an atrium in Shenyang, for which a number of design proposals were developed based on performance-oriented parametric investigations. The process was exemplified according to a parametric framework in which aspects affecting the thermal behavior of the atrium were discussed as design drivers. The process included an extensive number of performance simulations, whose role concerned both the strategy-definition phase and the solution-assessment phase. Larger emphasis was given to the strategy-definition phase, in order to highlight the relevance of preliminary knowledge. In addition to this aspect, a conclusive remark is proposed on the crucial role played by performance simulations in enhancing the interdisciplinarity of the process, also by heightening the brainstorming across the various disciplines involved in the design process.

ACKNOWLEDGEMENTS
The project was initiated as part of the Urban Knowledge Network Asia (UKNA); as such, it benefitted from a Marie Curie Actions International Research Staff Exchange Scheme (IRSES) grant, as part of the European Union's Seventh Framework Program. The hospitality given by the Beijing University of Technology and the collaboration with the staff members of Green World Solutions in Beijing were crucial. Special acknowledgments are given to Xiao Zhongfa, the GWS responsible for the re-design of the tobacco factory. Contributions by former TUDelft MSc stu-
Abstract. Buildings with scales, buildings that sweat: this paper proposes two strategies
for a materially grounded, performance-based architecture which leverages the strengths
of computation and CNC fabrication against the basic properties of traditional ceramics.
Keywords. Building performance, CNC tooling, computer aided manufacture, ceramics,
passive energy design
TWO APPROACHES
Two prototypes for passive, energy-saving devices involve the use of ceramics to create high performance building envelopes. One strategy uses the tendency of porous ceramic materials such as clay to wick moisture to the exterior of a building, causing it to "sweat" and thus to cool itself passively in dry climates. A second strategy uses bi-colored ceramic "scales" to create an array of solar collector/diffusers which can be used to shuttle heat energy either into or out of a building. Both strategies take a cue from systems in nature to leverage a material-based strategy for thermoregulation in buildings. These devices are the result of computer modelling and CNC fabrication to mill positive forms for two types of plaster mold making for ceramic slip casting. One technique is industrial, using pressurized multi-part forms, while the other method uses traditional run-through slip casting molds.

The Sweaty Facade
The built environment as we know it is characterized by constructions which function to keep occupants warm and dry; rainwater is typically displaced from the building footprint and channeled away with gutters, swales, and ultimately retention or detention strategies. Recent popular attention to sustainable building has increased the use of cisterns to collect and reuse water, but these uses remain relegated to watering landscapes and flushing toilets. Biomimetic strategies, however, have been proposed to mimic the beneficial cooling effects caused by sweating in mammals (Lilley et al., 2012). Prototypes are underway for an envisioned "Sweaty Façade" which will use the natural osmotic characteristics of clay to allow buildings to sweat, thus taking advantage of the heat of evaporation of water to passively cool buildings in warm, dry climates. Water collected at the roof can be used to fill façade components which will sweat to save energy.
These intentionally wet façade modules will take forms characterized by highly textured or folding surfaces to create large surface areas for the evaporation of water. Using simple parametric repetition in Rhino/Grasshopper, along with Kangaroo
Figure 2: Sweaty Façade components showing large surface area.
for shape optimization, prototype forms were then carved from polystyrene on a three-axis CNC router. Such complex objects present a distinct difficulty for traditional casting techniques: the more complex a desired form becomes, the more complex the formwork. Multi-part formwork in ceramics is traditionally used to cast parts with large undercuts, or with a surface area large enough to create impossible scenarios for demolding due to friction (Reijnders and EKWC, 2005).
The creation of these highly articulated façade prototype units was approached through a traditional mold making technique. A twelve-part plaster mold was made by hand, including captured interior pieces held by ties piercing larger parts. The size of the pieces required a so-called "run-through mold" due to the sheer weight of the plaster. In order to empty the mold of liquid clay, a drain is placed at the bottom, which when opened allows the remaining clay to drain from the form (Figure 1).
The resultant parts were deliberately designed to maintain the lines left behind by the seams of the mold in order to give clues to the making of the object (Figure 2). The large surface areas exhibited by these complex forms are impossible to accomplish using non-sculptorly techniques. Non-developable surfaces, in other words, must be carved or cast (Duarte, 2004).
A rendering of an architectural corner condition against the sky (Figure 3) shows a sensual facade based on repetitive ceramic elements. These elements are spaced to allow for the air flow necessary for evaporative cooling between individual units, while the size and shape of the elements creates maximal surface area. The spacing and shape of the units is designed to minimize "slow" spots in the air flow, but also to allow diffused daylight to pass through the screen to the building interior beyond.

Scaly Exteriors
The "bubble tile" is envisioned as a ceramic heat exchange component which has a rough, darkly colored surface on one side and a smooth reflective surface on the other side (Figure 4). Tiles can be used alone, or in conjunction with a liquid heat transfer system to move heat energy through an array of tiles. In an architectural building façade, heat exchange tiles can be placed in a mechanism which will allow the tiles to reverse front to back. This can allow infrared energy to be either reflected, absorbed, or diffused as necessary for environmental conditions, in order to reduce heating and cooling loads on a building. By motorizing each tile, an array of heat exchange tiles can be used as a solar collector, a heat exchanger, a light shelf, or as signage. The technology takes advantage of the thermal properties of ceramics to modulate the heating and cooling loads on buildings in various climates. The function of the façade system is inspired by the Namaqua Chameleon (Benyus, 1997), which changes
color, depending on its needs, to either reflect or absorb the heat of the sun in the harsh and widely varied temperatures of the desert. Following this biomimetic strategy, the façade system will be programmable to alter its orientation to the sun based on material color, climate, and the needs of building occupants.
The creation of the positive forms for the bubble tiles relied on a simple parametric box-morph repetition of pyramidal forms over a simple shape using Rhino/Grasshopper, in order to achieve a device with maximal surface area on one side without creating micro-shading conditions. The shape of the Bubble Tile poses an interesting problem for slip casting. The scale of the surface articulation is such that traditional mold making techniques become impractical; a multipart mold for this tile would contain many thousands of parts. The bumps on the surface are pyramidal in shape, designed so as to avoid micro-shading of the component surface. In terms of the intricacies of mold making, this is the perfect shape for demolding, as it offers no undercuts to impede mold removal. Unfortunately, the bumps create such a massively increased surface area that the force of friction between the part and the mold makes removal of the part impossible. Additionally, large flat, hollow pieces such as this tend to collapse in the mold from the weight of unsupported wet clay. To solve these issues, an industrial slip casting technique was adopted.

Figure 4: Scaly Façade component showing highly textured surface.

In the creation of the four-part mold, a manifold of perforated air tubing was embedded in the side of the mold corresponding to the highly textured surface of the part, to aid in demolding (Figure 5). The mold is also pierced at the end by a plastic tube for pressurizing the interior of the part to stop it from collapsing.
The mold is filled with clay, and after sufficient thickness has developed in the interior of the mold, the remaining liquid clay is emptied. Immediately, the interior of the mold is pressurized through a short tube in the cap of the mold. Air pressure forces
Figure 6: The highly textured mold interior.
the clay against the sides of the mold, allowing the clay to harden without collapsing. As water is slowly absorbed from the clay into the plaster, the part becomes "leather hard" and is able to support itself. At this point, the interior pressure is released. In order to now demold the part, three bars of pressure are pumped into the perforated tube in the plaster. This air is forced through the pores in the plaster mold, causing water absorbed by the plaster to diffuse outward, ultimately creating a molecular mist of water at the exterior surfaces of the mold. This water, extruded at the interior surface of the mold (Figure 6), creates sufficient lubrication against the rough surface of the part to allow smooth part removal without breakage.
On a mockup of an architectural facade (Figure 7), tiles are arranged horizontally on the southern side of a building, and vertically on the east and west sides of the building. This rendering places the concept in a generic glass-box facade folly representative of a default retail or office condition, suggestive of an energy-sensible retrofit to an existing building. Tiles would be operable, allowing for the maximization of absorption, reflection, or diffusion of heat energy, and to admit daylight and allow views as desired. The ceramic character of such a facade would allow for the creation of an architectural space reliant on rich materiality, while simultaneously providing a regionally adaptable high performance building.

CONCLUSION
The incredible complexities offered by the possibilities of advanced computation create opportunities for new advances in building performance. These complex forms, however, present unique challenges to traditional forms of manufacture for materials such as clay. A hybrid approach to the creation of complex ceramic parts leverages traditional and industrial techniques to produce manufacturing possibilities suitable for industry. This approach can allow designers to maximize material performance previously inaccessible in traditional materials, while simultaneously tapping into an otherwise economically unfeasible material palette which, while firmly rooted in a progressive digital materiality, nevertheless recollects a hand-made past. Furthermore, progressive strategies for performance based design need no longer be the static, fixed elements of our design past; instead, using biological models as a platform, architects have the opportunity to create buildings which sweat, change color, or otherwise adapt to their immediate environment with biological precision.

ACKNOWLEDGEMENTS
Special thanks to The University of South Florida College of The Arts and the European Ceramics Workcentre for generous funding and support of this project.
REFERENCES
Benyus, J. (1997). Biomimicry: Innovation Inspired by Nature. New York: Morrow.
Duarte, J. P. (2004). Free-form Ceramics - Design and Production of Complex Architectural Forms with Ceramic Elements. Copenhagen: Architecture in the Network Society [22nd eCAADe Conference Proceedings].
Lilley, B., Hudson, R., Plucknett, K., Macdonald, R., Cheng, N. Y.-W., Nielsen, S. A., et al. (2012). Ceramic Perspiration: Multi-Scalar Development of Ceramic Material. San Francisco: ACADIA 12: Synthetic Digital Ecologies [Proceedings of the 32nd Annual Conference of the Association for Computer Aided Design in Architecture (ACADIA)].
Reijnders, A., and EKWC. (2005). The Ceramic Process: A Manual and Source of Inspiration for Ceramic Art and Design. London: AC Black.
Automated Simulation and Study of Spatial-Structural Design Processes
Juan Manuel Davila Delgado (1), Herm Hofmeyer (2)
Eindhoven University of Technology, Netherlands
(1) http://www.tue.nl/en/university/departments/built-environment/the-department-of-the-built-environment/staff/detail/ep/e/d/ep-uid/20104850/, j.m.davila.delgado@tue.nl
(2) http://www.tue.nl/en/employee/ep/e/d/ep-uid/19941803/ep-tab/3/, h.hofmeyer@tue.nl
Abstract. A so-called "Design Process Investigation toolbox" (DPI toolbox) has been developed. It is a set of computational tools that simulate spatial-structural design processes. Its objectives are to study spatial-structural design processes and to support the involved actors. Two case-studies are presented which demonstrate how to (1) study the influence of transformation methods on design instances and (2) study the influence of transformation methods on the behavior of other transformation methods. It was found that in design instances with the same type of structural elements the influence of a specifically varied transformation method is more explicit, while when different types are present this influence is more undetermined. It was also found that the use of two specifically different structural modification methods has little influence on the subsequent spatial transformation method.
Keywords. Design process research; design process simulation; spatial design; structural
design.
INTRODUCTION
In the Architecture, Engineering and Construction (AEC) field, design processes are complex and multidisciplinary undertakings in which designers and engineers work together on the same problem to come up with feasible solutions. The final solution is usually the result of a cyclic process, in which the initial solution undergoes several changes and adaptations to meet pre-defined and arising requirements.
It is assumed that by improving the design process, the design outcomes will improve as well (Cross, 2008; Brooks, 2010; Kalay, 2004). Consequently, efforts have been carried out on the research of design processes, roughly subdivided into two categories: (1) the development and study of design models, which is the formulation of frameworks to organize the process of designing, and (2) the generation of support methods or tools to aid in the design process. In the latter category, computational tools have been developed to increase productivity (Grobman et al., 2010), to ease the communication and the exchange of information between parties within the design process (Haymaker et al., 2004), and to take an active role in the design process and generate design solutions (Shea et al., 2005). However, little research has been carried out in which the computer is used to study the design process itself (Kalay, 2004; Coates, 2010).
The objective of the project presented in this paper is to increase the knowledge on spatial-structural design processes and consequently to support
the involved actors. To that end a computational toolbox, a so-called "Design Process Investigation" toolbox (DPI toolbox), has been developed. More concretely, the DPI toolbox, as presented in this paper, seeks to fulfill the following two aims: (1) to study the influence of a selected transformation method on design instance evolution; and (2) to study the influence of a selected transformation method on the behavior of the other transformation methods. The next section will briefly explain the DPI toolbox. Then a demonstration of the types of investigations which can be performed is shown, and lastly a short discussion and an outlook on further work are presented.

Figure 1: Design Process Investigation toolbox framework.
DESIGN PROCESS INVESTIGATION TOOLBOX
The DPI toolbox framework (Figure 1) prescribes specific and identifiable steps to reach a design solution. In that sense it could be categorized as a prescriptive design model (Cross, 2006). However, the objective of prescriptive design models is to ensure successful and consistent results, whereas the objective of the DPI toolbox is to simulate design processes so that their outcomes and, more importantly, the process itself can be studied.
Design processes are cyclic and multidisciplinary tasks where both design solutions and design requirements undergo changes and adaptations before a definitive solution is achieved (Maher et al., 1996; Haymaker et al., 2004). Also, design requirements are usually "ill-defined" and the design process is often not recorded properly, so it is difficult to trace back or investigate the process later on. The DPI toolbox framework is developed to address those characteristics and problems of a design process.
The DPI toolbox framework defines the process to be followed. During this process a design instance is subject to four different transformation phases acting within or between the spatial and structural domains. It works as follows (Figure 1): First, a Spatial Design (SpD), in the spatial domain, is transformed into a Structural Design (StD) in the structural domain. Then, within the same domain, the StD is altered into a Modified Structural Design (MStD). After that, the MStD is transformed into a New Spatial Design (NSpD) that finally is altered into a Modified New Spatial Design (MNSpD), completing one full cycle. This cycle can be repeated, causing the spatial and structural design instances to co-evolve. For co-evolutionary designs, no classical convergence criteria can be used to stop the process; but if the requirements (spatial design instances) and solutions (structural design instances) do not change anymore, a (local) optimum is believed to be found (Maher et al., 1996).
Two other relevant characteristics of the DPI toolbox framework are the "transformation and modification selection switches" and the "gauges" (Figure 1). These components have the objective of facilitating the study of the simulated design processes. The idea is to use the DPI toolbox to simulate different design processes, each with different transformation procedures, and to measure the resulting design instances, by the gauges, through the cycles for later comparison. In this way, it is possible to study the influence of transformation procedures on design instances and on subsequent transformation procedures.
Note that the DPI toolbox framework only prescribes the existence of a set of transformations, relations, and measurements (by the gauges) between two different domains within a cyclic design process; it does not define specific transformation or
Figure 2: (a) Example of the zoning algorithm; (b) two structural grammars used in the DPI toolbox.
measurement procedures. Thus, the selected transformation and measurement procedures used in the DPI toolbox are not unique in any way, and those used in this paper were chosen primarily for their availability. The procedures could be changed in the future to further study spatial-structural design processes.
As mentioned before, the DPI toolbox consists of four transformation phases, and these phases will now be shown to consist of several stand-alone procedures, put together in a seamless process. Some of the used procedures have been widely studied and utilized in the AEC field, e.g. shape grammars, pattern recognition, and FEM simulations; others have been developed specifically for the DPI toolbox, e.g. geometrical redefinition and kinematic stabilization. Next, the four phases of the DPI toolbox, as implemented, will be briefly described.

Spatial to Structural Design Transformation (SPT)
The first phase generates a structural design instance and performs a FEM simulation with it, all based on the spatial design instance used as input. The generated structural design only intends to formulate a starting point for the design cycles, and it does not intend to be an immediate optimal solution for the inputted spatial design. Likewise, the FEM simulation is not meant for stress engineering, but is merely used to give an indication of the structural behavior of the proposed structural design. The Spatial Design consists of a set of volumes or "spaces". So far, the DPI toolbox is restricted to work with right cuboids, parallelepipeds bounded by six rectangular faces, so that adjacent faces meet at right angles. Furthermore, the right cuboids or spaces should be aligned with the global coordinate system. The Spatial Design undergoes several transformations by procedures that are grouped in the following categories: (a) proposal of the structural design, (b) preprocessing, and (c) structural calculations.
The proposal of the structural design consists of two procedures: first structural zones are created and then, based on them, structural elements are generated. For the first procedure, the DPI toolbox uses an in-house developed automated 3D zoning algorithm (Hofmeyer and Bakker, 2008) (Figure 2a). It defines structural zones (elementary structural entities) based on sets of spaces. This procedure subdivides the Spatial Design into a number of zones (grouped spaces) and these are used as a basis to generate structural elements. For the next procedure, structural grammars (Shea and Cagan, 1999) are used to generate structural elements. Structural grammars resemble shape grammars used in the AEC area (Stiny, 1980). They prescribe which structural elements can be used depending on the geometrical properties of the previously generated zones (Figure 2b).
Regarding (b), the preprocessing category, once a structural design has been generated, it has to undergo several procedures so that it can be simulated with the Finite Element Method (FEM). First, the geometry of the structure has to be redefined to ensure that all the finite element nodes will be coincident and to determine the wind-loaded surfaces. Then the structure should be made kinematically determinate, loads and constraints should be applied, and a meshing algorithm has to be performed.
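The rule-based character of such a structural grammar can be illustrated with a small sketch. The two rules below are hypothetical stand-ins, not the actual grammars of the DPI toolbox (those are shown in Figure 2b); the point is only that a zone's geometric properties select the structural elements proposed for it.

```python
# Hedged sketch of a structural-grammar-style rule: a zone's geometric
# properties decide which structural elements are proposed for it.
# The rules and thresholds below are illustrative placeholders.

def apply_grammar(zone, grammar):
    """zone: dict with width/depth/height in m; grammar: ordered (test, elements) rules."""
    for test, elements in grammar:
        if test(zone):
            return elements
    return []

# Two illustrative rules: wide, low zones get slabs on walls, other zones get
# slabs on corner columns.
grammar_example = [
    (lambda z: max(z["width"], z["depth"]) / z["height"] > 2.0,
     ["floor slab", "load-bearing walls"]),
    (lambda z: True,
     ["floor slab", "corner columns"]),
]

zone = {"width": 8.0, "depth": 4.0, "height": 3.0}
print(apply_grammar(zone, grammar_example))   # ['floor slab', 'load-bearing walls']
```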
Lastly, regarding (c) the structural calculations, Structural to Spatial Design Transforma-
the following procedures should be mentioned: A tion (STT)
first-order linear elastic FEM simulation is carried out to predict nodal displacements in the structural design; then, the strain energy of each finite element is calculated. A clustering algorithm groups the finite elements into clusters based on their strain energy and a color-coded visualization is generated. The data obtained during this step will be the basis for the next phase's procedures, presented below. More information on this procedure can be found in (Hofmeyer and Davila Delgado, 2013).

Structural Design Modification (STM)
Having generated a Structural Design, the next step is to improve its structural behavior. Even though the procedures implemented in this phase follow closely those of traditional structural optimization, their objectives are slightly different. The objective of this phase of the DPI toolbox is not to obtain an optimal structural design per se, as in the traditional way, but to modify the structure only in the direction of an optimal design. Thus this phase is called Structural Design Modification rather than optimization.

This structural modification is based on minimizing strain energy. A structure that deforms under a given case of loads and constraints shows strains in its finite elements. The amount of strain energy in a finite element is a measure of its participation in bearing the applied loads. So, finite elements showing low strain energy can be regarded as being under-utilized and thus may be deleted. Two versions of existing structural optimization methods have been implemented in the DPI toolbox, namely Evolutionary Structural Optimization (ESO) and Topology Optimization (TO). A detailed explanation of this phase can be found in (Hofmeyer and Davila Delgado, 2013). Note that the version of ESO used has been modified so that only a single iteration is run in the optimization procedure (in this paper referred to as 1ESO). This is done because sufficiently accurate results can be obtained while computation time is reduced.

In the next phase, the MStD, an arrangement of structural elements, is transformed into the NSD, an arrangement of spaces. This is currently implemented as follows: first, it is identified which finite elements have been deleted in the previous phase and to which space from the inputted Spatial Design they belong (i.e. which deleted elements are contained within which space). Based on that information, the spaces that contain many deleted (under-utilized) finite elements are removed. In other words, spaces that contain fewer elements contributing to withstanding the applied loads lie in a structurally less important zone and are thus deleted (Hofmeyer and Davila Delgado, 2013).

For the current implementation, the first 30% of spaces with the most deleted elements are removed, and then the remaining spaces are investigated. If spaces exist with the same number of deleted elements as the already removed spaces, they are removed as well. Note that in the unlikely case that all spaces have the same number of deleted elements, only the first listed 30% of the spaces are deleted. This implementation is referred to in this paper as "Delete Spaces".

Spatial Design Modification (SPM)
In this process, the NSpD will now be modified into a MNSpD that will serve as the input for a next cycle of the DPI toolbox. The main objective of this phase is to modify the NSpD for the next cycle such that at least some of the properties of the SpD, which may have disappeared during the transformations of the cycle, are restored. For example, at the end of the previous phase, spaces were deleted from the SpD and thus the NSpD has less volume and fewer spaces. Therefore, in this phase, the NSD could be scaled up to the same volume as the SpD and then some spaces within the NSD could be subdivided in order to restore the initial number of spaces. This phase is explained in more detail in (Davila Delgado and Hofmeyer, 2013) and is referred to in this paper as "Re-scale and Subdivide".
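Because the "Delete Spaces" rule described above reduces to a simple counting heuristic, a minimal sketch may help fix ideas. The following Python fragment is purely illustrative and is not the DPI toolbox implementation; the data structures (a mapping from space ids to the element ids they contain, and the set of elements removed by 1ESO or TO) are assumptions made for the example.

```python
# Illustrative sketch of the "Delete Spaces" heuristic: remove the ~30% of spaces
# that contain the most under-utilized (deleted) finite elements, plus any ties.

def delete_spaces(spaces, deleted_elements, fraction=0.30):
    """spaces: dict space_id -> set of element ids contained in that space.
    deleted_elements: set of element ids removed by the STM phase (1ESO or TO)."""
    counts = {sid: len(elems & deleted_elements) for sid, elems in spaces.items()}
    ranked = sorted(counts, key=counts.get, reverse=True)   # most deleted elements first
    n_remove = max(1, int(fraction * len(ranked)))
    threshold = counts[ranked[n_remove - 1]]
    if threshold == 0 or len(set(counts.values())) == 1:
        # Degenerate case noted in the text: all spaces tie, keep only the first listed 30%.
        removed = set(ranked[:n_remove])
    else:
        # Remove the top 30% and any remaining spaces tied with the last removed one.
        removed = {sid for sid in ranked if counts[sid] >= threshold}
    return {sid: elems for sid, elems in spaces.items() if sid not in removed}
```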
Figure 3: A typical DPI toolbox run.
Figure 4: Initial Spatial Design used for case-studies I and II; the tables list the respective DPI toolbox settings.
Figure 5: Resulting design instances for run I-A (which are the same as for II-A).
Figure 6: Resulting design instances for run I-B.
in design instances with several levels, the structural elements at the lower part of the structure have to withstand their own weight plus the weight of the structural elements on top of them and thus show higher strains.

In the last cycle the design instances have the same number of levels as in the previous cycle. Consequently Ut does not reduce significantly, but dmax does. This is because the horizontal structural plate elements that form StD.4 are rectangular, instead of square, and such elements tend to deform more.

For run I-B, using a different structural grammar, the evolutions of dmax and Ut follow the same pattern, but they do not correspond as clearly to the evolution of the number of levels as in run I-A. In Figure 7 it can be seen that dmax and Ut increase considerably after the first cycle, even though the number of levels remains the same, and then decrease in each subsequent cycle. The initial increase can be explained by two reasons: (1) after the first cycle the design is divided into four fragments; in these fragments fewer columns have to support more roof-slab area; and (2) the roof-slabs in StD.2 are rectangular, which deform more than square types. In both runs, I-A and I-B, dmax is always located at the middle of the highest roof-slab, so their dimensions (ratios) have a high influence on dmax and Ut. The second cycle's decrease could be explained by the decrease in the number of levels, as observed in the previous run. Finally, the last decrease is due to the square shape of the resulting roof-slabs, which deform less and thus yield less Ut.

Figure 7: Result tables and graphs of runs I-A and I-B.

In summary, during the evolution of run I-A a continuous decrease for dmax and (partly) for Ut can be observed. This decrease is directly linked to the number of levels of the design. Conversely, in the
evolution of run I-B, no pattern can be recognized. This might be explained by the fact that in run I-A all the structural elements are the same, whereas for run I-B it is a mixture of flat-shells and columns.

Case-study II
Also for case-study II, two different runs have been performed: II-A and II-B, using 1ESO and TO for STM, respectively. All the other settings were kept the same (see also Figure 4, table: Case-study II). Figure 8 shows the resulting design instances of run II-B. Figure 5 presents the resulting design instances of run II-A, as they are the same as for run I-A. Figure 9 presents the two measurements taken in each cycle: the reduction of Ut and the difference between the number of spaces of the SpD and the NSpD. They were chosen because they are indicators of the performance of STM and STT respectively. Note that the TO procedure optimizes the structural design by decreasing the density of the less strained finite elements and increasing the density of the most strained ones. During this process, a "pseudo-Ut" is used (in fact a strain energy to the power of a certain penalty), which cannot be compared directly with the physically realistic Ut from 1ESO. For that reason the Ut values from run II-A were adjusted. This was done by (a) matching the density of the structural elements in the 1ESO calculations to the density of the first iteration of the TO procedure, and (b) calculating the energy of the 1ESO calculations taking into account the power of the penalty. In this way, even though the Ut values are not "physically accurate", comparisons between the two procedures can be made.

Figure 8: Resulting design instances from run II-B.

The results tables of Figure 9 present the strain energy of the structural design before and after the STM procedure is performed, Ut and Ut-final respectively.

Note that the Ut values of both runs are very similar. This is because they both have a similar StD (Figures 5 and 9), with the exception of the last cycle, in which the StD - and thus the Ut - differs. Even though for both runs Ut-final decreases at approximately the same rate, Ut-final in run II-B is always lower. This is because TO minimizes Ut, while 1ESO minimizes structural mass by deleting the structural elements
with less Ut. So in 1ESO, Ut is hardly reduced. It is also noticeable that the reduction of Ut-final between the two runs diminishes for every cycle. This is because in design instances with more levels the U values among finite elements differ more: due to gravity loads, finite elements at the bottom part of the structure yield more strain than the ones at the top part. So there is more opportunity for optimization in a structure with very dissimilar U values among its elements.

However, it can be seen as well that this difference in performance has little effect on the behavior of the subsequent transformation method (STT). For both runs, the specific spaces and the total number of spaces deleted by STT are the same during the first three cycles and only slightly change in the last cycle. Thus it can be said that within the current implementation, a different STM seems to have little influence on the behavior of STT.

Figure 9: Result tables and graphs of runs II-A and II-B.

DISCUSSION AND FURTHER WORK
The DPI toolbox framework and its current implementation were briefly presented. It simulates spatial-structural design processes to: (1) study the influence of a selected transformation method on design instance evolution; and (2) study the influence of a selected transformation method on the behavior of the other transformation methods. Two case-studies were presented, which illustrate the DPI toolbox's potential to aid in the study of design processes.

The first case-study investigated the influence of using a different structural grammar (a different transformation method) on the evolution of the structural design, via the maximum nodal displacement (dmax) and the total strain energy (Ut). It was
found that in design instances with the same type of structural elements the influence of transformation methods is observed to be more explicit, while, when different types are present, the influence is more undetermined.

The second case-study investigated the influence of using different Structural Modification Methods (i.e. 1ESO vs. TO) on the behavior of the subsequent Structural Transformation Method (STT). It was found that even though TO generates better structural designs than 1ESO, this has little effect on the behavior of the subsequent STT.

In the future, a further set of rigorous academic and real-life case-studies will be devised to benchmark the DPI toolbox. New transformation methods and amendments to the existing ones will also be implemented to further study the design processes.

REFERENCES
Brooks, FP 2010, The Design of Design, Addison-Wesley Professional, Boston, MA.
Coates, P 2010, Programming.Architecture, Routledge.
Cross, N 2006, 'Designerly Ways of Knowing', Design Issues, 17(3), pp. 49–55.
Cross, N 2008, Engineering Design Methods, 4th ed., John Wiley & Sons, Chichester, UK.
Davila Delgado, JM and Hofmeyer, H 2013, 'Research Engine: A tool to simulate and study spatial-structural design processes', Proceedings of CAAD Futures 2013, July 3-5, Shanghai, China.
Grobman, YJ, Yezioro, A and Capeluto Guedi, I 2010, 'Non-Linear Architectural Design Process', International Journal of Architectural Computing, 8(1), pp. 41–54.
Haymaker, J, et al. 2004, 'Engineering Test Cases to Motivate the Formalization of an AEC Project Model as a Directed Acyclic Graph of Views and Dependencies', Journal of Information Technology in Construction, 9(January), pp. 419–441.
Hofmeyer, H and Bakker, MCM 2008, 'Spatial to kinematically determined structural transformations', Advanced Engineering Informatics, 22(3), pp. 393–409.
Hofmeyer, H and Davila Delgado, JM 2013, 'Automated design studies: Topology versus One-Step Evolutionary Structural Optimisation', Advanced Engineering Informatics, Available online 28 April 2013.
Kalay, YE 2004, Architecture's New Media: Principles, Theories and Methods of Computer-Aided Design, MIT Press.
Maher, ML, Poon, J and Boulanger, S 1996, 'Formalising Design Exploration as Co-evolution: A Combined Gene Approach', Advances in Formal Design Methods for CAD: Proceedings of the IFIP WG5.2 Workshop on Formal Design Methods for Computer-Aided Design, Springer, pp. 3–30.
Shea, K, Aish, R and Gourtovaia, M 2005, 'Towards integrated performance-driven generative design tools', Automation in Construction, 14(2), pp. 253–264.
Shea, K and Cagan, J 1999, 'Languages and semantics of grammatical discrete structures', Artificial Intelligence for Engineering Design Analysis and Manufacturing, 13(04), pp. 241–251.
Stiny, G 1980, 'Introduction to shape and shape grammars', Environment and Planning, 7(3), pp. 343–351.
Generative Agent-Based Design Computation
Abstract. Agent-based systems have been widely investigated in simulation and modeling.
In this paper, it is proposed that agent-based systems can also be developed as generative
systems, in which different aspects of performative design can be defined as separate
drivers in a proper computational framework. In this manner constrained generating
procedures (CGP’s) are studied to integrate the discrete design processes into one system.
Subsequently, this generative agent-based design tool is accompanied by generating and
constraining mechanisms which are informed by material characteristics and fabrication
constraints, bringing to the forefront emergent complexity.
Keywords. Computational design; agent-based system; robotic fabrication; constrained
generating procedures (CGP’s).
INTRODUCTION
Performative design, as a design process, can be described along with several principles. Integrating such principles into a cumulative system is to involve different key aspects of performance in a process of formation. The integration process of these aspects requires designing a convenient generative system to explore performative approaches to form generation. In terms of computation, form can be defined as an interaction between internal components and external forces (Kwinter, 2008). Similar to natural morphogenesis, in computational design modeling the development of form can be informed by the processes of materialization, production and construction (Menges, 2008). Each one of these internal components can be described as a separate driver, which in turn can be synthesized into an integral computational design tool. These integrated drivers interact with each other within an environment, and this exchange ultimately increases the complexity of the system as a whole.

One procedural approach is to organize such complexity through a computational framework that incorporates its own elements, rules and interactions (Holland, 2000). In some circumstances, this computational framework can exhibit emergent phenomena. In fact, the proper generative computational framework includes both mechanisms to generate possibilities and constraints to limit the range of possibilities (Holland, 2000). Moreover, this computational framework needs to be further specified during the problem-solving design process; developing such a computational framework involves three key aspects: generation mechanisms, test mechanisms, and a control strategy (Mitchell, 1990). Furthermore, based on constrained generating procedures (CGP's), the computational
framework should have mechanisms to progressively adapt, or learn, as its components interact (Holland, 2006). A particularly promising method for modeling and simulating such complex adaptive systems (CAS) is the agent-based system (Holland, 1995).

RESEARCH OBJECTIVES: GENERATIVE AGENT-BASED COMPUTATIONAL DESIGN
Recent advances in computational capacity open new perspectives into the implementation of agent-based systems as generative tools within computational design in architecture. The purpose of this paper is to investigate the possibility of integrating generative system properties and constraint procedures into real-time computational form finding, which are coupled together to exhibit complex emergent behavior.

In this paper the development of this generative system is investigated through constrained generating procedures (CGP's). This approach gives the possibility to simultaneously link different mechanisms to generate and constrain possibilities, which allows for the exploration of emergent architectural solutions. These mechanisms contain discrete design elements and behaviors wherein a bottom-up methodology of behavior-based systems can be useful to organize emergent complexity. This integration is followed by a generative approach to material properties to explore performative formation in architectural practices, allowing form to emerge from the interaction between agent systems and their surrounding environment. In this investigation form generation is affected by different attributes, which are implemented inside the agents' data structure.

For this investigation, the agents' data structure is described by the specific geometrical behavior of bio-inspired plate structures based on the sea urchin. To achieve this, the agents are distributed on the topological space of UV map parameters; the relations between agent-agent and agent-environment are derived from this topological space, e.g., it describes the conceptual neighborhoods along with their topological relations. In this context, the topological space is described by a surface with positive Gaussian curvature and by the fabrication tools, which consist of a KUKA KR 125/2 (6-axis) and a KUKA KPF1-V500V1 turntable (1-axis). The fabrication configuration also includes an HSD ES 350 spindle unit as an effector.

GENERATIVE AGENT-BASED SYSTEM
Agent-based systems, as a computational method, facilitate for researchers the study of various fields of science. An agent-based system consists of a large number of agents that follow simple local rules and interact within an environment (Gilbert, 2008). Agent-based modeling consists of defining both the agents and the relationships between them (Bonabeau, 2002); this can collectively exhibit a complex behavior pattern which leads to a global emergent behavior as a result. The individual autonomous agent, as a self-contained learning unit, perceives its environment and takes actions (Mellouli et al., 2004). Accordingly, the agent can learn from its surroundings by permanently repositioning itself within the overall agent-system and its environment - while adhering to a set of flexible behavioral rules. A system of agents thus has the ability to learn and adjust its behavior over time (Figure 1).

In social science, Gilbert (2008) illustrated that agent-based systems can be classified into urban models, opinion dynamics, consumer behavior, industrial networks, supply chain management, electricity markets, and participative and companion modeling (Gilbert, 2008). On the other hand, Bonabeau (2002) categorized agent-based systems in a business context into flows (evacuation, traffic), markets (stock market, shopbots and software agents), organizations (operational risk and organizational design), and diffusion (diffusion of innovation and adaptation dynamics) (Bonabeau, 2002). These two classifications represent the application of agent-based systems for simulation and modeling in any behavioral system.

In the field of sociology, a generative agent-based approach has been regulated in two steps: situating agents in a relevant spatial environment and after that utilizing agents' interaction based on
Figure 1: A: A Complex Adaptive System similar to that presented by Holland (1995); B: Agent distributions on the topological space; C: An Agent-Based System, topological interactions between agent-agent and agent-environment.
specific rules to generate another level of bottom-up organized regularities (Epstein, 2006). In this generative method, the systems' behavior cannot be deduced from the behavior of their components, whereby it disregards some of the interactions between the elements (Squazzoni, 2012). Accordingly, the generative method in agent-based systems is a bottom-up approach to take advantage of low-level features, e.g., material properties, in a manner that enables emergent phenomena.

In relation to architectural design, developing such a generative computational framework is easily associated with the different methods for establishing effective organized complexity. One of the features of such adaptation in complex systems is emergent properties, which can be obtained through Constrained Generating Procedures (CGP's) (Holland, 2000). The advantage of CGP's in agent-based systems is the ability to define agent-based systems on mechanisms and constraints - in one specific system. This local generative system, as a building block, has been implemented in the computational framework as an overall generative system which can be identified as a system property. However, each one of these building blocks or agents has a data structure, in which the mechanisms and constraints have a great role in finding an optimal solution.

Accordingly, the definition of mechanisms and constraints is critical in defining real-time interactions within agent-based systems, whereby this definition must prepare the possibility for a system to become both generative and also have the capacity to exert the implemented constraints. This real-time interaction relies on the agents' data structure; the agents perceive the environment as well as the other agents, and based on their defined ontology compute the proper response to any stimuli (Pfeifer and Scheier, 2001). However, the ontology level also depends on the circumstances that will apply to the generative system. This knowledge distribution among agents could be specified locally in order to avoid unnecessary computation.

Consequently, the bottom-up knowledge distribution provides agent-based systems with behavior-based computation rather than knowledge-based computation. In behavior-based computation, the topological space is explored with agents along with their specific behaviors to behave in this problem domain, rather than with a specific system that knows about the problem domain (Maes, 1993). However, based on emergence properties, this tool has difficulty approaching a precise behavior. Therefore, the underlying elements of this tool need enough flexibility to emerge an approximate behavior, as a cloud (Miller, 2007).

METHODOLOGY

Agent-Based system: Defining Mechanisms and procedures
In order to investigate a generative approach for an agent-based system, a CGP framework is developed with both generative mechanisms and constraints, as sketched schematically below.
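To make the coupling of generating mechanisms and constraints concrete, the following Python fragment gives one minimal, hypothetical reading of a constrained generating procedure. It is a sketch under assumed names (cgp_step, jitter, bounded are illustrative placeholders), not the implementation used in this research.

```python
# A minimal CGP-style step: generating mechanisms propose new agent states,
# constraints filter them, and the surviving states feed the next iteration.

import random

def cgp_step(agents, mechanisms, constraints):
    """Apply every generating mechanism, then keep only constraint-satisfying agents."""
    proposals = [mechanism(agent) for agent in agents for mechanism in mechanisms]
    return [a for a in proposals if all(check(a) for check in constraints)]

# Toy usage: agents are plain numbers, one mechanism, one constraint.
jitter = lambda a: a + random.uniform(-1.0, 1.0)   # generating mechanism
bounded = lambda a: -10.0 <= a <= 10.0             # constraint
agents = [0.0, 2.5, -3.0]
for _ in range(5):
    agents = cgp_step(agents, [jitter], [bounded])
```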
This method maintains a generative computational framework to generate all the future possibilities, while maintaining specific constraints or limitations. The mechanisms of these generative agent-based systems are bound to material properties and to fabrication and construction constraints.

Material properties in particular have the ability to characterize geometrical behavior mechanisms. In addition, motion behavior mechanisms have the ability to perform as a sensory motor for each one of these agents: if the desired conditions are not being met, the responsible mechanism will release an appropriate response to change the agents' behavior. This reaction can be differentiated by the agents' situation, which can vary from splitting or eliminating to re-orientation and relocation of the agents. Predicting a proper mechanism for each situation or problem is not possible in a behavior-based bottom-up system, due to the low-level ontology that is used in it. For this purpose an agent-based system has to deal with only a primitive ontology to solve the problems wherein it has been situated. In the following sections some mechanisms related to this generative agent-based computational design tool will be investigated.

Motion Behavior Mechanisms: According to Reynolds (1999), the motion behavior mechanisms can be defined in three layers: action selection, steering and locomotion (Reynolds, 1999). These three behavior layers are applicable for a wide range of autonomous motion behaviors; however, it is necessary to mention here that this behavioral hierarchy is not accessible to the whole range of autonomous agents, e.g., it is not appropriate for a chatterbot (Reynolds, 1999). On the other hand, the motion behavior mechanism is specialized in specific behaviors, which are imitated and modeled from certain behaviors of natural entities to relocate autonomous characters. Therefore, this mechanism is suitable only for changing the motion behavior of the system.

In action selection, agents observe the state of their environment and that of the other agents in order to perceive their changes. After this initial perception, agents set appropriate goals, which are in proportion to the change of system state and synchronized to the agents' internal rules. In the steering level, the goal is decomposed into sub-goals that can be represented by the steering behaviors. These in turn can become steering signals, which are intelligible for the locomotion layer; in the locomotion layer, these signals will be converted into the motion parameters of the agent's locomotion (Reynolds, 1999).

The agent-based system with motion behavior mechanisms can be influenced by the other steering behaviors at any moment, that is, to change the agent's location and orientation. The behaviors which relate to the agent's motion have to be translated to the steering behavior parameters. The steering behavior gives the possibility to accumulate different types of control behavior procedures and, based on the weight of parameters, they can change the agent's motion behaviors. Therefore, the locomotion mechanism must be completely independent from the steering behaviors (Reynolds, 1999), in which the steering behaviors convert control signals into the motion of agents (Figure 2); a small sketch of this layering follows below.

Geometrical Behavior Mechanisms: Geometrical behaviors are directly affiliated to the material properties which are used in the process of design, fabrication and construction. Therefore, the geometrical behavior mechanism is a reflection of material properties. In fact, this mechanism defines interaction effects between the geometrical characteristics of agents. Since this investigation is about plate-like structures, this mechanism is limited to plane geometry. Hence, geometrical interactions between agents are related to geometrical plane intersections, wherein the intersection between a selected agent and its surrounding agents generates a cell with a polygonal structure. The distribution of agents on the topological surface defines the final shape of the agents' cells. The polygonal shape of this cell (e.g., convex or concave polygon) is closely related to the curvature of the surface (Troche, 2008), which the agents occupy tangentially. Due to the synclastic definition of the surface, the result will be a convex polygon.

The geometrical interaction between agents has been related to the tangent plane intersection.
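As a hedged illustration of the steering layer just described (after Reynolds, 1999), the fragment below accumulates two weighted steering signals, a seek (attraction) and a separation (repulsion), and hands the result to a simple locomotion update. It is a sketch in 2D UV coordinates under assumed names, not the authors' implementation.

```python
# Steering sketch: agents live in the UV parameter space of the surface; steering
# returns desired-velocity corrections, and locomotion integrates them separately.

def seek(pos, target, max_speed=1.0):
    dx, dy = target[0] - pos[0], target[1] - pos[1]
    d = (dx * dx + dy * dy) ** 0.5 or 1e-9
    return (dx / d * max_speed, dy / d * max_speed)

def separation(pos, neighbours, radius=0.1):
    fx = fy = 0.0
    for n in neighbours:
        dx, dy = pos[0] - n[0], pos[1] - n[1]
        d = (dx * dx + dy * dy) ** 0.5
        if 0.0 < d < radius:              # repulsion only inside the interaction radius
            fx += dx / (d * d)
            fy += dy / (d * d)
    return (fx, fy)

def steer(pos, vel, target, neighbours, w_seek=0.6, w_sep=0.4, dt=0.1):
    s1, s2 = seek(pos, target), separation(pos, neighbours)
    # Weighted accumulation of steering signals, then a simple locomotion step.
    ax, ay = w_seek * s1[0] + w_sep * s2[0], w_seek * s1[1] + w_sep * s2[1]
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt), vel
```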
Figure 2: Motion Behavior mechanisms (the attraction and repulsion steering behaviors).
However, the tangent plane intersection algorithm (TPI) (Troche, 2008) is not appropriate for defining geometrical behavior mechanisms, due to the knowledge-based structure which has been used inside the TPI algorithm: instead of operating locally, the TPI algorithm works globally. Therefore, an algorithm based on a bottom-up approach has been developed in order to calculate the real-time intersection between the plate-like structures of the agents' geometry.

Accordingly, the intersection mechanism has been developed to find the intersection vertices of a generating agent with other neighboring agents (Figure 3); these vertices lie on the tangent plane, which is approximately located on the surface. Furthermore, if the agent cell edges (with adjacent agents) are naked and not connected to them, this indicates that the agent cell relations are interrupted by self-intersection or interpenetration of other agents. In that case, the generating agent needs to send a steering signal to change its state in relation to the neighboring agents and environment.

Agent-Based system: Defining Constraints
As mentioned for CGP's, the generative mechanism is coupled with constraints. In terms of architectural design, constraints can be associated with geometrical and fabrication requirements, which lead the outputs generated from interactions between mechanisms toward desired possibilities. It is critical to find a method to relate these interconnected design parameters as a part of the generative tool. In terms of mathematical biology, the constraints can be described by morphological spaces, or morphospaces, as mathematical spaces (Mitteroecker and Huttegger, 2009).
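For orientation, the shared vertex of three neighbouring plate agents can be computed as the intersection of their tangent planes. The sketch below takes the generic linear-algebra route with NumPy; it is illustrative only and is not the local slicing algorithm developed for the tool, and the function name is an assumption.

```python
# Intersection of three tangent planes, each given by a point and a unit normal.

import numpy as np

def tangent_plane_vertex(points, normals):
    """points, normals: three length-3 vectors; each plane satisfies n . x = n . p."""
    N = np.asarray(normals, dtype=float)                      # 3x3 matrix of normals
    d = np.einsum('ij,ij->i', N, np.asarray(points, dtype=float))
    if abs(np.linalg.det(N)) < 1e-9:
        return None                                            # nearly parallel planes: no single vertex
    return np.linalg.solve(N, d)

# Example: three mutually orthogonal planes through (1, 2, 3) meet at that point.
p = np.array([1.0, 2.0, 3.0])
vertex = tangent_plane_vertex([p, p, p], [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
```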
Figure 3: Left: the generated agents' cell; right: the intersection mechanism by slicing algorithm.
The term morphospace describes the morphological features of generative variations within a solution space or landscape of possible outcomes (Menges and Schwinn, 2012). In this generative tool, the constraints are considered from the geometric limitations and possibilities of the material characteristics. In the fabrication phase, these constraints can be described by the morphospace of the fabrication tool. In general, the agents' geometric attributes dictate the need for various procedures to utilize the generative interaction among the agents. However, the morphospace's definition overlaps with the differentiation between the geometrical possibilities and what is producible by the fabrication tool (Menges, 2013). Therefore, the constraints of this investigation are derived from the morphological space, which is categorized into geometrical, fabrication and construction constraints.

Geometrical Constraints: Since this generative tool is designed for plate-like structures, its geometrical parameters are applicable to the most probable range of plate structures. According to Menges (2013), the plate morphology is identified by three major features (Figure 4): 1) the polygon radius, which is defined by the area of the plate, calculated from the polygon vertices, and the perimeter circle bounding these vertices; 2) the connection angle, which is defined as the angle between connected plates, calculated from the angle between the normals of the connected plates; 3) the polygon angle, which is defined as the angle between the polygon edges and is related to the shape of the polygon (in the polygon convex segment 0° to 180° and in the polygon concave segment -180° to 0°) (Menges, 2013).

Figure 4: The geometrical and fabrication constraints: polygon radius, connection angle, and polygon angle; similar to that presented by Menges (2013).

Fabrication and construction constraints: The morphospace of the fabrication tool, in relation to the morphological geometry, represents the producible parameters of fabrication. As Menges (2013) mentioned, with the fabrication tools for this investigation, the morphospace region determines the producibility of the geometrical parameters: 1) the polygon radius depends on the distance between the robot and the turntable; 2) the connection angle is limited by the specification of the effector and the length of the tool; 3) the polygon angle is indirectly influenced by the fabrication tool, in which the constraints are related to the depth of the joints, which in itself is determined by the connection angle (Menges, 2013).

RESULT: COMPUTATIONAL DESIGNING TOOL

Agent-Based system: Agent-based Programming
A generative agent-based computational framework is established by identifying the agent types along with their attributes (Macal and North, 2009). This identification is followed by defining the boundaries within the surrounding environment that the agent will explore as a topological and morphological solution space. After the agents and environment are defined, this framework will simultaneously compute all parallel interactions between agent-agent and agent-environment. These parallel interactions will be associated by sending and receiving through a feedback loop (Holland, 2006). Accordingly, in the complex system behavior, convergence to the desired performance criteria is dependent on the positive and negative feedback loops.

This generative agent-based tool is initialized with agents (plate-like structures) and a specific environment (synclastic surface).
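The constraint side of this morphospace can be pictured as a simple range check on the generated cells. The Python sketch below is illustrative only: the numeric bounds are placeholders rather than values taken from the paper or from the robot's specification, and the function names are assumptions.

```python
# Hedged sketch: test whether a generated cell stays inside assumed producible
# ranges for polygon radius and connection angle.

import math

def polygon_radius(vertices, centre):
    return max(math.dist(v, centre) for v in vertices)

def connection_angle(n1, n2):
    """Angle in degrees between the unit normals of two connected plates."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def within_morphospace(vertices, centre, n1, n2,
                       r_range=(0.2, 1.2), angle_range=(10.0, 70.0)):
    r_ok = r_range[0] <= polygon_radius(vertices, centre) <= r_range[1]
    a_ok = angle_range[0] <= connection_angle(n1, n2) <= angle_range[1]
    # A violating agent would trigger cell division, growth or a rotation signal.
    return r_ok and a_ok
```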
Figure 5: Left: edge adhesion (the attraction to the edges); right: cell adhesion (the attraction and repulsion between agents).
After initiation, the motion behavior mechanism is added to identify the attraction and repulsion between agent-agent and agent-environment. Additionally, this mechanism is coupled to other control mechanisms for the rotation and repositioning of agents. However, after the agent distribution on the surface and the process of finding the geometric interactions between agents take place (generating a cell for each agent), attraction and repulsion algorithms define coherency between the agents' cells and their environment. This coherency is defined by cell adhesion and surface edge adhesion. By increasing the value of the cell adhesion, agents begin to present flocking behavior, and by decreasing it, agents start to avoid each other within the bounded surface. It should be noted that the agent-to-agent interaction is expressed between one agent and its closest neighbor, or between one agent and a range of its closest neighbors, each of which can represent different behaviors. In the edge adhesion, by increasing the adhesion value agents will be attracted to the edges, and by decreasing it they will gather in a central position - away from all edges (Figure 5).

Consequently, in the geometrical behavior mechanism it is necessary to avoid inappropriate intersections between connected cells. This problem occurs when the edge intersection lies outside the overlapping boundary area between the two cells. In this case, an algorithm checks that the right intersection between cells exists; it does this by rotating the cell or by relocating it parallel to its normal (Zimmer et al., 2013); through this process the intersection point will gradually change its location until it fits inside the defined area (Figure 6).

The main functional component of any generative system is its capacity to constrain the possibilities which emerge from the generation mechanism. According to the constraints defined for this investigation, the generated cells need to be limited by two aspects: the size of the agent's cell, or polygon radius, and the angle between agents' cells, or connection angle. The cell size can be deduced by a regular polygon area formula for convex polygons, after which the polygon radius can be obtained; this radius will be stored in the agents' data structure to be accessible by the agents during the computation. However, the polygon radius must be within the specific range imposed by the fabrication constraints; in order to change the size of the cells, cell division and
Figure 6: Left: rotating the cell to find the right intersection; right: relocating the cell parallel to its normal.
Figure 7: Finding the right polygon radius through the cell division and cell growth mechanism.
Figure 8: Controlling the connection angle by generating a steering signal to rotate the cell.
Figure 9: The final result of the generative agent-based computation tool.
narily result of such a generative system presented joint conditions that were similar to the sea urchin, where three plates meet each other at one corner point – rather than four. However, although this behavior was anticipated, it is also discernible that the lack of construction mechanisms (which are naturally present in the plate structure), along with insufficient construction constraints, caused the initial result to be far from what was expected. The initial results might be enhanced by further analysis of the biological model, the fabrication space and the agents' emergent behavior, so that additional mechanisms and constraints can subsequently be implemented into the tool.

It is also possible to speculate that the results are indicative of the specific means by which agent-based tools process the input data. Unlike "Motion behavior" (Reynolds, 1999), the generative agent-based tool deals with the implementation of material characteristics, geometrical behavior and construction constraints; this implementation affects agents' behavior locally and globally. In this manner, agents become a complex adaptive system of systems. It is speculated in this paper that although the presented case studies in the generative agent-based tool can be accommodated within computational design, it is imperative to differentiate the aspects of the generative agent-based computation that contribute to integrating material systems as mechanisms with robotic fabrication constraints (Figure 9).

Some of the consequences of this implementation might steer in a different direction, expanding further our understanding of the morphospace of robotic fabrication (Menges, 2013). For example, angle and plate control mechanisms empower the design construct in a way that facilitates access for the designer to methodologies that allow achieving an optimized plate formation; they also reduce the need to have recourse to the design process during the construction phase.

REFERENCES
Bonabeau, E 2002, 'Agent-based modeling: Methods and techniques for simulating human systems', Proceedings of the National Academy of Sciences, vol. 99, no. 90003, pp. 7280–7287.
Epstein, JM 2006, Generative social science: Studies in agent-based computational modeling, Princeton University Press, Princeton.
Gilbert, GN 2008, Agent-based models, SAGE Publications,
Los Angeles.
Holland, JH 1995, Hidden order: How adaptation builds complexity, Addison-Wesley, Reading, Mass.
Holland, JH 2000, Emergence: From chaos to order, Perseus Books, Cambridge, Mass.
Holland, JH 2006, 'Studying Complex Adaptive Systems', Journal of Systems Science and Complexity, vol. 19, no. 1, pp. 1–8.
Kwinter, S 2008, 'Who's Afraid of Formalism' in C Davidson (ed), Far from equilibrium: Essays on technology and design culture, Actar, New York, pp. 144-149.
Macal, C and North, M 2009, 'Agent-based modeling and simulation', Proceedings of the 2009 Winter Simulation Conference (WSC), pp. 86–98.
Maes, P 1993, 'Behavior-Based Artificial Intelligence' in J Meyer, SW Wilson and HL Roiblat (eds), From Animals to Animats 2. Proceedings of the Second International Conference on Simulation of Adaptive Behavior, MIT Press, Cambridge, Mass., pp. 2–10.
Menges, A 2008, 'Integral Formation and Materialization: Computational Form and Material Gestalt' in B Kolarevic and K Klinger (eds), Manufacturing Material Effects: Rethinking Design and Making in Architecture, Routledge, New York, pp. 195-210.
Menges, A 2013, 'Morphospaces of Robotic Fabrication. From Theoretical Morphology to Design Computation and Digital Fabrication in Architecture' in S Brell-Cokcan and J Braumann (eds), Rob/Arch 2012. Robotic fabrication in architecture, art and design, Springer, Wien, pp. 28–47.
Menges, A and Schwinn, T 2012, 'Manufacturing Reciprocities', Architectural Design, vol. 82, no. 2, pp. 118–125.
Mellouli, S, Moulin, B and Mineau, G 2004, 'Laying Down the Foundations of an Agent Modelling Methodology for Fault-Tolerant Multi-agent Systems' in D Hutchison, T Kanade, J Kittler, JM Kleinberg, F Mattern, JC Mitchell, M Naor, O Nierstrasz, C Pandu Rangan, B Steffen, M Sudan, D Terzopoulos, D Tygar, MY Vardi, G Weikum, A Omicini, P Petta and J Pitt (eds), Engineering Societies in the Agents World IV, Springer Berlin Heidelberg, Berlin, Heidelberg, pp. 275–293.
Miller, JH and Page, SE 2007, Complex adaptive systems. An introduction to computational models of social life, Princeton University Press, Princeton, New Jersey.
Mitchell, WJ 1990, The Logic of Architecture: Design, Computation, and Cognition, MIT Press, Cambridge, Mass.
Mitteroecker, P and Huttegger, SM 2009, 'The Concept of Morphospaces in Evolutionary and Developmental Biology: Mathematics and Metaphors', Biological Theory, vol. 4, no. 1, pp. 54–67.
Pfeifer, R and Scheier, C 2001, Understanding intelligence, MIT Press, Cambridge, Mass.
Reynolds, CW 1999, 'Steering behaviors for autonomous characters', Game Developers Conference, vol. 13, pp. 763–782.
Squazzoni, F 2012, Agent-based computational sociology, Wiley and Sons, Hoboken, N.J.
Troche, C 2008, 'Planar hexagonal meshes by tangent plane intersection', Advances in Architectural Geometry, Vienna, pp. 57–60.
Zimmer, H, Campen, M, Herkrath, R and Kobbelt, L 2013, 'Variational Tangent Plane Intersection for Planar Polygonal Meshing' in L Hesselgren, S Sharma, J Wallner, N Baldassini, P Bompas and J Raynaud (eds), Advances in Architectural Geometry 2012, Springer Vienna, Vienna, pp. 319–332.
Evolutionary Energy Performance Feedback for Design
(EEPFD)
ers must balance the needs of multiple competing objectives, often through inefficient and imprecise means, to identify the best fit design through an understanding of trade-offs between energy performance and other design objectives.

The motivation of this research stems from the potential of multidisciplinary design optimization (MDO) methods to alleviate issues between the design and energy simulation domains. MDO is a general term used in reference to the method of coupling parametric design and optimization algorithms in an automated or semi-automated design process framework or workflow with the intent of identifying "best fit" solutions to complex problems with competing criteria. MDO methodologies have been successfully adopted in the aerospace industry and other engineering fields and have been gradually explored in the building industry as a means of potentially mitigating existing issues between building design and other performance analysis domains. Current research into applied MDO has initially demonstrated a capability to overcome interoperability issues between domain-specific platforms. Optimization algorithms automated by MDO have also been identified as being capable of increasing feedback results and designer interaction. By virtue of the automation and optimization, more efficient access to performance evaluations of design alternatives, inclusive of trade-off studies between competing design criteria in support of design decision-making, is also identified (Flager et al., 2009; Yang and Bouchlaghem, 2010). Given the trend of computing availability, e.g. cloud computing, our research into MDO is becoming more obviously suitable to the particularities of architectural practice. We hypothesize that this computing trend results in an exponentially expanding potential of MDO applicability. When observed in the context of this expanded computing capability, the plausible bridging of the observed gap between energy performance and design through MDO serves as another driving force behind this research. MDO is therefore understood as a key component to achieving the research motivation of "designing-in performance", which is defined in this research as the idea of utilizing performance feedback to influence design exploration and subsequent decision making under the assumption of pursuing higher performing designs much earlier in the design process and arguably intrinsically coupled, which is not the norm in contemporary practice.

PROBLEM STATEMENT AND RESEARCH OBJECTIVE
While current precedents in the building design industry demonstrate the potential of MDO as a means of solving performance feedback issues, there are several inherent and unique challenges for MDO to be more robustly and pervasively applied in architectural practice. For example, when MDO is applied in the aerospace industry, an identified "best fit" solution can be mass-produced once it has been fully optimized. In comparison, applying MDO to find a best fit for a building design problem, always with a unique set of requirements, circumstances, and preferences, appears less cost effective by nature. In addition, the objective nature of evaluating design in other engineering industries provides more suitability towards MDO application than the more subjective nature of building design, where architecture is inclusive of aesthetic motivations as well. Furthermore, the deep-rooted disconnection between the design and energy simulation domains, enumerated previously, adds to the factors impeding the application of MDO from being fully explored and implemented within the design and energy simulation domains. Another of our research observations is that the majority of the MDO applications in architecture related to building energy performance are conducted by predominantly engineering-based researchers, with a focus on optimizing mechanical systems or façade configurations, typically much later in the design process after the building envelope has been finalized (Wright et al., 2002; Adamski, 2007). The importance of form exploration during the early stages of the design process is to date seldom addressed, and then typically through overly simplified geometry for proof of concept, observed to be due to the limited
flexibility of existing frameworks (Tuhus-Dubrow and Krarti, 2010; Janssen et al., 2011). Furthermore, there is only a limited number of published MDO frameworks for building energy performance that have been fashioned and explored through a designer's perspective (Caldas, 2008; Yi and Malkawi, 2009; Janssen et al., 2011). Yet, within these applications, emphasis on the applicability and designer interaction of MDO frameworks for the early stage design process has not been adequately researched.

In response to this existing gap - emphasizing the early stage of design and design exploration stages - and the potential of technological affordances and trends, a design-centric MDO framework, titled Evolutionary Energy Performance Feedback for Design (EEPFD), was developed and has been initially tested and benchmarked against conventional design processes to understand applicability to the early stage of design (Gerber et al., 2012). The objective of this research step is to focus on the issue of designer interaction within EEPFD through observation of two case studies: 1) a practice based case study involving a K-12 facility; and 2) a design studio based case study involving a single family residence. To provide a consistent point of comparison, a series of measurements regarding design alternative performance, process efficiency, as well as designers' interaction and communication with EEPFD are established, collected, then discussed. Through a comparative study of the two processes adopted by these designers, the applicability and impact of EEPFD during the early stage of the design process is then presented.

THE FRAMEWORK: EVOLUTIONARY ENERGY PERFORMANCE FEEDBACK FOR DESIGN
Evolutionary Energy Performance Feedback for Design (EEPFD), a design centric MDO framework, is developed to incorporate conceptual energy analysis and design exploration of simple to complex geometry for the purpose of providing early stage design performance feedback (Gerber et al., 2012). It is intended to be used by designers during the conceptual design stage where the overall building form has not been finalized. EEPFD utilizes an automated evolutionary searching method and custom genetic algorithm (GA) based multi-objective optimization (MOO) algorithms to provide energy performance feedback (i.e. energy use intensity (EUI)) to assist in design decision making. Also included are spatial programming compliance (SPC) and a schematic net present value (NPV) calculation for consideration in performance trade-off studies. The automation engine of EEPFD was developed as a prototype plug-in for Autodesk® Revit® (Revit), titled H.D.S. Beagle, to integrate the design, energy, and financial domains. The integrated platforms are Revit, Autodesk® Green Building Studio® (GBS) and Microsoft® Excel® (Excel) respectively. The three competing objectives in the algorithm are to maximize spatial programming compliance (SPC), minimize energy use intensity (EUI), and maximize net present value (NPV). The detailed functionality of each platform, the objective functions, and the GA-encoding method can be found in previously published work (Gerber et al., 2012).

The process of applying EEPFD to obtain performance feedback for design decisions is illustrated in Figure 1. The first step has two subcategories: the generation of the initial design and the generation of design alternatives. In EEPFD, the initial design is generated by the user through a parametric model in Revit and a constraints and parameter range file in Excel. At this point the initial geometry, parameters and ranges, site information, program requirements, and available financial information are provided manually by the user. As a result, in order for designers to use EEPFD, it is essential for them to have the ability to formulate their design problems in the form of a parametric model in Revit with their exploration interests translated into a series of parameters and ranges. An understanding of and capability with parametric practices, solution space thinking, and design exploration is an essential prerequisite in the implementation of EEPFD (Gerber, 2007).
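Since the three competing objectives named above define how design alternatives are compared, a short sketch of the comparison may be useful. The Python fragment below shows a generic Pareto-dominance test over the (SPC, EUI, NPV) triple; it is illustrative only and is not taken from H.D.S. Beagle, and the example values are hypothetical.

```python
# Generic three-objective comparison: maximize SPC, minimize EUI, maximize NPV.

def dominates(a, b):
    """a, b: dicts with keys 'spc', 'eui', 'npv'. True if a Pareto-dominates b."""
    no_worse = a['spc'] >= b['spc'] and a['eui'] <= b['eui'] and a['npv'] >= b['npv']
    better   = a['spc'] >  b['spc'] or  a['eui'] <  b['eui'] or  a['npv'] >  b['npv']
    return no_worse and better

def pareto_front(designs):
    """Keep the non-dominated design alternatives for trade-off analysis."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]

# Hypothetical alternatives (SPC in %, EUI in kBtu/sqft/yr, NPV in M$).
designs = [{'spc': 92, 'eui': 70.1, 'npv': -0.51},
           {'spc': 92, 'eui': 69.3, 'npv': -0.48},
           {'spc': 90, 'eui': 69.0, 'npv': -0.50}]
front = pareto_front(designs)
```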
Figure 1: EEPFD's illustrated simulation process in accordance with the identified six-step conventional energy simulation process. Highlighted are the observation foci of this paper, emphasizing the interfaces inclusive of the interaction between designers and EEPFD.
The generation of design alternatives is part of the automated process driven by the customized GA-based MOO in EEPFD. Once the initial design is modeled and entered into the automated system, the following steps are cycled through until the automation loop is terminated either by the user or by the meeting of the system's termination criteria. Once the automation loop is terminated, there are two ways of proceeding: 1) a design alternative is selected based on the multi-objective trade-off analysis provided by EEPFD and the design proceeds to the next stage of development; or 2) the user manually implements changes in the initial design or constraints file based on the provided feedback before reengaging the automation loop. A detailed description of each step and the process of applying EEPFD implemented by users can be found in previously published work (Gerber and Lin, 2013). Currently, EEPFD has demonstrated the ability to automatically breed, select, evaluate and identify better fit design alternatives for varying degrees of building typologies and geometric complexity. EEPFD has also been validated against the human decision making process and is able to provide a solution space with an improved performance over a manual exploration process (Gerber and Lin, 2013). This paper further validates EEPFD with a focus on understanding the usability of the framework by designers, which is described and measured through their interaction with EEPFD prior to and after the automation system has been engaged, as highlighted in Figure 1.

RESEARCH METHODS AND EXPERIMENT DESCRIPTIONS
To explore the applicability of EEPFD to the design process this research provides an environment in which the interaction between designers and EEPFD during the early stages of design is observed. This research presents two case studies observed in this manner: Case Study I as a practice based study involving a K-12 school design, and Case Study II as a design studio based study involving a single-family residential design. In both cases the general program layout and overall building envelope design concept had been decided upon, as illustrated in Figure 2.

Through these case studies, two methods of implementing EEPFD were explored, with a divergence occurring during the two steps of EEPFD that require human interaction. While both case studies followed the previously described six step process, Case Study I required a consultant to provide technical expertise while Case Study II required only minor technical support. In both cases the authors served as the technical process experts, thereby bypassing any technical complications encountered through the prototype's use.
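As a compact illustration of the automation loop and its termination conditions described in this section, the sketch below abstracts the cycle into a plain Python function. The function and parameter names (generate_alternatives, evaluate, user_requests_stop) are placeholders assumed for the example, not the plug-in's API.

```python
# Minimal sketch of an EEPFD-style automation loop with two termination routes:
# a generation budget (system criterion) or an explicit stop by the user.

def run_automation_loop(initial_design, evaluate, generate_alternatives,
                        user_requests_stop, max_generations=20):
    population = [initial_design]
    scores = [evaluate(initial_design)]              # (SPC, EUI, NPV) per alternative
    for _ in range(max_generations):                 # termination criterion 1: budget
        if user_requests_stop():                     # termination criterion 2: the user
            break
        population = generate_alternatives(population, scores)
        scores = [evaluate(d) for d in population]
    # Afterwards the designer either selects an alternative from the trade-off
    # analysis or edits the initial design / constraints file and re-engages.
    return population, scores
```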
Figure 2: Case Study I and Case Study II conceptual design development before initial engagement of EEPFD. Image courtesy of Xinyue Ma (Case II) and Swift Lee Office (Case I).
They were also available throughout the process to provide necessary technical support and to enable direct observation of EEPFD on the early stages of this design process and design team.

The specific focus of our research observation is the interaction between the designer and EEPFD in the initial problem formulation and the utilization of generated data, steps 1 and 6 as shown in Figure 1. During this study three aspects of performance are considered and discussed. The first performance definition is that of the generated design alternatives as measured through the set of three objective functions when compared with the initial design. This represents the affordance of the current technology and the built-in evolutionary search method of EEPFD. The second performance definition is the overall quantity and quality of feedback generated through EEPFD. In this research the qualitative and quantitative analysis data regarding the design problem, process, and product was collected and compiled into the metrics defined in Table 1, which summarizes the recorded data during the
tion processes. Overall, quantity is defined as the feedback was made available through the prior two
number of design alternatives analyzed and time approaches, the ability of these approaches to pro-
required for each analysis, while overall quality is de- vide relevant information at the speed necessary
fined by the number of Pareto solutions generated for supporting the designers’ rapid determination
by EEPFD. The final performance metric is that of the observed design process itself when compared and contrasted with the six-step simulation format of the experimental design scenarios. Particular emphasis is placed on the observed interaction between the designers and EEPFD, specifically 1) their ability to identify and translate their design objectives and intentions into a functional parametric model for the system, and 2) the perceived relevance of the overall results by the designers to assist in their early stage decision-making.

DESIGN PROCESSES AND RESULTS

Case Study I: Practice Based Project
Case Study I focuses on a K-12 school design with approximately 30,000 square feet of usable program space, using a method allowing for easy adaptability to multiple sites throughout the greater Los Angeles area. Due to flexibility requirements by the client, a kit-of-parts design concept was developed to allow for multiple site adaptability and for future reconfiguration for various educational uses. In addition, the pursuit of a net Zero Energy Building configuration for each site was added to the design goals by the designers.

For Case Study I, the designer role was filled by the two principal architects, whose design philosophy of doing "the most with the least" focuses on economical and sustainable qualities as prerequisites to design. While the designers for Case Study I demonstrated an interest in utilizing innovative technology and methods, neither designer had any experience with parametric modeling or the Revit platform prior to this case study. Prior to this case study, however, the designers did have experience with attempts to integrate performance feedback into the design process, through both in-house performance analysis and collaboration with an outside MEP consultant. The energy performance of optimal configurations for different site conditions was still in question, and as a result the implementation of EEPFD was explored and researched by the designers and research team to understand whether EEPFD could provide a suitable alternative approach.

In Case Study I the design problem itself was limited to optimizing one standard classroom unit using the defined kit-of-parts through manipulation of varying façade elements. As parametric design had not been a part of the designers' practice prior to this experiment, the authors served as consultants to assist in the translation of the design into a parametric model. Due to unfamiliarity with parametric modeling, the Revit design platform, and the inherent limitations of both, a week and four iterations were needed before the parametric model could be finalized. The parameters explored for façade configurations included customized opening sizes, solar screen depth, density, and mounting distance from the building, as illustrated in Figure 3. Following the completion of the parametric model, the necessary supplemental information regarding financial estimates, material properties, etc. was compiled by the authors. In order to closely emulate the future implementation process, the financial model of this experiment was calibrated according to the cost estimation of the project. The material and HVAC assignments were also based on prior guidelines provided by the MEP consultant.

Figure 3 illustrates the collected data and resultant solution space in a quantified format. Through the GA run by EEPFD, a total of 384 design alternatives were generated over a period of 4 hours, at an average speed of less than a minute per result. The solution space improved from the initial EUI of 70.08 to 69.30 kBtu/sqft/yr and the NPV from -0.51 to -0.48 million dollars. Since the program explored was fixed in value, the SPC score remained consistent throughout the generated solution pool.
Figure 3
(Top Left) parametric model
of Case Study I. (Top Right)
parametric model of Case
Study II. (Bottom) the collected
quantitative measurements of
Case Studies I + II according to
the established metric.
After the completion of the runs, the authors provided to the designers the final trade-off analysis along with 3D design visualizations for their final decision-making purposes. After the generated data had been provided, more guidance was requested by the designers to discern desirable results from the abundantly populated solution pool provided by the Beagle. However, the designers responded positively to the inclusion of 3D imaging of all the design alternatives along with the energy performance feedback, which was not available through their prior experience with either in-house analysis or through the MEP consultant. As a result, the designers were able to include aesthetic preference as part of their trade-off analysis when examining the generated results.

Case Study II: Design Studio Based Project
In Case Study II an architectural design student was provided a single family residential design problem located along Wonderland Park Avenue in Los Angeles, CA. The program requirements for the single family residence are designated as including: 4 bedrooms, 3 full bathrooms, a 2-car garage, and
living, dining, and kitchen areas not to exceed a total of 3,000 sqft. All room areas are subject to designer preference, with a maximum placed on bedroom dimensions not to exceed 20'x20'. A 10' setback from all site boundaries is also specified. The overall design goals are defined as meeting all design requirements combined with consideration for a maximum decrease in energy consumption. The designer for Case Study II is a Master of Architecture candidate with 6 months of instructional experience in the use of Revit but no prior experience in the actual application of Revit to a design project or in parametric design in general. The designer's prior environmental design experience is limited to the building physics context within the typical architectural education curriculum, with no environmental simulation tool use experience, either on her own or as part of the design requirements typical to her studio design briefs.

For Case Study II the EEPFD development and research team acts as both owner and consultant, providing all necessary project requirements and technical support as needed. After the determination of her design intent to explore shading, openings, and each space's spatial composition through the parametric model, the designer then proceeded to define the parametric model in Revit according to the proposed parameterization logic and initial design concept. The final parametric model is illustrated in Figure 3 and was generated over the course of 2.5 months. This recorded time includes the designer's required time to familiarize herself with the use of Revit for conceptual design through a trial and error period. As one of the goals of this case study is to observe the ability of a designer to translate their intended design concepts into a parametrically oriented, mathematically defined form, the designer was asked to avoid any geometric simplifications of their original design geometry for the purposes of expediency. As such, the complications of the original design geometry and the designer's unfamiliarity with parametric design and with the use of Revit for parametric design can be considered contributing factors to the extended parameterization process experienced. Another contributing factor can be identified in the trial and error period necessary to define the design's constraint file so as to maintain both design intent and model robustness during the optimization process, as the current version of H.D.S. Beagle will terminate if the geometry breaks.

Figure 3 illustrates the collected data and resultant solution space in a quantified format for Case Study II. A total of 1,010 design alternatives were generated over the course of 17.8 hours. After all data had been generated, the designer did not limit their analysis to the design alternatives receiving the highest ranking from the provided data set. Instead, the designer proceeded with their own design decision-making strategy, taking into consideration the context of the generated solution pool. Overall, the solution pool generated through EEPFD provided an improvement in EUI from 59 to 44 kBtu/sqft/yr, in NPV from -2.92 to -1.86 million dollars, and in SPC from 92 to 99. From the full data set the designer first narrowed the solution pool according to EUI performance. The solution pool was then further narrowed to only include design alternatives with an SPC score greater than 95. From this narrowed solution pool the final design was selected based on aesthetic properties through the designer's analysis of the provided 3D images of each design alternative. The objective scores of the final selected design were: NPV = -2.38 million dollars; EUI = 52.04 kBtu/sqft/yr; and SPC = 99.29. Once this final selection was made, the designer proceeded to the next stage in design development with the generated Revit massing model. In this case study, despite the dominance of aesthetic preference as the determining factor for the final design, an improvement in all three objective scores over the initial design was observed.

CONCLUSION AND DISCUSSION
EEPFD is a framework that provides a new method of applying MDO techniques through a customized GA to integrate previously inaccessible performance feedback into the early stage building design process. While EEPFD has been validated through tests of accuracy and efficiency, the
development of best practices through the key metrics of interaction, communication, and designer ease of use is the focus of continued research. Bridging the gap between the energy domain and geometric exploration remains the motivating challenge of the research, which begins to address the previously established gaps. Secondary contributions of this research include the demonstrated usability of EEPFD by designers through direct interaction during the early stages of design. This addresses in part the disconnection of domain expertise as an issue for the integration of energy simulation in early stage design. Through a comparative study of the two processes implemented in the case studies, with specific focus on observing the interaction between designers and EEPFD, several general observations can be made. First, in both cases, designers were observed to have difficulty translating their design intent into a viable parametric model. This may in part be due to unfamiliarity with both the design platform and parametric modeling and parametric design methods. While these issues remain, they may be mitigated through increased experience and industry trends indicating an increased use of parametric design. Secondly, while the designers in both case studies acknowledged the potential applicability of the EEPFD-generated results, Case Study II utilized the results in both steps 1 and 6 more completely. In Case Study I, a net zero energy building objective was desired, and the scope therefore over-extended the capabilities of the current form of the prototype used by EEPFD. Of particular note is the need to include daylighting strategies as part of the analysis. In the current implementation of EEPFD, daylighting is aggregated within the more generic EUI calculation handler, GBS. As a result Case Study I was not able to fully utilize the generated solution pool; however, the framework as it is intended is extensible and conceived to include other tools and design objectives. Finally, in both case studies the generation of unexpected results occurred in part due to the designer-provided parametric ranges and the designers' lack of expertise in transcribing design intent into parametric models. In Case Study I this led to undesirable window sizes; in Case Study II this led to undesirable ceiling heights. Since EEPFD possesses neither aesthetic preference nor prejudice when generating design alternatives, consideration must be made when formulating the parametric model as to whether maintaining design intent or an exhaustive exploration of design alternatives is desired. It can be noted that EEPFD is adaptable to either scenario, broad or specific, dependent on designer preference. Overall, in both case studies the final result was observed to be a broader-based design solution pool with improved multi-objective performance, enabling more informed design decision making inclusive of a more expansive simulated aesthetic and formal range. While these case studies provide initial observations regarding the impact and interaction of EEPFD on the early stage design process when implemented through the designer, a subject for future research is the engagement of a more extensive experimental user group so as to further observe the impact of EEPFD on the design process. Another subject for future research is the inclusion of additional performance considerations, such as structural and daylighting, so as to meet the complexity demands of design problems through applied MDO.
Cloud-Based Design Analysis and Optimization Framework
Volker Mueller (1), Tiemen Strobbe (2)
(1) Bentley Systems, Incorporated, USA; (2) University of Gent, Belgium
(1) http://www.bentley.com
(1) volker.mueller@bentley.com; (2) Tiemen.Strobbe@ugent.be
Abstract. Integration of analysis into early design phases in support of improved building
performance has become increasingly important. It is considered a required response to
demands on contemporary building design to meet environmental concerns. The goal is
to assist designers in their decision making throughout the design of a building but with
growing focus on the earlier phases in design during which design changes consume
less effort than similar changes would in later design phases or during construction and
occupation.
Multi-disciplinary optimization has the potential to provide design teams with information about trade-offs between various goals, some of which may be in conflict with each other. A commonly used class of optimization algorithms is the class of genetic algorithms, which mimic the evolutionary process. For effective parallelization of the cascading processes occurring in the application of genetic algorithms in multi-disciplinary optimization, we propose a cloud implementation and describe its architecture, designed to handle the cascading tasks as efficiently as possible.
Keywords. Cloud computing; design analysis; optimization; generative design; building
performance.
INTRODUCTION
During the last decades an increased emphasis on parametric design approaches has become noticeable in early architectural design phases. One of the opportunities of parametric design is that it is possible to generate many instances of a model described in a parametric model system, allowing the exploration of a large number of design variations. The challenge is that it is not possible to examine all these design variations thoroughly enough to determine which ones to develop further. Therefore, integration of performance evaluation into the design process during early stages could help support the selection of high-performing design variations. Obviously, the aim is to enable designers to make important decisions about their designs when they have the most impact on the performance of the building and the least impact on implementation effort. Achieving the best possible overall performance of a project will allow a response to the challenges posed by climate change, resource depletion, and unequal distribution of opportunities across the globe.

CHALLENGES
This paper presents work towards an implementation of a design performance optimization framework that over time attempts to respond to as many of the following challenges as possible (Mueller et al., 2013).
1. Interoperability: the building design software industry is as fragmented as the building
industry at large. There are many incompatible software programs and data formats. Various approaches have been proposed to overcome or bypass this obstacle to seamless collaboration between design team members (Flager et al., 2008; Janssen et al., 2012; Toth et al., 2012).
2. Data equivalency: design tools may not have sufficient capabilities to create all data required by analysis tools.
3. Data discrepancy: conceptual design is less concerned with the detailed information required by analysis software (Bleil de Souza, 2012). Therefore, analysis opportunities in early design may be limited by the information available to the design team or made available by the design team.
4. Speed of feedback: design is an iterative process, with fast and frequent iterations. Analysis feedback into these design iterations has to be fast enough to remain relevant for the current iteration (Hetherington et al., 2011).
5. Performance proxies: there is insufficient research to permit use of performance proxies to bypass lengthy execution times of established analysis methods. Performance proxies could use either simplified analysis methods, or simple analyses of a different type indicative of future performance.
6. Results display: visualization of analysis results is not visually related to the geometric model (Dondeti and Reinhart, 2011). This prevents designers from quickly gaining insight into where in the design deficiencies are located and thus delays or prevents design improvements through human intervention in reaction to analysis results.
7. In-context results: analysis results are not available in the digital model for access that would enable automation of refinement iterations or multi-objective optimization routines.
8. Human-machine balance: not all design goals are measurable. How are "hard" computed performance metrics balanced with "soft" qualitative aspects? Several approaches are conceivable, including a mix of automated iterations and iterations performed by the design team (Geyer and Beucke, 2010).

FIRST PROTOTYPE
The proposed system is composed of an analytic framework, which connects tools used to generate design or analysis models (authoring tools) on one side and analysis or simulation tools (analysis tools) on the other side, and of an optimization framework, which connects the design and analysis system to optimization engines. Initially the data flow uses a mix of proprietary and published file formats. The specific components in the prototype implementation are (a wiring sketch follows below):
• GenerativeComponents (GC) [1] as parametric design authoring tool with add-ins extending GC's classes with structural and energy model components;
• STAAD structural analysis engine [2];
• EnergyPlus analysis engine [3];
• DARWIN optimization framework including two genetic algorithms [4]; and
• Bentley Analytical Services Framework (analytic framework).
The utilized file formats are:
• EnergyPlus's IDF file format for energy model information sent from GC to the analytic framework;
• STAAD.Pro's STD file format for structural model information;
• GC's GCT file format for parametric model information;
• XML file format for extraneous process information;
• TXT file format for extracted results.
This solution was introduced and tested at the SmartGeometry event in 2012 in Troy, NY [5] in a prototypical implementation (Mueller et al., 2013). It included energy analysis and structural analysis plus a genetic algorithm (GA) (Figure 1). All of the system architecture was implemented on a client system (desktop computer).
Figure 1
Software architecture for the
prototype implementation
at SmartGeometry 2012 with
only the analysis engines in
the cloud.
The analysis engines were also implemented as analysis services on the cloud, with the analytic framework establishing the connection between the client and cloud applications based on user selection of "local" or "cloud" execution of analysis.
Limitations of this test implementation were: the implementation did not progress beyond prototype stages, causing several deficiencies; analysis models were kept at the minimal implementation necessary to allow the analysis algorithms to execute while possibly achieving sufficient completeness of the models for conceptual design; there was only a partial deployment of the system in the cloud, particularly of the analysis engines, leading to accumulated latency issues; and a lack of robustness. The conclusions of this prototype implementation were:
• Increase robustness of the software components and their communications.
• Increase "completeness" of analysis and simulation models without increasing required model complexity.
• Develop the system architecture in a way that minimizes negative side-effects of a deployment in the cloud while maximizing the desired positive effects.

SECOND VERSION OF ANALYSIS AND OPTIMIZATION FRAMEWORK
The improved second version of the analysis and optimization framework responds to these limitations by replacing the prototype optimization framework with a version rewritten to meet production-level software standards, including replacement of the GAs with ones that have been in commercial use for several years and extending them to multi-objective optimization while still complying with production-level software standards. This includes the use of inter-application communications that are robust, secure, and prevent the problems encountered with the prototype. Most importantly in the context of this paper, this second version of the design analysis and optimization system includes all necessary components running as services on the cloud (Figure 2).

User Workflow and Software Components
On the surface, the user workflow and involved components are the same as in the previous implementation: the user designs a parametric model including analytical model components in GC. The model needs to be driven by parameters so that changes to the model can be applied in response to analysis results or any other computations evaluating the performance of the current instantiation of the parametric model (Mueller et al., 2013). The analytical model components provide input to analytical nodes which connect to external analysis engines (STAAD and EnergyPlus) via the analytic framework. Analysis results are returned to the parametric model for any subsequent computations to extract or determine characteristic performance values. These are passed as fitness values to an optimization node in GC.
Figure 2
Cloud components of the
software architecture for the
improved implementation.
The optimization node in GC is the interface to the optimization framework, which in turn interfaces with the optimization engine. The user also identifies those parameters or design variables that the optimization process may manipulate. The optimization framework converts parameter ranges and their discrete increments (resolution or granularity) into binary chromosomes (the genome) for the GA in order to generate individual design solutions, or phenotypes, in analogy to evolutionary processes in nature.

MDO Process
When the optimization process starts, the GA generates a first set of chromosomes to create a first generation of individuals (a generation's population) based on the GA's implementation of stochastic principles applied to an evaluation of the nature of the genome. The optimization framework interprets each chromosome into the set of design variables or parameters and pushes those into GC as the engine for the parametric modeling service, with the applicable parametric model active. The parameter changes propagate through the model to create the corresponding model instance (individual or phenotype).
Any analyses included in the model are requested as cascading processes via the analytic framework while the parametric modeling engine waits for the results to return, fully dependent on successful termination of the analysis processes. It then performs any subsequent computations, resulting in fitness values for the specific phenotype that are fed back into the optimization node and from there to the optimization framework. The fitness values are associated with the specific individual's chromosome and communicated back to the GA. Once the GA has received all fitness values for an entire population, it evaluates that population in order to determine those sets of chromosomes that it will use to generate the genome for the next generation, i.e. the next set of genotypes.
When the optimization is processed locally on the client computer, the optimization framework pushes each phenotype's chromosome into the parametric modeling engine and then waits for the corresponding fitness values, which means that all processes are executed sequentially, starting with the parametric model update, including the analysis requests and subsequent evaluations, and ending with the return of fitness values to the optimization node. This repeats for each individual in a generation until the entire population is processed. After the GA evaluates the results for the generation, it creates a new set of genotypes for the optimization framework to start the process for the next generation.
When the user requests execution in the cloud, in contrast to the prototype, this implementation processes an optimization request entirely in the cloud. The user designs and develops the paramet-
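The conversion of a parameter range and its discrete increment into a binary chromosome can be sketched as follows. This is a generic illustration of the technique described above, assuming a simple fixed-point encoding; it is not the DARWIN framework's actual scheme, the function names are hypothetical, and the façade-rotation parameter in the usage line is invented for illustration.

import math

def bits_needed(low, high, step):
    """Number of bits required to index all discrete values of a parameter range."""
    levels = int(round((high - low) / step)) + 1
    return max(1, math.ceil(math.log2(levels)))

def encode(value, low, high, step):
    """Map a parameter value to a fixed-width binary string (one gene of the chromosome)."""
    index = int(round((value - low) / step))
    return format(index, "0{}b".format(bits_needed(low, high, step)))

def decode(gene, low, step):
    """Map a gene back to the parameter value pushed into the parametric model."""
    return low + int(gene, 2) * step

# Example: a rotation parameter from 0 to 90 degrees in 5-degree increments needs 5 bits.
gene = encode(35.0, 0.0, 90.0, 5.0)
print(gene, decode(gene, 0.0, 5.0))   # -> 00111 35.0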
Figure 3
Software architecture of client and cloud components of the improved implementation.
Figure 4
Simple cloud scheduling
service schematic.
putational tasks and dispatches them to available compute nodes [7] (Figure 4).

"Simple" Cloud Task Scheduling
The sequence of cloud processing is that a client requests a processing job from the client-facing web service of the cloud (1). The web service starts the requested job (2). The job generates as many tasks into the task queue as needed (3). The scheduler polls the task queue regularly (4), pulls the first queued task and distributes it to an available compute node (5). The task processes on the compute node (6), retrieves any data it needs from storage, usually via look-up in some table or database, posts or updates any task states to a table (7), and returns any process results to storage (8). It indicates its orderly termination to the scheduler. The job process polls the table (9) to assess progress of individual tasks or the overall job status. The web service polls the job for job progress or completion (10). On request from the client (11), the web service pulls any results from storage (12) and displays them to the client (13) or makes them available for download to the client. Compute node fail-over is implemented by the scheduler hiding a task that has been distributed to a node. A time-out limit makes visible any hidden task that has not indicated orderly termination to the scheduler within its time-out, so that the scheduler will distribute it again on the assumption that the computation failed for one reason or another, including failure of the respective compute node.
This is a very straightforward implementation for massively parallel processing tasks ("embarrassingly parallel"), which are the ideal use case for the cloud. However, the most important implication of this type of cloud scheduling regime is that task sequencing as required for the MDO process cannot be guaranteed, so that process dependencies need implementation of specific measures to ensure proper sequence, to avoid extended wait times in the best case and deadlock in the worst.
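The hide-and-timeout fail-over just described can be sketched in a few lines. This is a toy, single-process rendition of the scheduling regime (comparable to visibility timeouts in common cloud queues); the class and method names are hypothetical and it is not the cloud scheduler referenced in the text [7].

import time

class SimpleScheduler:
    """Toy queue-with-timeout regime: distributed tasks are hidden and reappear
    for redistribution if they do not report orderly termination in time."""

    def __init__(self, visibility_timeout=30.0):
        self.queue = []          # visible, waiting tasks
        self.hidden = {}         # task -> deadline while a compute node works on it
        self.timeout = visibility_timeout

    def enqueue(self, task):
        self.queue.append(task)

    def distribute(self):
        """Pull the first queued task and hide it until it completes or times out."""
        self._requeue_expired()
        if not self.queue:
            return None
        task = self.queue.pop(0)
        self.hidden[task] = time.monotonic() + self.timeout
        return task

    def complete(self, task):
        """Orderly termination reported by the compute node: drop the hidden entry."""
        self.hidden.pop(task, None)

    def _requeue_expired(self):
        now = time.monotonic()
        for task, deadline in list(self.hidden.items()):
            if now > deadline:               # assume the node failed; make the task visible again
                del self.hidden[task]
                self.queue.append(task)

sched = SimpleScheduler(visibility_timeout=0.01)
sched.enqueue("analysis-1")
first = sched.distribute()    # task goes to a node and is hidden
time.sleep(0.02)              # the node never reports back ...
second = sched.distribute()   # ... so the task reappears and is redistributed
print(first, second)          # -> analysis-1 analysis-1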
Figure 5
Cloud scheduling service schematic for MDO services.
MDO Cloud Process
For the multi-disciplinary optimization case there are various levels of dependencies that might suffer substantially from the "state-free" and fail-any-time premise of cloud resources. If the task is the optimization process itself, then any failure during the process will void the entire process and require the entire optimization to restart. This suggests that the optimization framework would need to be state-aware. Similarly, the optimization engine (depending on its architecture) needs to be state-aware or needs to store its intermediate results in such a way that it could pick up the process at any point, perhaps from the conclusion of a generation. Any parametric model (e.g. an individual in a population) is of course one of the parallel processes in a generation that benefits from the virtually unlimited resource concept of the cloud. However, during several aspects of the MDO process various states are reached in conflict with the state-free concept. This requires some additional steps in the process sequence in order for it to be handled by a standard scheduler.

MDO Cloud Task Scheduling
An improved system of MDO cloud task scheduling is used to overcome the issues described above (Figure 5). It uses tables to preserve states in an otherwise state-free system that can fail at the parametric model and analysis level. Model generation and analysis tasks are executed "in parallel", with the analytical framework as part of the parametric model task (i.e. on the same node, because these processes are sequential anyway). However, the analytical framework can start one or multiple analysis tasks that will be queued and handled by the scheduler. Possible approaches are single-queue or dual-queue, separating modeling and analysis tasks. The advantage of a dual-queue system is that it could be designed to handle cascading dependencies without any danger of resource deadlock; however, its implementation is more complex. The current implementation uses a single task queue for both modeling and analysis tasks and is based on the premise that adaptive scaling (marshalling of additional resources when tasks are waiting in the queue) will prevent resource deadlock.
The sequence of the MDO cloud processing is that the GC client requests an MDO job from the MDO web service on the cloud and sends the required data package (1). The MDO web service starts the MDO job (2). The MDO job comprises the optimization framework and the optimization engine (the GA). It extracts and applies or distributes the relevant data from the data package, handles the parameter-set-to-chromosome conversion, etc., generates all parameter sets for a generation, and generates phenotype tasks PN into the task queue (3). The scheduler polls the task queue and pulls task PN (4) for distribution to compute nodes (5). The PN task processes the appropriate parameter set (phenotype) in an instance of the parametric modeling engine (GC) on the compute node (6) and posts or updates its execution state to the table (7).
When the parametric model includes analysis nodes, these request analysis tasks from the analytical framework instance AFN, which runs on the same node as the parametric model engine (8). This does not impact processing speed because the parametric model engine needs to wait for the cascading analysis processes to terminate and for the analytical framework instance to return the analysis results. The analytical framework instance AFN posts and updates any processing states in the table (9) and adds analytic tasks AN1 and AN2 (etc.) to the task queue (10).
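The single-queue regime with a shared state table can be illustrated with the toy sketch below. It runs everything inline in one process, so it is not the actual cloud implementation: the function names are hypothetical, compute nodes are not modeled, and the adaptive-scaling premise mentioned above is simply assumed away.

import queue

task_queue = queue.Queue()          # one queue for both phenotype (P) and analysis (A) tasks
state_table = {}                    # task id -> state; stands in for the cloud table storage

def submit(task_id, kind, payload):
    """Add a task to the shared queue and record its state (cf. steps 3 and 10 in the text)."""
    state_table[task_id] = "queued"
    task_queue.put((task_id, kind, payload))

def run_next():
    """Scheduler pulls the first queued task and 'distributes' it (here: runs it inline)."""
    task_id, kind, payload = task_queue.get()
    state_table[task_id] = "running"
    if kind == "phenotype":
        # a phenotype task may itself enqueue cascading analysis tasks
        for name in payload.get("analyses", []):
            submit("{}/{}".format(task_id, name), "analysis", {})
    state_table[task_id] = "done"

# one phenotype of a generation, with two cascading analysis requests
submit("P1", "phenotype", {"analyses": ["energy", "structure"]})
while not task_queue.empty():
    run_next()
print(state_table)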
Figure 6
Cloud software architecture
including the additional task
management elements.
The scheduler pulls the analytic tasks AN1 and AN2 (etc.) from the queue (11) and distributes them to available compute nodes (12), where they start processing, pulling any data from storage, updating their processing state to the table (13), and depositing any results back to storage (14).
Meanwhile, analytic framework instance AFN polls the table for the analysis tasks' states (15), and when they have successfully terminated it pulls the results from storage (16). The analytic nodes in the phenotype task PN in the parametric modeling engine poll process AFN for analysis results and post-process them to convert them into fitness values (17). PN also computes any other fitness values and passes them to the optimization node in the parametric model. The optimization node passes the fitness values to an optimization framework instance on the compute node, which posts the results and task completion to storage and table, respectively (18).
The optimization framework instance in the MDO job polls the table for completion of all tasks PN in a generation (19), and pulls the fitness values from storage (20). The optimization framework in the MDO job prepares the generation data for the GA, which then generates the next generation's parameter set and starts scheduling a new set of phenotype tasks PN (21). The MDO service polls the MDO job for completion of the entire optimization run (22) and notes completion, if applicable. The optimization framework client polls the MDO service for job completion (23). Upon request, the MDO service pulls the optimization results from cloud storage (24), and the client downloads them (25) in order to display them in the client context and/or instantiate the corresponding solutions.

CONCLUSION
Cloud computing provides access to ubiquitous and virtually unlimited resources. It permits acceleration of processes that include tasks that can be performed in parallel but are predominantly performed sequentially in conventional desktop implementations. As demonstrated, even more complex processes like multi-disciplinary optimization can be successfully handled with basic task scheduling if any cascading and dependent tasks are set up in ways that allow the proper management of sequencing (Figure 6).
Even though cloud resources need to be accessed through internet connections and the computational resources available in the cloud are consumer grade rather than high-end, cloud computing is advantageous whenever massive parallelization of tasks can be utilized. In the case of MDO using GAs, it is obvious that the individuals in a generation can be processed in parallel, leading to acceleration of the process by approximately the population size. In addition, any analysis processes included in the MDO job could be processed in parallel, leading to additional time savings depending on the number and length of the analysis processes.
The cascading nature of the processes and their dependency pose a difficult challenge if, in contradiction to the base premise of virtually unlimited resources, the computing resources are artificially limited, e.g. if for an MDO job that includes analysis tasks fewer compute nodes are allocated than the population size of a generation. This would cause an irresolvable deadlock of resources, because the parametric modeling or phenotype processes would occupy all the available compute nodes and any remaining phenotype tasks, as well as the cascading analysis tasks, would be queued up without any chance of additional nodes becoming available. If the setup on the cloud permits limitation of the number of compute nodes, additional precautions need to be put in place to reserve compute nodes for cascading processes.

FUTURE WORK
With completion of the improved implementation imminent, use of the system in user case studies is next. This will also allow benchmarking of the desktop implementation and the cloud implementation to assess the impact of parallelization using "virtually unlimited" resources. Additional work will be documentation and publication of APIs to allow third-party development to add analysis engines and optimization engines, as well as add-ins for design authoring tools to connect to the optimization framework.

ACKNOWLEDGEMENTS
This work was supported by contributions from various groups within Bentley Systems, Incorporated, especially Applied Research, Special Technology Projects, Design and Simulation's Structural Analysis team, and the BentleyCONNECT team.

REFERENCES
Bleil de Souza, C 2012, 'Contrasting paradigms of design thinking: The building thermal simulation tool user vs. the building designer', Automation in Construction, 22, pp. 112–122.
Dondeti, K and Reinhart, CF 2011, 'A "Picasa" for BPS – An interactive data organization and visualization system for building performance simulations', in V Soebarto and T Williamson (eds.), Proceedings of Building Simulation 2011: 12th Conference of International Building Simulation Association, Sydney, Australia, pp. 1250–1257.
Flager, F, Soremekun, G, Welle, B, Haymaker, J and Bansal, P 2008, 'Multidisciplinary process integration and design optimization of a classroom building', CIFE Technical Report TR175, Stanford University.
Geyer, P and Beucke, K 2010, 'An integrative approach for using multidisciplinary design optimization in AEC', in W Tizani (ed.), Proceedings of the International Conference on Computing in Civil and Building Engineering, Nottingham University Press.
Hetherington, R, Laney, R, Peake, S and Oldham, D 2011, 'Integrated building design, information and simulation modelling: the need for a new hierarchy', in Proceedings of Building Simulation 2011, Sydney, pp. 2241–2248.
Mueller, V, Crawley, DB and Zhou, X 2013, 'Prototype implementation of a loosely coupled design performance optimisation framework', in R Stouffs, PHT Janssen, S Roudavski and B Tunçer (eds.), Open Systems: Proceedings of the 18th International Conference of the Association of Computer-Aided Architectural Design Research in Asia CAADRIA 2013, CAADRIA, Hong Kong, pp. 675–684.
Toth, B, Boeykens, S, Chaszar, A, Janssen, P and Stouffs, R 2012, 'Custom digital workflows: A new framework for design analysis integration', in T Fischer, K De Biswas, JJ Ham, R Naka, and WX Huang (eds.), Beyond Codes and Pixels: Proceedings of the 17th International Conference on Computer-Aided Architectural Design Research in Asia, CAADRIA, Hong Kong, pp. 163–172.
[1] GenerativeComponents from Bentley Systems: http://www.bentley.com/en-US/Products/GenerativeComponents/, accessed May 24, 2013.
[2] STAAD.Pro user interface to the STAAD analysis engine from Bentley Systems: http://www.bentley.com/en-US/Products/STAAD.Pro/, accessed May 23, 2013.
[3] EnergyPlus from the U.S. Department of Energy: http://apps1.eere.energy.gov/buildings/energyplus/, accessed May 23, 2013.
[4] Darwin Optimization (version 0.91) by Dr. Zheng Yi Wu, http://communities.bentley.com/communities/other_communities/bentley_applied_research/w/bentley_applied_research__wiki/6584.aspx, accessed Dec 7, 2012.
[5] http://www.smartgeometry.org/, accessed April 15, 2013.
[6] http://www.windowsazure.com/, accessed June 20, 2013.
[7] http://msdn.microsoft.com/en-us/library/hh560251(v=vs.85).aspx, accessed June 20, 2013.
Graphical Smalltalk with My Optimization System for
Urban Planning Tasks
Reinhard Koenig (1), Lukas Treyer (2), Gerhard Schmitt
ETH Zurich, Chair of Information Architecture
http://www.ia.arch.ethz.ch/
(1) reinhard.koenig@arch.ethz.ch, (2) lukastreyer@student.ethz.ch
MOTIVATION
For many computer scientists the programming language Smalltalk was the most pioneering human-computer interaction language of the 1970s. It was designed to be so simple that even children could program. It is one of the first totally object-oriented languages – everything is an object. While many ideas from Smalltalk have since been adopted by other languages, the visionary thinking of the time when it was developed can still inspire us today to strive for flawless human-computer interaction in the development of design optimization systems for architecture and urban planning.

PROBLEM STATEMENT
A number of promising generative algorithms are available today, but none are currently employed to enhance and simplify the day-to-day work of urban planners. Computer support for urban planning projects is usually restricted to basic CAD drawing tools. In the authors' opinion, one reason for the lack of integration of generative methods in planning processes is their complicated handling. Typically they require extensive input of abstract technical rules and parameters that are unfamiliar and daunting for planners.
The situation is further complicated by the fact that planning projects typically consist of a mixture of contradicting and non-contradicting criteria, as well as of directly measurable criteria and only indirectly interpretable measures. The lack of suitable optimization methods hinders a systematic evaluation of possible compromises between contradicting planning requirements.

STATE OF THE ART
In their seminal book, Radford and Gero (1988) show various examples of how optimization strategies can be used to solve design problems.
Figure 1
Planning scenario divided into three levels of abstraction. Left: Topological relations between elements and basic properties. Centre: Geometric distribution of the elements. Right: Geometric representation of a possible planning solution [6].
Although today we can use more flexible evolutionary optimization methods (Deb, 2001), the concept for their application and the role of pareto-optimal fronts have not changed a lot over the past few decades (Bentley and Corne, 2002). A good example of state of the art interactive generative planning systems is the work of Derix (2009). Dillenburger et al. (2009) have also presented an interesting system for creating building designs using a weighted-sum optimization algorithm.
Current commercial solutions for generative or procedural modeling, for example Grasshopper [1], GenerativeComponents [2], or CityEngine [3], exemplify the problems with such systems: they require intensive training before they can be used efficiently and, though sometimes attractively designed, their user interfaces are not intuitive for urban planners. Furthermore it is not efficient to couple them with optimization tools, because of the increased computing time and the restricted possibilities offered by their corresponding APIs. Although Galapagos [4] provides an optimization method for Grasshopper, Rutton (2010) notes that it is only useful for simple problems.

AIMS
Our main goal is to use graphical objects to represent a planning problem and to control an optimization algorithm primarily through these objects. A further challenge is to translate the planner's partially vague qualitative requirements into a precise, quantifiable problem representation for an algorithm. Translation problems are one reason why planners rarely embrace computer support. To improve this situation we aim to develop an interactive system for supporting the urban planning process with a more constructive and intuitive interface for planners. The combination of well-designed interaction strategies and planner-friendly problem representation as a basis for evolutionary optimization strategies is an issue that is yet to be resolved.

CONCEPTUAL FRAMEWORK
To address the aforementioned problems, our first task is to develop a conceptual framework that includes a combination of various interaction strategies for the user interface, different generative techniques, and some optimization methods. We have approached this concept from two perspectives: from that of a planner and from that of a software developer.
To meet the planner's requirements we separate the problem representation and the definition of requirements into at least two levels of abstraction (Figure 1): the first holds the topological relations between various elements and basic properties (Figure 1 left). The graphical objects on this level can encode parameter values, e.g. by their size, position or colour. The second abstraction level comprises the geometric representation of possible planning solutions (Figure 1 right). One can interact with all the graphical objects of a current planning proposal on each level to test different options and to refine a planning iteratively. From the software developer's point of view we develop a framework for combining evolutionary optimization techniques. These include generative algorithms and evaluation mechanisms to analyze the generated variations. As a basis for this framework, we use state-of-the-art evolutionary multi-criterion optimization methods. For a comprehensive and easily understandable introduction to evolutionary algorithms, see Bentley and Corne (2002). In the following description, we focus only on the essential aspects that are necessary for our purposes. We take the AForge.Net Framework [7] as the starting point for the implementation of
Figure 2
Mapping process from an instruction tree to a street network. The grey dotted street segments on the right side illustrate the adaptation of the instructions for adding a new segment to an existing network.
Figure 3
On the left, the tree shows
the main instructions of an
instruction node. In the box on
the right one can see how the
instructions are assigned to a
street network.
The instruction tree can be mutated and used for crossover operations as illustrated in Figure 4. Since the instructions of a node are always relative to the existing street network, new combinations after the crossover always work. The main reason why we use a tree structure for the chromosome is that it ensures that, after the crossover and mutation operations, the corresponding street network remains connected (if the initial network was connected). The mutation operator simply takes individual nodes of an instruction tree (e.g. 1-10% of them) and assigns a randomly generated value to one of their parameters. The frequency of the execution of these operators at one iteration (or generation) is defined by the crossover and mutation rate.
One of the most important properties of a generative mechanism, as part of an optimization process, is its ability to generate very different network topologies. It is this property that allows an optimization system to find interesting and surprising solutions for a given set of restrictions and goal functions.

Evaluation Mechanism
As a goal function (or fitness function) for the evaluation of the generated street network we use the betweenness centrality (choice) of the network. For this we need to calculate the all-pair shortest paths for the network, and we compute these following the concept elaborated by Hochberg [8], using a parallel GPU implementation of the Floyd-Warshall algorithm. For the weightings in the corresponding graph we use angular distances instead of metric distances, as introduced by Turner (2001; 2007). The choice value for a specific street segment equals the number of shortest paths from all street segments to all others that pass through that segment. For the sake of simplicity, we use only the choice value in the following examples to characterize street networks, but other centrality measures would be useful too.
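The choice measure described under Evaluation Mechanism can be sketched with a plain Floyd-Warshall pass over a segment-to-segment weight matrix. This is a minimal, CPU-only illustration that assumes the angular weights are already given as a small dense matrix and counts one shortest path per origin-destination pair; it is not the parallel GPU implementation referenced above, and the function names are our own.

from math import inf

def floyd_warshall(w):
    """All-pairs shortest paths on a weight matrix w (angular distances); returns distances and next hops."""
    n = len(w)
    dist = [row[:] for row in w]
    nxt = [[j if w[i][j] < inf else None for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def choice_values(w):
    """For every segment, count the shortest paths between all other segment pairs passing through it."""
    n = len(w)
    _, nxt = floyd_warshall(w)
    choice = [0] * n
    for s in range(n):
        for t in range(n):
            if s == t or nxt[s][t] is None:
                continue
            node = nxt[s][t]
            while node != t:                 # walk the reconstructed path, skipping the endpoints
                choice[node] += 1
                node = nxt[node][t]
    return choice

# toy segment graph: weights are angular turn costs between adjacent street segments
W = [[0,   10,  inf, inf],
     [10,  0,   5,   80],
     [inf, 5,   0,   10],
     [inf, 80,  10,  0]]
print(choice_values(W))   # segments 1 and 2 carry the most shortest paths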
Figure 4
Creation of new child variants
(C and D) by a crossover op-
eration applied to two parent
instruction trees (A and B).
Figure 5
Initial planning situation: The
border defines the planning
area, and connections to the
existing street network are
represented by the nodes with
underlined numbers. The col-
oured areas in the right-hand
image denote areas where the
new street network will have
defined properties.
EXAMPLE SCENARIO
As a starting situation for the following examples we use an area with the dimensions 3000m × 2000m that needs to be filled with streets (Figure 5). The positions of the existing street connections are marked by nodes with underlined numbers. The areas defined in the right-hand image in Figure 5 will be used in later examples to define a central sub-area (red, dashed) and a quiet sub-area (blue, checked). The red center is placed near the coordinates 750m/1500m (coordinate origin in the bottom left corner), while the blue quiet area fills the bottom right-hand quarter of our planning area. For the following examples we use these initial parameters: generations = 50, population size = 50, mutation rate = 0.25, crossover rate = 0.75, tree depth = 8. It is important to select a tree depth high enough to ensure that the complete area can be filled with streets and that there is no indirect restriction for the optimization algorithm. The values for the nodes of the initial instruction trees were initialized with random values from the intervals αi = [-10, 10], li = [10, 40], κi = [1, 4].
First we consider a simple basic example scenario, where we use the aforementioned optimization method to maximize the maximum choice value (Figure 6) and the sum of all choice values of the generated street network (Figure 7). We start with an empty area as shown in Figure 5 on the left. Figure 6 and Figure 7 show three resulting street networks each, with the corresponding diagrams of the development of the fitness values.
To achieve very high choice values, the most obvious strategy is to design a network that is separated into two parts which are connected by the most used street, the one with the highest choice value. In cities we find these situations, for example, in places where a bridge crosses a river or a narrow valley divides a settlement. If we consider the three street networks in Figure 6 we can see this structure in the network in the right-hand image. Of the three networks in Figure 6, however, the network in the middle has the best fitness value, although there are no two separate parts. This results from the fact that for the calculation of the trips we use the shortest angular distance and not the shortest metric distance. Because of this, there is one street segment at the top-center which is used very often.
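The run settings and initialization intervals listed above for the example scenarios can be collected into a small sketch. The reading of αi as a branching angle, li as a segment length and κi as a branch count is our assumption, since their definition falls outside this excerpt, and the tree structure shown is illustrative only, not the authors' data model.

import random

# Run settings as quoted in the example scenario above.
SETTINGS = {"generations": 50, "population_size": 50,
            "mutation_rate": 0.25, "crossover_rate": 0.75, "tree_depth": 8}

# Initialization intervals for an instruction node's parameters (interpretation assumed):
# alpha = branching angle, l = segment length, kappa = number of child instructions.
INTERVALS = {"alpha": (-10.0, 10.0), "l": (10.0, 40.0), "kappa": (1, 4)}

def random_node():
    """Draw one instruction node with uniformly random parameter values."""
    return {"alpha": random.uniform(*INTERVALS["alpha"]),
            "l": random.uniform(*INTERVALS["l"]),
            "kappa": random.randint(*INTERVALS["kappa"])}

def random_tree(depth):
    """Grow an instruction tree of the given depth, with kappa children per node."""
    node = random_node()
    if depth > 1:
        node["children"] = [random_tree(depth - 1) for _ in range(node["kappa"])]
    return node

population = [random_tree(SETTINGS["tree_depth"]) for _ in range(SETTINGS["population_size"])]
print(len(population), "instruction trees initialised")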
Figure 6
First example scenario. For
each of the street networks
shown, the maximum choice
value is maximized. Red
represents street segments
with high choice values and
blue low choice values. The
diagrams in the bottom row
show the development of
the fitness values over 50
generations.
Figure 7
Second example scenario: For
each of the street networks
shown, the sum of all choice
values is maximized. Red
represents street segments
with high choice values and
blue low choice values. The
diagrams in the bottom row
show the development of
the fitness values over 50
generations.
If we look at the left-hand network in Figure 6 we find that there is no single bottleneck, and at least one other populated route. As a result this network has the worst fitness of the three.
The fact that the three best networks have different maximal fitness values (diagrams in the bottom row of Figures 6 and 7) indicates that the evolutionary optimization process explores different parts of the search space each time it is run. But despite the small differences in the maximum fitness values of the variants, they all fulfill the requirements relatively well. When we consider the random points (representing randomly generated variants) we can clearly see the advantage of using the evolutionary search process compared to randomly generated solutions. The best variants are improved continuously over the 50 generations and reach a level which cannot be achieved by a random generation process.
In our second example we use the same initial scenario as in the first one, but we adapt the fitness function to maximize the sum of all choice values of the street network. The topologies of the resulting networks in Figure 7 are clearly different to those in Figure 6. Here we cannot see separate network parts, and the street segments with the highest choice values are not concentrated at one location but distributed across the network. This difference shows that our optimization system is working as expected. The development of the fitness shown in the diagrams in the bottom row of Figure 7 is similar to that of the corresponding diagrams in Figure 6. This indicates that both fitness functions direct the search process in a similarly efficient way.
Our third example is based on the initial scenario with two defined sub-areas as shown in the right-hand image of Figure 5. In Figure 8 the central sub-area is shown as a dotted ellipse and the quiet sub-area as a dotted rectangle. To include the spatial aspect in the fitness function, we have to define how to represent the graphical objects that represent the sub-areas with the corresponding specified properties.
First we consider the central sub-area. To achieve a highly-populated center in the defined sub-area we want to locate the street segment with the maximized choice value in it. Therefore we measure the distance dcmax of the street segment with the maximum choice value cmax to the center. This distance can be used as a weight so that we can decrease the fitness of a network according to the distance dcmax:

F1 = cmax · (D − dcmax) / D, (1)

where D is a constant which denotes the maximal possible distance. In our examples, this is the diagonal of the border rectangle, D = 3606m.
Secondly, we consider the quiet sub-area. To achieve an area with as little traffic as possible, e.g.
for residential usage, we want to have only streets with low choice values in it. Therefore we sum the differences between the maximum choice value of the network, cmax, and the choice values ci of the street segments that are located inside the quiet area A. The average of this sum is used as the second part of our fitness function:

F2 = (1 / nA) · Σ i∈A (cmax − ci), (2)

where nA is the number of street segments inside A. From the two fitness values F1 and F2 we calculate the final fitness value as the sum of both:

Fitness = F1 + F2. (3)

The street networks resulting from this optimization process are shown in the top row of Figure 8. The results fulfill both of our requirements relatively well: the red coloured street segments with maximum choice values are located close to or inside the central sub-area, while we find primarily only blue coloured street segments with low choice values in the quiet sub-area. To evaluate the effect of the defined sub-areas on the resulting street networks we can compare the variants from Figure 6 and Figure 7 with the ones in Figure 8. We can observe very different structures in comparison to the second example scenario in Figure 7 and similar structures to our first example scenario in Figure 6, showing the two separated network parts (Figure 8 in the middle). In general the results seem self-evident, but nevertheless we can see some problems, e.g. at the solution in the left-hand image of Figure 8. Here the maximum choice value is at the edge of the central area and there is a relatively populated road in the quiet sub-area. Maybe the optimization algorithm could have found a better solution with more generations. But the combined fitness function may be hindering the improvement of this variant. We will discuss this problem in the next section.

Figure 8
Third example scenario. For each of the street networks shown, we use a combined fitness function: one factor is the maximized maximum choice value that is weighted with the distance to the central area (dotted ellipse), while the second factor is the average sum of all choice values which are assigned to street segments inside the quiet area (dotted rectangle). Red represents street segments with high choice values and blue low choice values. The diagrams in the bottom row show the development of the fitness values over 50 generations.
DISCUSSION
As outlined in the description of our framework, we have demonstrated a method of representing planning requirements using graphical objects that can be used by an optimization system (Figure 5). The main challenge of the system is interacting with design variants, although not because of the user interface – it is, for example, possible to change the genotype, and thus the later optimization process, by manipulating the graphical objects of the phenotype (street segments and crossroads). This makes it possible to realise a multi-directional planning method as described above.

The main problem of our system is that the optimization process is much too slow for use in an interactive process. The computation of the above examples needed 20-30 minutes on an average modern notebook. One generation therefore needed half a minute; half a second would be a more acceptable timespan. Of course these times depend a lot on the size of the street network, the population size and other aspects. But the main time-critical aspect is the computation of the all-pairs shortest paths. This needs to be improved in future using an optimized algorithm and more powerful hardware.
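The choice measure used above corresponds closely to edge betweenness derived from all-pairs shortest paths. As an illustrative stand-in for the GPU implementation mentioned in the acknowledgement (and ignoring the angular weighting of segment analysis), such values could be obtained with an off-the-shelf graph library; networkx and the toy network below are assumptions of the sketch only:

```python
import networkx as nx

def segment_choice_values(G, weight="length"):
    """Approximate the 'choice' of each street segment as edge betweenness centrality,
    i.e. the share of all shortest paths that pass through the segment."""
    return nx.edge_betweenness_centrality(G, normalized=True, weight=weight)

# hypothetical toy network: nodes are crossroads, edges are street segments
G = nx.Graph()
G.add_edge("a", "b", length=120.0)
G.add_edge("b", "c", length=80.0)
G.add_edge("a", "c", length=200.0)
G.add_edge("c", "d", length=60.0)

choice = segment_choice_values(G)
print(max(choice.values()))   # the network's maximum choice value (c_max)
```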
In addition, we use a very inefficient method to generate instruction trees. We use a random initial process which produces a very large tree, of which only a small fraction of nodes is needed to grow the street network. In the examples above, in the case of κi = 4, we have 3^depth instruction nodes for each tree. For a tree depth of 8 this results in at most 3^8 = 6561 nodes, but we only have approximately 300 street segments. Alternatively, one could create random but meaningful street networks at the beginning and encode them to obtain much more efficient instruction trees.

Variations of the angles and placement of the initial street segments, as shown in Figure 5, have a relatively significant impact on the further growth and thus on the final phenotype of the network. Therefore, to search for optimal solutions it may also be useful to vary the initial segment.

Another interesting aspect of the presented examples is a product of the property of EAs to create their own biotope for the artificial life forms – in our case the street network. We can observe special strategies of the EA to maximize their fitness (the choice values): the first is to maximize the number of street segments to produce more trips and thus higher maximal choice values. This could be overcome by averaging the values, i.e. dividing the choice values by the number of streets. The second is to generate street segments at strategically beneficial places (e.g. the top left corner in the left and right networks in Figure 8). These segments can produce more trips via certain segments with high choice values to increase them further.

In our last example (Figure 8) we have used two goal functions: one to achieve a centre and one to create a quiet area. Both are combined into one fitness function. Here we run into the problem of weighting both criteria against each other in a more or less arbitrary way. This weighting, however, has a significant effect on the optimization process and thus on the quality of the results. For example, the optimization process can get stuck in local optima, since one criterion is already very good but the other is not. The improvement of a poor criterion may be hindered because it may negatively affect other very good criteria, so that the resulting fitness value cannot be improved. To avoid these kinds of problems we need to use evolutionary multi-criteria optimization (EMO) methods (Deb, 2001).
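A minimal sketch of the Pareto-dominance test on which such EMO methods rest, keeping the two criteria separate instead of collapsing them into a weighted sum (illustrative only; the data structures are assumed):

```python
def dominates(a, b):
    """True if fitness vector a Pareto-dominates b (maximisation of every criterion)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(population):
    """Return the variants whose fitness vectors are not dominated by any other variant."""
    return [p for p in population
            if not any(dominates(q["fitness"], p["fitness"]) for q in population if q is not p)]

# hypothetical variants with (F1, F2) kept as separate criteria
variants = [{"id": 0, "fitness": (0.8, 0.2)},
            {"id": 1, "fitness": (0.5, 0.6)},
            {"id": 2, "fitness": (0.4, 0.3)}]
print([v["id"] for v in non_dominated(variants)])   # -> [0, 1]
```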
CONCLUSION AND OUTLOOK
In this paper we have demonstrated the potential of using an optimization system for urban planning tasks using a test scenario. In this scenario we have generated street networks with defined local properties. The presented system is a first component of a framework with basic functionality to efficiently search for compromise solutions to complex planning problems. A first software prototype has been implemented with an intuitive user interface to represent planning problems, to present various compromise solutions, and to improve them interactively.

The differences in the examples presented in Figures 6-8 show clearly that our system does not generate globally optimal solutions – e.g. one could delete connections that enable ring trips around the centre in order to increase the traffic through the centre (and to increase the corresponding choice value). This is an inherent aspect of EAs: they cannot guarantee finding the globally best solutions, but they can always offer good ones. This disadvantage can be mitigated by running more generations or by using separate populations in parallel and migrating the best variants between them. In our context, this is not a problem because planners are not usually looking for global optima, as goal functions represent only a part of a planning problem. Thus the interactive and adaptable search for variants is the main support for the planning process.

The next step in the development of our framework is to implement a more complex EMO system which integrates algorithms for parceling and building placement (Aliaga et al., 2008; Knecht and Koenig, 2012). With this development we can achieve the multi-level approach illustrated in Figure 1.

ACKNOWLEDGEMENT
Special thanks go to our colleague Christian Tonn from the Bauhaus-Universität Weimar, who implemented the fast graph calculations on the GPU.

REFERENCES
Aliaga, D. G., Vanegas, C. A., and Beneš, B. (2008). Interactive example-based urban layout synthesis. ACM Transactions on Graphics, 27(5), 1-10.
Bentley, P. J., and Corne, D. W. (2002). An Introduction to Creative Evolutionary Systems. In P. J. Bentley and D. W. Corne (Eds.), Creative Evolutionary Systems (pp. 1-76). San Francisco: Morgan Kaufmann.
Deb, K. (2001). Multi-objective optimization using evolutionary algorithms. John Wiley and Sons.
Derix, C. (2009). In-Between Architecture Computation. International Journal of Architectural Computing, 7(4).
Dillenburger, B., Braach, M., and Hovestadt, L. (2009). Building design as an individual compromise between qualities and costs: A general approach for automated building generation under permanent cost and quality control. Paper presented at CAAD Futures 2009.
Knecht, K., and Koenig, R. (2012). Automatische Grundstücksumlegung mithilfe von Unterteilungsalgorithmen und typenbasierte Generierung von Stadtstrukturen. Weimar: Bauhaus-Universität Weimar.
Koltsova, A., Tuncer, B., Georgakopoulou, S., and Schmitt, G. (2012). Parametric tools for conceptual design support at the pedestrian urban scale. Paper presented at eCAADe.
Parish, Y. I. H., and Müller, P. (2001). Procedural Modeling of Cities. Paper presented at SIGGRAPH, Los Angeles, CA.
Radford, A. D., and Gero, J. S. (1988). Design by optimization in architecture, building, and construction. New York: Van Nostrand Reinhold.
Rutten, D. (2010). Evolutionary Principles applied to Problem Solving. Retrieved 18.06.2011, from http://www.grasshopper3d.com/profiles/blogs/evolutionary-principles
Turner, A. (2001). Angular Analysis. Paper presented at the 3rd International Space Syntax Symposium, Atlanta.
Turner, A. (2007). From axial to road-centre lines: a new representation for space syntax and a new model of route choice for transport network analysis. Environment and Planning B: Planning and Design, 34(3), 539-555.

[1] http://www.grasshopper3d.com/ (Retrieved 01.02.2013)
[2] http://www.bentley.com/en-US/Promo/Generative%20Components/default.htm
[3] http://www.esri.com/software/cityengine (Retrieved 01.02.2013)
[4] Galapagos is a plugin for evolutionary optimization for Grasshopper/Rhino3D: http://www.grasshopper3d.com/group/galapagos (Retrieved 01.02.2013)
[5] http://en.wikipedia.org/wiki/Inverse_problem (Retrieved 01.02.2013)
[6] Bundesamt für Landestopografie, swisstopo (Art. 30 GeoIV), © 2011 swisstopo
[7] http://www.aforgenet.com/ (Retrieved 21.05.2013)
[8] http://www.shodor.org/petascale/materials/UPModules/dynamicProgrammingPartI
Evo-Devo in the Sky
Patrick Janssen
National University of Singapore, Singapore
patrick@janssen.name
INTRODUCTION
Evolutionary design is loosely based on the neo-Darwinian model of evolution through natural selection (Frazer, 1995). A population of individuals is maintained, and an iterative process applies a number of evolutionary procedures that create, transform, and delete individuals in the population.

Evo-devo-design differs from other types of evolutionary approaches with regards to the complexity of both the developmental procedure and the evaluation procedures. The developmental procedure generates design variants using the genes in the genotype (Kumar and Bentley, 1999). The evaluation procedures evaluate design variants with respect to certain performance metrics. These procedures will typically rely on existing stand-alone programs, including Visual Dataflow Modelling (VDM) systems and simulation programs (Janssen and Chen, 2011; Janssen et al., 2011). In many cases, these systems may be computationally expensive and slow to execute.

Designers interested in applying evo-devo-design methods for performance-based multi-objective design exploration have typically faced two main hurdles: skill and speed (i.e. "it's too hard and too slow!"). From a skills perspective, the requirement for advanced interoperability engineering and software programming skills is often too demanding for designers. From the speed perspective, the requirement for processing large numbers of design variants can lead to excessively long execution times (often taking weeks to complete).

Previous research has demonstrated how these hurdles can be overcome using a VDM procedural modelling software called SideFX Houdini (Janssen and Chen, 2011). Firstly, a number of simulation programs were embedded within this VDM system, thereby allowing designers to define development and evaluation procedures without requiring any programming. Secondly, the evolutionary algorithm was executed using a distributed environment, thereby allowing the computational execution to be parallelized.
Although the research demonstrated how the challenges of skill and speed could be overcome, the solution was specific to the software tools being used, in particular SideFX Houdini. Furthermore, for most designers the proposed approach remained problematic due to the fact that they do not have access to computing grids. This paper proposes a generalized method for evo-devo-design that overcomes these limitations. The method uses two key technologies: computational workflows and cloud computing. In order to tackle the skill hurdle, computational workflow management systems, called Scientific Workflow Systems, are used (Altıntaş, 2011; Deelman et al., 2008). In order to tackle the speed hurdle, readily available cloud computing infrastructure is used. We refer to the proposed method as Evo-Devo In The Sky (EDITS).

The next section will focus on the proposed EDITS method, followed by a section describing the implementation of a prototype EDITS environment. The final section will briefly present a demonstration of how the method and environment can be applied.

EDITS METHOD
An EDITS design method is proposed that overcomes the hurdles of skill and speed in a generalized way that is not linked to specific proprietary software applications.

The EDITS method enables users to run a population-based evo-devo design exploration process. This requires four computational tasks to be generated that will automatically be executed when the evolutionary process is run: initialisation, growth, feedback, and termination. The initialisation and termination tasks are executed at the start and end of the evolutionary process respectively, and perform various 'housekeeping' procedures. In addition, the initialisation task also creates the initial population of design variants.

The growth and feedback tasks are used to process design variants in the population. The growth task will take in just a single individual with a genotype and will generate a phenotype and a set of performance scores for that individual. (In the proposed method, the processes of development and evaluation are thus defined as a single growth workflow.) The feedback task will take in a pool of fully evaluated individuals and, based on a ranking of those individuals, will kill some and will select some for generating new children. With just these two tasks, a huge variety of evolutionary algorithms can easily be specified. For example, if the pool size for the feedback is equal to the population size, then a generational evolutionary algorithm will result, while if the pool size is much smaller than the population size, a steady-state evolutionary algorithm will result.
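A minimal sketch of such a feedback task, in which the pool size parameter alone switches between the two behaviours; the population and score representations are assumptions, not Dexen's actual data model:

```python
import random

def feedback(population, pool_size, n_kill=2, n_parents=2):
    """Rank a random pool of fully evaluated individuals, kill the worst and select
    the best as parents. pool_size == len(population) yields a generational
    algorithm; a small pool yields a steady-state algorithm."""
    pool = random.sample(population, pool_size)
    pool.sort(key=lambda ind: ind["score"])        # lower score = worse (assumption)
    for loser in pool[:n_kill]:
        population.remove(loser)                   # remove the lowest-ranked variants
    return pool[-n_parents:]                       # highest-ranked variants become parents

# toy usage
pop = [{"id": i, "score": random.random()} for i in range(10)]
parents = feedback(pop, pool_size=4)
```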
The first hurdle that EDITS must address is the skills hurdle. The initialisation, feedback, and termination tasks are highly standardized and can therefore be generated automatically based on a set of user-defined parameters. The growth task, on the other hand, is highly problem-specific and therefore needs to be defined by the user. In order to overcome the skill hurdle, the EDITS method uses a Workflow System for defining these tasks. Workflow Systems allow users to create computational procedures using visual dataflow programming. Users are presented with a canvas for diagramming workflows as nodes and wires, where tools are represented as nodes and data links as wires.

Furthermore, this approach can also be used to flexibly link together existing design tools such as CAD and simulation programs. Interoperability issues can be overcome by using data mappers, whereby the output data from one tool may be linked to the input data of another tool via a set of data transform, aggregation, and compensation procedures. This approach therefore allows parametric modelling tools to be linked to simulation tools through an external coupling, which affords the user greater flexibility in tool choice and linking options.

The second hurdle to be overcome is the speed hurdle. The evolutionary process consists of a continuous process of extracting individuals from the population, processing them with the growth and feedback tasks, and inserting the updated and new individuals back into the population. Since the tasks are independent from one another, they can easily be parallelized. Cloud computing infrastructures allow users to have access to computing grids on an on-demand basis at a low cost and can therefore be used to enable such parallelization. In the proposed EDITS method, cloud computing is used for distributing the execution of both the growth and feedback tasks.

EDITS ENVIRONMENT
In order to demonstrate the EDITS method, a prototype EDITS environment has been implemented. Three key types of software are used: a distributed execution environment called Dexen, a workflow system called VisTrails, and a set of design tools, such as CAD and simulation programs.
• Dexen is a highly generic Distributed Execution Environment for running complex computational jobs on grid computing infrastructures, previously developed by the author (Janssen et al., 2011). Dexen uses a data-driven execution model, where tasks are automatically executed whenever the right type of data becomes available. Dexen consists of three main components: the Dexen Client provides a graphical user interface for managing jobs and tasks; the Dexen Server manages the population and orchestrates the execution of jobs; and Dexen Workers execute the tasks.
• VisTrails is an open-source workflow system that allows users to visually define computational workflows (Callahan et al., 2006). VisTrails uses a dataflow execution model that is well suited to the types of procedures that need to be defined. It also provides good support for integrating existing programs. VisTrails can be used in one of two modes: in interactive mode, VisTrails provides a graphical user interface for building workflows; in batch mode, VisTrails can be used to execute previously defined workflows without requiring any user interaction.
• A set of design tools, including CAD tools (such as Houdini or Blender) and simulation tools (such as Radiance, EnergyPlus, and Calculix). (Other popular commercial CAD tools could also be integrated with this environment. However, due to inflexible licensing policies, it is currently difficult to deploy such tools in the cloud.) The CAD tools can typically run either in interactive mode or in batch mode, while the simulation programs run only in batch mode, with all interaction being restricted to text-based input and output files.

The EDITS environment is delivered as a cloud-based service. Cloud computing can deliver services to the user at a number of different levels, ranging from delivering computing infrastructure to delivering fully functional software (Rimal et al., 2009). These levels are typically divided into three categories: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). These levels can also build on one another.

The EDITS environment is divided into three layers, corresponding to IaaS, PaaS and SaaS, as shown in Figure 1. For the base IaaS layer, the EDITS environment uses Amazon EC2, which is a commercial web service that allows users to rent virtual machines on which they run their own software. Amazon provides a web application where users can manage their virtual machines, including starting and stopping machines. The SaaS and PaaS layers will be described in more detail below.

Figure 1. The three layers of the EDITS environment.
The SaaS layer
The SaaS layer consists of a number of graphical tools for running EDITS jobs. Overall, there are four main steps for the user: 1) starting the server, 2) creating the growth task, 3) executing the evolutionary job, and 4) reviewing the progress of the job.

Step 1 involves using the Amazon EC2 web application to start an EDITS server. This simply consists of logging onto the Amazon EC2 website with a standard browser, and then starting an Amazon instance. The operating system and software installed on a virtual machine are packaged as an Amazon Machine Image (AMI), and for EDITS a customized AMI has been created. This AMI is saved on the Amazon server, so it can simply be selected by the user from a list of options. The same server can be used for running multiple jobs.

In step 2, the user defines the growth task by creating a workflow with the VisTrails workflow system using a set of specially developed EDITS nodes. Figure 2 shows an example of such a workflow, consisting of a development procedure followed by three parallel evaluation procedures. The development procedure uses SideFX Houdini to generate the phenotype. The evaluation procedures use the Radiance, Calculix, and EnergyPlus simulation programs to generate performance scores. These procedures will be explained in more detail in the section describing the demonstration.

Figure 2. The EDITS growth workflow in the VisTrails environment.
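Conceptually, the growth task composes one development step with several evaluation steps run on the same phenotype. The sketch below illustrates this composition with placeholder functions standing in for the Houdini, Radiance, Calculix and EnergyPlus workflow nodes; it is not the actual EDITS workflow:

```python
def growth(individual, develop, evaluations):
    """Sketch of a growth task: one development step followed by several evaluations,
    mirroring the workflow in Figure 2. `develop` and the functions in `evaluations`
    are stand-ins for workflow nodes."""
    phenotype = develop(individual["genotype"])
    individual["phenotype"] = phenotype
    individual["scores"] = {name: fn(phenotype) for name, fn in evaluations.items()}
    return individual

# toy usage with placeholder procedures
ind = {"genotype": [0.2, 0.7, 0.5]}
growth(ind,
       develop=lambda genes: {"height": sum(genes)},
       evaluations={"daylight": lambda p: p["height"] * 2.0,
                    "structure": lambda p: -p["height"],
                    "cooling": lambda p: 1.0 / (1.0 + p["height"])})
print(ind["scores"])
```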
Step 3 involves executing the EDITS job. For the user, it would be convenient if this execution could be orchestrated from within the same VisTrails environment. However, since the EDITS job may take several hours to execute, it is preferable to interact with it in an asynchronous manner. The user should be able to start the EDITS job in the cloud and then reconnect with the running EDITS job intermittently in order to download the latest results. A plugin has therefore been implemented for VisTrails that adds an EDITS menu to the menu bar for starting EDITS jobs. When a new job is started, the user can select the growth workflow and can specify a number of parameters, including population size, mutation and crossover probabilities, selection pool size and the ranking algorithm. Once these parameters are set, a number of Python scripts required to run the job are automatically generated and uploaded to the server together with the growth workflow. The job will then start running automatically.

In step 4, the user connects to the EDITS job to review progress and analyse the data that is generated. Dexen has its own client application with a graphical user interface that allows users to get an overview of all the jobs that are running and to interrogate the execution of individual tasks in detail, providing information on execution time, crashes, error messages, and so forth. Data related to individual design variants can also be downloaded. However, downloading and viewing design variants one at a time can be tedious and error prone. In order to streamline this process, a set of VisTrails EDITS nodes has been created for downloading data and design variants directly from the server running in the cloud. These nodes can, for example, be used to create a workflow that first downloads the performance scores of all design variants and then selects a subset of these design variants for display to the user. VisTrails provides a visual spreadsheet that can be used to simultaneously display 3D models of multiple design variants (Figure 5).

The PaaS Layer
The PaaS layer builds on top of the Amazon EC2 IaaS layer by defining an AMI for the EDITS Platform. A customised AMI was created for EDITS with all necessary software preinstalled and all settings preconfigured. The EDITS AMI includes the base operating system, together with Dexen, VisTrails, and a set of commonly used CAD and simulation programs.

The software used for orchestrating distributed execution of the EDITS job is Dexen. When the EDITS server is started on EC2, Dexen will be automatically started and all the other required software will be configured and available. The two main tasks that need to be executed are the growth and feedback tasks.
Dexen maintains the population of individuals in a centralized database and will automatically schedule the execution of growth and feedback tasks. For the growth task, individuals are processed one at a time. For the feedback task, individuals are processed in pools, selected randomly from all fully evaluated individuals in the population. Each time either a growth or feedback task needs to be executed, Dexen will extract the individuals from the database and send them to an available Dexen worker for processing. Once the worker has completed the task, the updated and/or new individuals will be retrieved and reinserted back into the population database.

The Python scripts for the initialisation, growth, feedback, and termination tasks are automatically generated by EDITS. The growth task is the most complex due to the various layers that are involved. The task has a nested 'Russian Doll' structure, consisting of a cascade of invocations three layers deep, as shown in Figure 3. The outer layer consists of the Python script. When this script is executed, it will invoke VisTrails Batch Mode in order to execute the workflow. Since this workflow may contain numerous nodes that link to other design tools such as CAD and simulation programs, VisTrails will then invoke these design tools. For the end-user, the complexity of the growth task is hidden, since they are only required to create a VisTrails workflow.

Figure 3. The software layers involved in executing the growth task. The workflow, highlighted in grey, is the only layer that needs input from the end-user.

EDITS DEMONSTRATION
As a demonstration of the EDITS approach, the design for a complex residential apartment building is evolved. The case study experiment is based on the design of the Interlace by OMA. The design consists of thirty-one apartment blocks, each six stories tall. The blocks are stacked in an interlocking brick pattern, with voids between the blocks. Each stack of blocks is rotated around a set of vertical axes, thereby creating a complex interlocking configuration. Each block is approximately 70 meters long by 16.5 meters wide, with two vertical axes of rotation spaced 45 meters apart. The axes of rotation coincide with the location of the vertical cores of the building, thereby allowing for a single vertical core to connect blocks at different levels. The blocks are almost totally glazed, with large windows on all four facades. In addition, the blocks also have a series of balconies, both projecting out from the facade and inset into the facade. The initial configuration, shown in Figure 4, is based on the original design by OMA. The blocks are arranged into 22 stacks of varying height, and the stacks are then rotated into a hexagonal pattern constrained within the site boundaries. At the highest point, the blocks are stacked four high.

Figure 4. The initial configuration based on the original design, consisting of 31 blocks in 22 stacks of varying heights.
For the case study, new configurations of these 31 blocks were sought that optimise certain performance measures. For the new configurations, the size and number of blocks remain the same, but the way that they are stacked and rotated can differ. A VisTrails growth workflow was defined that performed both the development and the three evaluations. The workflow shown in Figure 2 was developed for this demonstration.

Growth workflow: design development
For the procedural modelling of phenotypes, SideFX Houdini was used. For the genotype-to-phenotype mapping, an encoding technique was developed called decision chain encoding (Janssen and Kaushik, 2013). At each decision point in the modelling process, a set of rules is used to generate, filter, and select valid options for the next stage of the modelling process. The generate step uses the rules to create a set of options. The filter step discards invalid options that contravene constraints. The select step chooses one of the valid options. In order to minimise the complexity of the modelling process, options are generated in skeletal form with a minimum amount of detail. The full detailed model is then generated only at the end, once the decision chain has completed.

In the decision chain encoding process, the placement of each of the 31 blocks is defined as a decision point. The process places one block at a time, starting with the first block on the empty site. At each decision point, a set of rules is used to generate, filter, and select possible positions for the next block. Each genotype has 32 genes, all of which are real values in the range [0,1]. In the generation step, possible positions for the next block are created using a few simple rules: first, locations are identified, and second, orientations for each location are identified. The locations are always defined relative to the existing blocks already placed, and can be either on top of or underneath those blocks. The orientations are then generated in 15° increments in a 180° sweep perpendicular to either end of the existing block. In the filtering step, constraints relating to proximity between blocks and proximity to the site boundary are applied, thereby ensuring that only the valid positions remain. In the selection step, the decision gene in the genotype chooses one of the valid block positions.
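The selection step can be illustrated with a short sketch in which a gene in [0,1] indexes into the list of valid options produced by the generate and filter steps; the option encoding shown here is hypothetical:

```python
def select_option(gene, options):
    """Decision chain selection step (sketch): a real-valued gene in [0, 1] picks one of
    the currently valid options produced by the generate and filter steps."""
    index = min(int(gene * len(options)), len(options) - 1)
    return options[index]

# toy example: the third gene chooses among four valid (location, angle) placements
genotype = [0.12, 0.87, 0.55, 0.40]
valid_positions = [("on_top_of_block_2", 0), ("on_top_of_block_2", 15),
                   ("under_block_5", 30), ("under_block_5", 45)]
print(select_option(genotype[2], valid_positions))   # -> ('under_block_5', 30)
```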
The resulting phenotypes consist of simple polygonal models. Three separate files are generated, one for each of the simulations. These models represent different sub-sets of information relating to the same design variant. These sub-sets of information are selected in order to match the data requirements of the simulation programs. In order to facilitate the data mapping, custom attributes are defined for geometric elements in the model. For example, polygons may have attributes that define key characteristics, such as block (e.g. block1, block2), type (e.g. wall, floor, ceiling), and parent (e.g. the parent of the shade is the window; the parent of the window is the wall). These attributes are used by the mapping nodes in order to generate appropriate input files for the simulations. The geometry together with the attributes is saved as JSON files (i.e. simple text files).

Growth workflow: design evaluations
For the multi-objective evaluation, three performance criteria were defined: maximisation of daylight, minimisation of structural strain, and minimisation of cooling load. These performance criteria have been selected in order to explore possible conflicts. For example, if the blocks are clustered close together, the cooling load will decrease due to inter-block shading, but the daylight levels will also reduce. If the blocks are stacked higher, then they are likely to get better daylight but may become less structurally stable. The three performance criteria are calculated as follows:
• Maximisation of daylight: An evaluation is defined that executes Radiance in order to calculate daylight levels on all windows under a cloudy overcast sky. The amount of light entering each window is then adjusted according to the visual transmittance of the glazing system for that window. The performance criterion is defined as the maximization of the total number of windows where the light entering the window is above a certain threshold level for reasonable visual comfort.
• Minimisation of structural strain: An evaluation is defined that executes Calculix in order to calculate the global structural behaviour using Finite Element Analysis (FEA) under various loading conditions. In order to reduce the computational complexity, the building configuration is modelled in a simplified way, by grouping individual structural elements into larger wholes called super-elements (Guyan, 1965). The performance criterion is defined as the minimisation of the maximum strain within the structure.
• Minimisation of cooling load: An evaluation is defined that executes EnergyPlus in order to calculate the cooling load required to maintain interior temperatures below a certain threshold for a typical schedule. In order to reduce the computational complexity, an ideal-load air system together with a simplified zoning model is used, and the simulation is run for a period of one week at the solstices and equinoxes. The performance criterion is defined as the minimisation of the average daily cooling load.

In Figure 2, the three workflow branches defining the evaluation procedures are shown. Each evaluation procedure includes two mapper nodes: an input mapper for generating the required input files, and an output mapper for generating the final performance score. These mapper nodes are currently implemented as Python scripts, but part of this research is the development of a graphical application for defining mapper nodes. See Janssen et al. (2013) for more details.

The input mappers transform the JSON files from the developmental procedure into the appropriate input files for the simulations. As well as the geometry information from these JSON files, the mappers also require other material information. The output mappers transform the raw simulation data into performance scores: for the Radiance data, the mapper calculates the number of windows below the daylight threshold; for Calculix, the mapper calculates the maximum strain in the structure; and, for EnergyPlus, the mapper calculates the average daily cooling load. These three evaluation scores are then provided as the final output of the growth task.
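As an illustration, an output mapper reduces raw simulation results to a single score. The sketch below shows the daylight case with a hypothetical, already-parsed result structure rather than the actual Radiance output handled by the EDITS scripts:

```python
def daylight_output_mapper(illuminance_by_window, threshold_lux=200.0):
    """Sketch of an output mapper: turn per-window illuminance results into a single
    performance score (here, the number of windows below a daylight threshold).
    The input format and threshold are illustrative assumptions."""
    return sum(1 for lux in illuminance_by_window.values() if lux < threshold_lux)

# hypothetical parsed Radiance results: illuminance per window in lux
print(daylight_output_mapper({"window_01": 312.5, "window_02": 95.0, "window_03": 180.1}))  # -> 2
```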
Results
When running the job, the population size was set to 200 and a simple asynchronous steady-state evolutionary algorithm was used. In each generation, 50 individuals were randomly selected from the population and ranked using multi-objective Pareto ranking. The two design variants with the lowest rank were killed, and the two design variants with the highest rank (rank 1) were used as parents for reproduction. Standard crossover and mutation operators for real-valued genotypes were used, with the mutation probability set to 0.01. Reproduction between pairs of parents results in two new children, thereby ensuring that the population size remains constant.
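The paper does not specify the operators beyond "standard crossover and mutation operators for real-valued genotypes"; one common choice is uniform crossover with per-gene random-reset mutation, sketched below for illustration only:

```python
import random

def crossover(parent_a, parent_b):
    """Uniform crossover for real-valued genotypes (sketch): each gene of the two
    children is inherited from one of the two parents."""
    child_a, child_b = [], []
    for g_a, g_b in zip(parent_a, parent_b):
        if random.random() < 0.5:
            child_a.append(g_a)
            child_b.append(g_b)
        else:
            child_a.append(g_b)
            child_b.append(g_a)
    return child_a, child_b

def mutate(genotype, probability=0.01):
    """Mutation for real-valued genotypes (sketch): each gene is replaced by a new
    random value in [0, 1] with the given probability."""
    return [random.random() if random.random() < probability else g for g in genotype]

children = crossover([0.1, 0.5, 0.9], [0.8, 0.2, 0.4])
children = [mutate(c) for c in children]
```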
The evolutionary algorithm was run for a total of 10,000 births, taking approximately 8 hours to execute. The final non-dominated Pareto set for the whole population contained a range of design variants with differing performance trade-offs.

A workflow was created in order to retrieve and display designs from the Pareto front. A selection of design variants is shown in Figure 5.

Figure 5. A set of design variants shown in the visual spreadsheet tool within VisTrails.

CONCLUSIONS
For designers, the EDITS approach allows the two key hurdles of skills and speed to be overcome. First, it overcomes the skills hurdle by allowing designers to define growth tasks as workflows using visual programming techniques. Second, it overcomes the speed hurdle by using cloud computing infrastructures to parallelize the evolutionary process. The demonstration case study shows how the EDITS approach can be applied to a complex design scenario.

Future research will focus on the development of VisTrails data analytics nodes. This would allow users to create workflows to perform various types of analysis on the data generated by the evolutionary process, including hypervolume and clustering analysis.
agement’, Proceedings of the SIGMOD, Chicago, pp. Janssen, PHT and Kaushik, V 2012, ‘Iterative Refinement
745–747. Through Simulation: Exploring Trade-off s Between
Deelman, E, Gannon, D, Shields, M and Taylor, I 2008, ‘Work- Speed and Accuracy’, Proceedings of the 30th eCAADe
flows and e-Science: An Overview of Workflow System Conference, pp. 555–563.
Features and Capabilities’, Future Generation Computer Janssen, PHT and Kaushik, V 2013, ‘Decision Chain Encod-
Systems, pp. 528–540. ing: Evolutionary Design Optimization with Complex
Frazer, JH 1995, An Evolutionary Architecture, AA Publica- Constraints’, Proceedings of the 2nd EvoMUSART Confer-
tions, London, UK. ence, pp. 157–167.
Guyan, RJ 1965, ‘Reduction of Stiffness and Mass Matrices’, Janssen, PHT, Stouffs, R, Chaszar, A, Boeykens S and Toth
AIAA Journal, 3(2), pp. 380–380. B, ‘Data Transformations in Custom Digital Workflows:
Janssen, PHT, Basol, C and Chen, KW 2011, ‘Evolutionary De- Property Graphs as a Data Model for User‐Defined
velopmental Design for Non-Programmers’, Proceed- Mappings’, Intelligent Computing in Engineering Confer-
ings of the eCAADe Conference, Ljubljana, Slovenia, pp. ence - ICE2012, pp. 1–10.
886–894. Kumar, S and Bentley, PJ 1999, ‘The ABCs of Evolutionary
Janssen, PHT, Chen, KW and Basol, C 2011, ‘Iterative Virtual Design: Investigating the Evolvability of Embryog-
Prototyping: Performance Based Design Exploration’, enies for Morphogenesis’, Proceedings of the GECCO, pp.
Proceedings of the eCAADe Conference, Ljubljana, Slove- 164–170.
nia, pp. 253–260. Rimal, BP, Eunmi, C and Lumb, I 2009, ‘A Taxonomy and Sur-
Janssen, PHT and Chen, KW 2011, ‘Visual Dataflow Model- vey of Cloud Computing Systems’, Proceedings of the
ling: A Comparison of Three Systems’, Proceedings of the INC, IMS and IDC Joint Conference, pp. 25–27.
CAAD Futures Conference, Liege, Belgium, pp. 801–816.
Algorithmic Design Generation
INTRODUCTION
In general terms, a genetic algorithm (GA) can be characterised as a highly parallel and adaptive evolutionary search method. GAs are described as parallel search methods because they search for solutions using a whole population of possible options, as opposed to altering a single potential solution (Frazer, 1995). Since the most favourable solutions are obtained by progressive alterations within the same population over time, Frazer also refers to them as adaptive. Due to these characteristics, GAs are becoming more popular and are being researched and increasingly applied to practical problems.

Shape-packing algorithms are optimization methods that attempt to pack shapes together within a set boundary. In one variation of the problem, a shape-packing algorithm is designed to pack as many shapes as possible, without overlapping them, and attempts to achieve a required minimum coverage area to minimize waste (Lodi et al., 2002). In mathematics, circle packing focuses on the geometry and combinatorial character of packings of circles of either equal or arbitrary size (Stephenson, 2005). For circles of equal size, it has been mathematically proved that a hexagonal honeycomb arrangement of circles produces the highest density (Hsiang, 1992). In architecture, shape packing can be used in many pattern-based problems where density, the number of packed elements and the spatial relationships between elements are important.
F = |Ri - Rav|, (1)

where F is the fitness of the individual, Ri is the radius of the individual circle, and Rav is the average of the specified minimum and maximum radii.
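Assuming the reconstruction of Equation 1 given above, the fitness of a single circle could be evaluated as follows; the radius range is taken from the experiment description, while the formula itself remains an assumption inferred from the discussion of the results:

```python
R_MIN, R_MAX = 5.0, 800.0          # radius range in cm, as used in the experiment
R_AV = (R_MIN + R_MAX) / 2.0

def fitness(radius):
    """Fitness of a single circle (Equation 1 as reconstructed here): circles whose radii
    lie towards either extreme of the range score higher than average-sized ones."""
    return abs(radius - R_AV)

print(fitness(5.0), fitness(402.5), fitness(800.0))   # -> 397.5 0.0 397.5
```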
The GA sequence
The initial population is created randomly, covering the entire range of possible solutions (the search space). In the case of the script used for this experiment, the new individuals were created using a circle-packing algorithm until the maximum number of attempts for fitting more individuals had been reached (in this experiment it was set at 50,000 attempts). In such a case the initial population usually does not reach the maximum number of individuals (in this case 2000 individuals). After the initial population has been generated, the fitness of each individual is calculated. The obtained fitness scores are then used for selecting the fittest individuals and placing them in the mating pool. We specified a constant 50% survival rate and a 1% mutation rate throughout the experiment and implemented a "roulette wheel" selection method to select the fittest candidates while maintaining a diversity similar to the one found in natural selection. After the individuals for the mating pool have been selected, the process of reproduction begins, using crossover and mutation of their genotypes. The process of generating populations continues until a termination condition is met. Termination takes place either when the population target is met or when the algorithm reaches the maximum number of attempts to fit the individuals (50,000). The section below describes the results of the four tests created based on the rules described above.
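A minimal sketch of the roulette-wheel selection step described above (fitness-proportionate selection; the data shown are illustrative only):

```python
import random

def roulette_wheel_select(individuals, fitnesses):
    """Roulette-wheel selection: an individual's chance of entering the mating pool is
    proportional to its fitness, so less fit circles can still be picked, preserving
    some diversity."""
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    running = 0.0
    for individual, fit in zip(individuals, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return individuals[-1]

# toy usage: radii as individuals, fitness from the reconstructed Equation 1
radii = [10.0, 150.0, 402.5, 790.0]
fits = [abs(r - 402.5) for r in radii]
mating_pool = [roulette_wheel_select(radii, fits) for _ in range(2)]
```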
Results
We conducted four tests in order to meet the design requirements and solve the stated design problem: achieving an area coverage of 40-50% with 2000 circles. In each test, we iterated through four generations (Table 1).

Discussion and Comments
As the results show, meeting both design goals with a fitness function that rewards radii from the extremes of the 5-800 cm range is rather unlikely to be achieved within a span of 4 generations, even if the number of attempts is 50,000. The outcomes might have been different if the number of attempts had been increased to 100,000 or more. This is, however, an area for further research that lies outside the scope of this experiment.

Compared to the non-optimised first generation of packed circles, it is evident even from conducting only four tests that applying the GA dramatically increases the number of circles, meeting the goal of packing 2000 individuals within the prescribed boundary (Figure 5), but this has two side-effects: 1) the average radius of the circles decreases, and 2) the overall coverage area of these circles decreases as well (Figure 6). Also, it can be concluded that the bigger the coverage area, the smaller the number of elements. In all of the tests, the maximum number of circles was achieved when the coverage area was consistently below 40%. Based on that, the main design goals had to be revised. Because the GA showed that both design goals could not be achieved simultaneously, the designer has to decide which is the priority – the coverage area or the number of packed elements. Since the main aim of the experiment was based on creating the required shading pattern, the coverage area took precedence. Therefore the façade pattern with the coverage area within the range, achieved with the biggest number of circles, was chosen as the proposed design solution. In the four conducted tests, this was achieved in the second generation of the third test, with 1449 packed individuals and 49.36% coverage area (Figure 7).

It is clearly visible from both the data and the visual graphs (Figure 6) that even after the maximum number of packed elements had been achieved, the GA was still breeding a population of increasingly smaller circles. This occurred because the fitness function rewarded both extremes – the smallest and the biggest circles. The mating pool quickly biased itself towards smaller circles after the first generation because, at the point when the maximum number of elements was reached, there were far more circles with radii closer to the minimum than to the maximum radius. That is, since there were a larger number of smaller circles, and because they were considered just as fit for breeding as large circles, there was a higher probability of choosing them for breeding the next population. This "strength in numbers" phenomenon initiated a vicious cycle of breeding smaller and smaller circles, while larger circles quickly became extinct. The solution seems to approach a plateau after the third generation. An interesting contradiction is that the overall results did not improve in subsequent generations even though the individuals' fitness was increasing. This result supports the case for diversity: even if individual fitness is high, the overall performance of the population can be unsatisfactory due to a lack of diversity.
Abstract. This work investigates the application of genetic algorithms (GAs) to urban growth, taking into account the optimization of the solar envelope and sunlight in open spaces. A typical block of a Spanish grid was considered, which is the most common subdivision of urban land in towns situated in Argentina. Two models are compared: one in which the growth has no more limitations than the building codes, and another in which the growth incorporates solar radiation as a desirable parameter. This way of parameterizing configures a bottom-up method of urban growth; no top-down decisions intervene in the growth process. This tool proves to be useful at early stages of urban planning, when decisions are taken which will influence the development of the city for a long time.
Keywords. Genetic algorithms; solar envelope.
INTRODUCTION
This work analyses the application of genetic algorithms (GAs) (Mitchell, 1998) to parametric urban design, considering access to solar radiation. Parameterization allows continuous control of the design process, and evolutionary algorithms (Russel, 2010) optimize the solar access. We develop a simulation of urban growth according to current restrictions and building codes (Leach, 2004).

Spanish towns in Argentina share the same urban pattern: a 45º-rotated grid (Randle, 1977). This grid is suitable for plain land with no major geographic features, like our grassy prairies. A main square, which derives from the Spanish "plaza", determines the centre of the grid, and the main public buildings are situated around it: the church, the government offices, the school, and the police station. Housing is distributed in the blocks around the town core.

Our case study, Lincoln, is a town situated in the Argentine Pampa. Towns founded after the Independence period (1816), like Lincoln (1865) (Tauber, 2000), followed the Spanish pattern. This town bases its economy on soya crops, and the cash flows that this activity produces every year are reflected in its economic growth (Forrester, 1970).

With this data, the author built an urban model which reaches maximum growth. The study is applied to two blocks in the town centre area (Figure 1). There is a wide range of uses situated in this area: commercial, housing and office buildings. The highest density is concentrated in this zone. The intention is to show how an urban block can grow differently if environmental criteria are applied.

These towns have experienced fast growth during recent years as soya prices began to rise. Urban development has occurred without any planning except for the current law that regulates the use of land, which was promulgated in the 1970s. This is the Decree-Law 8912/77 [1], and it does not reflect environmental issues such as the solar envelope concept [2].

In this work, the solar envelope allows urban dwellings to satisfy a two-hour period of solar radiation (11 AM to 1 PM) all year round (Niemasz et al., 2011). The solar fan [3] is another tool, which defines a volume for a four-hour period (10 AM to 2 PM) of solar radiation in open spaces in the block core. It considers the growing season until the average first frost date, which is 1st April in the Southern Hemisphere. The intersection between these two shapes defines a buildable volume.

Another simulation with no solar restrictions is developed. The comparison between the two models permits the urban designer to take into account solar access from the very beginning of the codification process and to reflect this issue in the building code, making the necessary corrections to the current one. The ratio between street width and building height is 3:2 for SE streets (18 m) and 5:7.5 for SW streets (23 m) for this specific urban grid (121 m x 81 m).
METHODOLOGY
The objective of this work is to compare two models: the built block according to the current Code, and another one with solar restrictions and maximum growth, for a Type I and a Type II block. The whole drawing is parameterized in Rhinoceros© [4] by means of Grasshopper [5]. In the first case, an urban block is modelled following the current building Code (Figure 3). Each plot reaches its maximum buildable volume according to the height limit (24 m), LOF (0.6 of the plot's surface) and TBF (2.75 of the plot's surface). No further limitations are taken into account. A wide variety of buildings which exceed the bounding box appeared.

After that, we proceed to apply the solar restrictions. These solar tools determine a solar buildable volume. It is designed for a two-hour period from 11 AM to 1 PM. The first tool, the solar envelope, is applied to the model. As can be seen in Figure 3, the Code does not make any difference between orientations in order to regulate maximum height.

As we observe the shades projected onto the adjacent streets – especially SE and SW – we can easily infer that they reach the façades of the block in front of ours (Figure 4). This image has been exported to Ecotect© by means of GECO [6], which is a Grasshopper plug-in. The image clearly shows shades and levels of direct radiation of buildings at noon on June 20th.

The other tool, the solar fan, is the void that has to be left in order to assure a four-hour period of solar radiation in the open inner spaces of the urban block during the growing season. This open space is called the 'block core' in the Building Code and is composed of all the backyards of the plots. Only corner plots are exempt from integrating this space. As observed in Figure 5, some volumes exceed the core block when only the current regulations are considered.
After this, a model considering the solar envelope and solar fan was obtained by means of GAs. These algorithms are used in optimization processes (Goldberg, 1997). In this case, one called Galapagos has been run to maximize the built volumes in each plot. The buildable shape is obtained by the Boolean intersection between the solar envelope and the fan. Then, the algorithm develops the maximum allowed quantity of modules and their arrangements, considering building regulations inside this bounding shape. The Boolean difference between the buildings and this bounding box is minimized, reaching a value of zero in nearly every case. The results are shown in Figure 6.

Figure 6. Optimized built volume inside the solar bounding box.
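Outside the Grasshopper/Galapagos environment, the per-plot objective described above can be paraphrased as a penalised volume maximisation; the sketch below only illustrates the logic, with hypothetical volume values standing in for the Boolean solid operations:

```python
def plot_fitness(built_volume, overflow_volume, penalty=1000.0):
    """Sketch of the per-plot objective: maximise the built volume while driving the
    Boolean difference between the buildings and the solar bounding box (the overflow
    volume) towards zero. Values and penalty factor are illustrative assumptions."""
    return built_volume - penalty * overflow_volume

# toy comparison of two hypothetical module arrangements
print(plot_fitness(built_volume=5200.0, overflow_volume=0.0))    # fully inside the envelope
print(plot_fitness(built_volume=5600.0, overflow_volume=79.0))   # larger but heavily penalised
```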
In Figure 7, the projected shade of the optimized volume is analyzed in Ecotect. Shade on the SW and SE streets does not reach the façades of the blocks in front, as the volumes are limited by the solar envelope. The core block receives enough solar radiation during a four-hour period during the growing season.

The Type II block was also analyzed under the same conditions as Type I, and the results are shown in Figure 8. As the core block varies its dimensions, buildings on the NW and SE sides are not so restricted in their rear facades.

Figure 8. Type II block: Boolean difference between solar envelope and fan with optimized built volumes.

RESULTS
After modeling the normal Type I and II blocks and the optimized ones, we proceed to compare the results of the experiences shown above.

In Type I, while in the normal model the maximum height is 24 m or eight storeys in the whole block, in the optimized one it has to be reduced to 15 m or five storeys on SW façades to avoid shading over the neighbouring block. On SE façades, this issue is even worse; the height is lowered to 12 m to maintain the built volume inside the bounding box. These buildings can be terraced up to eight-storey height only on the rear part.

When analyzing the Type II block, the portion of the buildings that exceeds the bounding box is shown in red. It sums approximately 79 m2 (Figure 8). Even though, in plot 17, the GA improved the module arrangement, this plot cannot be completely built up to 2.75 TBF. The maximum height should be reduced to seven storeys (21 m) in plot 18 and six storeys in plot 17 in the rear façades of the buildings.

The GA produced several alternatives that are contained within the solar bounding box. Some of these possible arrangements for the SE plot are shown in detail in Figure 9. The same procedure was followed with each plot.

NE corner blocks are the only ones which can reach maximum height over their whole surface. The other ones have to be terraced.

Two different types of blocks, I and II, are shown in Figure 10. Both types differ in the dimensions of the block cores, as the blocks have a different number of plots and arrangements (Figure 2). As the core block increases in the NW-SE direction, the buildings on these façades have to diminish their height.

DISCUSSION
The use of these tools applied to urban design permits the improvement of environmental conditions in buildings and open spaces. It can be applied to determine urban grid characteristics like:
• block dimensions
• street widths
• maximum heights differentiated by orientation
• core block dimensions
The solar envelope and fan as prescriptive tools in urban design have to be studied together with energy savings, developable density and infrastructure costs, as well as local climate conditions. Our typical urban grid promotes adjacent buildings which reduce heat losses through dividing walls. This is the reason why this work considers the block as an individual at the urban level and as a whole at the plot level. In this bottom-up process [7], the individuals can perform certain actions regulated by the current legislation. The whole block acquires characteristics that are the result of these individual behaviours. Emergent properties (Hensel et al., 2010) arise when this happens, and the consequences affect the performance of the urban tissue; e.g. wind direction and intensity provoke specific microclimatic conditions that affect building ventilation, inner temperature and comfort in urban spaces.

Towns like Lincoln benefit from this urban tissue, as the blocks tend to be compact. The local climate is temperate dry with considerable thermal amplitude. This benefits savings in energy consumption, as the thermal mass accumulates heat in winter and prevents overheating in summer. Traditional construction is brick masonry with reinforced concrete roofs.

CONCLUSIONS
Genetic algorithms provide a useful tool to test different alternatives of buildable volumes inside a plot, as the modules were dimensioned as real architectural elements. They were built considering heights and depths as well as daylight and natural ventilation conditions, regulated by current codes.

In order to widen the scope of this work, GAs can also be used to optimize solar radiation on roofs and façades (Camporeale, 2013) to install PV or solar water collectors. Fenestration can also be optimized by orientation (Camporeale, 2012).

Current legislation on urban design and building restrictions on urban plots deserves a deeper study than fixing height limits, total buildable volume and land occupation. Rules should be implemented according to local conditions as climate,
INTRODUCTION
Both measuring and changing a design's performance is defined by such a high amount of information that computational methods are of crucial importance in this process. Computational tools may help process a much bigger amount of information and variables than one may be capable of capturing intuitively and through classical design methods. The incorporation of digital tools should thus happen as early as possible in the design phase, even at the point of analysing the given task. Still, using the computational power of digital tools in architectural design may prove more difficult than in other industries, since a building is usually an individual and very context-dependent object. In architecture, time and resources are often insufficient to allow the development of highly performative designs and design methods. Defining criteria of this performance for an architectural object is a process different for each project, since context- and user-specific factors vary constantly. Therefore generic programs and tools can only cover a very general and unspecific area of the planning process and may only help in the representation and simplification of a design, therefore being insufficient for capturing the complexity of an architectural object.

Numerous established optimisation methods have been incorporated into architectural design, opening up a new dimension of solution options and a new degree of freedom for designers, but at the same time creating a new, extremely complex problem as to how such tools are to be implemented and further developed for use in design tasks. Many questions arise, such as which method is best suited for architectural use, how this method is to be
since contradictory criteria cannot be 100% fulfilled but need to be weighted as to how much one criterion shall be fulfilled in comparison to the others. Besides choosing and implementing appropriate criteria, weighting them represents another step that strongly influences the outcome of the GA and lies in the hands of the designer. Each specific task requires different analysis criteria and weightings, and moreover a strong interdependency between the analysis criteria and the geometry generation process. In the presented case the chosen criteria include a wind load analysis, wind power analysis, solar analysis, area and volume analysis and a number of excluding absolute criteria, such as minimal radii in the facade and the gravitational centre for a basic structural functioning of the building (Figures 2 and 3). The individual weighting of the criteria was performed after numerous tests according to the chosen purpose: not only to create a building as efficient as possible, but also to focus on wind loads and wind power, in order to achieve as low wind loads as possible on a facade panel while also using the generated wind power with specifically located wind turbines. This is one example of clearly contradicting criteria, in which lower wind loads lead to a more stable structure but more wind power results in a higher energy yield. Weighting these criteria against each other could only be achieved after a number of tests in order to understand the algorithm's behaviour. In the end, a minimal fitness value for wind pressure and suction was chosen to be mandatory, so that the wind power generation was weighted less than the load analysis. Still, using a parametric definition, wind turbines were located in the areas with most wind power so that high energy efficiency could be achieved.
For the recombination and regeneration of new individual generations a stochastic selection method was chosen, such methods being acknowledged and used for reaching an optimal solution and a constant increase of the generations' fitness values (Pinsky and Karlin 2011) (Figure 4).
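To make the weighting scheme described above concrete, the following minimal Python sketch combines a mandatory threshold on the wind-load criterion with a weighted sum of the remaining criteria, and adds a fitness-proportionate (stochastic) selection step. It is an illustration under assumed, normalised criterion scores; the names, weights and thresholds are hypothetical and this is not the implementation used in the project.

```python
import random

def fitness(scores, weights, max_wind_load=0.6):
    """Weighted multi-criteria fitness for one tower variant.

    `scores` holds normalised [0, 1] analysis results (assumed to come from
    wind, solar and area/volume analyses); `weights` expresses the designer's
    priorities. Variants whose wind load exceeds the mandatory threshold are
    rejected outright, mirroring the scheme described above.
    """
    if scores["wind_load"] > max_wind_load:
        return 0.0
    return (weights["wind_load"]  * (1.0 - scores["wind_load"])   # lower load is better
          + weights["wind_power"] * scores["wind_power"]          # weighted less than the load criterion
          + weights["solar"]      * scores["solar"]
          + weights["area"]       * scores["area"])

def roulette_select(population, fitnesses):
    """Stochastic, fitness-proportionate selection of one parent."""
    total = sum(fitnesses)
    if total <= 0.0:
        return random.choice(population)
    pick, acc = random.uniform(0.0, total), 0.0
    for individual, f in zip(population, fitnesses):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]

# Example: wind power counts, but only half as much as the load criterion.
weights = {"wind_load": 0.4, "wind_power": 0.2, "solar": 0.2, "area": 0.2}
print(fitness({"wind_load": 0.3, "wind_power": 0.8, "solar": 0.5, "area": 0.6}, weights))
```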
Figure 3
Resulting tower geometry.
The presented methods have proven to be effective for such a complex task as a high-rise project, but there were still numerous difficulties encountered along the way. One of the greatest challenges turned out to be the black-box character and randomness of the written algorithm. While one can track the development of the algorithm and its success or failure, there are no means of intervening during the running time or even of logically following the process of the genetic algorithm. It proved to be a rather random process in which one could expect the fitness value of each individual to increase, but without any means of quickly tracing why or by how much it increases. Many tests had to be run in order to manually try out the variable parameters, such as the weighting of the fitness criteria or the geometry generation, since it could not easily be understood how one value influences the algorithm. While this is part of the power of computational means — generating designs that cannot be intuitively traced — it remains a time-consuming factor to set up all variable parameters so that a successful result will be achieved. It is a run-and-result process in which the designer has no capability of interacting with the computational tool he created; it is merely a tool that needs to run through from start to end and can only then be evaluated through its result. Furthermore, a genetic algorithm needs a lot of adjustments in order to simply provide a result that constantly increases its individuals' fitness and does not converge to an unsatisfactory early result. One major point is that while the presented genetic algorithm showed effective results, it is still a linear process which follows a direction (form generation – analysis – improvement) that happens on one hierarchic level and is furthermore based on creating a very high number of random variants that are then compared and analysed. The wish for further research was to break the linearity of the algorithm and create a process in which different hierarchical levels could interact and lead to a result without the need for many variants, but by slowly adapting to the given requirements.

Agent-Based Process
Based on the knowledge of the evolutionary algorithm developed in the preceding project, a more general and flexible tool was sought in this second approach. A number of critical points discovered while using optimisation algorithms were defined as crucial and created the basis for the second approach. The exact purpose was to create a more flexible tool, with a computational core that could be extended and adapted to a given task through the addition of adaptation criteria and through changing the input constraints. As a major point, the wish to break the linearity of such an approach and to create a process that allows input parameters from various hierarchical levels and communication between all subsystems of a general system served as the starting point for this design method. While the evolutionary algorithm allows numerous criteria to be included and considered, it has a clear differentiation between the generating parameters, in this case the geometry generation, and the optimization criteria.
The generation is the one adapting to all criteria, so the information flow is unidirectional and does not allow other parameters to adapt to the requirements of the generation method. The purpose of the agent-based tool is to allow this flow of information from all input parameters in all directions and to create communication between all participating subsystems, even when located in different hierarchical levels.
The chosen task is more general – also in order to exemplarily represent the possibilities of such a method – and is intended to create a roof-like gridshell structure over a given fictional site. The structure is divided into three representative subsystems that are intended to show different hierarchical levels of the general structure and their interdependencies, intended to allow a continuous communication between these subsystems (Figure 5).
The first chosen subsystem is the global geometry, representing the freeform surface connected to the predefined support locations. It defines the global shape and appearance of the final built result and is not intended to be only a result of all other requirements and subsystems, but to set and adapt according to its own constraints and requirements, such as the smoothness of the surface resulting from the angles of the different panels, or the structural stability of the global form, curvature and height.
The second subsystem represents the panelling elements, such as triangles or quads, through which the global geometry is realized. These have flexible parameters such as shape or planarity. These requirements set by the designer are meant to inform the other systems, while the covering panel itself shall change dimensions, orientation or location according to the requirements coming from the other systems.
The third subsystem describes a shading panel, meant to be representative of any type of facade panel, ranging from a simple planar glass pane to a complex shading element. This panel again defines a set of requirements such as planarity, dimensions or orientation. As mentioned before, these systems are simply exemplary and do not cover the full complexity of a gridshell structure, but are meant to show the possibilities of such a process.
For the presented example, a smooth surface connecting three support locations with certain spatial limitations, triangular covering panels and a simple shading component were chosen to be implemented.
The agent-based system was selected, after extensive research, for its capability of abstracting and simplifying complex behaviour into simple basic rules. An agent is defined as the smallest part of the system (the division panel) and fed with numerous rules representing all criteria of the participating subsystems. These criteria were all translated into geometric behaviour, so that the agent constantly reacts and adapts to the set of requirements, enabling a constant improvement against these criteria.
The chosen criteria include the smoothness of the surface, the dimensions of the beams, the number of beams coming together at a node, geometric structural behaviour, static behaviour and lighting conditions. The criteria were distributed to represent a specific subsystem or to be external criteria, in order to show the interdependency and connections of all systems. For example, lighting conditions influence both the shading panel, which changes in size and orientation, and the global geometry, which is created through the individual triangular panels that focus on achieving an orientation as parallel as possible to the light source. Similarly, the static behaviour influences the global geometry, which tries to achieve as much double curvature as possible, and through this adaptation it defines the position and dimensions of the panels (Parascho et al., 2012) (Figure 6).
One of the great advantages of this method is the flexibility of the tool and the possibility to adapt it to a given task. Since the desire is to create a general tool that may be changed and fed with numerous inputs and criteria, the agent-based system was extremely efficient in allowing such fast adaptations. Each behavioural rule may be added at any time during the process and may influence all defined systems. It has also proven to be very powerful, since new criteria can be implemented and tested very fast and the development can be traced simply by watching the agents perform (Figure 7).
Still, a number of points have proven to be difficult when implementing such a system. The first question arising is how to abstract such a complex model as a swarm system into a working algorithm for a design purpose. It is of extreme importance how the singular agent is defined, what part of the global system it represents and how flexible it is. Defining the basic agent has the highest effect on the output, since too little flexibility may not ensure any result at all, and too much will result in extreme solutions that may not function as built objects.
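As a rough illustration of how such competing criteria can be reduced to simple per-agent rules, the sketch below moves each panel agent a small, weighted step per iteration towards several targets at once (neighbour smoothness, a preferred panel size, and a drift along the light direction as a crude stand-in for reorienting towards the light source). It is a schematic reconstruction under assumed data structures, not the tool described in the paper.

```python
import numpy as np

class PanelAgent:
    """One covering panel, reduced to its centre point and a target size."""
    def __init__(self, position, size):
        self.position = np.asarray(position, dtype=float)
        self.size = float(size)
        self.neighbours = []   # other PanelAgent instances

    def step(self, light_dir, target_size, w_smooth=0.4, w_size=0.3, w_light=0.3, rate=0.1):
        # Rule 1: smoothness - drift towards the average position of the neighbours.
        if self.neighbours:
            centroid = np.mean([n.position for n in self.neighbours], axis=0)
            smooth_move = centroid - self.position
        else:
            smooth_move = np.zeros(3)
        # Rule 2: dimensions - grow/shrink towards the preferred panel size.
        self.size += rate * w_size * (target_size - self.size)
        # Rule 3: lighting - a small drift along the light direction, standing in
        # for the reorientation of the panel towards the light source.
        light_move = np.asarray(light_dir, dtype=float)
        # All rules act simultaneously; no single global optimisation step exists.
        self.position += rate * (w_smooth * smooth_move + w_light * light_move)

# A tiny run: three mutually connected agents adapting for 100 iterations.
agents = [PanelAgent(p, 1.0) for p in ([0, 0, 0], [2, 0, 0.5], [1, 2, -0.5])]
for a in agents:
    a.neighbours = [b for b in agents if b is not a]
for _ in range(100):
    for a in agents:
        a.step(light_dir=[0, 0, 0.05], target_size=1.5)
print([a.position.round(2).tolist() for a in agents], [round(a.size, 2) for a in agents])
```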
CONCLUSION / COMPARISON
When comparing the two methods, the most important point is to differentiate between the types of task that each algorithm can address. Both algorithms proved to be functioning systems for generating architectural design, but they focus on two distinct points. While the genetic algorithm is extremely good at handling a great number of criteria by using the high computing capacities of the computer, the agent-based model is developed to work with less information but to create constant connections between this information. The agent-based model's greatest strength is abstracting any type of criteria of any hierarchical system into one equally hierarchized level at which all parts can exchange information. It does not work through numerous variants created with a random factor, as the genetic algorithm does, but intends to constantly change and adapt in order to improve the characteristics defined in the behaviour of the agent.
One big difference between the presented methods is the option, provided by the agent-based model, of interacting with the system. The black-box character of the genetic algorithm is broken, as the designer can constantly follow the process of the agent-based tool. Future research will focus on the interaction with this system, making use of the strength of agent-based systems to react and adapt to any exterior influence at any time. The genetic algorithm is rather a model that strongly depends on the definition of the input parameters and offers one final solution to these options. For tasks where the focus lies on the optimization process and where certain criteria need to be fulfilled as strongly as possible, evolutionary algorithms and their capacity of working with a high number of variants lead to satisfying results. On the other hand, tasks that require more adaptation and fast changes in the input would rather benefit from the agent-based tool.
While both systems led to successful results, the main difficulty encountered during the processes was the correct definition of the input parameters. Whereas working through a complex solution space opens up a lot more possibilities than traditional intuitive design methods, this freedom of covering all possibilities is strongly limited by the definition of each constraint, optimization criterion or behaviour definition. Most time and energy flows into defining each parameter influencing the final output and its importance for the global tool. It is often a preconception that making use of the complexity of an architectural object through computational meth-
Abstract. This paper covers two workshops that are instances of research on the feedback between parametric patterning and material behavior. Infection sets the conceptual background of these workshops, utilizing pattern deformations as a generative technique. The Gridal Infection workshop focuses on real-time dynamic patterns, while the Reflex Patterning workshop integrates material performance into this exploration.
Keywords. Parametric patterning; material behavior; prototypes; fabrication; dataflow.
based on the correlation of the digital with the during the design process.
physical via parametric patterning techniques and a In the final application, visual outputs are pro-
composite material system. jected on the host body, aligned to its existing grid.
The third and last step in the ongoing research Below are three student projects that are the prod-
will be pushing the limits through the fabrication ucts of this three day introductory workshop.
processes. Parametric patterning and CNC molded
tiles will be explored as a case study for the future FiberGrid
workshop. Students considered the host body as a dead tissue
In following sections, details of the first two of an organism, resembling the wall as a standing
workshops are explained, concluding with a discus- idle and reckless element to its environment. In order
sion on outcomes. to revitalize it surrounding sound is considered as an
injection that changes the inner structure of the or-
WORKSHOP 1: GRIDAL INFECTION ganism and transforms the grid lines into curvilinear
This initial workshop focuses on the abstract no- fibers.
tion of grid, sampled from an existing 16x11 unit Above concept of FiberGrid is realized by con-
glass brick wall, the host body. Students are asked structing a grid out of interpolated curves (Figure
to articulate its formal (grid / pattern / tessellation / 1). Surrounding sound is captured and used as a
reference), performative (transparency / light / struc- real-time input that bends the curves. The change
ture / function) and tectonic (ambient / kinetic / au- in sound level affects the process, creating tempo-
ral) properties. On the early phase, three groups of ral variations. Finally, a history enabled algorithm
students hunted concepts “lesion, plasma and fiber- captures sequences of this process, creating waves
grid”. Then, they are asked to develop their projects of fibers (Figure 2). As it is a recursive algorithm, it
by creating parametric deformations, utilizing real- responds concurrently, getting faster / slower and
time interactions with the context. Students with no more / less fibrous while the surrounding sound
previous skill on parametric design are introduced level rises / lowers (Figure 3).
with Grasshopper for dataflow parametric mod-
eling and Firefly add-on for interaction design. They Lesion
ended up with three dynamic patterns, superim- In this project, the grid is considered as cellular
posed to the existing wall. The semi-opaque mate- forms packed together. The wall represents an abso-
rial of the wall created a surreal-animate vision and lute body, in which an infection causes various chal-
an apopohenia kind of feeling for the viewers. As an lenges, and activates an immune system as well. The
educational goal, the attempt was not to create an struggle between infection and the immune system
eye catching media-wall but to introduce students creates lesions eventually (Figure 4). This concept
with digital toolsets necessary to make them think resembles infection as a distortion on the regularity
of feedbacks in-between the digital and the physical of the wall. Irregularities of the surrounding factors,
Figure 3
FiberGrid; Application photos.
‘The Host Body’ is on the left.
such as the movements of people around causes This concept is realized by implementing a his-
pattern deformations. The host body gets infected tory based truncation process on a regular grid (Fig-
when someone gets closer to it, but eventually a ure 5). The truncation is associated with the vectors
time-based recovery process begins. of surrounding motions, captured by a webcam in
Figure 4
Lesion; Student sketches.
Figure 5
Lesion; Screenshots.
Figure 7
Lesion; Application photos.
‘The Host Body’ is on the left.
real-time. There are parameters such as recovery and its surroundings. The real-time deformation input
immune system in the dataflow diagram (as seen in was a similar one with Lesion, including a webcam
Figure 6) that function as a temporal deformation capture. Distinctively, this project aims to capture
returning to its initial state progressively. In the final not all of the small details of the surrounding, but
installation, various regular grids (square and hexag- the average motion, searching for focal points of
onal) are tested with an infection caused by people movement. Students argued that this transforma-
around (Figure 7). tion of the glass brick wall to plasmatic body makes
it more interactive with other bodies around it.
Plasma In this project, students’ conception (Figure 8)
In this project, the host body is considered to be is extended into a geometric solution based on me-
infected by high fever and pressure, changing its taballs (Figure 9). After various experiments on the
solid phase into plasma. The solid molecules repre- reactions of metaballs, a grid-based deformation is
sent the strict order of the grid on the wall, while the chosen (Figures 10). When a person comes closer
plasma represents a more flexible order, sensitive to to the wall, its motion creates focal points. Eventu-
Figure 8
Plasma; Student sketches.
Figure 10
Plasma; Dataflow diagram
composed in Grasshopper.
Cluster 1 collects all necessary
data including the webcam;
Cluster 2 calculates a vector
deformation on a regular grid;
and Cluster 3 creates a serie of
metaballs.
Figure 11
Plasma; Application photos.
‘The Host Body’ is on the left.
ally these points become blob centers that react and isting patterns of the hall and transform that inert
combine into larger blobs (Figure 11). A time-based void to a reacting body.
algorithm captures sequences of this process, creat- Within this three-day workshop, we worked
ing superimposed metaball variations. with 30 students and introduced them with digital
techniques of pattern-making and pattern deforma-
WORKSHOP 2: RE-FLEX PATTERNING tion using Grasshopper. We discussed on how they
In the second Infections workshop, the host body could use parametric modelling to deform a grid
was the gallery hall of the faculty building at AİBU, a based pattern.
passive void waiting to be activated (Figure 12). The The composite material system proposed for
regular pattern dominating that body was the struc- the workshop was a combination of flexible and soft
tural grid of columns and beams that is reference to materials (textile or bubble wrap) with a stiffer but
all the details around it such as floor coverings, light- lightweight plate material (5mm. foam boards). Soft
ings etc. In this workshop students are encouraged material is to be covered with foam boards in both
to think on sub concepts of infection, recognize ex- sides with nuts and bolts to explore its composite
material behavior. Students were required to pro- reflexes of the material and reactions of the host
pose a patterning that controls the behavior of this body. Each project was unique to explore various
composite material with the help of the re-flexing material behaviors using regular, irregular and as-
performances. sociative patterning (Figure 13). Students chose the
project that proposed a canopy formed by patterns
Prototypes of circulation .This project was able to control the
On the first day, we discussed on concepts of infec- macro-form as a self-regulating surface.
tion and the context. 5 groups of students present-
ed their proposals via diagrams and drawings. They Final
focused on changing parameters and dynamics The last step was working on patterning of the cho-
such as daylight, circulations, gatherings, vistas and sen project, and is developed with the guidance of
proposed concepts as molecules, fluid flows, coloriz- instructors. At the application phase, 1600 individu-
ing etc. We wanted them to construct their first ma- al polygonal elements are coded and laser-cut from
terial prototypes by 2 m X 2 m via various methods foam boards, attached to the textile with nuts and
of patterning. The next morning students installed bolts (Figure 14). The product of the workshop was
their physical prototypes to the hall to observe the two canopies of 1,5m. by 5m. in size. The emergent
Figure 13
Re_Flex Patterning; Initial
prototypes, testing the com-
posite material with various
tessellations.
performances of this product could only be experi- such as fabrication technologies, material studies,
enced when these surfaces were installed in the gal- and generative techniques. This requires not only an
lery hall via flexing them with the help of steel cables intuitive handling on digital tools and methods, but
(Figures 15, 16 and 17). Students were excited with a also an experience on material and production con-
feeling of both familiarity and alienage of this prod- straints simultaneously.
uct, mentioning that the passive void is becoming Patterning emphasizes a material shift in the
an-other living body. generative side of the digital paradigm, and a geo-
metric shift in the material side, as well. The study
CONCLUSION presented in this paper is an example of the inte-
Contemporary trend of the computational design gration between digital tools and material practices
education is grounded on an integration of domains by implementing pattern deformation as a synthe-
Figure 15
Re_Flex Patterning; Third and
final day, installation.
Figure 16
Re_Flex Patterning; Final
project.
sizer. Such integration liberates students from pas- ment of Architecture for their collaboration and kind
sive and formal search of an on-screen parametric support. We would like to thank all workshop par-
modeling, familiarizing them to a more practical and ticipants for their invaluable efforts and feedbacks.
sophisticated body of knowledge about the physical More information about the workshops and full list
becoming itself. Nevertheless, the articulation and of students can be found in the blog [1].
reconstruction of patterns help pedagogical objec-
tives as they promote temporal but instant, explicit REFERENCES
but unstable nature of design exploration. Garcia, M 2009, ‘Prologue for a History, Theory and Future
of Patterns of Architecture and Spatial Design’, Archi-
ACKNOWLEDGEMENTS tectural Design: Patterns of Architecture, vol 79, no 6, M
Authors would like to thank to İstanbul Bilgi Univer- Garcia (ed), Wiley, London, pp.6-18.
sity Faculty of Architecture, bi’sürü student organi- Gleiniger, A. and Vrachliotis, G. (2009), Pattern: Ornament,
zation at Yıldız Technical University, Department of Structure and Behavior’, Birkhauser, Berlin, pp.7.
Architecture, and Bolu Abant İzzet Baysal Univer-
sity Faculty of Engineering and Architecture, Depart- [1] www.infections2.blogspot.com
PUBLIC SPACE
Instead of being the embodiment of a static order, a city is more and more considered to be an ever-changing organism. Over the past few decades, architects have had to cope with new concepts of space imposed by global markets, the Internet, ballooning population figures, social isolation, and environmental crisis. Philosopher Peter Sloterdijk argues in his article "Foam City" that architectural designs have always been integral to establishing society.

City vs. Society
The article focuses on the Fête de la Fédération of July 14, 1790, celebrated on the first anniversary of the storming of the Bastille. The author argues that the architectural staging of this spectacle served to generate an embodiment of the nation, enhanced by affective and acoustic measures. While the article is mainly concerned with the architectural technologies of politics related to the French Revolution, it also points beyond this specific historical case and briefly indicates how 20th-century fascisms used techniques that were prefigured by 18th-century French inventions. In both of these cases, monumental and vast public spaces allowed the gathering of the crowd and the assembly of a national collective.

Foam City
The current nature of the human environment is defined by the fact that nature and human action can no longer be separated. Technology and nature are considered to be all part of a network; a whole that cannot be managed by simple urban planning strategies.
Sloterdijk describes the city as a Foam City: 'The co-isolated foam of a society conditioned to individualism is not simply an agglomeration of neighboring (partition-sharing) inert and massive bodies but rather multiplicities of loosely touching cells of life-worlds' (Sloterdijk 2006).
In other words, the idea of the collective society has disappeared and was replaced by a society that resembles foam, where the individuals are clustered in co-isolated groups. In these co-isolated groups individuals share their interests and opinions.
Therefore the city and public space can no longer be designed for a massive collective but rather for an ever-changing multiplicity.

PROJECT GOALS
Several years ago, we were approached by the Moba architectural studio to collaborate on a refurbishment in a medium-sized town in our country. The original design was based on Sloterdijk's Foam City metaphor. We were commissioned to develop a method (algorithm) that would set out a layout of hundreds of concrete circles in the surface of the refurbished plaza while reflecting Sloterdijk's observation of society in his article "Foam City".
The assignment consisted of two rather independent steps. The first step was the functional analysis of the square and the development of a patterning strategy that would initiate various activities in the public space. There was hardly any social interaction happening in the square beforehand, and the entire space was used mainly as a communication corridor. By developing the right patterning strategy we aimed to invite as many social groups as possible to interact in the plaza.
The second step was to develop the algorithm that would guarantee that all technical requirements are met. There were seven sizes of concrete circles, with diameters ranging from 1.2 to 4 meters. No two circles could intersect with each other, and the circles could not intersect a number of other objects/obstacles. Furthermore, the minimum continual asphalt area among the circles could not be less than 0.5 m² due to the given construction limitations.

METHODS

Input, brief
Early in the process we realized there is a strong relationship between any algorithmic method and the designed pattern. The original design (an outcome of an architectural competition) was based on a random distribution of circles with the highest density in the centre of the plaza (Figure 1).
The paving pattern was created intuitively by the architects. Our task was to optimize the design with as little intervention as possible.

Technical requirements
We used a simple circle packing algorithm based on collision detection and an iterative approach. The algorithm ran through the randomly generated circles and in every round checked for several conditions derived from the brief (Figure 2).
1. If any of the circles collided with the boundary of the solved space, it was moved away in the opposite direction.
2. If any two circles were closer than 6 millimeters to each other, both of them were moved apart in opposite directions.
3. If the concave residual space defined by any three circles was smaller than 0.5 m² (a gap smaller than 50 millimeters was considered closed), all three circles were moved apart.
The process usually took about one hundred iterations to optimize the circles. Visual aids marking any possible collisions were scripted in to help remove possible dead-end suboptimal results (a minimal sketch of this relaxation loop follows below).

Functional analysis of the square
Having met the technical part of the brief early in the work process, we started to question the functional quality of random distribution. The architects did not have any means of designing the layout of almost five hundred circles other than random distribution with an intuitive gradient density. Our motivation was to propose a better way of working with such a high amount of design elements while still being coherent with the original design brief.
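The collision rules listed under "Technical requirements" above amount to a simple iterative relaxation, sketched below in Python. The plaza dimensions and circle count are illustrative, and the residual-asphalt-area test between triples of circles (rule 3) is omitted for brevity; only the boundary and circle–circle separation rules are reproduced.

```python
import math, random

def relax_circles(circles, width, height, clearance=0.006, iterations=100):
    """Iteratively push circles apart; circles = list of [x, y, r] in metres."""
    for _ in range(iterations):
        # Rule 1: push circles colliding with the plaza boundary back inside.
        for c in circles:
            c[0] = min(max(c[0], c[2]), width - c[2])
            c[1] = min(max(c[1], c[2]), height - c[2])
        # Rule 2: if two circles are closer than the 6 mm clearance,
        # move both apart along the line connecting their centres.
        for i in range(len(circles)):
            for j in range(i + 1, len(circles)):
                a, b = circles[i], circles[j]
                dx, dy = b[0] - a[0], b[1] - a[1]
                d = math.hypot(dx, dy) or 1e-9
                overlap = (a[2] + b[2] + clearance) - d
                if overlap > 0:
                    ux, uy = dx / d, dy / d
                    a[0] -= ux * overlap / 2; a[1] -= uy * overlap / 2
                    b[0] += ux * overlap / 2; b[1] += uy * overlap / 2
    return circles

# Seven diameters between 1.2 m and 4 m, scattered randomly on a 40 x 25 m plaza.
radii = [d / 2 for d in (1.2, 1.6, 2.0, 2.4, 2.8, 3.4, 4.0)]
circles = [[random.uniform(0, 40), random.uniform(0, 25), random.choice(radii)]
           for _ in range(60)]
relax_circles(circles, 40, 25)
```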
[Table 1 – Pattern × Purpose matrix relating paving qualities (blacktop continuity, medium-size tiles, tile density, relation of the public space to …) to purposes (communication, public space, commerce, auditorium, restaurant, relax); individual cell markings omitted.]
We analyzed the public space and defined several qualities of the pavement that we wanted to address in a new generator (Table 1). For example, spots with more and faster traffic would be defined by bigger and less dense circles (the asphalt is easier to walk and cycle on); spots that were supposed to become quiet rest areas were defined by a high density of the whole range of circle sizes; spots to slow down at (such as entrances to public buildings) were defined by a high density of smaller circles.
With such an approach we were able to compose a colored gradient map that served as a layout generator (Figure 3).
Figure 3
Gradient map of different
qualities of the public space.
However, this algorithmically driven design method was perceived by the architects as something difficult to control and was not developed further.

CONCLUSION
Without an algorithmic approach it would not be possible to handle a project of this size within the short amount of time given to the project. The algorithmic approach not only helped to optimize the setting out of the circles but was essential for the production of the final project documentation and for laying out the concrete circles on site (Figure 4).
The failure of the approach was the inability to change the rather simple definition of random distribution and gradient density of the circles. The designers were not comfortable with passing any control to an algorithm and with dual authorship (as described by Carpo (2011)). It is disputable whether there was any control (other than intuitive visual control) in the first place.
In a general, yet similar, case (an urban paving pattern), early design stage algorithmic tools capable of gathering and manipulating a vast range of design information would help the team devise a better and more functional design. In that case, an "information engineer" should play a substantial role in the design team, similar probably to the role construction, civil or technology engineers play during a standard building design process.

REFERENCES
Sloterdijk, P 2006, 'Foam City', Distinktion: Scandinavian Journal of Social Theory, 9(1), pp. 47–59.
Carpo, M 2011, The Alphabet and the Algorithm, MIT Press, Cambridge, MA, p. 126.
INTRODUCTION
The building's skin plays the main role in delivering natural daylight to indoor spaces. Performative façade design can significantly improve indoor visual and thermal conditions, which in turn improves the quality of life and the work environment by creating productive and appropriately lit spaces. Building skins, therefore, should not be designed just for their aesthetic aspects but also as functioning elements of the building.
Building Performance Simulation (BPS) tools are broadly used for achieving designs that have a better impact on the users and the environment. While simulation tools are effective in testing and evaluating different designs, it becomes harder when evaluating numerous solutions. Simulation engines usually take a considerable amount of time for each solution. Therefore, it is more practical to consider using optimization tools that can arrive at an optimal solution without the need to test all possible ones. This paper investigates the ability to integrate computational and simulation tools for design problems with different levels of complexity. The methodology proposed in this research employed a simple Genetic Algorithm for optimizing the daylighting performance of parametrically modeled office building facades.

Genetic Algorithm and Daylighting Performance
A traditional optimization scheme is an algorithm which finds the minima or the maxima of a given function, typically known as the objective function. The objective function may depend on any number of parameters, and any combination of parameter values within the defined search space is considered a feasible solution. The optimal solution will be the
BASE CASE SIMULATION RESULTS
Daylight availability was analyzed for the base case facing the South and the East orientations. Both South- and East-facing spaces were subject to the penetration of direct sun. Overlit areas reached 43% in the South and 42% in the East. However, no partially daylit areas were anticipated in the South-oriented office space, where the daylit area reached 53%. In the East-facing room, 13% of the space was found to be partially daylit (Table 2). The main challenge, therefore, is to eliminate the overlit areas without a significant increase in partially daylit areas.
nating the overlit areas. However, in many cases that came with a drawback in the overall performance due to the increase in partially daylit areas. Combining the solar screen with light shelves was found to achieve better results. In this case, combining solar screens and light shelves was examined. The design parameters of both systems change according to the results obtained from previous research works (Sabry et al., 2012). The parametrically modeled cases had six different changeable parameters, shown in Table 3. Overall, the number of resulting possible designs exceeds two thousand possible solutions.
[Table 2 – base case daylight simulation results for the South and East orientations; the sensor-grid values are omitted.]
a 1:1 (H:V) screen with 90% perforation and a 50° VSA was combined with a 120 cm, 10° rotated light shelf. It remained the best solution for the next twenty generations. However, several designs also went far beyond the performance of the base case (45% daylit area). Moreover, all the proposed solutions had minimal overlit area percentages, which did not exceed 7% of the whole space area, and several cases succeeded in entirely eliminating the overlit area. Figure 3 shows the simulation results after the 26 generations. It is noticeable that the optimal solution was reached at an early stage, within the first six generations. The process afterwards was not successful, as the coupling of new solutions resulted in worse solutions while the optimal solutions remained unmatched. Table 4 illustrates the best results obtained from the optimization process.

SECOND CASE STUDY: FORM FINDING
Form-finding can be described as a process of discovery and editing (form emerges from analysis). Extreme form-finding is not fully architecture but rather applied engineering, as the form is exclusively determined by function. In this case study a free-form daylighting system was proposed. Similar to the previously studied case, this system combines redirecting and shading techniques; however, the form is more organic and flexible. A "gills surface" was modified to be used as a shading device in the lower part of the window and as a light shelf in the upper part. The gills surface is a free form inspired by nature and has recently been used in several architectural works.
Light shelves
Parameter – Possible values
External light shelf depth – 60 cm, 80 cm, 100 cm, 120 cm
Internal light shelf depth – 30 cm, 60 cm, 80 cm
External light shelf rotation angle – 0°, 10°, 20° and 30°
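As a minimal sketch of how such discrete façade parameters could be encoded and searched with a simple GA, the Python example below enumerates a hypothetical design space and evolves it against a user-supplied simulation function. The light-shelf values follow Table 3; the screen values are illustrative stand-ins inferred from the reported cases, and the selection scheme is a generic one, not necessarily the one used by the authors.

```python
import random

# Discrete design space; screen values are assumptions for illustration only.
PARAMETERS = {
    "screen_perforation": [80, 90],            # %
    "screen_ratio":       ["1:1", "2:1", "4:1"],
    "screen_vsa":         [40, 50],            # degrees
    "ext_shelf_depth":    [60, 80, 100, 120],  # cm
    "int_shelf_depth":    [0, 30, 60, 80],     # cm (0 = no internal shelf)
    "ext_shelf_rotation": [0, 10, 20, 30],     # degrees
}

def random_individual():
    return {k: random.choice(v) for k, v in PARAMETERS.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in PARAMETERS}

def mutate(ind, rate=0.1):
    for k, options in PARAMETERS.items():
        if random.random() < rate:
            ind[k] = random.choice(options)
    return ind

def evolve(simulate, generations=26, pop_size=12):
    """`simulate(ind)` must return the daylit-area fraction from a daylighting
    simulation; here it is a user-supplied placeholder."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=simulate, reverse=True)
        parents = scored[: pop_size // 2]          # simple truncation selection
        population = parents + [mutate(crossover(random.choice(parents),
                                                 random.choice(parents)))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=simulate)

# Example with a dummy objective standing in for the actual simulation run:
best = evolve(lambda ind: random.random())
print(best)
```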
Table 4 – Cases with the highest performance and small overlit area percentages (simulation grids omitted):
Case 1: screen 90%, 1:1, 40°; light shelf ext. 100 cm at 10°, int. 60 cm – Daylit 64%, Overlit 0%, Partially daylit 36%
Case 2: screen 90%, 2:1, 50°; light shelf ext. 100 cm at 0°, int. 60 cm – Daylit 62%, Overlit 3%, Partially daylit 35%
Case 3: screen 90%, 4:1, 40°; light shelf ext. 100 cm at 0°, no int. – Daylit 62%, Overlit 7%, Partially daylit 31%
Case 4: screen 80%, 1:1, 40°; light shelf ext. 120 cm at 20°, int. 60 cm – Daylit 61%, Overlit 6%, Partially daylit 33%
The proposed system was applied to the South facade and was parametrically controlled to provide a wide range of options. Every louver had a median control point which represents the curve peak. This point has the ability to move in the vertical and horizontal directions to control the openness and closedness of that part, as well as the amount of shading it provides. The transition of the remaining curve points is, however, not unique; instead, it follows a Gaussian curve, where the transition is defined by a symmetrical sequence of values with null extremes. Similarly, the curved light shelf in the upper part has a similar point that controls its extension and curvature. Sixteen positions are available for each single part of the system, and more than two million different designs are obtainable. Such a huge pool of design choices highlights the necessity of using tools such as genetic algorithms for finding designs that can provide suitable performance (exploratory analysis). Figure 4 shows different shapes and settings for the façade.
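The Gaussian transition of the curve points described above can be sketched in a few lines. The louver resolution, peak offset and spread below are assumptions for illustration, not values taken from the study.

```python
import math

def louver_offsets(peak_offset, n_points=11, sigma=0.25):
    """Offsets for the control points of one louver curve.

    The middle (peak) point carries the full user-controlled offset; the
    remaining points follow a symmetric Gaussian fall-off with null extremes,
    as described in the text.
    """
    centre = (n_points - 1) / 2.0
    offsets = []
    for i in range(n_points):
        t = (i - centre) / (n_points - 1)          # -0.5 ... 0.5 across the louver
        weight = math.exp(-(t ** 2) / (2 * sigma ** 2))
        offsets.append(peak_offset * weight)
    # Force the extremes to exactly zero so the louver stays fixed at its ends.
    offsets[0] = offsets[-1] = 0.0
    return offsets

print([round(v, 3) for v in louver_offsets(peak_offset=0.15)])
```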
Daylighting performance optimization results
The algorithm succeeded in providing several acceptable cases, considering the fact that the designs were found to have a wide range of performance (oscillating from a high of 56% to as low as only 5%). It might be useful to use such a tool to limit the options at the beginning of the design phase. An optimal solution was obtained in the second generation and continued to be the fittest for the remaining twenty generations. The optimal solution had a 56% daylit area. The results show that the algorithm was not able to conduct a real optimization, most likely because of the extremely wide solution space and the simple characteristics of the algorithm. Larger populations and more computing time (more generations) would possibly have reached better results. However, it is hard to judge the success of the algorithm without further optimizations. Figure 5 shows the optimization progress and Table 5 shows the best performing cases.

CONCLUSION
The two studied cases demonstrated the ability of the Genetic Algorithm to produce designs with acceptable daylight performance. However, the performance of the optimization tool was found to differ based on the complexity of the problem. In the first case, the GA reached a near-optimum solution and succeeded in reaching solutions that have a significantly better performance compared to the base case. It is, therefore, recommended to use optimization tools and evolutionary solving methods in guided searches for optimal solutions from various possible options.
In the second case, and because of the vast number of solutions, the algorithm seemed to settle on a local optimum. Although it may be hard to judge the results unless more optimization trials with different settings are made, the algorithm was found to be a suitable exploratory method to limit the options when no previous experience is available. This is an exceptionally useful feature that enables the integration of performance analysis in earlier stages of design.
The proposed methodology can be adjusted to
[Table 5 – cases with the highest performance for the second case study; the sensor-grid values are omitted.]
Algorithmic Form Generation for Crochet Technique
INTRODUCTION
This research aims to understand the behavior of a crochet-knitted surface and decode its rules into a computational model so that it can be utilized in the architectural design process. The use of generative computation suggests a possibility of rethinking the architectural design process. Such rethinking could lead to a slightly different comprehension of the designers' decision making. In this study, the production technique is predefined (crochet-knitting) and the form generation is constrained by the rules of the production technique. Through the algorithmic process of the crochet-knitting technique, various surfaces – which imply spatial and structural features – can be generated and automatized for physical production through a computational model. This computational model, which contains the knowledge of knitted surface behavior, facilitates the exploration of various forms developed through the crochet-knitting technique. Once a form is developed through this computational model, the rule that is extracted from it is generic and is also used for physically knitting it (Figure 1). This paper illustrates the first stage, which is concerned with the development of the crochet-knitting computational model. Further research in this study will address automating the physical production.

ALGORITHMIC THINKING IN KNITTING
An algorithm is a precise specification of a sequence of instructions to be carried out in order to solve a given problem (Rajaraman, 2003). Each instruction tells what task is to be performed. An algorithm serves as a codification of the problem through a series of finite, consistent, and rational steps. Although the sequence of an algorithm is simple, the outcome could still be very complex and unpredictable.
Figure 1
Process of the research aim
Following some specific rules – or a sequence of instructions to do a job – is used in many fields. Knitting is one of the main algorithmic procedures utilized by humans since their early existence. The example below highlights the relation between knitting instructions and algorithmic thinking.
Example: instructions to knit a sweater
Step 1: Cast on 133 stitches
Step 2: Repeat steps 3 and 4, 11 times
Step 3: Knit 2, *Purl 1, Knit 1, repeat from * to last stitch, Knit 1
Step 4: Knit 1, *Purl 1, Knit 1, repeat from * to end … similar steps (Rajaraman, 2003)
By proper permutation and combination of this elementary set of actions (knitting, purling, casting stitches on or off needles), an infinite number of knitting patterns can be created. The algorithm, which is the set of rules for generating knitting patterns, can also be seen as a translator between the human mind and the computer. The power of computation, which involves vast quantities of calculations and recursions, can detect possibilities that may never have occurred to the human mind. The computational model, which is based on the knitting algorithm, expands the limits of human imagination.
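To show what such an instruction set means computationally, the small Python sketch below expands a simplified version of the sweater instructions into an explicit stitch sequence. The row structure is an assumption made for illustration; it is not the algorithm used in this research.

```python
def knit_row(n_stitches):
    """One ribbed row: Knit 2, then alternate Purl/Knit to the last stitch, Knit 1."""
    row = ["K", "K"]
    while len(row) < n_stitches - 1:
        row.append("P" if (len(row) - 2) % 2 == 0 else "K")
    row.append("K")
    return row

def knit_piece(cast_on=133, repeats=11):
    """Expand the instruction set into rows of stitches (one list per row)."""
    rows = []
    for _ in range(repeats):
        rows.append(knit_row(cast_on))   # Step 3
        rows.append(knit_row(cast_on))   # Step 4 (ribbing offset kept simple here)
    return rows

piece = knit_piece()
print(len(piece), "rows of", len(piece[0]), "stitches; first row starts:", piece[0][:8])
```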
The instructions in the example above illustrate that the knitting technique can be automated due to its algorithmic process. The knitting machine, which was invented by William Lee in 1589, uses almost the same principle as hand knitting [1]. It knits patterns with algorithmically defined needle movements. Today, the needle movements can be controlled by computers, but the mechanism of the knitting machines is not developed accordingly. If the mechanics of these knitting machines can be modified for certain behaviours, it would be possible to knit, as a whole, those complex forms that are explored through the computational model.

KNITTING TECHNIQUE
Knitting is a technique where one continuous line/thread composes not only Euclidean but also non-Euclidean surfaces with a very simple operation. As stitches are added, they push and pull on each other and create an emergent surface. The size of those stitches, and the number of their neighbours in the rows above and below, determines the shape of the work [2]. For knitting a desired surface, the mathematical concept of the surface needs to be converted into a pattern. This conversion requires a greater understanding of the behaviour of the knitted surface, because one has to figure out where exactly to increase and decrease stitches so that the resulting surface as a whole is as close as possible to the desired surface. Each work can be quantified as its own pattern. Once a knitting pattern is derived, it is generic, and the same surface is created regardless of the tension of the working yarn. Crochet, as one of the knitting techniques, was chosen for the handmade experiments since it is easier to compose complex surfaces with. Although the surface composed through the knitting technique is looser than the crocheted surface, both techniques essentially create the same geometry.
Figure 2
Hyperbolic crochet variations
of Daina Taimina and its
crochet patterns.
2001). Since these two techniques can compose the same hyperbolic surface, is there any connection between them?
As a result of exploring these two techniques, it turns out that in both of them there is the same physical attraction that forces the whole surface into a hyperbolic form. In other words, the polyhedral model behaves almost the same as the crocheted surface. Each stitch in the crocheted surface behaves like one equilateral triangle in the polyhedral model. While in the crocheted surface each stitch pushes and pulls on the others and the whole system creates an emergent 3D surface, in the polyhedral model each edge of the equilateral triangles tries to stabilize itself at the same distance. This knowledge is the key argument that enables us to perceive and geometrically describe the behaviour of the crocheted surface.
In order to establish the relationship between the crocheted surface and the polyhedral model, the crochet patterns that are extracted from polyhedral models were tested. In this research, instead of equilateral triangles, pentagons, hexagons and heptagons were chosen for computational and physical model generation, because using these polygons generates a smoothness similar to that of the crocheted surfaces (Figure 3).
As shown in Figure 4, a hexagonal pattern generates a flat surface. If pentagons and heptagons are also inserted, the system deforms itself into a 3D surface.

Figure 3
Crocheted polyhedral models consisting of triangles and polygons.

Figure 4
Extracting crochet rules from the polyhedral model.

COMPUTER MODEL OF GENERATING SURFACE
The computational model (Figure 5) that simulates the crochet technique is created through the polyhedral logic given above and is coded in the Processing programming language. Each pentagon, hexagon and heptagon is added one by one and attached to the previous polygon by at least two shared vertices. The code defines the vertices, as well as the centre point of each polygon, as nodes and forces each node to be at the same distance from its neighbours. The form of the surface is governed by the position of these nodes, which provides an easy process for calculating the overall geometry. In the Processing code, each node is a particle connected with its neighbouring nodes through springs. Therefore, the position of each node needs to be recalculated until an equilibrium state is reached for the entire model whenever a node is added. During modelling, each node determines the emergent behaviour by affecting the overall shape.
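The paper's model is written in Processing; the following is a language-shifted and much reduced Python sketch of the same idea – polygon corner and centre nodes connected by springs of equal rest length, relaxed towards equilibrium. The node/spring handling is simplified and the unit rest length is an assumed value.

```python
import numpy as np

class NodeSpringModel:
    """Nodes connected by springs of equal rest length, relaxed iteratively."""
    def __init__(self, rest_length=1.0):
        self.rest = rest_length
        self.nodes = []      # list of 3D positions (np.ndarray)
        self.springs = []    # list of (i, j) index pairs

    def add_node(self, position):
        self.nodes.append(np.asarray(position, dtype=float))
        return len(self.nodes) - 1

    def add_spring(self, i, j):
        self.springs.append((i, j))

    def relax(self, iterations=200, stiffness=0.2):
        """Move both endpoints of every spring towards the shared rest length."""
        for _ in range(iterations):
            for i, j in self.springs:
                d = self.nodes[j] - self.nodes[i]
                length = np.linalg.norm(d) or 1e-9
                correction = stiffness * (length - self.rest) * (d / length) / 2.0
                self.nodes[i] += correction
                self.nodes[j] -= correction

# A single pentagon: five corner nodes plus a centre node, all springs equal.
model = NodeSpringModel()
corners = [model.add_node([np.cos(a), np.sin(a), 0.0])
           for a in np.linspace(0, 2 * np.pi, 5, endpoint=False)]
centre = model.add_node([0.0, 0.0, 0.1])           # small offset out of plane
for k, c in enumerate(corners):
    model.add_spring(c, corners[(k + 1) % 5])      # edge springs
    model.add_spring(c, centre)                    # spokes to the centre node
model.relax()
print(np.round(model.nodes[centre], 3))
```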
Figure 5
Print screen of the code interface written in the Processing programming language.

The process is iterative and has a different approach than standard computational modelling. It does not start with a pre-defined geometric surface; instead, the generated geometric surface is unpredictable. The process of form generation is nonlinear and enables negotiation between several nodes simultaneously. This negotiation between nodes generates the global form from the local conditions and decisions. The computational model does not have any material properties, such as elasticity, because the geometry of the resulting knitted surface is not affected by the properties of the material. On the other hand, the material affects the rigidity of the resulting form.

EXTRACTING THE KNITTING PATTERN
To generate the prototype, the code, which is written in the Processing programming language, is used to create a computational polyhedral model of the desired surface. The creation of the surface starts with the definition of the first polygon; then the user controls the surface generation by deciding the position and the number of edges of the next polygon. Once the computational model is generated, it is printed as a flattened surface to build its physical paper model. The pattern of the crochet prototype (the number and the order of the stitches for each row) is extracted by counting the number of triangles at each vertex on the paper model. This pattern, which is the output of the computational model, is used for crocheting a replica of the desired surface (Figure 6).

REALIZATION OF A FULL SCALE PROTOTYPE
To scale up the production of the crocheted surface, the polyhedral model is also used. In the polyhedral model, if the number of equilateral triangles is increased, the scale of the whole crocheted surface becomes bigger. In order to increase the number of triangles, the loop subdivision method (Figure 7), which was developed by Charles Loop, is applied. This method multiplies the triangles by adding new vertices in the middle of each edge [4]. This polyhedral model with more triangles – created through the loop subdivision method – is used to extract the crochet pattern, which will make the crocheted geometry bigger.
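The refinement step just described can be sketched as a 1-to-4 triangle split in which a midpoint is inserted on every edge. Note that this captures only the splitting mentioned in the text; full Loop subdivision additionally repositions vertices with specific weights, which is omitted in this illustration.

```python
def subdivide(vertices, triangles):
    """One 1-to-4 split: insert a midpoint on every edge, keep vertices in place."""
    vertices = [tuple(v) for v in vertices]
    triangles_out, midpoint_cache = [], {}

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            a, b = vertices[i], vertices[j]
            vertices.append(tuple((x + y) / 2.0 for x, y in zip(a, b)))
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        triangles_out += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, triangles_out

# One triangle becomes four; applying the function again gives sixteen.
v, t = subdivide([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
print(len(t), "triangles,", len(v), "vertices")
```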
Figure 6
The whole process from digital
model to analog crochet.
Figure 7
Loop subdivision methods.

CONCLUSION
This research demonstrates that extracting the crochet rules for each surface is possible through a computational polyhedral model. The crochet technique is more promising than Taimina's crocheted models shown in Figure 2 for generating different variations of hyperbolic geometries. Using hyperbolic geometries in architecture has important potential, since they present an opportunity to achieve self-contained structures. But the conventional way of building them is expensive and results in material wastage because of complex custom-made casting. In this context, the crochet technique could enable building hyperbolic structures by eliminating the need for complex casting. Moreover, the crochet rules that are extracted from the computational polyhedral model can also be used as generator code during further research on the digital fabrication of these complex crocheted surfaces (Figure 8).

REFERENCES
Henderson, DW and Taimina, D 2001, 'Crocheting the Hyperbolic Plane', The Mathematical Intelligencer, 23(2), pp. 17–28.
Osinga, HM and Krauskopf, B 2004, 'Crocheting the Lorenz Manifold', The Mathematical Intelligencer, 26(4), pp. 25–37.
Rajaraman, V (ed) 2003, 'Computer Algorithm', Computer Programming in Pascal, Prentice Hall of India, New Delhi, pp. 1-3.
[1] www.supernaturale.com/articles.html?id=277
[2] www.en.wikipedia.org/wiki/Knitting_machine
[3] www.ted.com/talks/margaret_wertheim_crochets_the_coral_reef.html
[4] www.en.wikipedia.org/wiki/Loop_subdivision_surface
Figure 8
The polyhedral models that
are chosen for prototyping.
3D Regular Expressions - Searching Shapes IN Meshes
NAÏVE IDEA
Instead of an introduction, let us jump directly to the idea behind the search algorithm and see how it can be applied within the architectural workflow: Assume that we have imported a large mesh into a modeling environment, in which all information but the list of vertices and faces is lost. Such a situation can occur during data exchange, entailing two major problems:
1. There is no object identity, i.e. we have to manually select vertices and faces belonging to an object in order to work with it. This can be a challenging task, though, as geometry might overlap (see Figure 1a).
2. In cases where there is more than one instance of the same geometry, a manual approach is highly tedious. Furthermore, the modeling environment has to load identical geometry multiple times into memory, which may prohibit working smoothly with the mesh for lack of performance. What is needed is an approach that can replace instances of the same geometry by a reference to a single one.
Our contribution concentrates on solving the mentioned problems and additionally brings forward a "search and replace" functionality for 3D meshes. In more detail, we present an algorithm that
• can find shapes IN meshes (i.e. sub-meshes), given a search pattern in the way of a set of paths (which we interpret as a succession of angles);
• can restore object identity, thus making it possible to work with a possibly inaccessible collection of vertices and faces;
• can replace found geometry by a reference to a single geometry container;
• can replace found geometry by a different geometry.
Figure 1 gives two results obtained with the approach: In Figure 1a, object identity was restored from a previously inaccessible polygon soup. In Figure 1b, we have searched for the given outline and replaced each occurrence by a different geometry. The latter takes 19 s on a 2.4 GHz single-core processor (C++ implementation, mesh containing 12064 vertices), which is moderately fast. The pattern is given as its own mesh, which is automatically compiled into the search description that our algorithm needs.
In the coming sections, we will describe exactly how the search is done and how the mentioned compilation proceeds (see "Background" and "Elaboration"). We further provide details on the studies conducted (see "Studies"), which have served as a test-bed and ground for discussion concerning the future development of the tool. Before concluding, we also deliver details on the two implementations existing so far (see "Implementation").

Figure 1
Searching and replacing in meshes. (a) Restoring object identity from a polygon soup. (b) Searching and replacing geometry within a mesh.
Table 1
Regular expressions versus 3D regular expressions.
Regex construct (matches e.g.) – becomes in the 3D regex algorithm:
• character sequence, e.g. abc (abc) – angle sequence, e.g. 90° 10°
• repetition (one or more times), e.g. a+ (a, aa, aaa) – match vertices in the same direction
• backreferences, e.g. (a|b)\1 (aa, bb) – match a previously encountered point
"b" which are put into a list ("a" "b"). Implicitly, two more transitions are added to the head and the tail of the list, namely MATCH and FAIL. Both signify a stop condition - in the first case, the algorithm has found the supplied string; in the second case, the algorithm has failed. The list of transitions thus becomes (FAIL, "a", "b", MATCH). An automaton has a transition pointer, placed initially on the second item of the list ("a"). Each transition type has its own way of matching. In the simple case mentioned, we have a character transition, which compares the current character in the string to the one specified. If both are equal, the transition pointer is advanced ("b"). This process is repeated until we finally encounter MATCH. In the case that the criteria specified in the current transition are not satisfied, an error flag is raised and the transition pointer is set to the preceding transition.

Angular search
Angular search is concerned with finding a sequence of angles in a given geometry. Examples of such algorithms are to be found in the automotive industry, in the form of a search tool for mechanical parts in a large CAD database (Berchtold and Kriegel 1997). However, in the concrete case mentioned, only section plans were taken into account, and the whole algorithm was limited to 2D retrieval. In contrast, our algorithm searches in 2D or 3D and additionally takes proportions into account (i.e. surplus to the angular search). Examples of other shape retrieval techniques, which use statistical data instead of angles, are the ShapeSifter tool, which is based upon features such as surface area, volume etc. (Sung, Rea, Corney, Clark and Pritchard 2002), and the Princeton Shape Search Engine, which can compare sketches to sections (Funkhouser, Min and Kazhdan 2003).

ELABORATION
Our search technique describes an angular path using the transition types given in Table 2. The most important one is the angle transition (ANG), which tries to find an edge pair having a given angle, extending from the current point. This is usually followed by a closure transition (CLO), which jumps over any intermediate points lying at the same angle (as mentioned, these do not contain significant information). As further transition types, we have begin and end of a regular expression group (BOR, EOR), backreference (REF) and begin-at (BAT). These are described in due course, using examples to help understanding. As in regular expressions, we also have FAIL and MATCH; the latter reports the points encountered during the whole matching process, i.e. the sub-mesh found.

Figure 2
Transitions. (a) Angle and closure, (b) backreference to point and (c) to edge, (d) branching at a point using the Begin-At transition, (e) branching 50% of an edge, counting from its start, (f) relative edge lengths used to introduce proportions.

We will now walk through the different possibilities for implementing a regular expression
automaton, starting with the 2D case and extending this to 3D. We also give details on the compiler used, which converts a search pattern (a mesh consisting of paths) into an automaton.

The 2D regex automaton
Two different modes are considered in the 2D case: If this is the first ANG to be matched, then an inner angle (0...360°) between edge pairs situated at the current point is found. In all other cases, the edge last taken is already fixed: The automaton then searches for a next edge at the correct angle with reference to the previous one. Depending on the required strictness, angles are compared using a tolerance interval. The same applies to the matching of intermediate points. In case no suitable angle is found, the algorithm goes back to the previous transition and tries the next edge pair.
Figure 2 brings examples of such a 2D search: In Figure 2a, we search for an angle of 125 degrees formed by the current point (shown in the center), the previous point (shown as a tiny rectangle) and a possible next point (shown as a circle). Two cases are distinguished: (Case 1) If there is yet no previous point (because we have just started), we try the combination of all neighbor points twice (neighbor 1 - current point - neighbor 2; neighbor 2 - current point - neighbor 1), since that establishes the marching order of the algorithm. (Case 2) In case that there is a previous point, as shown, the algorithm tries to continue along a non-visited neighbor which has the correct inner angle. Regardless of which of both cases the algorithm has dealt with, the marching direction has been fixed (shown by an arrow). The next transition, a closure, consumes all points of the mesh lying in that direction, which allows us to skip past points that contain no significant angular information.
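To make the transition-list mechanism and the tolerance-based angle matching described above concrete, here is a minimal Python sketch of our own. It is not the authors' C++ or Grasshopper implementation: the mesh is reduced to a closed 2D polyline, closures and backtracking over alternative edge pairs are omitted, and all function names are ours.

```python
import math

def interior_angle(prev_pt, cur_pt, next_pt):
    """Angle (degrees) formed at cur_pt by the edges towards prev_pt and next_pt."""
    ax, ay = prev_pt[0] - cur_pt[0], prev_pt[1] - cur_pt[1]
    bx, by = next_pt[0] - cur_pt[0], next_pt[1] - cur_pt[1]
    dot = ax * bx + ay * by
    la, lb = math.hypot(ax, ay), math.hypot(bx, by)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (la * lb)))))

def match_from(poly, start, step, pattern, tol):
    """Try to match the angle pattern walking from vertex `start` in direction `step`."""
    n = len(poly)
    visited = [poly[start]]
    prev_i, cur_i = start, (start + step) % n
    for target in pattern:                      # one ANG transition per target angle
        nxt_i = (cur_i + step) % n
        ang = interior_angle(poly[prev_i], poly[cur_i], poly[nxt_i])
        if abs(ang - target) > tol:             # FAIL: angle outside the tolerance interval
            return None
        visited.append(poly[cur_i])
        prev_i, cur_i = cur_i, nxt_i
    return visited                              # MATCH: report the points walked so far

def search(poly, pattern, tol=5.0):
    """Try every start vertex and both marching directions (cf. Case 1 above)."""
    hits = []
    for start in range(len(poly)):
        for step in (1, -1):
            sub = match_from(poly, start, step, pattern, tol)
            if sub:
                hits.append(sub)
    return hits

# A unit square: every interior angle is 90 degrees.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(len(search(square, [90, 90], tol=2.0)))   # 8 matches: 4 starts x 2 directions
```

Trying every start vertex in both marching directions stands in for Case 1 above, where the automaton tries both neighbor orderings before the marching order is fixed.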
Figure 3
Tolerance values. (a) Angular
tolerance at points, (b) closure
tolerance for marching for-
ward, (c) percentual tolerance
for matching edges.
Figure 4
3D regex. We need an additional angle at each next point, (a) the angle between the previous normal and the next leg. (b) There is an ambiguity for successive 90° angles. Through comparison of lengths, we can find out whether we have an (c) inside or (d) outside winding.

• Closure tolerance (Figure 3b) specifies what deviations from the marching direction are to be accepted.
• Percentual tolerance (Figure 3c) states the interval around a percentage of an edge's length.
These three values are specified globally, for the time being. However, results obtained with the help of some basic test cases (see "Studies") show that this is a possible weakness of our algorithm, as we cannot easily adapt to sub-meshes that contain parts which require a more fine-grained, localized notion of tolerance. Thus, this part is likely to be extended in future implementations.

The 3D regex automaton
For the 3D case, the 2D regex algorithm is extended. In order to fix a next edge ei+1, we need to take the last two edges ei and ei-1 into account (refer to Figure 4a): The cross product ei x ei-1 gives the normal vector n of the plane in which the last two edges lie. A suitable next edge is one that (1.) has the correct angle between ei and ei+1 (same as in the 2D case) and additionally (2.) has the correct angle between n and ei+1. Because of this, we need at least four points in the mesh to be searched.
An ambiguity arises for cases in which there is an ANG 90° following an ANG 90° (see Figure 4b), since it is not clear whether to march left ("outside") or right ("inside"). This case can be resolved through projection: Let (prev. ANG's b, prevpoint, testpoint, neighbor) be successive points, neighbor being the candidate for marching onward. Then, we have to fix three lengths p, q and r, as follows (refer to Figures 4c and d): p is the length (prev. ANG's b, prevpoint). An intermediate point neighbor' is a point situated at length p on the edge (testpoint, neighbor). The distance (prev. ANG's b, neighbor') is defined to be q. r is the length (prev. ANG's b, testpoint). If q is smaller than or equal to r, we can conclude that we have an "inside" winding (Figure 4c). In all other cases, we have an "outside" winding (Figure 4d). The winding criterion is added to the transition specification and compared at runtime with the mesh, for cases in which successions of 90° angles are present.

The regex compiler
The regular expression description is translated automatically from a search pattern made of paths (ordered edges) into a regex automaton. Briefly outlining the algorithm, we sequentially translate each edge pair into an ANG CLO (first pass). In that process, we ignore co-linear edges. However, these are needed later for checking intersections (second pass): (a) In case the end vertex of the forward edge was already visited, we generate a REF after ANG CLO. There are two distinct cases: If the visited vertex was co-linear (i.e. it was ignored during the first pass), we generate an edge reference (REF %). In all other cases, we surround the edge that leads to the point with BOR..EOR and generate a backreference to that regex group (REF #). (b) In case the start vertex of the forward edge was visited, we generate a BAT before ANG CLO in the previous fashion (co-linear: BAT %, else BOR...EOR BAT #).

STUDIES
Under this section, we examine the studies conducted with the regex algorithm in some detail.
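The two angle conditions used to fix the next edge under "The 3D regex automaton" above can be written down directly. The sketch below is our paraphrase with numpy, not the original implementation; tolerance handling beyond a single interval, the CLO closure and the winding test for successive 90° angles are left out, and all names are ours.

```python
import numpy as np

def angle_deg(u, v):
    """Unsigned angle between two vectors, in degrees."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def next_edge_ok(e_prev2, e_prev1, e_cand, ang_target, normal_target, tol=3.0):
    """Accept a candidate edge e_{i+1} given the last two edges e_{i-1}, e_i.

    Condition 1: correct angle between e_i and e_{i+1} (as in the 2D case).
    Condition 2: correct angle between n = e_i x e_{i-1} and e_{i+1}.
    """
    n = np.cross(e_prev1, e_prev2)          # normal of the plane of the last two edges
    ok1 = abs(angle_deg(e_prev1, e_cand) - ang_target) <= tol
    ok2 = abs(angle_deg(n, e_cand) - normal_target) <= tol
    return ok1 and ok2

# Example: a path that runs along +x, turns to +y, then turns back along -x,
# i.e. both prescribed angles are 90 degrees.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
cand = np.array([-1.0, 0.0, 0.0])
print(next_edge_ok(e1, e2, cand, ang_target=90.0, normal_target=90.0))  # True
```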
Figure 5
Basic Test Cases. (a) Sphere
Matching (b) Proportion
Check. Replacement of (c)
Openings (d) Pyramid steps
(e) Leafs.
In all cases, we have applied a single regex pattern (acting as input) to a scene (also an input), producing an output in the form of a set of selected vertices and edges of the found sub-meshes. During evaluation, the number and type of matched sub-meshes (not all were "correct" in a visual sense, even though the angles and proportions matched) were compared to the type of regex used (ranging from "closely resembling" the searched sub-mesh to more perturbed versions). In a post-step, the replacement algorithm has been used on the selected point- and edge-sets or their connected components, typically placing and orienting an object such that it fit into the resulting bounding box. The latter is quite trivial to extend to arbitrary geometry that would be ill-suited for bounding-box placement, using specific rules stated in a scripted program of the modeling platform.

Basic test cases
During the development of the approach, we have used a set of basic test cases for assessing the algorithm:
• In the simplest case, we have looked at a set of spheres (Figure 5a) with increasing tessellation (4, 8, 16, 24 vertices as base). As a result, we could show that a regex of k ANG transitions can only find objects with a tessellation greater than or equal to k (an 8-ANG regex will find the 8-, 16- and 24-sphere but not the 4-sphere).
• Proportions were tested on a scene resembling a room, simplified as cubes of different sizes (Figure 5b). We were able to find different geometries based on their proportions; however, it must be mentioned that we also had counterintuitive cases where geometry is so proportionally close that one regex intended for a specific type of furniture also returned a different geometry (not a false positive in the classical sense, though). We have furthermore perturbed the regex pattern, and checked that the angular precision can comfortably cope with such errors.
• We tested the 3D "search and replace" algorithm using the colosseum mesh shown in Figure 5c. For every instance of the matched search pattern (shown red in the lower part of Figure 5c), we used the bounding box to locate the replacement. The orientation (heading, pitch, roll) of the replacement was concluded from the found points in comparison with the search pattern. Further tests for "search and replace" were also conducted with a pyramid (Figure 5d) and a gerbera, which was turned into a lily by replacing each leaf (Figure 5e).

Reconstruction of destroyed synagogues
We are currently testing the 3D regex algorithm on large-scale models in the context of virtual reconstruction of destroyed synagogues, mainly stemming from the period 1890-1910 (see Figure 6). Though hundreds of synagogues were in this era
Figure 6
Matching “windows” in the
synagogue case study. (a)
Overview of matches, (b)
arc - real size, (c) proportional
size, (d) non-proportional size,
(e) connected arc - real size,
(f) proportional size, (g) non-
proportional size.
Figure 7
(a) Ornaments, (b) columns,
(c) ornamented arcs.
Figure 8
(a) Pathological door made of
(b) arbitrary sub-meshes.
In that context, we also wish to note that the overall efficiency is highly coupled to the search pattern (and thus: the compiler) used; a regex that discriminates early (i.e. sharp angles first, before coming to rounded forms) has a far better performance than one that considers discriminating factors as a last step. Devising a better regex compiler and searching in an optimized mesh (overlapping edges and points merged) are clearly on our agenda. Also, future versions of the approach might lead away from the idea of matching linearly in an automaton towards matching recursively in the supplied search pattern (i.e. without compilation, but still utilizing the presented concepts).

CONCLUSION AND OUTLOOK
We have presented an algorithm that can search in meshes that have lost all information but their vertices and faces, based on regular expressions and angular search. The benefits of this are threefold: (1.) We can restore object identity, (2.) we can replace multiple instances of the found geometry by a reference to a single geometry container and (3.) we can replace found geometry by an alternative one.
Two case studies frame the presented approach: The "basic test cases", which we applied during development, and the ongoing "synagogue" test cases, which use a collection of models exported from CAD. As discussed, the first results in the latter domain have shown that the complexities associated with "real" data are not to be underestimated: The data is both huge (typically 350K vertices, 450K polygons) and of bad quality (overlapping geometry, unintelligible polygon groups forming connected components, bad tessellation). On top of this, the growth of the model collection in the coming years is expected to be substantial.
A meta-search will therefore become even more important, connected with a pre-step for automated simplification and cleaning of the mesh, which lies on our future agenda. Also, building a library of patterns used for matching, as well as a taxonomy that connects these, would seem a useful extension that is yet too early to undertake, as we have to fix the foundations first. Other tasks that we would like to look into in the future are: data extraction from laser scan data and 3D fractal analysis of architecture based on "finding sub-meshes within sub-meshes".

ACKNOWLEDGEMENTS
This work is based on a diploma thesis (Wurzer, 2004) supervised by Katja Bühler (Vienna UT and VRVis Forschungs GmbH), Peter Ferschin and M. Eduard Gröller (Vienna UT). The synagogue model base is a result of a continuing effort in virtual reconstruction by Bob Martens (Vienna UT) and Herbert Peter (Academy of Fine Arts Vienna), among many others participating in that effort.

REFERENCES
Berchtold, S and Kriegel, H-P 1997, 'S3: Similarity Search in CAD Database Systems', Proceedings of the SIGMOD Conference, May 13-15, Tucson, USA, pp. 564-567.
Forta, B 2004, Sams Teach Yourself Regular Expressions in 10 Minutes, Sams, Indianapolis.
Funkhouser, T, Min, P and Kazhdan, M 2003, 'A Search Engine for 3D Models', ACM Transactions on Graphics, 22(1), pp. 83-105.
Peter, H 2001, Die Entwicklung einer Systematik zur virtuellen Rekonstruktion von Wiener Synagogen, Diploma Thesis (Vienna University of Technology).
Sung, R, Rea, H, Corney, JR, Clark, DER and Pritchard, J 2002, 'Shapesifter: A retrieval system for databases of 3D engineering data', New Review of Information Networking, 8(1), pp. 33-53.
Wurzer, G 2004, 3D Regular Expressions: Searching IN Meshes, Diploma Thesis (Vienna University of Technology).
A Computational Method for Integrating Parametric
Origami Design and Acoustic Engineering
INTRODUCTION
Design for a concert hall includes acoustic engineering and architectural design (i.e. aesthetically pleasing design that satisfies complex architectural conditions, such as concert activities, building regulations, structure, construction processes, budgets and so forth). Computation is often used for generating all possible geometries fulfilling those various architectural constraints. However, the most difficult part of the design process is to choose the best geometric form among the resulting various alternatives.
This paper proposes a computational method for integrating parametric origami design and acoustic engineering to find the best geometric form of a concert hall. First, we discuss the limitations of conventional collaborations between architects and acoustical engineers. Second, to overcome these limitations, we develop an interactive design method and show its application to a concert hall design project in Japan (the hall will be completed in 2014). The design method consists of three interactive subprograms: a parametric origami program, an acoustic simulation program, and an optimization program. Finally, we describe the advantages of the proposed method, including the ease with which it visualizes engineering results obtained from the acoustic simulation program, and the final optimized geometric form it provides to satisfy both architectural design and acoustic conditions.
Figure 1
A diagram of the interactive
design method.
Because the method efficiently manages fundamental factors underlying architectural forms, it can provide a design framework in which architectural design and acoustic engineering are integrated.

THEORETICAL BACKGROUND
In the design process for a concert hall, architects collaborate with acoustical engineers. First, architects develop a geometric form and then acoustical engineers analyze the acoustic efficiency of the proposed form using their simulation program. However, there are relatively few exchanges between them. As a result, in the conventional method architectural optimization and acoustical optimization tend to be rather independent operations, and they are not always coordinated. For instance, acoustic optimization does not always take into account complex architectural conditions or the architects' design intentions, whereas architects do not always utilize the informative data provided by the acoustical engineers. To bridge this gap, we propose a computational design method for integrating architectural design factors and acoustical engineering factors.
In addition, we want to develop an objective method in which acoustic data derived from a simulation process are efficiently utilized. In this connection, Leach (2009) mentioned the following: "Within contemporary architectural design, a significant shift in emphasis can be detected – a move away from an architecture based on purely visual concerns towards an architecture justified by its performance. Structural, constructional, economic, environmental and other parameters that were once secondary concerns have become primary – are now being embraced as positive inputs within the design process from the outset". Our proposed interactive design method can deal with acoustic parameters as one of the primary design materials, as well as origami parameters and design intentions (Figure 1).
With the recent improvement of computer performance, simulation technology has improved significantly. As a result, it has become easy to visualize the state of the acoustic parameters. What makes our method intriguing is that those parameters can find unpredictable forms which meet both acoustic conditions and design intentions.
Figure 2
The parametric origami
program can transform a
flat sheet of paper into a
geometric form through
various folding techniques:
basic techniques (Yama-
ori, Tani-ori, Nakawari-ori,
Kabuse-ori), advanced tech-
niques (Jyabara-ori, Miura-ori,
Hira-ori and so on).
EXISTING RESEARCH
In the existing studies, the use of computational methods for designing concert halls is limited to performing two tasks: acoustic simulations and the generation of all possible geometries satisfying various architectural constraints. However, there are few methods for choosing the best geometric form among the resulting numerous alternatives.
In this paper, we apply a computational method not only to acoustic simulation and the generation of various possible geometries but also to the determination of the best geometric form satisfying both the architectural design and acoustic requirements.

CONCERT HALL DESIGN PROJECT
In this paper, we apply the interactive design method to a concert hall design project in collaboration with SUEP architects (an architectural office) and Nagata Acoustics (an acoustical consulting firm). The design method consists of three interactive subprograms: a geometric form-generating program, an acoustic simulation program, and an optimization program.

Geometric form-generating program: the parametric origami program
The first subprogram, the parametric origami program, adopts the idea (proposed by the SUEP architects) that a form is generated by folding a sheet of paper - the traditional Japanese art called 'origami'. The program can transform any surface into a geometric form using the basic folding parameters of the origami folding system: folding lines, folding depth, folding width, folding angles and so on (Figure 2). These are mutually constraining (i.e., not independent) parameters.
The objective of this parametric origami program is to develop a method for finding combinations of origami parameters which generate geometries fulfilling complex architectural constraints (Figure 3). The computer technology enables us to test every combination of parameters in order to find possible designs that meet certain requirements. In this process, designers are no longer making a single geometry but finding design parameters which determine a final form (Figure 4).

Figure 3
A diagram of generating architectural form fulfilling complex architectural constraints.

Another feature of this program is that if there is no combination of parameters that meets every requirement, then the program provides an alternative combination.
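The exhaustive testing of parameter combinations described above can be pictured as a simple grid search over discretized folding parameters. The parameters, ranges and constraint check below are placeholders of our own invention; in the actual program they are tied to the origami geometry and the architectural and acoustic requirements.

```python
from itertools import product

# Hypothetical discretized ranges for a few origami parameters (placeholders).
folding_depths = [0.5, 1.0, 1.5]          # metres
folding_widths = [2.0, 3.0, 4.0]          # metres
folding_angles = [20, 30, 40, 50]         # degrees

def satisfies_constraints(depth, width, angle):
    """Stand-in for the architectural checks applied to each candidate form."""
    hall_height = 10 + depth * 4           # toy relation between parameters and the form
    return hall_height <= 15 and width / depth >= 2 and 25 <= angle <= 45

candidates = [
    (d, w, a)
    for d, w, a in product(folding_depths, folding_widths, folding_angles)
    if satisfies_constraints(d, w, a)
]
print(len(candidates), "parameter combinations meet the requirements")
```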
Figure 4
Possible form variations.
In the literature, a few studies deal with computational origami methods for architectural design. Most of them follow a strict origami rule such that a single sheet of paper is folded into a given polyhedral surface without any cut. However, in architectural design processes, this rule sometimes disturbs the design intention or other architectural performances. To overcome this limitation, the method enables us to balance parameter weights in an optimization process. That is, the method allows us to cut a sheet of paper or to loosen architectural constraints, in the process of balancing between origami rules, acoustic performance and design.

Acoustic simulation program
The second subprogram is an acoustic simulation program, which deals with geometric acoustics, i.e. sound propagation in terms of straight rays.
There are some existing software packages which can simulate sound propagation and geometric forms interactively. However, they can simulate only the distribution of direct sound, which is not enough for the sound optimization of a concert hall. To overcome these limitations, we developed an acoustic simulation program which visualizes sound propagation in a three-dimensional space over time in three ways: by arrows originating from a sound source at an arbitrary point in a hall; by the distribution of reflected sound; and by the distribution of reverberating sound (Figure 5).
These two subprograms, the acoustic simulation program and the parametric origami program, run interactively in the following manner.
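Since the simulation works with geometric acoustics, each sound ray travels in a straight line and reflects specularly off the hall's surfaces. The sketch below shows only this elementary reflection step; it is our illustration of the principle, not the authors' simulation program.

```python
import numpy as np

def reflect(direction, normal):
    """Specular reflection of a ray direction off a surface with the given normal."""
    n = normal / np.linalg.norm(normal)
    return direction - 2.0 * np.dot(direction, n) * n

# A ray travelling forward and downward hits a horizontal floor (normal +z).
incoming = np.array([1.0, 0.0, -1.0])
print(reflect(incoming, np.array([0.0, 0.0, 1.0])))   # -> [1. 0. 1.]
```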
Figure 5
Left: reflected sound simula-
tion, Right: reverberating
sound simulation.
Figure 6
Distribution of sound density.
Figure 7
Part of the various possible
geometries of a hall according
to the parameters derived
from the architectural condi-
tions, and the results obtained
by the acoustic simulation
program.
Figure 8
The results obtained by the
quadrat method
Figure 9
Left: the final model of the
concert hall generated by
modified Miura-ori taking
account of various folding
parameters. Right: Details of
the final model of the concert
hall.
With the proposed computational method, about two hundred possible geometries were generated, among which the final geometry of the concert hall was chosen through the optimization process.
The folding pattern of the final geometry is based on Miura-ori, consisting of concave polyhedral surfaces. At first glance, the final geometry looks simple, but it is complex in the sense that the folding depth and angle are delicately controlled (Figure 9).

CONCLUSION
This interactive relationship enables us to choose the best combination of parameters satisfying both architectural design and acoustic requirements among numerous possible forms. Parametric design is often used to explore complex geometries, but in this method it is used to promote complex interactions between collaborators.
Terzidis (2006) remarked on the form-making process as follows: "architects and designers believed that the mental process of design is conceived, envisioned and processed entirely in the human mind and that the computer is merely a tool for organization, productivity, or presentation". However, the computational form-finding process allows us to take account of acoustic parameters as one of the primary design materials, as well as the designer's sense. Computational technology is not only useful for improving or automating design processes, but it is also valuable for generating new possibilities of architectural form by shifting from a form-making process to a form-finding process.

REFERENCES
Diggle, PJ 2003, Statistical analysis of point patterns, Arnold Publishers, New York.
Egan, MD 1941, Architectural acoustics, J. Ross Publishing, New York.
Leach, N 2009, 'Digital Morphogenesis', Architectural Design, 79(1), pp. 34-37.
Reas, C and McWilliams, C 2010, Form+Code in design, art, and architecture, Princeton Architectural Press, New York.
Tachi, T 2010, 'Freeform variations of origami', Journal for Geometry and Graphics, 14(2), pp. 203-215.
Takenaka, T and Okabe, A 2010, 'Networked coding method for digital timber fabrication', Proceedings of the ACADIA Conference, Calgary/Banff, Canada, pp. 390-395.
Terzidis, K 2006, Algorithmic Architecture, Architectural Press, Oxford.
A Novel Method for Revolved Surface Infrastructures
Gökhan Kınayoğlu
Bilkent University, Turkey
http://www.tectonicon.com
gokhan.kinayoglu@bilkent.edu.tr
Abstract. This paper presents an algorithm for the formation of infrastructures for single
or double curved revolved surfaces through standardized parts. Any revolved surface
can be generated with only two types of parts, interconnected by a ribbed structure
technique. The proposed method differs from the accustomed orthogonal rib structures by
the varying angle in-between coupling parts. The algorithm can be customized through
several parameters like the number, width of parts and thickness of the material used
for the infrastructure. The algorithm also offers an advantageous nesting pattern with
minimum loss of material regardless of the revolved surface cross-section.
Keywords. Revolved surface; standardization; ribbed structure; contouring; nesting
pattern.
INTRODUCTION
Following the design process, the transformation of a complex shape from the digital medium of computer software into physical reality through material existence requires further computation and rationalization (Griffith et al., 2006). The rationalization process, depending on the geometrical complexity of the intended shape, aims for the standardization of parts for possible ways of manufacturing, while lowering the manufacturing costs at the same time. Therefore, the process of rationalization becomes an inevitable part of computational design, as long as the target shape is a non-standard surface and requires non-standard parts for its constructability. By non-standard surface, any surface - single or double curved - that cannot be built through conventional manufacturing and construction techniques, and requires a computational design process for both rationalization and manufacturing, is indicated. In the case of non-standard surfaces, the number of units may end up in a large number of unique variations, each requiring an increased degree of computation and a heavy process of production. Therefore, rationalization is required to transform the non-standard parts into standardized ones, in terms of their geometry, variability and economy. Besides rationalizing the surface parts, the infrastructure that builds up the surface also needs a process of computation and rationalization (Figure 1). It is the latter, the infrastructure of a surface, that this study is going to focus on specifically.
This study is an attempt to devise a methodology for producing single or double curved surfaces through standardized parts. This kind of approach can be considered the primary step for enhancing the current computational processes and bringing forth a novel method for the fabrication of single or double curved surfaces. As Branko Kolarevic states, the production of a surface, whether single or double curved, can be realized through "contouring, triangulation, use of ruled, developable surfaces and unfolding" (Kolarevic, 2003, p. 43).
Figure 1
An example of the infrastruc-
ture for a spherical revolved
surface.
The devised method can be regarded as an innovative contribution to the contouring technique, in which the slices are not run parallel to each other in two orthogonal directions.
As the starting point of this study, revolved surfaces have been chosen for their constant curvature along the rotation axis. The constancy of curvature along the rotation axis enables the standardization of parts and, apart from the cross-section of the revolved surface, the algorithm generates only two types of parts, which form the final infrastructure. The method introduced in this paper focuses mainly on small-scale infrastructures, which can be manufactured from sheet materials like cardboard, acrylic, fiberboard or metal.
The final infrastructure can be denoted as a diagrid system, which is widely used in architectural projects, especially in skyscraper structures. The environmental and structural factors make the diagrid system suitable for use in high-rise buildings. Therefore, another objective of this paper can be considered the investigation of possible extensions of the diagrid system through a smaller-scale implementation.

RELATED WORK
Apart from the orthogonal ribbed structures, which have been studied and implemented extensively, studies made on non-orthogonal ribbed structures have been analyzed. Agnieszka Sowa's study at ETH Zurich explores the possibility of separating parts in relation to each other, for going beyond the scalar limitations of the material and manufacturing techniques (Sowa, 2004). Sowa's method focuses on generating and optimizing a cubic structure as an
instance of many possible forms, through a number of cross-sections integrated through a ribbed structure, by using an algorithm. The parts are manufactured from planar timber elements with a constant thickness. Similar to this study's approach, the variation of angles between the parts has not been considered in Sowa's study; instead, all parts were manufactured by a two-axis CNC milling machine, hence the perpendicular slits, apart from the angle of intersection. The main aim of the study is to devise an algorithm that is capable of generating the intersecting ribs, optimizing the structure through some parameters, separating each rib where necessary and nesting the manufacturing drawings for generating the cutting scheme.
In Kenfield Griffith, Larry Sass and Dennis Michaud's study of generating a strategy for irrational building design, the contouring technique is adopted for irrational surfaces, generating horizontal and vertical ribs, with every part different from each other (Griffith et al., 2006, p. 467). While the horizontal ribs are located at differing levels with equal intervals and remain parallel to each other, the vertical ribs are generated perpendicular to the surface, ending up with a more complex arrangement. The final structure resembles a waffle slab system used in concrete constructions, projected onto a curvilinear irrational surface. The perpendicularity of horizontal and vertical ribs solves the problem of slit angle variation, thus enabling the manufacturing of the components on a two-axis milling machine. However, it should be noted that the variance among shapes creates a highly irregular nesting pattern.
An ongoing study by Yuliy Schwartzburg and Mark Pauly at Ecole Polytechnique Federale de Lausanne, Switzerland, explores the possibility of constructing any shape through intersecting planar shapes (2013). A devised algorithm searches for solutions by taking into consideration the cases of intersection and optimizing the intersection angles and positioning of every rib through assembly limitations, slit constraints and material qualities. The study also takes into account the varying angle in-between the parts and comes up with a solution of a predefined maximum amount of variation in the intersection angles. Therefore, the firmness of the structure is attained when the assembly of the parts is completed.

ALGORITHM
The algorithm was devised through a script written in Autodesk Maya's Maya Embedded Language (MEL) and later re-implemented in Rhinoceros and the Grasshopper plug-in. It produces the production drawings of the parts of the infrastructure, and they can be manufactured by a two-axis router, laser-cutter or water jet, owing to the planar quality of the parts at the current stage of the study.
The algorithm starts with a cross-section curve on the XY plane for the revolved surface, where the Y-axis is the axis of revolution; hence the farther the curve is from the Y-axis, the larger the revolved surface will be. The cross-section curve should not intersect the Y-axis, for the preservation of the tubular quality of the revolved surface. Additionally, because of the limitations of the infrastructure, the cross-section curve should pass the horizontal line test, i.e. a line in the X direction should intersect the cross-section only once at any point, whereas a line in the Y direction can intersect the cross-section any number of times. By revolving the cross-section around the Y-axis for 180°, a revolved surface is formed.
The surface is then intersected by a plane lying in the YZ plane and inclined through the Z-axis. The degree of inclination is the first parameter of the infrastructure. To guarantee that the infrastructure covers the whole revolved surface, the intersection points of the plane and the revolved surface at the top and bottom points should be checked, and the angle of inclination may be lessened in order to fully intersect the revolved surface.
The resulting curve of the intersection is copied towards the Y-axis by a certain distance, the parameter for the width of all infrastructure parts, and the two intersecting curves form the main constructive element by lofting to generate a surface. The resulting surface is arrayed radially N times around the Y-axis with a total angle of 360°, where N is the third parameter of the infrastructure.
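The radial array just described, combined with the mirroring step described next (one lofted rib arrayed N times over 360° around the Y-axis and then mirrored across the XY plane, for 2*N planar elements in total), can be sketched as a set of placement matrices. The sketch below only generates these transforms; the actual surface construction is done in Maya/Rhino, and all names are ours.

```python
import numpy as np

def rotation_about_y(deg):
    """4x4 rotation matrix about the Y-axis (the axis of revolution)."""
    a = np.radians(deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

MIRROR_XY = np.diag([1.0, 1.0, -1.0, 1.0])      # mirror across the XY plane (z -> -z)

def part_transforms(n):
    """Placement matrices for 2*n parts: n rotated copies plus their mirrors."""
    transforms = []
    for i in range(n):
        rot = rotation_about_y(360.0 * i / n)   # radial array over a full circle
        transforms.append(("original", rot))
        transforms.append(("mirrored", MIRROR_XY @ rot))
    return transforms

print(len(part_transforms(12)))   # 24 = 2*N placements, but only two part geometries
```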
Figure 2
Three examples of the infra-
structure.
By mirroring the resulting surfaces across the XY plane, the infrastructure is formed by planar elements with a total number of elements of 2*N.
Beyond the computation of the slit dimensions, the thickness of the material is also useful for visualization purposes (Figure 2). By extruding the parts with the thickness of the material, the infrastructure is basically formed, except for the slits.
Each part consists of a number of intersections depending on the formal characteristics and size of the cross-section, the degree of the inclination angle and the number of elements. The inclined nature of each part results in varying intersection degrees, but they are limited to the number of intersecting parts, i.e. if each part has 7 intersections, the model will have a maximum of 7 varying angles in total. To determine the position and size of each slit, the angle between the parts and the thickness of the material are used. The diagram shows the mathematical relation between the intersection angle and the slit width (Figure 4). With the intersection angle decreasing in-between the parts, the slit width increases, and a 90° intersection produces a slit width equal to the material thickness.

ADVANTAGES, LIMITATIONS AND FUTURE WORK
When the same revolved surface is produced through orthogonal ribbed structures, in which parallel horizontal and axial vertical sections are used to generate the infrastructure, the method introduced displays some advantages. First of all, the devised algorithm guarantees that there will be only two types of parts to generate the infrastructure. However, in orthogonal ribbed structures, while the vertical sections will be identical due to the revolving quality of the surface, the horizontal sections differentiate according to the cross-section, all being circular. Additionally, because all of the slits are parallel to each other on every part, the stability of the infrastructure depends on frictional forces, or the infrastructure requires additional elements to fixate the parts; whereas the introduced method has the advantage of interlocking itself due to the varying directions of each slit.
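The slit-width behaviour just described (a slit equal to the material thickness at a 90° intersection, growing wider as the angle decreases) matches the standard relation t / sin(theta) for two planar parts of thickness t crossing at angle theta. The exact formula used by the algorithm is the one given in Figure 4; the sketch below is our reading of the stated behaviour, not a reproduction of that diagram.

```python
import math

def slit_width(material_thickness, intersection_angle_deg):
    """Width of the slit needed for two planar parts crossing at the given angle.

    At 90 degrees the slit equals the material thickness; smaller angles
    require wider slits, matching the behaviour described in the text.
    """
    return material_thickness / math.sin(math.radians(intersection_angle_deg))

for angle in (90, 60, 45, 30):
    print(angle, round(slit_width(3.0, angle), 2))   # 3 mm sheet material
# 90 -> 3.0, 60 -> 3.46, 45 -> 4.24, 30 -> 6.0
```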
Moreover, the standardization of parts also allows for perfect nesting in the cutting scheme regardless of the shape of the final part or the cross-section of the surface (Figure 3).
Since the structural quality of the infrastructure is not crucial in small-scale implementations and material efficiency has a higher priority over structure, in the algorithm the intersection curve is not offset to attain a constant width along the curve but instead copied towards the Y-axis. As a direct outcome, the parts have varying width throughout, but on the other hand they can be nested in the production drawings regardless of the initial cross-section of the revolved surface. Additionally, as the nesting pattern also allows a lossless configuration of the parts, routing time is also decreased by the shared edges in between the parts. As long as the manufacturing techniques allow, instead of cutting each part separately and producing left-over material in between the parts, the parts are arranged perfectly without any loss of material. However, it should also be noted that this kind of approach may bring forth structural inadequacies, owing to the uncontrolled variance in the material widths. Nevertheless, the nesting algorithm may be updated depending on the structural necessities of each case, still protecting the advantages of the lossless nesting pattern.
The perfect nesting quality of the parts results in high efficiency in terms of material use when the number of parts being manufactured increases. Therefore, an infinite number of parts manufactured from a roll of steel leads to 100% material efficiency, which is a distinctively advantageous feature when mass fabrication is considered.
The assembly process follows a relatively simple system of formation. There are only two types of parts, inner and outer. Parts are interconnected to each other through corresponding slits, i.e. first slit to first, second to second and so on. Owing to the varying direction of the slits, the assembly process is a bit problematic at the beginning, when only a small number of parts are assembled. With the increasing number of parts, the infrastructure becomes more interlocked, and at around the implementation of 70% of the parts, the interlocking becomes complete. It should be noted that the elasticity of the material used plays an important role for the different directions of the slits. While assembling the infrastructure, the parts need to be bent to a certain degree for joining them. This property also ensures the interlocking of the infrastructure. Bendable elastic materials like acrylic, medium-density fiberboard (MDF), spring steel or cardboard suit the infrastructure well.
If two-axis manufacturing techniques and thick materials are used for the infrastructure, the surface quality will be highly coarse. This can be better visualized by increasing the thickness of the material in the algorithm (Figure 4). Additionally, the ribbed structure technique has a problematic slit connection in cases other than when the parts are intersected perpendicularly. Both problems may be overcome through the use of thin materials and further manufacturing techniques. The algorithm is capable of generating the exact three-dimensional model for precise interconnection between parts, resulting from the varying degrees of intersection. Therefore, when the parts require a higher degree of precision, a five-axis milling machine becomes more adept. Instead of planar pieces, a five-axis milling machine will be able to incorporate the surface curvature of the revolved surface into the parts through thicker materials, besides the angle variations in the slit connections.
Another possible manufacturing method, and maybe the most suitable and optimized one for the infrastructure, is the use of the injection molding technique. As the number of parts required for the implementation of the infrastructure is limited to only two, in cases of mass-production of a specific cross-section, the manufacturing of two molds will be enough. Together with the decreased manufacturing times and costs, the precision of the final infrastructure is achieved through the use of molding.
Since in its current formation the model does not offer any structural properties, there should be further studies on the optimization of parameters in terms of structural criteria. The number of elements, width and thickness of parts, and degree of inclination should be tested and analyzed structurally to find out the structural advantages and deficiencies of the method.
Figure 3
Visual representation, param-
eters and nesting patterns of
the previous three examples.
Figure 4
Slit width calculation formula.
Alterations to the algorithm may be introduced according to the findings about the structural behavior of the infrastructure. Additionally, the setbacks of the varying width may be studied as an outcome of the structural findings, together with nesting possibilities apart from the algorithm's current potentials. Consequently, material limitations may be overcome and larger prototypes and implementations may be further achieved. For attaining larger-scale infrastructures, Sowa's techniques may be adopted (2004). By dividing each part into sub-parts and connecting them with additional elements, dimensional restraints may be extended.

CONCLUSION
Apart from the formal qualities of the form, standard or non-standard, the part can be rationalized through designed methods and techniques. The proposed algorithm shows an example of the rationalization of a non-standard surface through computational processes, attaining standardized parts. In a larger scope of context, this study proposes an inquiry into the relation between individual part and overall form, together with their integral relation in between, which cannot be separated in any phase of the design process. This attempt reflects an understanding of an approach which prioritizes the potentials of rationalization from the initial steps of the design process, while also taking material considerations into account.

REFERENCES
Griffith, K, Sass, L and Michaud, D 2006, 'A strategy for complex-curved building design: Design structure with bi-lateral contouring as integrally connected ribs', SIGraDi 2006 [Proceedings of the 10th Iberoamerican Congress of Digital Graphics], Santiago de Chile, Chile, pp. 465-469.
Kolarevic, B (ed) 2003, Architecture in the Digital Age: Design and Manufacturing, Spon Press, New York.
Schwartzburg, Y and Pauly, M 2013, 'Fabrication-aware design with intersecting planar pieces', Eurographics, 32(2), pp. 317-326.
Sowa, A 2004, Generation and Optimization of Complex and Irregular Construction/Surface: On the Example of NDS2004 Final Project, Postgraduate studies final thesis, ETH Zurich, ETH Press, Zurich.
Ruling Im/Material Uncertainties
Abstract. Visual rules are powerful in loosely capturing the impact of material behavior
on form in designer’s hands-on experimentation. They present a first step to translate
the causal relations between material and form to computation without sacrificing the
uncertainties in the designer’s interaction with the materials. This study investigates
how to model the relation between material and form with visual rules so that the
model embodies some of the phenomenological aspects of reality, rather than merely
reproducing it.
Keywords. Digital materiality; physics-based modeling; abstractions; visual schemas;
shape studies.
INTRODUCTION
Recent developments in programming and digital production technologies create a new consciousness within the architectural profession, yielding new design methodologies. The high level of product precision in digitally calibrated fabrication requires a high level of precision in design representation. This numerical certainty finds its expression in mechanistic design approaches that make use of quantifiable, solid data for performance and optimization. However, these approaches mostly adopt the limitations of existing computational techniques instead of exploring design beyond the limits of the quantifiable phenomena.
Digital representations that are constructed with mathematical descriptions of the physical object are only capable of reproducing some part of the reality. The description of a reality limited to its known finite qualities is insufficient for the designer who alters this reality in direct and indirect ways throughout the design process. The designer either interacts with the materials on an immediate level or builds a system of different materials and lets them interact with each other while s/he acts as the observer and the controller of this process. The alteration of the designed form based on these interactions is phenomenological, in that it involves the interpretation of various instances of the materials that are "transcomputable" (Glanville, 1998). In this paper we focus on incorporating the founding relations (Rota, 1997) of form and material to address the interpreted in design representations through visual schemas (Stiny, 2011).
In the following paragraphs we discuss digital and physical experiments. We review preliminary studies firstly in the physical modeling of plaster in elastic formwork and secondly in visual abstractions of this process in different digital modeling approaches. We then develop and present a set of visual schemas to illustrate the physical processes in material based transformations.
Figure 1
SANAA, House, New York, USA,
2008. Image source: [1].
Figure 2
Pipeline Model of a tree:
Diagrammatic representation
showing the progress of tree
growth. Image source: (Shino-
zaki, 1964).
Figure 3
Squeeze, pull, twist, push.
Figure 4
Grasshopper model represent-
ing liquid transformations.
points, the density of the lines locally changes (Figure 4). The actions of the hands are to a certain degree abstracted in attractor points. The smoothness of the movement caused by the pressure of the liquid on the elastic surface is acquired with a cosine function. The shape of the bump on the surface could be determined by changing the parameters of the function. In this case the key features of the transformation process that are used to construct abstract models are the smoothness of the liquid movement and the flexibility of the elastic surface.
Rather than being the exact reproduction of the physical models, the digital model is meant to simulate the interaction with the modeled object. In order to achieve similarity, the manipulation of the digital model must to some degree correspond to the physical material transformations. In the physical environment the designer is able to touch the materials that s/he is working with; this is a direct way to interact with the materials. The commonly used method of attractor points in digital models is a way to "touch" the models in the digital medium. Still, it happens on a symbolic level, and is not as straightforward as it might appear. In order to change the shape of the model by moving the points around the scene, first a mathematical description of the change needs to be made and then the attractor points' relation to the change needs to be defined. Furthermore, if the user of the plug-in is not very familiar with the analytical descriptions of shapes, s/he might have difficulty in controlling the shape changes. Throughout the design process, every time the designer changes the model, the model is reevaluated based on the design objectives. This kind of evaluation comes mostly from intuitive aspects of seeing, and as Stiny (2006) suggests, "seeing and drawing work perfectly without rational (analytic) thought". In computation, analysis is valuable when coupled with seeing. Hence, our analysis aims to sustain the phenomenal aspects in the designer's interaction with the material.

Cellular Interaction
In subsequent investigations, conducted as group work in the graduate studio, plaster-filled balloons are put in a rigid mold and their interactions with each other and the surrounding rigid mold are observed.
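The cosine-based abstraction mentioned at the beginning of this section, where attractor points stand in for the hands pressing on the elastic surface, can be reconstructed minimally as a smooth falloff applied to grid points. This is our own sketch with hypothetical parameter names, not the studio's Grasshopper definition.

```python
import math

def bump(points, attractor, radius, height):
    """Raise points near the attractor with a cosine falloff (smooth 'liquid' bump)."""
    moved = []
    for x, y, z in points:
        d = math.hypot(x - attractor[0], y - attractor[1])
        if d < radius:
            # Cosine falloff: full height at the attractor, zero at the radius.
            z += height * 0.5 * (1.0 + math.cos(math.pi * d / radius))
        moved.append((x, y, z))
    return moved

# A flat 5x5 grid of points pushed up around an attractor at its centre.
grid = [(x, y, 0.0) for x in range(5) for y in range(5)]
deformed = bump(grid, attractor=(2.0, 2.0), radius=2.5, height=1.0)
print(max(p[2] for p in deformed))   # 1.0 at the attractor point
```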
Figure 5
Physical models.
Different experiments are held changing the parameters, particularly the number of units in the rigid mold and the amount of fluid in one balloon. The circumstances that caused the shape change are examined in connection with the morphology of the end products (Figure 5).
Two digital models for the project were generated with the Grasshopper and Softbody plug-ins of Rhino and 3dsmax respectively. The end results were very similar, as shown in Figure 6. The Grasshopper model was generated in a top-down manner by dividing a whole into its parts. In this case the modules are handled as parts of a whole defined by the geometry of the rigid mold and the division rules of the Voronoi tool. In Softbody each module is treated as a consistent whole with predefined properties. Their interaction with each other and the rigid mold is simulated through the behavior of each module.
The Softbody plug-in of 3dsmax is a physics-based modeling environment, and its interface allows the user to control the material behavior of the modeled objects by changing parameters like stiffness, damping, friction and gravity (Figure 7). Physics-based modeling approaches like these have proven to be useful when building lifelike representations of the materials within the design process. Principally, a physics-based modeling environment operates on the basis of a simulation algorithm developed for the physical process it represents, and the user interacts with the model through visual outputs.

Figure 6
Digital models: from left, Grasshopper Voronoi model, 3dsmax Softbody, negative space model.
Figure 7
Interface of the Softbody
plug-in.
Figure 8
Softbody model.
Visual schemas play an important role in physics-based modeling approaches. For example, the Softbody plug-in of 3dsmax simulates the elasticity of objects through the principles of particle physics, in that the surface of a "softbody" is defined with points which are interconnected with hypothetical springs. With the help of this surface abstraction it becomes possible to model the elastic deformation of materials (Figure 8).

VISUAL SCHEMAS OF PHYSICAL PROCESSES
The models above are attempts at representing material properties that impact form. They are purposefully incomplete exercises that serve to analyze material properties and to see where digital models may fall short. As seen above, each plaster unit is shaped differently. This unpredictable variability in material transformation is a challenge for representations that are expected to support it. We propose visual rules to achieve this. Stiny's (2011) definition of general transformation rules and the unrestricted rules suit the variability in question here. An initial shape schema is crucial, with parts that could be altered to generate different products of the transformation process. Stiny's (2011) examples of Goethe's Urpflanze and Semper's Urhütte are both archetypal schemas for a class of objects that are varied and each with definite parts. Our question has been how we can formalize visual schemas for objects without definite parts, such as the plaster-filled balloons. It is insufficient to observe just the products of the transformation for the formalization of such a schema. An examination of the conditions that bring about the transformation is also necessary (Figures 9, 10).
The rules are derived by looking at the transformation process, and at the relations between the properties of the components.
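The "points interconnected with hypothetical springs" abstraction described above can be illustrated with a single explicit integration step of a generic mass-spring system, in which stiffness, damping and gravity appear as parameters much as in the Softbody interface. This is a generic sketch of the principle, not the plug-in's actual solver.

```python
import numpy as np

def step(positions, velocities, springs, rest_lengths,
         stiffness=50.0, damping=0.9, gravity=-9.81, dt=0.01):
    """One explicit Euler step of a point-and-spring ('softbody') system."""
    forces = np.zeros_like(positions)
    forces[:, 2] += gravity                                  # gravity on every point
    for (i, j), rest in zip(springs, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        f = stiffness * (length - rest) * d / length         # Hooke spring force
        forces[i] += f
        forces[j] -= f
    velocities = damping * (velocities + dt * forces)        # damped velocity update
    return positions + dt * velocities, velocities

# Two points connected by one spring, slightly stretched.
pos = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
vel = np.zeros_like(pos)
pos, vel = step(pos, vel, springs=[(0, 1)], rest_lengths=[1.0])
print(pos)
```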
Figure 9
Multiple products of the ‘cel-
lular interaction model’.
Figure 10
Cellular interaction model:
neighboring components.
Figure 11
Different processes leading
to different morphologies: a)
cellular interaction, b) hand as
a mold, c) plaster in a balloon:
fully filled - no deformation,
d) plaster in a balloon: half
filled, e) plaster in a balloon:
half filled.
properties of the components. Key features among the end products of the transformation process are identified to construct abstract schemas. These common features, when varied, are what make the products unique instances. Figure 11 shows the results of the different transformation processes for plaster in a balloon. It is clearly seen how both materials circumstantially take the shape of each other. For example, in the case of the half-filled balloons shown in Figures 11d and 11e, the plaster takes the shape of the creases of the balloon, whereas in the 'hand as the mold' model in Figure 11b, with the balloon squeezed, the liquid plaster stretches the balloon, rushing away from the pressure of the hand.

To find out which of their parts make them distinguishable as the products of different processes, these parts first need to be determined. This can be done simply with Hoffman and Richards' (1983) smooth surface partitioning rule (Figure 12). According to this rule, human vision enables recognition of objects by dividing them into their parts. The minima rule states that this partitioning takes place based on the discontinuities on a surface (Hoffman and Richards, 1983). With this method we divide the surface of the model in Figure 11a as shown in Figure 13. Parts are recognizable at the contact areas with the other components, and they are either concave or flat (Figure 14).

The geometry of the contact areas is determined by the material properties of the components in the system. When two plaster-filled balloons come into contact with one another, the more rigid one imposes its shape on the other. The rigidity in this case is determined by two factors: the amount of liquid in the balloon and the physical state (liquidity) of the plaster at the moment of contact. Another factor that specifies the geometry of one object is the number of objects it is in contact with. When we mark the differentiating surface parts on the contact areas with the surface partitioning rule, the polyhedron-like structure of the remaining parts of the surface is revealed (Figure 14). The more tightly the plaster-filled balloons are packed in a rigid mold, the more angular the appearance of this polyhedron-like structure becomes. The polyhedron-like structure and the surface differentiations at the contact areas are recurring features in each component, whereas the angularity of this polyhedron-like structure and the concavity of the differentiating surfaces are varied.

A visual rule illustrates the relation between the shape transformations of each plaster object

Figure 12
Smooth surface partitioning, "Minima Rule: Divide a surface into its parts at loci of negative minima of each principal curvature along its associated family of curvature." (Hoffman and Richards, 1983).
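As a simplified illustration of the partitioning idea invoked above, the short Python sketch below applies a two-dimensional analogue of the minima rule: a closed outline is cut at vertices where the discrete (signed) curvature has a negative local minimum. This is only an illustrative reading of Hoffman and Richards' (1983) rule, not the procedure used in the studio; the sample outline and the turning-angle approximation of curvature are assumptions made for the example.

```python
import math

def signed_turn(p_prev, p, p_next):
    # Signed turning angle at vertex p (positive = convex for a CCW outline).
    ax, ay = p[0] - p_prev[0], p[1] - p_prev[1]
    bx, by = p_next[0] - p[0], p_next[1] - p[1]
    return math.atan2(ax * by - ay * bx, ax * bx + ay * by)

def partition_points(polygon):
    """Indices of vertices that are negative local minima of the discrete
    curvature, i.e. candidate part boundaries under the minima rule."""
    n = len(polygon)
    turns = [signed_turn(polygon[i - 1], polygon[i], polygon[(i + 1) % n])
             for i in range(n)]
    cuts = []
    for i in range(n):
        if turns[i] < 0 and turns[i] <= turns[i - 1] and turns[i] <= turns[(i + 1) % n]:
            cuts.append(i)
    return cuts

# A counter-clockwise outline with two concave notches (hypothetical data);
# the notch vertices are returned as candidate cuts.
outline = [(0, 0), (4, 0), (4, 2), (3, 1.2), (2, 2), (1, 1.2), (0, 2)]
print(partition_points(outline))   # -> [3, 5]
```

For a counter-clockwise outline, concave vertices have negative turning angles, so the returned indices correspond to the candidate part boundaries at the contact areas.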
Figure 13
Smooth surface partitioning
of the surface of a “cellular
interaction” model.
Figure 14
Marking the convex and flat
surfaces with smooth surface
partitioning method.
and the way they are packed in a rigid mold (Figure 15). In this rule, the initial shape of the plaster-filled balloon is represented with a circle plane. The right side of it shows the transformed shape, while the indicator above the arrow gives information about the context used in the action. The area of the circle plane, corresponding to the volume of a component, stays the same during the transformation process. Colored lines represent the neighboring units. We vary this rule to capture emergent properties of the plaster-formwork interaction. The rules in Figure 16 display the condition where the outer rigid mold gets smaller while the number of units in the mold stays the same. The increase in the angularity of the resulting shape is visible as the surrounding units get closer to one another. The rules in Figure 17 show the formation of the polygon-like shape of a unit with an increasing number of surrounding units. They also reveal the relation between the number of surrounding units and the number of sides of the polygon. By changing the position and the number of the surrounding units, different shape computations can show the gradual transformation of a unit (Figure 18).

Figure 15
Visual Rule 1: The gray circle plane represents the initial shape of the plaster-filled balloon, and the red lines stand for the surrounding units of a component in transformation.
Figure 16
Visual Rule 1 elaborated: An-
gularity of the plaster object
increases as the volume of the
outer rigid mold gets smaller.
Figure 17
Visual Rule 1 elaborated:
Angularity and the number of
the sides of the plaster object
increase as more balloons are
put in the rigid mold.
Figure 18
Shape computations showing the gradual transformation of a unit as the surrounding units get closer to each other.
The rules presented in Figures 15-18 give a clue about how shapes come about. Nevertheless, it is not possible to fully comprehend the process by just looking at these. The mold of the surrounding pieces shapes the plaster-filled balloon. The rules present the surrounding units as solid shapes; however, this is not always the case. The rules still need to reveal the interaction of the neighboring objects.

Further examining the transformation, it is possible to improve the visual rule to contain more information on the process. That leads us to a less general rule. In Figure 19 the visual rule for the schema x → x − prt(x) + prt(x)′ is presented. Here the transformation of the subtracted part of the initial shape is displayed with a parametric variation rule under general transformation rules (Stiny, 2011), for the areas of the subtracted and added parts are equal.

To further enhance the rules, properties could be assigned as weights (Stiny, 1992). As the shape transformations are mainly regulated by the rigidity of each component in the system, rigidity is the first material aspect to be included in the visual rules. In Figure 20 the thickness of the line signifies the rigidity of the elastic mold, while the grey tone stands for the hardness of the plaster. These are depictive rules. In search of alternatives that can be more generalizable, we also develop the visual rules in Figure 21, which serve the same purpose but more generally, so as to work even for singular objects. They exhibit two different cases of being in a mold. Based on the rigidity of the components, which is represented with line thicknesses, their potential to transform one another is displayed. Different weights (color and thickness) signify properties that undergo transformations. Shapes are generic and can be interpreted to subsume others.

CONCLUSION
Current digital modeling environments have the capacity to provide the designer some form of interaction with the model, but phenomenal aspects of the physical environment often get lost in symbolic reductions. In most cases the designer interacts with the digital models on a symbolic level and forgoes
Figure 19
Visual rule of the schema x → x − prt(x) + prt(x)′.
Figure 20
Weights as tones of gray and line thicknesses. The darker the gray tone, the harder the plaster it represents.
Figure 21
Weighted shapes representing
the rigidity of the bound-
ary of an object. The rigidity
increases with line thickness.
the causality shaping the design. In the digital model, the designer is able to perform transformations on the model by changing some numeric values within set ranges. In addition to this capacity, there is a need for case-specific visual rules. This is to embody the designer's unique reasoning, which feeds from the interaction with the material. The variation of plaster-in-balloon morphologies in Figure 11 illustrates differences between instances. We study a particular hands-on experimentation in order to showcase how visual rules may document the form-material relation, with the aim of supporting the interaction of the designer in digital form-finding processes. We have developed exemplary rules and schemas, as general and visual as possible, based on parameters derived from hands-on experimentation. There are many parameters that determine the composite behavior of the materials. In this study, they add up to two main features: geometry (curvature) and rigidity. The values indicating the material properties of components are employed in the computations of shape transformations.

The rules given in this paper are in no way a complete grammar, but are directives for phrases that can belong to a grammar if a designer wishes. These rules are mere instances of how material-based shape transformations can be visualized to be compared with one another, to be manipulated if necessary, and to be understood within a broader picture of how shapes come about. Differently than rules, schemas, as defined and categorized by Stiny (2011), aid in understanding the rules within formal categories that might prove helpful in setting up a support system in digital platforms. Visual rules, and visual schemas as their more general versions, not only document transformations but also summarize and help systematize the designer's perception of the founding relations of actions. The visual rules presented in this paper also utilize weights that can be used to represent the magnitudes of certain material properties.

Further research requires applying these kinds of rules for synthesis, as opposed to analysis, and in parallel to a design exercise as opposed to a material exploration exercise like the one referred to in this paper. This would help us see how the results correspond to the rich interactions the designer has in the material world. Additionally, since visual rules are specific to the case and the designer but can be categorized using more general schemas, it is meaningful to pursue a system to support various visual rules in digital platforms.
ACKNOWLEDGEMENTS
The graduate studio mentioned in the text is the Digital Architectural Design Studio, a required course in the Architectural Design Computing Graduate Program at Istanbul Technical University. The studio was supervised by Mine Özkar and teaching assistant Ethem Gürer in the Spring 2012 academic term. The group work that serves as the object of this investigation was conducted by students Aslı Aydın, Halil Sevim, Ersin Özdamar, and Zeynep Akküçük. The analysis of the experiments with visual schemas was done entirely subsequent to the studio.

REFERENCES
Glanville, R 1998, 'A (Cybernetic) Musing: Variety and Creativity', Cybernetics and Human Knowing, 5(3), pp. 56-62.
Hoffman, DD and Richards, W 1983, 'Parts of Recognition', Cognition, 18, pp. 65-96.
Rota, G-C 1997, Indiscrete Thoughts, Birkhäuser, Boston, MA.
Shinozaki, K, Yoda, K, Hozumi, K and Kira, T 1964, 'A quantitative analysis of plant form—the pipe model theory, I. Basic analysis', Japanese Journal of Ecology, 14, pp. 97-105.
Stiny, G 1992, 'Weights', Environment and Planning B: Planning and Design, 19, pp. 413-430.
Stiny, G 2006, Shape: Talking about Seeing and Doing, MIT Press, Cambridge, MA.
Stiny, G 2011, 'What Rule(s) Should I Use?', Nexus Network Journal, 13, pp. 15-47.
[1] http://archimodels.info
Hyperdomes
Abstract. The development of new shapes in architecture has deeply influenced the current perception of the built environment. The analysis of the processes behind this evolution is therefore of great interest. At least two well-known factors influencing this development may be pointed out: the great improvement of digital tools and the tendency toward building distinctiveness.
In particular, the innovation of digital tools such as parametric modeling is resulting in an overall diffusion of complex shapes, and the phenomenon is also evident in a clear expressionistic search for architectural singularity, which some might consider a negative effect of globalization trends.
Yet, while parameterization can positively be credited with allowing deeper control over design factors in terms of reference to the cultural, historical and physical context, such control possibilities are sometimes so strong as to become self-referential, stepping over site-specific parameterization to create unusual shapes just for the sake of complexity.
The ever-growing diffusion of generative design processes is in fact about to transform niche procedures, frequently limited to temporary decontextualized structures, into an architectural complexification as an end in itself.
The hypothesis of this paper is that site-specific parametrization can be considered a tool able to translate intentions into shape; for this aim, it is necessary to widen the meaning of the word singularity.
Keywords. Urban environment; distinctiveness; non-standard roofing structures.
INTRODUCTION
The need for new shapes in architecture has brought a great development in techniques and processes able to control and manage building construction. It is worth focusing on two factors of this evolution: the improvement of digital tools and the tendency toward building distinctiveness. The aim of this work is to define how digital tools produce distinctive shapes, by analyzing a set of significant case studies over time.
Figure 1
Digital tool scheme. Boxes in
blue are the tool families of
interest for this work.
DIGITAL TOOLS
We refer to digital tools as a generic expression to define a large umbrella of different software, which differ greatly in aims and efficiency. It is therefore useful to frame parametric tools as a narrower family of design instruments. The use of parametric tools for designing complex shapes creates new, often unexplored methods to describe a comprehensive notion of building performance.

The meaning given to parametric tools is worth deepening because of the relative youth of this discipline, which lacks an acknowledged definition. There are at least two families of parametric tools that are radically different in methodologies and finalities. The framing applied in this work is schematized in Figure 1.

The first is Building Information Modeling (BIM), widely used to optimize building performance with a certain degree of constraint. The limit of BIM is the creation of new shapes that are not pre-built inside the software.

The second family, which is of higher interest for this work, is that of the so-called generative design tools. These digital tools work in strict connection with coding, which embraces an area of knowledge quite far from traditional architectural design procedures. In this case the architectural form is defined through code: by declaring variables and constants, by writing instructions and routines, and by running an algorithm until the shape that performs best is reached. The final shapes are thus produced only by a sequence of instructions that produces a result. The designer is not the only actor in the shape-creation process, because the designer is paired with the machine results, which might go beyond the starting idea. In fact, the initial shape design might even develop into something unpredictable at the start of the process. So it is essential to focus attention on the component that directly modifies the design production, which is the code.

Code writing, as an act of creation corresponding to the designer's intention, gives complete freedom in choosing the road to the shape definition. This freedom is only partially constrained by control over the resulting shape, which may go beyond the choice of the preferred shapes. A disruptive innovation is thus introduced into the design process, which has deeply changed the ordinary design method.

The ordinary design process consists of a circular correspondence between the mental knowledge of the shape and its final representation. In generative design, instead, the effort is focused on thinking about the code that will produce the shape, until the desired shape is reached. The resulting shape, therefore, is generated with an indirect procedure, not by direct modeling and editing of the shape.
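The generative logic described above, declaring variables, writing instructions and running an algorithm until a better-performing shape is reached, can be sketched in a few lines of Python. The example below is a deliberately minimal random search over a single dome parameter with an invented performance measure; the parameter, its bounds and the scoring function are assumptions made for illustration and do not correspond to any specific tool discussed in this paper.

```python
import random

def performance(rise, span=20.0):
    """Toy performance measure for a dome profile: reward a rise close to a
    target rise-to-span ratio and lightly penalize larger rises."""
    target = 0.35 * span
    return -abs(rise - target) - 0.05 * rise

def generate(best=None, step=1.0):
    """Produce a candidate shape parameter, optionally near the current best."""
    if best is None:
        return random.uniform(1.0, 12.0)
    return min(12.0, max(1.0, best + random.uniform(-step, step)))

best, best_score = None, float("-inf")
for _ in range(2000):                  # run the algorithm until the shape
    candidate = generate(best)         # that performs best is retained
    score = performance(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(f"best rise: {best:.2f} m, score: {best_score:.3f}")
```

The point of the sketch is the indirect procedure: the designer edits the generating code and the scoring function, not the shape itself.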
In this sense, two classes of design process drivers may be outlined, external and internal: site-specific parameters and building-related parameters.

USE OF TOOLS – SITE-SPECIFIC AND BUILDING-RELATED PARAMETERS
Site-specific parameters are made up of the elements of the urban environment that influence the building in its components. The effect of these external constraints is evident in some aspects, such as the external skin of buildings, but it may also influence the structure and the functions of the generated spaces. The application of these specific parameters is important to provide the building with the correct contextualization within the neighboring spaces. It is therefore important to understand the rules that define the urban environment in order to better set up the parameters that will characterize the building, giving it the character of distinctiveness. The use of these elements points out the importance of buildings located in urban environments, which are endowed with their own characters that cannot be ignored.

In parallel with these elements, collected from the external environment, it seems important to underline the importance of a second class of factors, the building-related parameters. These may be defined as the set of relationships established within the geometric elements of the building skin. This approach works perfectly with art installations, which are self-sufficient. The aim of this design method is to give buildings a complex and appealing perception because, for some kind of aesthetic need, a lack of intricacy in shape is perceived as a shortcoming by a large part of designers. This need is largely fulfilled by the use of generative design tools, which easily generate a self-referential complexity. With these specifics, it is easily understandable how generative design tools have been pointed to as the next-generation step in the evolution of the design process.

SHAPE DISTINCTIVENESS
The innovation of digital tools is one of the two drivers of new shape generation; the other is the tendency toward distinctiveness. Singularity is intended here as the recognizability of a building in an urban environment. It is considered at the same time as an internal and an external character of architecture that has to relate to the imageability of the shape, considering its connection with the urban environment.

Internal singularity is related to the distinctiveness of the structural and technological performance of a building, which makes it exceptional in itself. External singularity, instead, is the recognizability of the architecture on a larger scale, making it a relevant element of the urban environment. A parallel can be drawn between internal and external singularity and the aforementioned connection between building-related and site-specific parameters, as pointed out in Figure 2. The strict relation between the tools for form-finding and the pursued aim creates a disruption in the process of singularity creation. The linear process, in which tools create the singularity, is transformed into a design loop in which tools create complexity and the singularity generates new parameters to drive the software.

Despite the limitless shape creation the tools allow, their complex approach and steep learning curve keep them from being used outside academia and top-notch design offices. As a result, most of the buildings created with a generative process are endowed only with internal singularity, because they are small-scale architectural artefacts, pavilions and temporary installations that are designed intentionally ignoring the connection with the urban environment.

This tendency toward singularity was not always so definite. It is worth underlining, in this sense, the denial of monumentality in Le Corbusier's architecture. Therefore it seemed incomplete to conduct an analysis of this phenomenon limited to contemporary buildings endowed with external and internal singularity, so it was chosen to consider domes, which have always been distinguishing elements of verticality emerging in horizontally dominated urban environments.
Figure 2
Shape distinctiveness: the case
of the Kunsthaus emerging in
the roofscape of Graz.
FROM DOMES TO HYPERDOMES
The meaning of dome, intended as a "large hemispherical roof or ceiling" (Merriam-Webster dictionary), had a deeper significance connected with its function in the past. In fact, spaces too wide to be covered with normal ceilings were closed with hemispherical roofing structures. One renowned example of these issues is the cathedral dome of Santa Maria del Fiore in Florence. The base of the dome was built in 1315 and it remained unfinished until 1436. It took more than 100 years to be finished because at that time nobody was able to design a cover for such a span, until Brunelleschi, in 1418, conceived a series of structural strategies to achieve this aim. The dome issue is shown in Figure 3, in which Andrea di Bonaiuto painted the church before Brunelleschi's design. The depicted dome is fictive because in 1350 there was no built dome, only designs, owing to the complexity of the aim. Therefore the Santa Maria del Fiore dome may be considered a reference example of a non-standard roofing structure clearly emerging in an urban landscape. Further case studies for these past domes are the XVII century Sindone dome by Guarino Guarini in Turin (Figure 4, left) and the XIX century San Gaudenzio Church dome by Alessandro Antonelli in Novara (Figure 4, right).
Figure 3
Detail in Santa Maria Novella, from the "Spanish Chapel", 1350. Santa Maria del Fiore is depicted with a fictive dome because it was not possible at that time to build a real one.
Figure 4
The XVII century Sindone dome by Guarino Guarini in Turin and the XIX century San Gaudenzio Church dome by Alessandro Antonelli in Novara: relevant prototypes of hyperdomes, clearly marking the urban and, for the latter, even the regional landscape.
to this term by Kevin Lynch (1960), as a "quality in a physical object which gives it a high probability of evoking a strong image in any given observer").

In this sense, which might seem not only challenging but even provocative, some case studies for contemporary structures are the Future Systems' Selfridges building in Birmingham (Figure 5), the Kunsthaus in Graz by Peter Cook and Colin Fournier (Figure 2), the Opera House in Lyon by Jean Nouvel (Figure 6), the Greater London Authority building (Figure 7), the British Museum Great Court in London (Figure 8) and the Reichstag in Berlin (Figure 9), all by Norman Foster, the recent roofing structures in Gent and Taiwan (Figure 10), the Meiso no Mori funeral hall in Kakamigahara (Figure 11) by Toyo Ito, and the Lingotto dome by Renzo Piano (Figure 12).

CONCLUSIONS
This study has analyzed the aforesaid series of case studies, pointing out how the new relationships between design tools, structural conception, shape innovation, contextual references and symbolic values become key factors in understanding the evolution of hyperdomes.

Starting from the given hypothesis, this paper has shown a possible interpretation of the current understanding of domes and how both internal and external singularity may be considered for
Figure 5
Selfridges building in Birmingham - Future Systems. The hyperdome creates a singularity by integrating itself into the urban environment while being a complex shape.
giving the building shape distinctiveness in the urban context. A positive or negative assessment of the role of hyperdomes goes beyond the aim of this paper, which mainly aims at recognizing and interpreting the phenomenon of complex shapes in terms of their relationship to the urban context, without involving aesthetic and historical issues that deserve further and specific disciplinary attention. Nevertheless, it
Figure 6
Opéra National de Lyon - Jean Nouvel. A contemporary dome which creates a singularity in the urban environment.
Figure 7
Greater London Authority building - Norman Foster. Geometric singularity through the discovery of the only rotation angle that creates a circular section from an elliptical ellipsoid.
Figure 8
Queen Elizabeth II Great Court at the British Museum - Norman Foster. Structural singularity: the effects of compression and bending must pass through the nodes in all directions, reducing the load borne by the central building. Green performance is achieved through glass that is perceived as clear while shielding 75% of ultraviolet rays.
Figure 9
Reichstag dome - Norman Foster. Internal geometric singularity. Ramp as a spiral inscribed in the circumference (loxodrome).
seems possible to anticipate that the search for shape singularity as an end in itself, which many recognize as a common issue in contemporary architectural structures, is necessary but not sufficient to mark the urban environment with significant permanent signs; such signs also need to go through a further long-term process of historical, cultural and even social interpretation and acceptance.

REFERENCES
Lynch, K 1960, The Image of the City, MIT Press, Cambridge, MA.
Figure 10
Taichung Metropolitan Opera House - Toyo Ito. Singularity in flux, allowed by the walls, which bend to merge with floors and ceilings.
Figure 11
Meiso no Mori Crematorium - Toyo Ito. A generative design applies the mechanical theory that minimizes strain energy in a structure to create a rational free-curved surface.
Figure 12
The organic shape of the "Bolla" (Bubble) designed by Renzo Piano on the roof of the Fiat Lingotto Factory in Turin.
Action Based Approach to Archaeological Reconstruction
Projects: Case of the Karnak Temple in Egypt
Anis Semlali1, Temy Tidafi2, Claude Parisel3
1École Supérieure des Sciences et Technologies du Design, Tunisia, 2,3University of Montreal, Canada.
2,3http://www.grcao.umontreal.ca
1anis.semlali@gmail.com, 2temy.tidafi@umontreal.ca, 3cparisel@videotron.ca
Abstract. This paper deals with a digital approach that could better assist archaeologists in archaeological reconstruction projects. The goal of our research
is to explore and study the use of computerized tools in archaeological reconstruction
projects of monumental architecture in order to propose new ways in which such
technology can be used.
Keywords. Architectural heritage; archaeological reconstruction; action-based
modeling; architecture and complexity.
INTRODUCTION
The definition and development of new modeling methods is the objective of a research project in progress at the CAD research group (GRCAO) of the Université de Montréal. These methods aim for a better integration of the varying types of knowledge implicated in the reconstitution of ancient architectural structures, as well as greater flexibility in the manipulation and utilization of this knowledge. To reach this objective, technology will not suffice. It is necessary to integrate the methods, knowledge and goals of a collection of scientific disciplines that are not used to working together (without forgetting the inherent incoherencies): social sciences such as archaeology (in the classical and not the anthropological sense of the term), history, art history, epigraphy and chronology, architecture, geometry, optics, and information technology must be joined. This requires that each discipline define itself in terms of what it can bring to the reconstitution of physical objects in an environment, and thus to the reconstitution of architectural heritage.

The case study: Karnak Temple
To do this, our project uses the Karnak temples in Egypt as a laboratory: certain information is already available in the form of plans, surveys, elevations and sections of existing monuments (with or without proposed restitutions), and excavation reports, while other information is still to be surveyed on site, or catalogued. Beyond the technical aspects that allow for the precise encoding of the basic components of constructions and structures, the method allows for the elaboration of a reconstitution that notes the different proposed reconstitutions of parts that are either currently missing or have been modified several times over a millennium of history. The method also takes into account the degree of probability of the proposed reconstitutions.

To test our general approach to restitution, we chose the case study of the VIIth pylon in the Karnak Temple. The choice of this case study follows directly from the data availability. Indeed, the seventh pylon was a pretext for testing survey
Figure 1
The VIIth pylon divided into as many blocks as it contains.

functions in the project "Karnak-1" at the GRCAO. We made a division of the complete structure of our case study (the VIIth pylon in the Karnak Temple) into as many blocks as it contains (Figure 1). Our goal is to assign to each of the blocks from the corpus its place in the general scheme of the studied structure. It is a set of epigraphied blocks of variable dimensions.

When the corpus of blocks to be processed is very large, it is necessary to find the resources to divide the whole into "manipulable" units through a multitude of actions which identify the blocks that have one or more common characteristics. The data can be used to identify the relative position of a unit with respect to another. Based on geometric, iconographic or other attributes, the goal is to identify, manipulate and/or connect and recreate these attributes in order to find indices that will help us argue one or more assumptions about the hypothetical position of a block in relation to the adjacent ones, respecting of course the overall assembly of the general unit.

The survey and description of blocks
Carrying out epigraphic surveys is a very important task in archaeology, particularly in Egyptology, because all the monuments contain numerous texts and scenes engraved on their architectural elements. It is a matter of urgency to do such surveys, because the inscriptions are deteriorating at great speed and there is a real risk of losing some important scenes completely.

The main problem at the present time is that the traditional methods used to survey the inscriptions are very time-consuming. For example, the most common of these methods consists in making facsimiles of the wall to be surveyed, with photographs as background or simply with transparent sheets placed against the surface of the wall. This method involves numerous checks during the drawing process and is therefore rather tedious, because it requires the collaboration of different draughtsmen.

Research carried out by the GRCAO has led to the present method of computerized epigraphic survey, which can be used for drawing and recording the hieroglyphic signs on all planar, but also conical and cylindrical, architectural elements of Egyptian temples. This method is user-friendly for archaeologists and epigraphists alike, thanks to the very detailed menus created in the AutoCAD© software. Numerous choices are constantly available during the surveying process, and every operation can be undone if necessary. Each surveyed sign is recorded in a database, in the form of a text file, which can later be used for other research purposes: studies on the shapes of hieroglyphs, automatic translation of the texts, search for missing elements, etc. This method considers the needs of the epigraphists and offers them the possibility of controlling various operations during the computerized survey process. Particular emphasis has been put on the fact that the decoration of a monument is indissociable from its architectural support. The drawings must be recorded with all the information necessary to understand their real meaning (i.e. the architectural and archaeological context). The recording format has been normalized so as to be exploitable for research purposes (statistics, restoration of structures, etc.) (Figure 2).

Moreover, various exploitations (reconstitution, paleography, etc.) are possible, thanks to the fact that all the signs drawn are recorded in a universal format. The publication of the texts can still be made
Figure 2
Survey of the blocks in the VIIth pylon using the GRCAO method.

in paper form, but can now also be in digital form, which in turn leads to other possibilities such as data exchange. This approach is of course adaptable to the survey of other types of temples (Greek, for example).

DEVELOPMENT AND VALIDATION OF OUR ARCHAEOLOGICAL RECONSTITUTION MODEL
This part presents an exploratory prototype developed to assist (but not to control) the reasoning and decision-making in the formulation and computer simulation of architectural reconstruction hypotheses. This "assistance" takes advantage of the knowledge and data available or extrapolated through the production of computer models, as well as alphanumeric documents resulting from targeted queries of the databases.

The use of ICT in archaeological reconstruction projects
In our quest to answer this question, we begin with a study of the different restitution approaches used in various phases of archaeological reconstruction projects. This involves understanding how the different methods of approach have evolved (epistemologically) and how those involved in such projects have put information and communication technologies (ICT) to use in the field of built heritage. This study has identified two main avenues: one whose aim is the "representation" of project results, and another whose aim is to model the process itself in order to assist the archaeologist through the various phases of a project. It is the second approach that can better respond to our goals and that can guarantee to archaeologists an effective utilization of the possibilities offered by computer-assisted tools. This study allowed us to demonstrate the complex and systemic nature of using ICT in the field of archaeological reconstruction. The multiple actors, conditions, means and goals considered in archaeological reconstruction projects have led us to explore a new approach that reflects this complexity.

Study of the publications in archaeological projects
In order to achieve the goal of our research, it was necessary to further study the nature of the archaeological process. This involved understanding the links and interrelations between the various components that define the archaeological approach and the various thought processes involved in archaeological reconstruction projects.

In summary, archaeologists perceive and describe their approaches through filters determined by their use of these descriptions. Any scientific description is both the result of past constructions and the source of present and future constructs that enrich them or replace them. These filters can, in many cases, push archaeologists to become very attached to their hypothesis and to persist in not recognizing its weaknesses.

From this perspective, archaeological publications all look somewhat alike: "one cannot describe a monument without referring implicitly to the state of knowledge and research objectives that determine the substance and form of the description, so that a catalog, especially when its terms are 'rational', is a theoretical construct in the same way, if not to the same extent, as any historical essay" (Gardin, 1979). This study showed a direct relationship between the subjective nature of the process and the diversity of approaches and thought processes which can be implemented.

This exploratory and propositional research reinforces the systemic and complex nature of our approach and prompts us to explore, in practice and through published literature, the elements of known reality. The study of archaeological
reasoning through academic publications has allowed us to propose an initial typology of the arguments studied. Each of these typologies reflects a methodological approach based on organized actions that can be recorded in a set of reasoning modules. The classification of the various arguments by type of reasoning, in order to determine the configuration of a building, has enabled us to establish a model of the various components of the archaeological process, as well as validation rules that have been used by archaeologists in real reconstruction projects.

Figure 3
Operating principle of the search for block assemblies using the elliptical mathematical method.

This research has allowed us to highlight phenomena and observed processes, leading to a model representing interrelationships and interactions as well as the specific results of these complex interconnections. This pattern reflects a cyclical process of trial and error, in which the actors consecutively 'experience' (according to the project's goals and through reasoning modules) several answers to the questions posed to them by the corpus definition, description, structure, interpretation and validation of the results, until the latter appear to meet the original targets. Three examples of reasoning modules have been developed and tested through a case study of the VIIth pylon of the Karnak temple in Egypt.

Geometric approach to restitution: Example of a module using the geometric reconstruction of 2D objects
Considering the large number of blocks that archaeologists handle in an archaeological reconstruction project, it is extremely difficult for them to visually identify formal and geometric complementarities among the studied blocks. The main goal of this module is to present a reasoning tool to search for possible complementarities among the geometric characteristics of the identified blocks. It can assist archaeologists in identifying, among the huge mass of available data, manipulable subsets based on their geometric characteristics (Figure 3). This reasoning module may bring, in this case, considerable assistance.

A stone, the basic component of a wall, is made up of faces, of lines and surfaces defining the boundaries of those faces, and of points defining the ends of these lines. These data are essential for encoding the neighborly relations between blocks because they are the main reliable parameters of adjacency. Each block is individually registered using the adopted survey method (the GRCAO method), in two ways:
• The outline of the block: this corresponds to the detailed record of the actual boundaries of the block, which is saved as control points.
• The min-max block: this corresponds to the polygon enclosing the useful surface of the studied block. This contour is stored as control point coordinates.
Although this reasoning module is based on complex mathematical models, the user does not, in any case, have to manipulate them. All calculations are carried out in the background, and the user only has to handle 'objects' which he is used to dealing with. We have demonstrated, through the
exploration of a 2D topological aid for reassembling the blocks, the relevance of such an approach and the relative ease of translating into computer terms the actions that the reasoning module may include. This module, while meeting our original goals, opens the way to the implementation of other actions (respecting the same logic). Depending on the initial objectives and the methods that archaeologists adopt, a specialized team would be responsible for translating the actions they wish to undertake, and thus for optimizing the contribution that computer tools can bring to the success of architectural reconstruction projects.

This will be our first approach. Another very important consideration concerns the overall structure and composition of a scene element: the "continuity of the theme". Indeed, each of the blocks that make up the structure includes a portion of the overall scene. Our second approach will be to consider a scene as an assemblage of iconographic elements. These elements, taken together, could possibly give a meaning to the whole (register, wall, room, etc.). It is therefore necessary to study the types of continuity and to propose a reasoning module that can assist archaeologists in this type of work.
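A minimal sketch of the kind of data such a module works with is given below in Python: a block is registered by its outline control points, its min-max contour is reduced here to a bounding rectangle, and a crude gap measure compares two facing edges as a proxy for geometric complementarity. The class, the edge sampling and the coordinates are hypothetical and greatly simplified with respect to the GRCAO method.

```python
from dataclasses import dataclass

@dataclass
class Block:
    name: str
    outline: list  # control points (x, y) recording the actual block boundary

def min_max(block):
    """Min-max contour reduced here to the bounding rectangle of the outline."""
    xs = [x for x, _ in block.outline]
    ys = [y for _, y in block.outline]
    return (min(xs), min(ys)), (max(xs), max(ys))

def edge_gap(edge_a, edge_b):
    """Mean gap between two facing edges sampled with the same number of
    control points; a small value suggests geometric complementarity."""
    return sum(abs(ax - bx) + abs(ay - by)
               for (ax, ay), (bx, by) in zip(edge_a, edge_b)) / len(edge_a)

# Right edge of block A against left edge of block B (hypothetical survey data).
a = Block("A", [(0, 0), (2.0, 0), (2.1, 0.5), (1.9, 1.0), (0, 1.0)])
right_a = [(2.0, 0), (2.1, 0.5), (1.9, 1.0)]
left_b = [(2.0, 0), (2.1, 0.5), (1.9, 1.0)]
print(min_max(a), edge_gap(right_a, left_b))
```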
Figure 4
Restitution of the VIIth pylon scene using the iconographic module.
We have an approximate value of the module used in the scene that appears on the "Medinet Habu" pylon. The iconographic analysis of the pylon reveals a traditional theme that represents the scene of the "massacre of the enemies". This scene was repeated in several Pharaonic structures. In an approach of restitution by completion, based on the complete scene on the "Medinet Habu" temple pylon, our goal is to determine the missing elements and so complete the studied scene (Figure 4). This treatment was made in four steps:
• Step 1: determination of the proportional module value of the VIIth pylon at the Karnak temple,
• Step 2: survey of the representation of the Pharaoh on the "Medinet Habu" temple and on the VIIth pylon with the GRCAO method,
• Step 3: superposition of the two surveys after applying the "tile making" technique,
• Step 4: completion of the missing part of the scene (Figure 5).

The goal is to study their type and their variants. Our analysis allowed us to identify eight types of continuities between the studied blocks:
• linear continuity
• iconographic continuity: human body, hieroglyphic sign, other
• relief continuity: level difference, surface uncoupling
• continuity of the type of engraving: texture continuity, etc.
• text continuity
• geometric continuity: same min-max, etc.
• theme continuity: text, cartouche, human, etc.
• zone continuity: horizontal or vertical text, etc.
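One way to operationalize such a typology is to record, for each pair of blocks, which continuities are observed and to aggregate them into a score for the adjacency hypothesis. The Python sketch below is a hypothetical illustration of this idea (the weighting scheme and the block numbers are invented), not the reasoning module itself.

```python
CONTINUITY_TYPES = [
    "linear", "iconographic", "relief", "engraving",
    "text", "geometric", "theme", "zone",
]

def adjacency_score(evidence, weights=None):
    """Aggregate the observed continuities between two blocks into a single
    score for an adjacency hypothesis. `evidence` maps a continuity type to
    True/False; `weights` lets the archaeologist rank the types."""
    weights = weights or {t: 1.0 for t in CONTINUITY_TYPES}
    return sum(weights[t] for t, present in evidence.items() if present)

# Hypothetical observations between block 12 and block 37.
obs = {"iconographic": True, "text": True, "relief": False, "geometric": True}
custom = {**{t: 1.0 for t in CONTINUITY_TYPES}, "iconographic": 2.0}
print(adjacency_score(obs, custom))   # -> 4.0
```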
Figure 6
Blocks assembled using the
typology and iconographic
connections.
Figure 7
Model of archaeological reconstruction using the reasoning modules.

construction, but its logical architecture, once completed. Indeed, the results presented through our case study have shown that the approach is often an iterative process that is constantly progressing (by trial and error) through the manipulation of data by actions encapsulated in various reasoning modules (of epigraphic order (textual and phonetic), constructive order, physical or geometric order, etc.). In this progression, we have "experienced" various hypotheses through the application or implementation of new reasoning modules. The finding of inadequacy determined each iteration and pushed us back to the data and the means available, toward a new definition of the corpus, description, structure or interpretation. The aim was mainly to combine several reasonings to reduce the number of available possibilities and to progress until the results meet the objectives of the study. The evolutionary aspect of the system allows us to add other reasoning modules if the available resources cannot fulfil the objectives of the actors.

REFERENCES
Carlotti, JF 1995, 'Contribution à l'étude métrologique de quelques monuments du temple d'Amon-Rê à Karnak', Cahiers de Karnak, 10, Paris, pp. 65-125.
Gardin, JC 1979, Une archéologie théorique, Hachette, Paris. French adaptation of the original edition: Archaeological Constructs: An Aspect of Archaeological Theory, Cambridge University Press, Cambridge, UK.
Robins, G, Gardin, JC, Guillome, O, Herman, PO, Hesnard, Lagrange, MS, et al. 1994, Proportion and Style in Ancient Egyptian Art, University of Texas Press, Austin, pp. 62-85.
Schwaller de Lubicz, RA 1999, Le temple de l'homme : Apet du Sud à Louqsor, Dervy, Paris.
Models of Computation:
Human Factors
Fusion of Perceptions in Architectural Design
Ozer Ciftcioglu1, Michael S. Bittermann2
Delft University of Technology, The Netherlands
http://bk.tudelft.nl/en/research/research-projects/computational-intelligent-design
1o.ciftcioglu@tudelft.nl, 2m.s.bittermann@tudelft.nl
INTRODUCTION
Perception, and in particular visual perception, is an interdisciplinary concept taking an important place in many diverse applications. These range from the design of objects and spaces, for which perceptual qualities are aimed at (Bittermann and Ciftcioglu, 2008), to robotics, where a robot moves based on perception (Ciftcioglu et al., 2006a; Bülthoff et al., 2007). However, although visual perception has been subject to scientific study for over a century, e.g. see Wertheim (1894), it is interesting to note that it remained mysterious what perception precisely is about, while it eluded mathematical modeling until very recently. Many approaches to perception, in particular in the domain of psychology and neuroscience, are based on experiment, while the underlying theoretical models or hypotheses are either simplistic, ambiguous or even absent (Treisman and Gelade, 1980; O'Regan et al., 2000; Treisman, 2006), so that the insight into the nature of human perception gained from the experiments remains minimal. However, considering that the perception phenomenon is due to brain processing of retinal photon-reception, it should be clearly noted that the phenomenon is highly complex. That is, the same experimenter may have different perceptions of the same environment at different times, depending on the complexity of the environment, psychological state, personal preferences and so on, not to mention different vantage points. Due to the complexity of the brain processes and the diversity of environments subject to visual perception, the empiric approaches to perception yielded merely rudimentary understanding of what perception is. Although some verbal definitions of the concept are presented in the
literature, e.g. (Gibson, 1986; Palmer, 1999; Foster, 2000; Smith, 2001), due to the excessive ambiguity of the linguistic expressions these are not to be converted to precise or even more unambiguous mathematical expressions.

Computational approaches addressing some perception aspects have been proposed by Marr (Marr, 1982), whose prescription is to build computational theories for perceptual problems before modeling the processes which implement the theories. Explicitly, different visual cues are computed in separate modules and thereafter only weakly interact with each other, where each module separately estimates scene properties, such as depth and surface orientation, and then the results are combined in some way. These works can be termed image-processing based approaches, and they are deterministic in nature, starting from simulation of retinal data acquisition. The retinal photon-reception certainly is the first stage in the time sequence of the processing in the visual system, and it might be dealt with by means of an image specified as a two-dimensional matrix. However, the ensuing neural processes are highly complex, so that the retinal image does not imply that all the information in the scene is registered in the human brain and remembered shortly afterwards. Only part of the visual information is remembered. For instance, it is a common experience that when we look at a scene, we are not aware of the existence of all objects the scene comprises. This is easily verified for scenes where the number of objects exceeds about seven.

In this work a probabilistic approach is adopted for perception, where perception is considered a whole process from the stimulus coming from the scene to mental realization in the brain. In other words, all complex processes, e.g. image formation on the retina, processes in the visual cortex in the brain, and the final realization of 'seeing', are modeled as a single probabilistic event, where 'seeing' in that probabilistic description is considered to be perception, and remembrance is a matter of probability. The final realization or remembrance of the scene in the brain may be absent or elusive, which is subject to probabilistic considerations. This approach has been described and its validity demonstrated (Ciftcioglu et al., 2006b; Bittermann and Ciftcioglu, 2008).

This probabilistic approach is unique in the sense that the perception refers to human perception. In the field of computer vision, perception is considered to be a mere image processing and ensuing pattern recognition process, where Bayesian methods are appropriate (Knill et al., 2008; Knill and Richards, 2008; Yuille and Bulthoff, 2008). The Bayesian approach is to characterize the information about the world contained in an image as a probability distribution which characterizes the relative likelihoods of a viewed scene being in different states, given the available image data. The conditional probability distribution is determined in part by the image formation process, including the nature of the noise added in the image coding process, and in part by the statistical structure of the world. Bayes' rule provides the mechanism for combining these two factors into a final calculation of the posterior distribution. This approach is based on the Bayes formula

p(s|i) = p(i|s) p(s) / p(i)    (1)

Here s represents the visual scene, the shape and location of the viewed objects, and i represents the retinal image. p(i|s) is the likelihood function for the scene and it specifies the probability of obtaining image i from a given scene s. p(s) is the prior distribution which specifies the relative probability of different scenes occurring in the world, and formally expresses the prior assumptions about the scene structure including the geometry, the lighting and the material properties. p(i) can be derived from p(i|s) and p(s) by elementary probability theory, namely

p(i) = p(i|s) p(s) + p(i|s̄) p(s̄)    (2)

so that (1) becomes

p(s|i) = p(i|s) p(s) / [p(i|s) p(s) + p(i|s̄) p(s̄)]    (3)

The posterior distribution p(s|i) is a function giving the probability of the scene being s if the observed image is i. The Bayesian approach is appropriate for
Figure 1
Plan view of the basic geo-
metric situation of perception;
P represents an observer’s
point, viewing an object (a);
probability density function
characterizing perception
along y direction for lo=2 (b).
computer vision, because for a human p(i|s) is clearly known, that is, p(i|s) = 1. Consequently, p(i|s̄) = 0 and from equation (3)

p(s|i) = 1    (4)

which is independent of the probabilistic uncertainties about the scene. This means that, as p(i|s) is definitive for a human recognizing a scene, p(s|i) is also definitive, being independent of p(s), which represents the prior assumptions about the scene structure including the geometry, the lighting and the material properties. The effectiveness of Bayes for machine vision is due to its recursive form, providing improved estimation as the incoming information is sustained.

The organization of the paper is as follows. In the modeling human perception section a vision model is established. In the perception from multiple viewing positions section, the fusion of perceptions from multiple viewpoints is derived. In the experiments section, two experiments demonstrating the fusion of perceptions in architectural design are presented, and this section is followed by conclusions.

MODELLING HUMAN PERCEPTION
In human perception an object is visually seen, but its remembrance is subject to some degree to probabilistic considerations. This is described elsewhere (Ciftcioglu et al., 2006b; Bittermann and Ciftcioglu, 2008) and briefly mentioned as follows. We consider a basic geometric situation as shown in Figure 1a. For a visual scope −π/4 ≤ θ ≤ π/4 the probability density characterizing perception along the y-direction is shown in Figure 1b for lo = 2 and given by

fy(y) = (1/θS) lo / (lo² + y²)    (5)

The probability density with respect to θ is given by fθ(θ) = 1/θS, where θS = π/2. The one-dimensional perception of an object spanning from arbitrary object boundaries a and b on the y-axis is obtained by

P = ∫[a,b] fy(y) dy = (1/θS) [arctan(b/lo) − arctan(a/lo)]    (6)

yielding perception as an event subject to probabilistic computation. For the case of perception of an object by a single human observer the computation is always accomplished by (6) when the projection of the object is considered as one-dimensional along a line. The same computation can be valid for three-dimensional objects, provided we consider the projection of the object on a plane. In this case, the same formulation can be used twice, for each respective orthogonal dimension of the plane, in the form of a product of the two probability densities integrated over the projected area on the plane.
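Under the uniform-angle model above, the perception of an object spanning [a, b] on the y-axis can be evaluated directly. The Python sketch below computes equation (6) in closed form and checks it by numerical integration of the density in equation (5); the numeric values of a, b and lo are chosen only for illustration.

```python
import math

def f_y(y, lo=2.0, theta_s=math.pi / 2):
    """Perception density along y for an unbiased observer at distance lo,
    following f_theta = 1/theta_s with y = lo * tan(theta)."""
    return (1.0 / theta_s) * lo / (lo * lo + y * y)

def perception(a, b, lo=2.0, theta_s=math.pi / 2):
    """Perception of an object spanning [a, b] on the y-axis (equation 6)."""
    return (math.atan(b / lo) - math.atan(a / lo)) / theta_s

# Object spanning y in [0.5, 1.5], seen from distance lo = 2.
print(perception(0.5, 1.5))                      # closed form, about 0.254
a, b, n = 0.5, 1.5, 10000                        # crude midpoint-rule check
h = (b - a) / n
print(sum(f_y(a + (k + 0.5) * h) for k in range(n)) * h)
```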
Figure 2
Perception events E1, E2, and
E3 respectively denoting
perception of an object from
three viewpoints VP1, VP2,
VP3; the union of the events is
indicated by the white dashed
line (a); Venn diagram cor-
responding to the perception
events in Figure 2a (b).
PERCEPTION FROM MULTIPLE VIEWING POSITIONS
On many occasions an object is subject to perception from multiple viewing positions, either by the same observer or by multiple observers. That is, perceptions from different viewing positions are subject to fusion. As the perception is expressed in probabilistic terms, the union of the different perception events is subject to probabilistic computation. Requirements with respect to perception from multiple viewing positions can occur in many practical applications. To demonstrate fusion of perceptions we restrict the study to two basic examples. It is noted that these examples may or may not be important for a particular design problem; they are kept simple in order to explain the method clearly. The same method can be applied in more complex tasks, such as courtroom design (Bhatt et al., 2011), auditorium design, office design, as well as urban design. In the first case study we consider an exhibition gallery environment, where there are several entrances to a gallery space, and we ask what the best position for an object is, so that the perception of the object is maximized. In the second application we consider an urban environment, where a building will be erected that will be seen from a number of prominent viewing positions. We are interested in obtaining the perception of the different parts of the future building as a fusion of the perceptions from these viewing positions. In the latter case study this is to identify which part of the building is most conspicuous, in order to determine, for instance, where the building entrance should preferably be positioned so that it will be easily noticed.

A scene subject to investigation as an exemplary case is shown in Figure 2a, with the three perception events E1, E2, and E3. The figure shows a plan view of the space and the location of an object subject to perception assessment and optimal positioning. The object is subject to perception from the three viewing positions VP1, VP2, and VP3, where it respectively subtends the angle domains θ1, θ2, and θ3 as seen in the figure. The dashed lines in the figure indicate the boundaries of the observer's visual scope at the respective viewing positions, spanning the angles θS1, θS2, and θS3. Figure 2b shows a Venn diagram corresponding to the perception situation in Figure 2a. In the case of perceiving an object from several viewing positions this corresponds to the probabilistic union of the perceptions, which is obtained by P(E1∪E2∪E3)=P(E1)+P(E2)+P(E3)-P(E1∩E2)-P(E1∩E3)-P(E2∩E3)+P(E1∩E2∩E3), as is seen from Figure 2b. It is noted that the events E1, E2, and E3 are independent. In the three-dimensional perception case θ1, θ2, and θ3 become solid angles Ω1, Ω2, and Ω3, and the scopes θS1, θS2, and θS3 become solid angles ΩS1, ΩS2, and ΩS3.
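Because the events are independent, every intersection term in the union factors into a product of the individual perceptions, so the union can be evaluated directly once P(E1), P(E2), and P(E3) are known. The following Python sketch (an illustration written for this text, not code from the paper) does this inclusion-exclusion computation for an arbitrary number of viewpoints; the example values are arbitrary.

```python
from itertools import combinations
from math import prod

def union_of_independent_events(probabilities):
    # Inclusion-exclusion for P(E1 u E2 u ... u En), assuming the events are
    # independent so that each intersection term factors into a product.
    total = 0.0
    for k in range(1, len(probabilities) + 1):
        sign = (-1) ** (k + 1)
        for subset in combinations(probabilities, k):
            total += sign * prod(subset)
    return total

# Arbitrary example: perceptions of one object from three viewpoints.
print(union_of_independent_events([0.20, 0.10, 0.05]))  # ~0.316
```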
EXPERIMENTS
Computer experiments are carried out, where P(E1), P(E2), and P(E3) are obtained by probabilistic ray tracing, so that a three-dimensional object is subject to perception measurement without need for projection to a plane as shown in Figure 1a. That is, the solid perception angle Ω subtended by the object, as well as the solid angle ΩS, which defines the observer's visual scope, are simulated by vision rays that are sent in random directions within the three-dimensional visual scope. The randomness in terms of the unit Ω is characterized by fΩ(Ω)=1/ΩS, conforming to the uniform pdf fθ(θ)=1/θS that models the unbiased observer in the case of perception of an object contained in the scope of vision plane, as seen in Figure 1a. In the experiments the number of vision rays is denoted by nv. An object within the visual scope will be hit by a number of vision rays np, and these rays are termed perception rays. The perception of the object is given by P=np/nv.

Experiment Nr. 1
The first experiment concerns a basic issue in architectural design, namely positioning an object so that its perception from several viewing positions is maximized, in the sense that the object will be perceived well at least from one of the relevant viewpoints. This issue is exemplified by means of positioning a sculpture in a museum space having several entrances; namely the space has three doors, where the relevant viewing positions, denoted VP1, VP2, and VP3, are located. The problem is to position the sculpture in the space so that visitors entering the space from any of the doors will notice the object. The problem is to maximize the union of the perceptions from the three viewpoints P(E1∪E2∪E3), while at the same time the sculpture positioned at point x should not obstruct entrance to the room from any of the doors. The latter constraint is formulated by the condition ‖x-xo‖≥3, where xo is the position of each viewing position. The maximization is carried out by the method of random search, accomplished through a genetic algorithm. The genetic algorithm is a stochastic optimization method from the domain of computational intelligence. The algorithm starts from a number of random solutions referred to as members of a population. Each member satisfies the objective function to some degree, which is termed fitness. In the algorithm, population members with a comparatively high fitness will be favored over solutions with low fitness, by giving the former a higher chance to remain in the population and to produce new solutions by combining fit solutions. The combination among solutions is referred to as the crossover operation, and it is carried out among pairs of population members referred to as parents. Crossover entails that the parameters constituting a parent are treated as binary strings, and portions of the strings are exchanged among the two solutions to create new solutions with features from both parents. This process is repeated for several iterations, and due to the probabilistic favoring of fit solutions, eventually optimal solutions appear in the population (Goldberg, 1989; Zalzala and Fleming, 1997).
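A minimal sketch of such a constrained random search is given below (our illustration; the authors' implementation is not given in the paper). The room geometry, scope angles, ray count and GA settings are invented placeholders; the sculpture is modelled as a disc in plan, i.e. a two-dimensional simplification of the ray tracing described above. Each candidate position is scored by the union of the ray-estimated perceptions from the three viewpoints using the independence-based formula stated earlier, positions closer than 3 m to a door are discarded as infeasible, and, for brevity, arithmetic crossover over coordinates is used instead of the binary-string crossover described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented placeholder geometry: a 10 m x 10 m room with three viewpoints
# (doors) on its boundary, each looking towards the room centre.
VIEWPOINTS = np.array([[0.0, 5.0], [5.0, 0.0], [10.0, 5.0]])
LOOK_AT = np.array([5.0, 5.0])
SCOPE = np.radians(60.0)   # angular visual scope per viewpoint
RADIUS = 0.5               # sculpture modelled as a disc of 0.5 m radius
CLEARANCE = 3.0            # constraint ||x - xo|| >= 3 for every door position
N_RAYS = 400

def perception(viewpoint, position):
    # Ray-based estimate P = np/nv: fraction of random directions inside the
    # visual scope that hit the disc representing the sculpture.
    dx, dy = LOOK_AT - viewpoint
    centre_dir = np.arctan2(dy, dx)
    angles = centre_dir + rng.uniform(-SCOPE / 2, SCOPE / 2, N_RAYS)
    to_obj = position - viewpoint
    dist = np.linalg.norm(to_obj)
    obj_angle = np.arctan2(to_obj[1], to_obj[0])
    half_width = np.arcsin(min(1.0, RADIUS / dist))
    hits = np.abs((angles - obj_angle + np.pi) % (2 * np.pi) - np.pi) <= half_width
    return hits.mean()

def fitness(position):
    if any(np.linalg.norm(position - vp) < CLEARANCE for vp in VIEWPOINTS):
        return 0.0  # infeasible: the sculpture would obstruct an entrance
    p = [perception(vp, position) for vp in VIEWPOINTS]
    # union of independent perception events (inclusion-exclusion)
    return 1.0 - float(np.prod([1.0 - pi for pi in p]))

# A very small genetic search: truncation selection, arithmetic crossover
# and Gaussian mutation over candidate sculpture positions.
population = rng.uniform(1.0, 9.0, size=(30, 2))
for generation in range(40):
    scores = np.array([fitness(x) for x in population])
    parents = population[np.argsort(scores)][-10:]          # keep the fittest
    pairs = rng.integers(0, len(parents), size=(len(population), 2))
    weights = rng.uniform(size=(len(population), 1))
    population = weights * parents[pairs[:, 0]] + (1 - weights) * parents[pairs[:, 1]]
    population += rng.normal(0.0, 0.2, population.shape)    # mutation
    population = np.clip(population, 0.5, 9.5)

best = max(population, key=fitness)
print(best, fitness(best))
```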
Figure 3
Best position for the sculpture in a plan view (a); in a perceptive view (b); for VP1, P1=.157 (c); for VP2, P2=.063 (d); for VP3, P3=.048 (e).
Figure 4
Second best position for the sculpture in a plan view (a); in a perceptive view (b); for VP1, P1=.072 (c); for VP2, P2=.152 (d); for VP3, P3=.033 (e).
Figure 5
Location in an urban scene, where a new building is subject to perception considerations.
Figure 6
Zoomed out rendering of the urban scene in Figure 5, where a new building is subject to perception considerations from three viewpoints.
The resulting best solution after 40 generations is shown in Figure 3a in a plan view and in Figure 3b in a perspective view, where the perception rays are seen. The circles in the figures mark the boundaries at 3.0 m distance from the doors. In Figure 3c-e the space is shown from the respective viewing positions VP1, VP2, and VP3. The best position of the sculpture is at the edge of the circle in front of viewing position VP1. This position has the highest union of perceptions in the feasible region, namely PU=.307. This is composed of the perceptions P1=.157 at VP1, P2=.063 at VP2, and P3=.048 at VP3.

For comparison, the second best position is shown in Figure 4, namely the perceptive plan view in Figure 4a, the perspective perceptive view in Figure 4b, and the perceptive views from VP1, VP2, and VP3 in Figure 4c-e respectively. The union of the perceptions is PU=.255, that is, 17% lower compared to the best solution in Figure 3. The union is composed of the perceptions P1=.072 at VP1, P2=.152 at VP2, and P3=.033 at VP3. The results demonstrate a common piece of design knowledge, namely that when one aims to maximize the perception of an object in a space with several possible viewing positions, it is preferable to position the object to have a high perception for at least one of the possible positions, namely VP1, rather than having several moderate perceptions, i.e. without any outstandingly high one. The lower perceptions in Figure 4c demonstrate the implications of the Cauchy function in Figure 1b, where deviation from the frontal direction for an object, in particular at a near distance from the observer, yields a reduction in probability density, i.e. visual attention is diminished in this case.

Experiment Nr. 2
A second experiment concerns the perception of a building in an urban context from three viewpoints that are prominent locations in the surroundings of the building. The location in an urban scene, where a new building is subject to perception considerations, is seen in Figure 5. The zoomed out rendering of the scene in Figure 5 is shown in Figure 6, where the three viewing positions VP1, VP2, and VP3 are indicated. Figure 7 schematically shows the floor plan of the urban situation, as well as the perception cones and vision scopes belonging to the viewing positions, which are the endpoints of streets entering a square where the building is located. Figure 7b shows random vision rays having a uniform pdf with respect to the vision angle, modeling the visual scopes for the three viewing positions. Figure 7c shows those rays among the vision rays that hit the building subject to perception, for the perception computation. The results from the perception fusion for the
Figure 7
Scheme of an urban situation, where a building is subject to perception analysis from three viewpoints in plan view (a); random vision rays with uniform pdf w.r.t. the vision angle modeling visual scopes for three viewing positions VP1, VP2, and VP3 (b); the rays among the vision rays that hit the building subject to perception (c).

respective building envelope portions are shown in Figure 8, where the numbers display the fused perception associated with the respective portion. From the analysis it is seen that the part of the envelope that is most intensely perceived from the three viewpoints is the area in front of VP2, while the second most intensely perceived part is the building corner oriented towards VP1, which is expected considering the influence of the distance lo in the perception computations in (5). The information obtained from perception fusion is of relevance for a designer determining formal and functional details of the envelope, for instance determining the position of the entrance during conceptual design. Figure 9 shows the fused perceptions of the building envelope from the three viewpoints per envelope element with a vision scope that is 20% narrower compared to Figure 8.

CONCLUSIONS
A method for fusion of perceptions is presented and demonstrated with two examples from architectural design. The probabilistic treatment, where perception quantifies the chance that an unbiased observer notices an environmental object, is accomplished through fusion of perceptions. The method of quantified union of perceptions has been an unresolved issue up till now, and it is resolved in the present work. The fusion by probabilistic union yields significant information for designers. With the presented approach an object is to be perceived from several viewpoints at the same time. Such abstraction is necessary, since a precise analysis of the perceptions is a formidable issue due to the abundant visual scene information. The use of perception fusion as a constrained design objective has been demonstrated by coupling the method with a probabilistic evolutionary algorithm performing the constrained optimization. The combination of the two probabilistic methods is a powerful tool for designers, as it permits the treatment of architectural design problems that are highly constrained and involve many perception-related demands.
Figure 8
Fused perceptions of the building envelope from the three viewpoints per envelope element.
Figure 9
Fused perceptions of the building envelope from the three viewpoints per envelope element with a vision scope that is 20% narrower compared to Figure 8.
Although the examples presented are rather basic, the method is generic and yields highly appreciable results in diverse applications in the areas where perception plays a role, such as architecture, urbanism, interior and industrial design, as well as robotics.

ACKNOWLEDGEMENT
Technical design assistance by Architect Paul de Ruiter, who provided us with the scene presented in the second experiment, is gratefully acknowledged.

REFERENCES
Bhatt, M, Hois, J and Kutz, O 2011, 'Ontological Modelling of Form and Function for Architectural Design', Applied Ontology, pp. 1-32.
Bittermann, MS and Ciftcioglu, Ö 2008, 'Visual perception model for architectural design', Journal of Design Research, 7(1), pp. 35-60.
Bülthoff, H, Wallraven, C and Giese, M 2007, 'Perceptual robotics' in B Siciliano and O Khatib (eds), The Springer Handbook of Robotics, Springer, pp. 1481-1495.
Ciftcioglu, Ö, Bittermann, MS and Sariyildiz, IS 2006a, 'Studies on visual perception for perceptual robotics', ICINCO 2006 - 3rd Int. Conf. on Informatics in Control, Automation and Robotics, Setubal, Portugal, pp. 468-477.
Ciftcioglu, Ö, Bittermann, MS and Sariyildiz, IS 2006b, 'Towards computer-based perception by modeling visual perception: a probabilistic theory', 2006 IEEE Int. Conf. on Systems, Man, and Cybernetics, Taipei, Taiwan, pp. 5152-5159.
Foster, J 2000, The Nature of Perception, Oxford University, Oxford.
Gibson, JJ 1986, The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates, Hillsdale, New Jersey.
Goldberg, DE 1989, Genetic Algorithms, Addison Wesley, Reading, MA.
Knill, DC, Kersten, D and Mamassian, P 2008, 'Implications of a Bayesian formulation for visual information for processing for psychophysics' in Perception as Bayesian Inference, Cambridge University, Cambridge, pp. 239-286.
Knill, DC and Richards, W 2008, Perception as Bayesian Inference, Cambridge University, Cambridge, UK.
Marr, D 1982, Vision, Freeman, San Francisco.
O'Regan, JK, Deubel, H, Clark, JJ and Rensink, RA 2000, 'Picture changes during blinks: looking without seeing and seeing without looking', Visual Cognition, 7(1-3), pp. 191-211.
Palmer, SE 1999, Vision Science, MIT, Cambridge, MA.
Smith, D 2001, The Problem of Perception, Harvard University, Cambridge, MA.
Treisman, AM 2006, 'How the deployment of attention determines what we see', Visual Cognition, 14(4), pp. 411-443.
Treisman, AM and Gelade, G 1980, 'A feature-integration theory of attention', Cognitive Psychology, 12(1), pp. 97-136.
Wertheim, T 1894, 'Ueber die indirekte Sehschaerfe', Z Psychol Physiol Sinnesorg, 7, pp. 172-189.
Yuille, AL and Bulthoff, HH 2008, 'Bayesian decision theory and psychophysics' in DC Knill and W Richards (eds), Perception as Bayesian Inference, Cambridge University, Cambridge, UK, pp. 123-161.
Zalzala, AMS and Fleming, PJ 1997, Genetic Algorithms in Engineering Systems, IEE Control Eng., Series 55, Cambridge University, New York.
Ambient Surveillance by Probabilistic-Possibilistic
Perception
Michael S. Bittermann1, Ozer Ciftcioglu2
Delft University of Technology, The Netherlands
http://bk.tudelft.nl/en/research/research-projects/computational-intelligent-design
1m.s.bittermann@tudelft.nl, 2o.ciftcioglu@tudelft.nl
INTRODUCTION
Ambient Intelligence refers to electronic environments that are sensitive and responsive to the presence of people (Aarts and Encarnacao, 2006). Such electronic environments are called ambient environments, referring to the surveillance of a physical ambience in the computer screen environment. Ambient Intelligence involves different fields, including electrical engineering, computer science, industrial design, human-machine interaction, and cognitive sciences. It stems from the combination of the three concepts ubiquitous computing, ubiquitous communication, and intelligent user-friendly interfaces. It is considered to provide a vision of the information society, where greater user-friendliness, more efficient services support, user-empowerment, and support for human interactions is aimed for. In this vision people are surrounded by intelligent intuitive interfaces that are embedded in different kinds of objects, yielding an environment that is capable of recognizing and responding to the presence of different individuals in a seamless, unobtrusive or invisible way (Ducatel et al., 2001). The European Commission's Information Society Technologies Advisory Group (ISTAG) considers Ambient Intelligence an important concept, as they predict that the concept will be applied to everyday objects such as furniture, clothes, vehicles, roads and smart materials. According to ISTAG, Ambient Intelligence implies machine awareness of the specific characteristics of human presence and personalities, taking care of
Figure 1
A door's functional space is not fully encompassed by the field of view of two cameras (a) (Bhatt et al., 2009); the functional space is fully encompassed by the field of view of two cameras (b); the space is fully encompassed by the field of view of three cameras (c).

needs and being capable of responding intelligently to spoken or gestured indications of desire (Weyrich, 1999). Benefits in some practical applications have been reported, see e.g. Augusto and Shapiro (2007), Streiz et al. (2007), Ramos et al. (2008), Augusto and Nugent (2006). Examples of application areas are personal assistance by mobile devices (Richard and Yamada, 2007), clothing (Boronowsky et al., 2006), entertainment (Saini et al., 2005; Dornbush et al., 2007), office and meeting rooms (Waibel et al., 2010), and home environments (Aarts and Diederiks, 2007; Nakashima, 2007). The benefits in these applications concern enhanced security and utility. Concerning security, an issue of common relevance is the surveillance of objects in buildings, e.g. see (Takemura and Ishiguro, 2010). The objects may concern building elements such as doors, hallways, etc., as well as valuable articles. For instance, in an environment the monitoring of people passing through the doors may be of relevance for security purposes, so that the locations where surveillance cameras are suitably placed, and the number of cameras used to supervise the environment, are important issues to consider. This may be relevant both during the design of an ambient environment, as well as during the assessment of the surveillance provided for an existing environment. In an existing work this issue is addressed by verifying if the functional space of a door is fully covered by supervision cameras (Bhatt et al., 2009), which is a requirement to guard the traffic between the rooms. This is seen in a plan view in Figure 1a, where the door and its functional space, which is shown by a rectangle, are not fully covered by the fields of view of two cameras. This yields requirement inconsistency. Figure 1b shows a situation where the door and its functional space are entirely within the fields of view of the two cameras, thereby complying with the requirement. In Figure 1c three cameras are used, and the consistency requirement is also fulfilled.

In an ambient intelligent system, human supervision may be important in case continuous in-situ monitoring of scenes is demanded for instant human intervention. In such a case, the functional space shown in Figure 1 is to be supervised by a human through monitor watching. Here human perception plays an important role. The actual scene is surveyed by the cameras, and at this stage human perception is not in play. However, the image of the functional space is propagated to a screen, and then human perception via the screen becomes an issue of assessment. Such assessments should be quantified to understand the difference among the probable camera positions, or among cases where different numbers of cameras are used. It is emphasized that two, three, or more cameras may be used to cover the functional space entirely, as exemplified in Figures 1b and 1c, so that compliance with the consistency condition described above can be achieved in several ways that are not equivalent with respect to surveillance. As the human should realize the presence of objects and events in his mind, which is a complex brain process involving uncertainty, quantitative assessment of the human perception in the ambient environment surveillance case becomes desirable and is challenging to accomplish. Comparing the situations in Figures 1b and 1c, qualitatively the three cameras in Figure 1c are favorable with respect to the human perception of the functional space, providing more visual infor-
mation about the object to the human. Following the approach of existing works, such as Bhatt et al. (2009), surveillance in Figures 1b and 1c is considered to be the same, as requirement consistency is treated as a binary statement. Binary verification of the requirement compliance gives some indication about the effectiveness of the camera surveillance. However, this may not be enough for the case of human supervision, which is based on human perception. Based on this view, the present work intends to make some steps forward along this line, providing a measured assessment of the quality of surveillance of an ambient environment based on perception modeling. A measured assessment is desirable in particular when optimal solutions are sought during the design of an environment, for instance with respect to maximizing surveillance by optimal placement and orientation of sensors, or minimizing the number of cameras while sufficient surveillance is provided. We note that in this work we assume that there is no automated camera system for object recognition involved, although even in that case, differentiation among alternative camera utilizations, in order to determine the effectiveness of the machine recognition, still remains an issue.

The organization of the paper is as follows. The methodology section describes the treatment of the probabilistic and possibilistic aspects of the surveillance. The computer experiment section describes an example application of the method for an ambient environment, and the section is followed by conclusions.

METHODOLOGY
This research aims to make an assessment of the quality of human surveillance of an object based on camera-sensed information. When a human views a camera-sensed scene on a screen, in order to give a meaningful interpretation to the scene he infers the information about the camera position and orientation from the scene, without having been explicitly informed about these. This process of assuming a camera position by the human is called immersion. To model this early stage of the ambient environment analysis by a human, probability-theoretic computations are used to simulate the perception of objects by a human who is immersed in the scene at the camera viewpoints.

Probabilistic Perception Revisited
Due to the complexity of the brain processes underlying perception, perception is to be modeled as a probabilistic event. That is, there is a chance to see an object, meaning the presence of the object is realized in mind, which implies a chance of overlooking the object, too. We can term this the uncertainty of human vision (Rensink et al., 1997; Bittermann and Ciftcioglu, 2008). For a single unbiased observer this uncertainty is quantified as described in Ciftcioglu et al. (2006b) and Bittermann and Ciftcioglu (2008). Consider the basic geometry as shown in Figure 2a. P represents an observer's point, where he is viewing an object. We consider a perception plane located at distance lo from the observer, and a scope of vision plane orthogonal to the perception plane, having the observer's point and the object in it. The intersection of the perception plane and the scope of vision plane is the y-axis. A line perpendicular to the perception plane, passing through the point P, is the x-axis. The observer has a visual scope in the scope of vision plane, defined by the angle θS=π/2, which is termed the vision angle. He is viewing the object that subtends the angle θb-θa. An unbiased observer is modeled, i.e. he has no preference for any direction within the visual scope. This means the probability density function (pdf) with respect to θ is given by fθ(θ)=1/θS, as seen in Figure 2b upper. As the object subtends the perception angle θb-θa, it has an associated perception P=(θb-θa)/θS, shown by the gray shaded area in Figure 2b upper. P quantifies the probability that the object is mentally realized by the observer. The perception can be computed along the y-axis in Figure 2a by radially projecting the object from P on the y-axis. It yields a line segment, spanning ya and yb, as seen in the figure. The uniform pdf with respect to the vision angle θ is given by fθ(θ)=1/(π/2) and corresponds to the follow-
Figure 2
An object projected on the perception plane and perceived from P (a); sketch of the probability density function (pdf) characterizing perception with respect to θ (b upper); pdf characterizing perception with respect to the y direction for lo=2 (b lower).
ing probability density with respect to y (Bittermann and Ciftcioglu, 2008)

f_y(y) = (2/π) · lo / (lo² + y²)    (1)

The plot of (1) for lo=2 is seen in Figure 2b lower. The perception is computed by

P = ∫_{ya}^{yb} f_y(y) dy    (2)

and the result is shown by the gray shaded area in the figure. It is emphasized that the sizes of the gray shaded areas in Figure 2b upper and 2b lower are the same. We note that for the perception of a three-dimensional object both the vision angle and the perception angle become respective solid angles.

Union of Perception Events
We emphasize that for the surveillance of the ambient environment being considered, the consistency requirement mentioned above stipulates that the functional space should be entirely encompassed by multiple cameras' fields of view. This means a human observing the scene will obtain the information from multiple cameras at the same time. In this respect we consider the case shown in Figure 1, where a single camera is not sufficient to comply with the consistency requirement, and in this study we consider the perceptions by means of three cameras, denoted camera 1, camera 2 and camera 3 in Figure 3. The scene subject to investigation is shown in Figure 3a, presenting a plan view of two rooms connected by a door and an associated functional space shown by a rectangular box around the door. The functional space is subject to surveillance via the three cameras, where the visible portions of this space respectively subtend the angles θ1, θ2, and θ3, as indicated by the dark shaded areas in the figure. The dashed lines in the figure indicate the boundaries of the cameras' fields of view, where their associated angles θS1, θS2, θS3 are taken to be the same in this example. The intersection among the three camera scopes forms a universe of discourse for the surveillance events, as shown in Figure 3b by means of bold dashed lines. We define the following three perception events within this universe, as seen in Figure 3c. The event that a human observer, who is immersed at camera 1, becomes aware of the functional space that is at the same time within the scopes of camera 2 and camera 3 is denoted by event E1. Conversely, the perception event from camera 2 that is at the same time within the fields of view of camera 1 and camera 3 is denoted by E2. In the same way, the perception event from camera 3 that is at the same time within the fields of view of camera 1 and camera 2 is denoted by E3. The regions in the scene corresponding to the events are shown in Figure 3c, where the space belonging to E1 is delimited by
Figure 3
Functional space of a door subject to surveillance by means of three camera sensors (a); universe of discourse for the surveillance, where θS1, θS2, and θS3 denote the respective fields of view of the cameras (b); perception events E1, E2, and E3, their union, and intersection (c); Venn diagram corresponding to the events in Figure 3c (d).

means of red dashed lines, for E2 by means of blue dashed lines, and for E3 by means of orange dashed lines. The probabilities of the perception events are obtained by P(E1)=θ1/θS1, P(E2)=θ2/θS2, and P(E3)=θ3/θS3. It is to be noted that E1, E2, and E3 are independent events. With respect to the ambient surveillance assessment being aimed for in this work, the event subject to computation is the union of the perception events PU=E1∪E2∪E3. The union refers to the event that the observer becomes aware of the functional space either via immersion at camera 1, camera 2, camera 3, or via combinations among them at the same time, while the consistency condition, namely that the event is to take place within all cameras' fields of view, is fulfilled at the same time as a boundary condition. The region of space in the scene that corresponds to E1∪E2∪E3 is delimited by the white dashed line in Figure 3c. The region of space in the scene that corresponds to E1∩E2∩E3 is visualized in the same figure by means of a yellow dashed line. Figure 3d shows a Venn diagram corresponding to the perception events in Figure 3c.

The regions corresponding to the universe of discourse and encompassing the perception events are shown in 3D renderings in Figure 4. Figure 4a shows the fields of view of the cameras from top view in red color, as well as the cones encompassing the respective perception events E1, E2, and E3 in yellow color. The same regions are shown in Figure 4b from a perspective view. Figure 4c shows the universe of discourse from top view and Figure 4d from a perspective view. Figure 4e shows the region corresponding to E1∩E2∩E3 from plan view, and Figure 4f shows the same region from a perspective view. The probabilities P(E1), P(E2), and P(E3) are obtained by computations similar to those given by (2) but for three-dimensional space, where θ becomes the solid angle Ω.

Converting the Probability into Possibility
It is emphasized that the computations above model the perception of observers who are viewing the functional space, being present at all three camera positions. However, the scene is actually viewed on a monitor screen and not directly from locations in the physical environment. That is, no actual object is being perceived in the ambient environment case, but a visual representation of the scene on a screen is being perceived. This yields the immersion phenomenon, which we can also term virtual perception. In the ambient environment case, instead of perception alone an assessment of the perception is to be carried out, and this assessment should be expressed in possibilistic terms, namely as a possibility of perception. This means the probability quantifying the perception of the object by the observer should be converted to a possibility of perception. This is shown in Figure 5. Figure 5a shows the perceptions of the functional space from the three cameras. The probability density functions fθ(θ) are integrated along the angle dimension θ, yielding the perceptions P(E1), P(E2), and P(E3). It is to be noted that each of the three integrals has its center point at θ=0, as seen in the figures. This is due to the surveillance purpose, where the cameras are oriented
Figure 4
Fields of view of the cameras denoted by C1, C2, C3 and the cones in which the perception events take place, from top view (a); from a perspective view (b); universe of discourse from a top view (c); from a perspective view (d); the region corresponding to E1∩E2∩E3 from top view (e); from a perspective view (f).
in such a way that the object subject to perception is located at the center of the respective fields of view of the cameras. The probability of the union of the perception events P(E1∪E2∪E3) is shown by the hatched area in Figure 5b. Being an integral of the uniform pdf fθ(θ)=1/θS, P(E1∪E2∪E3) corresponds to an angle domain θ', as seen in the figure. It is noted that P(E1∪E2∪E3) is also centered at θ=0, this being the reference point of the perception computation in the scene as a result of the immersion phenomenon. The pdf has a possibilistic density counterpart, namely a triangular possibility density function, as seen in the figure. It is noted that the possibility density is maximum at the place that corresponds to the expected value of the uniform probabilistic density with respect to θ, namely θ=0. Therefore, next to being the reference point for the perception computation simulating the immersion, the point θ=0 also represents a reference point for the perception possibility computation on the monitor, as zero refers to the center of the fields of view of the cameras, i.e. the center of the monitoring screen. For the possibility assessment, the possibility density is subject to integration over the angle domain θ', where the integration starts from θ=0, yielding the dark gray shaded area in Figure 5b, the size of which quantifies the possibility of perception. It is emphasized that the integration starts from zero, i.e. in the middle of the screen, as to human perception, the possibility of perception is assessed starting from the middle of the screen. θ' starts from zero and maximally extends to cover the interval from -θS/2 to +θS/2, so that its maximum value becomes θS. Figure 5c shows a sketch of the relationship between the possibility of perception versus the

Figure 5
Perception of the functional space from one of the cameras (a); conversion of the union of the perceptions to possibility of perception (b); possibility of perception versus perception as sketch (c); as plot (d).
Figure 6
Camera picture taken from camera 1, where P(E1)=.246 (a); from camera 2, where P(E2)=.207 (b); from camera 3, where P(E3)=.310 (c).
Figure 7
Vision rays from a plan view (a); from a perspective view (b); perception rays from a plan view (c); from a perspective view (d).
sect the functional space in a plan view, and these are termed perception rays as they simulate the perception events E1, E2, and E3. The same perception rays are shown in Figure 7d in a perspective view. The perception event probability P(E) is obtained by P(E)=np/nv, where np denotes the number of perception rays, and nv the number of vision rays.

The results from the experiment are P(E1)=.246, P(E2)=.207, and P(E3)=.310, so that P(E1∪E2∪E3)=.588, yielding the possibility of perception pp=.830. This quantifies the possibility of perceiving an event at the functional space of the door based on the camera positions considered. It is interesting to investigate what the difference in perception possibility is in case two cameras are used instead of three. Considering the case that camera 1 is not used, then P(E2∪E3)=.453, yielding the perception possibility pp=.701. In case camera 2 is not used, then P(E1∪E3)=.480, yielding the perception possibility pp=.729; and for camera 3 not being used, P(E1∪E2)=.402, so that the possibility becomes pp=.642. Thus, compared to using two cameras, use of three cameras increases the possibility of perception by 18.4%, 13.9%, and 29.3% respectively for the three cases. It is also interesting to consider using only one camera compared to using three cameras. Using camera 1 exclusively, the perception possibility is pp=.431, so that the three cameras entail an increase of 93%; using camera 2 exclusively the possibility is pp=.371, implying an increase for the three cameras of 124%; and in case exclusively camera 3 is used the perception possibility is pp=.524, implying an increase of 58% for the case of using the three cameras. This information is essential in determining the surveillance level of environments, and in particular provides information on the remaining surveillance in the case of a camera failure, which provides an indication of the robustness of a surveillance situation.
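A compact numerical illustration of the two-stage computation reported above is given below (this is our sketch, not the authors' code). The union of the independent perception events is taken by inclusion-exclusion, and the possibility is obtained by integrating a symmetric triangular possibility density, centred at θ=0 over the scope, across the centred domain θ' that corresponds to the union probability. Under these assumptions the conversion reduces to pp = 1 - (1 - P)², and the sketch reproduces the possibility values reported above (the three-camera union comes out as .587 rather than .588 only because the published perceptions are rounded inputs).

```python
import numpy as np

def union_of_independent(probabilities):
    # P(E1 u ... u En) for independent events: 1 - prod(1 - Pi),
    # which equals the inclusion-exclusion expansion used in the text.
    return 1.0 - float(np.prod([1.0 - p for p in probabilities]))

def possibility_of_perception(p_union, theta_s=np.pi / 2, n=20001):
    # The union probability corresponds to a centred angle domain
    # theta' = p_union * theta_s.  The possibility is the integral of a
    # symmetric triangular possibility density (peak at theta = 0, support
    # [-theta_s/2, +theta_s/2]) over that centred domain.
    theta = np.linspace(-theta_s / 2, theta_s / 2, n)
    density = (2.0 / theta_s) * (1.0 - np.abs(theta) / (theta_s / 2))
    inside = np.abs(theta) <= p_union * theta_s / 2.0
    step = theta[1] - theta[0]
    return float(np.sum(density[inside]) * step)   # ~ 1 - (1 - p_union) ** 2

perceptions = {"camera 1": 0.246, "camera 2": 0.207, "camera 3": 0.310}

p_all = union_of_independent(perceptions.values())
print(round(p_all, 3), round(possibility_of_perception(p_all), 3))  # ~0.587 0.830

# Dropping one camera at a time gives the reported two-camera values
# (.453/.701, .480/.729, .402/.642).
for dropped in perceptions:
    rest = [p for name, p in perceptions.items() if name != dropped]
    p_two = union_of_independent(rest)
    print(dropped, round(p_two, 3), round(possibility_of_perception(p_two), 3))
```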
CONCLUSIONS
A probabilistic-possibilistic approach that models surveillance of a scene by a human via three cameras is described. The first stage in camera-based human surveillance is the immersion phenomenon, and this is modeled in the presented work by means of perception computations that are probabilistic in nature. These computations reflect the fact that remembrance of visual information processed by the human vision system is not certain, i.e. it is subject to probabilistic considerations. The second stage of the surveillance is the conversion of the perception into possibility. The possibilistic treatment accounts for the fact that the observation event does not concern perception of an object from an actual location in space, but perception of a camera-sensed image of the object on a monitor. This way perception is assessed in the form of a fuzzy statement. In the same way as probability is due to integration of a probability density over some physical domain, so that it is associated to an event, possibility is computed by means of integration of an associated possibility density function belonging to the same domain. The domain in the present case is the vision angle. The computer experiments presented in this paper confirm the qualitative statement that the number of cameras influences the possibility of perception. The probabilistic-possibilistic treatment described in this paper uniquely quantifies this possibility, providing precision assessment of surveillance of ambient environments. This implies that through the novel approach, subtle differences among surveillance situations are distinguished, allowing for more conscious decision making. This may have an important place in diverse applications, such as domestic healthcare and the safety and security of buildings and cities, applying both to existing situations as well as to the design of new environments. It is interesting to note that different stakeholders may use the method for different purposes, such as verifying if surveillance is sufficient, or verifying that it is not excessive, for instance for the sake of the privacy of users.

REFERENCES
Aarts, E and Encarnacao, J 2006, 'Into Ambient Intelligence' in E Aarts and J Encarnacao (eds), True Visions - The Emergence of Ambient Intelligence, Springer, pp. 1-16.
Aarts, EHL and Diederiks, E 2007, Ambient Lifestyle: from Concept to Experience, Book Industry Services.
Augusto, J and Shapiro, D 2007, Advances in Ambient Intelligence, Frontiers in Artificial Intelligence and Applications, IOS Press, Amsterdam.
Augusto, JC and Nugent, CD 2006, Designing Smart Homes, LNCS, Springer, Heidelberg.
Bhatt, M, Dylla, F and Hois, J 2009, 'Spatio-terminological inference for the design of ambient environments' in Spatial Information Theory, Springer, pp. 371-391.
Bittermann, MS and Ciftcioglu, Ö 2008, 'Visual perception model for architectural design', Journal of Design Research, 7(1), pp. 35-60.
Boronowsky, M, Herzog, O, Knackfuß, P and Lawo, M 2006, 'Empowering the mobile worker by wearable computing - wearIT@work', J. of Telecommunications and Information Technology, (2), pp. 9-14.
Ciftcioglu, Ö, Bittermann, MS and Sariyildiz, IS 2006a, 'Autonomous robotics by perception' in SCIS & ISIS 2006, Joint 3rd Int. Conf. on Soft Computing and Intelligent Systems and 7th Int. Symp. on Advanced Intelligent Systems, Tokyo, Japan, pp. 1963-1970.
Ciftcioglu, Ö, Bittermann, MS and Sariyildiz, IS 2006b, 'Towards computer-based perception by modeling visual perception: a probabilistic theory' in 2006 IEEE Int. Conf. on Systems, Man, and Cybernetics, Taipei, Taiwan, pp. 5152-5159.
Dornbush, S, Joshi, A, Segall, Z and Oates, T 2007, 'A human activity aware learning mobile music player' in 2007 Conf. on Advances in Ambient Intelligence, IOS Press, pp. 107-122.
Ducatel, K, Bogdanowicz, M, Scapolo, F, Leijten, J and Burgelman, J-C 2001, 'Scenarios for ambient intelligence in 2010', IPTS, Seville, pp. 1-55.
Nakashima, H 2007, 'Cyber assist project for ambient intelligence' in JC Augusto and D Shapiro (eds), Advances in Ambient Intelligence, IOS Press, pp. 1-20.
Ramos, C, Augusto, JC and Shapiro, D 2008, 'Ambient intelligence: The next step for artificial intelligence', IEEE Intelligent Systems, 23(2), pp. 15-18.
Rensink, RA, O'Regan, JK and Clark, JJ 1997, 'To see or not to see: The need for attention to perceive changes in scenes', Psychological Science, 8(5), pp. 368-373.
Richard, N and Yamada, S 2007, 'Two issues for an ambient reminding system: context awareness and user feedback' in JC Augusto and D Shapiro (eds), Advances in Ambient Intelligence, IOS Press, pp. 123-142.
Saini, P, de Ruyter, B, Markopoulos, P and van Breemen, A 2005, 'Assessing the effects of building social intelligence in a robotic interface for the home', Interacting with Computers, 17, pp. 522-541.
Streiz, NA, Kameas, AD and Mavrommati, I 2007, The Disappearing Computer, LNCS, Springer, Heidelberg.
Takemura, N and Ishiguro, H 2010, 'Multi-camera vision for surveillance' in H Nakashima et al. (eds), Handbook of Ambient Intelligence and Smart Environments, Springer, pp. 149-168.
Waibel, A, Stiefelhagen, R, Carlson, R, Casas, J, Kleindienst, J, Lamel, L, Lanz, O, Mostefa, D, Omologo, M, Pianesi, F, Polymenakos, L, Potamianos, G, Soldatos, J, Sutschet, G and Terken, J 2010, 'Computers in the human interaction loop' in H Nakashima (ed), Handbook of Ambient Intelligence and Smart Environments, Springer, pp. 1071-1109.
Weyrich, C 1999, 'Orientations for workprogramme 2000 and beyond', Information Society Technologies Advisory Group (ISTAG), Luxemburg.
The Jacobs' Urban Lineage Revisited
Claudio Araneda
Universidad del Bío-Bío, Chile
claraneda73@gmail.com
Abstract. Since the almost simultaneous publication of Kevin Lynch's and Jane Jacobs' seminal and pioneering urban manifestos, the discipline has been increasingly permeated by what could rightly be called the phenomenological impulse. While sharing methodological principles, however, they represent two very distinct approaches to the study of urban matters, a distinction rooted in their chosen object of study. The drawing of this distinction constitutes this research's point of departure. Its fundamental aim is to help further the development of what we characterize as the Jacobs lineage of urban thought. To this end, the paper outlines methodological rudiments for the development of a tool that would allow the beginning of a systematic study of the patterns of people's presence and absence in urban space (streets). We call it the Urban Polaroid. This work is part of a government funded (fondecyt 11110450) project.
Keywords. Urban phenomenon; phenomenology; Urban Polaroid; space syntax; Jane
Jacobs.
INTRODUCTION
Lewis Mumford famously branded Jane Jacobs' work as "home remedies for the urban cancer" (Miller, 1986). It was more than the derisive characterization of an opinionated and well-read urban scholar. It reflected the mood of a whole generation of urban planners that have systematically sought the source of urban knowledge in the study of already built cities and how we perceive them. This archaeological kind of approach to urban studies has evolved, via Space Syntax, into a highly sophisticated and successful methodological corpus for the understanding of urban space. We call this the Lynch lineage of urban thought. This said, we argue that there is another, equally distinct lineage. One that, springing from Jacobs' seminal work (for reasons that will presently be discussed), has remained markedly underdeveloped. This lineage seeks urban knowledge not in the study of the perception of urban space (3D and 2D) but in the perception of that other highly differentiated spatial manifestation in the city: people. This paper offers methodological rudiments to further the development of this lineage. Its central argument connects with Hillier's fundamental critique regarding urbanism's historical and atavistic
tendency to dogmatically prescribe, as well as to the sheer lack of analytical tools for urban analysis. It also connects with Ratti's critique of Hillier's work regarding the leap of faith implicit in Space Syntax predictions based on the axial map. It differs from both, however, in the object of study to which we apply ourselves.

DRAWING A DISTINCTION BETWEEN THE LYNCH AND THE JACOBS' LINEAGE
Written from the point of view of a pedestrian sensible to the unique and ever understudied phenomenon of perceiving another human being, Jacobs' manifesto Death and Life of Great American Cities (1961) was an open and frontal attack against the planning tradition advocated by leading figures such as Le Corbusier and Ebenezer Howard and their anti-autopoietic impulses towards the ruralization of the urban universe and the urbanization of the rural universe respectively. Both of them were united by their reliance on the new means of transport as a solution to urban ailments, a stance widely trumpeted by Soria y Mata and already implemented in two highly praised precedents: Barcelona's eixample and the rebuilding of Paris. A school of urban thought that, at least in the United States, had by then become the school of choice of both planners of academic pedigree and business speculators alike. It was the already proven pernicious consequences of this school of thought that Jacobs famously perceived in the economically informed interventions of Robert Moses in Manhattan.
Against this tradition, one of her prevailing concerns was, as she called it, "the social behavior of people in the cities", meaning by cities the streets we walk every day. Whereas it is a fact that her observations lacked rigorous analytical backup, her general methodological framework and object of study were unequivocal: the experience of walking through the city focusing on the patterns of people's presence. Not for nothing is she credited with having introduced the notion of "eyes on the street", by which she meant not "private eyes" but presence and co-presence in the Hillierian sense, particularly that of residents.

Lynch's urban approach (1960), on the other hand, was also rooted in the pedestrian's perception, but this time of urban space. That is to say, Lynch's object of study was the perception of constant spatial patterns through our daily navigation of the city streets, the current validity of his approach becoming manifest in freshly opened avenues of urban research (Morello and Ratti, 2008). Indeed, it has been this latter lineage of urban studies that has seen the most dramatic developments in the last decades. This approach, characterized by its intrinsically archaeological nature, concerns itself with the perception of urban space (inhabited or in a ruin-like state) from a geometrical or topological point of view, depending on the emphasis placed.
A representative and consistent offspring of this lineage of urban studies is the groundbreaking body of work developed by the Space Syntax Lab at the Bartlett School of Architecture, UCL, in London and all that has sprouted from it. Hillier, its founding father, succeeded in developing a precise tool for the study of architectural and urban layouts, discovering in the process a close relationship between their topological configuration and the patterns of pedestrian flow they describe (Hillier, 1996). It has been its intrinsically non-discursive, phenomenological stance that has rendered most of its findings irrefutable, setting a new standard not only in the methodological consistency of urban analysis but also in urban data representation.
Another prolific offspring of this lineage has been the work developed at the Senseable Lab at MIT directed by Carlo Ratti. He and his team have mainly focused on the analysis of urban data in the form of electromagnetic pulses emitted by electronic devices (chiefly mobile telephones) carried by people during their daily urban navigations. Interestingly enough, it has been precisely Carlo Ratti, one of the most effective critics of Space Syntax's techniques, who has brought Space Syntax principles to their last logical consequences by developing Digital Elevation Models (DEMs) with a view to complementing Space Syntax's reductive two-dimensional approach (Ratti, 2005). That is to say: a three-dimen-
sional version of Space Syntax that aims to incorporate sophisticated simulations of pedestrian movement and view sheds, among other factors.
This said, we argue that despite the great progress made by these representative techniques of urban analysis, they remain fundamentally speculative with regard to the perception of people in space, in the sense that none of them approaches it from an experiential point of view. That is, from the point of view of an embodied, walking subject. Indeed, whereas lines of research derived from Space Syntax's developments have led to the development of agent-based models of pedestrian flow (Batty and Jiang, 1998), Ratti's work has given rise to the "wiki city" notion, an approach whose object of study is made up of electromagnetic signals emitted by electronic devices (Calabrese et al., 2007a; 2007b; 2007c; 2007d; 2007e; Calabrese, 2008). In both cases, real people, understood as living human bodies, are nowhere to be seen. As a result of their eminently speculative nature, the predictions related to people's presence on the street have remained potentially flawed in that they do not proceed from direct observation of people but from a priori speculations derived from computer models.

PEOPLE AS OBJECT OF STUDY
As has already been amply discussed elsewhere, Jacobs' and Lynch's approaches were indeed tacitly grounded on a phenomenological standpoint (Seamon, 2012). This said, there is a distinction that has not yet been clearly made. In phenomenology, at least in the case of the proto-phenomenology of Goethean extraction, the fundamental law of knowledge generation is that knowledge should be derived from a direct relationship with the chosen object of study. Thus the unequivocal Goethean admonition: "seek nothing beyond the object, they themselves, well contemplated, are the theory" (Seamon and Zajonc, 1998, p. 4).
Seen from this point of view, it becomes clear that the Jacobs and the Lynch lineages differ not in method but in their chosen object of study. Put differently, they differ in the source they turn to when in need of urban knowledge. Once this distinction is made, Space Syntax is revealed for what it is, namely, a very specific kind of urban phenomenology: a phenomenology of the city's topology. One that gives us no direct knowledge about pedestrians' patterns of behaviour. Ratti has already pointed out that space syntax's way of proving the connection between these two variables is by means of surveys. That is, by means of "a posteriori" correlations between the axial map results and observed movement data (Ratti, 2005). In the case of Ratti's DEMs, we see a similar procedure applied, this time to the study of view sheds. His strategy of electromagnetic signal tracking, on the other hand, brings us closer to the pedestrian who, nonetheless, remains an electromagnetic mobile signal. As for Batty's agent-based models, we already fall into a thoroughly speculative stance regarding the study of pedestrian behaviour. In sum, whereas space syntax's approach to the study of people is post analysis (and at any rate not sophisticated as an analytical tool), Ratti's and Batty's are downright non-experiential.
Jacobs's approach, although still in a rudimentary stage, was a phenomenology of embodied people perception, an object of study that from a methodological point of view proved to be very difficult to map due to it being a moving target, so to speak. Thus, although the impulse latent in Jacobs' work can be traced back, through Hall's proxemics (Hall, 1969; 1973; 1976), down to the rather unknown work of the German architect Herman Maertens (1884), it has ultimately remained analytically weak. The fundamental aim of this work is to help to further its development by means of introducing methodological rudiments that would allow a systematic mapping of the human universe, so to speak, thus complementing the successful efforts made by the phenomenologists of urban space.

URBAN POLAROID (A METHODOLOGICAL OUTLINE)
Acknowledging from the outset that all record of an experience is a reduction of it, the basic methodological principle is the following. If what we want is
to know which are the patterns of people's presence in the streets of any given urban area at any given time, then what we need is a simultaneous photographic record of all the streets within the defined perimeter. In principle, this could be done in two ways. One of them is by means of satellite or drone aerial pictures. In this kind of record, people become dots on the street. Another way of achieving this without having to fly away from the streets is to resort to a photographic scanning of the streets at observer level. While we consider this latter path to be a properly experiential one, its implementation presents considerable and, at the same time, interesting practical problems.
Just as space syntax's axial map calculus depends upon the distances from all lines or streets to all others, in order to obtain an accurate estimate regarding the actual presence of people in the street, we need to capture the state of all the streets within the chosen area of study at the same time. So for example, if the intensity needed to validate the reliability of the study is, say, 3 pictures per segment (or block) and the total number of segments (or blocks) that make up the street is, say, 10, the total amount of pictures needed for this particular street would be 30. Seen from an ideal point of view, this means that what we really need in order to obtain a true "instant" or "Polaroid" of this street are 30 different cameras (people) taking a snapshot (in the same format: height, level, lens aperture, etc.) at exactly the same time. This process in turn should then be repeated in all the streets contained within the chosen area. If the total number of streets within this chosen area is, say, 20, the amount of cameras (people) needed in order to get the Polaroid is 600. Since logistically this is extremely difficult and probably counterproductive (though not impossible), we resorted to urban journeys or navigations. That is, video/photographic journeys along all the streets contained within the chosen area of study. The consistency and reliability of this approach will depend exclusively on the amount of journeys undertaken per day, month and year for each street under observation.

CASE STUDY (CONCEPCIÓN, CHILE)
All the streets belonging to the historical layout of the city of Concepción are journeyed along (for a plan, see Figure 5). To this end, we created a photographic recording patrol composed of 26 students. The format established that every street should be walked straight, from one end of the perimeter to the other, and that all journeys should start at the same time, in this case midday. Unless impossible, the itinerary must be made through pedestrian areas only. While doing so, a video record from a constant observer level and with a constant lens aperture is made. Depending on the amount of frames exported from the video record, we obtain a photographic record of variable intensity. That is, an "n" number of frames per street segment, understanding for segment the length of the street defined between intersections with other streets or, if preferred, between corners. In this case, we used a low intensity: 2 pictures per segment. Arranged in filmstrip format, this raw, unedited record shows as a result a reduced general state of the behavior of our visual field during the journeys (Figure 1).
Applying simple raster graphics, we then proceed to transform, frame by frame, all visual information in the shape of human beings into colored surfaces, in this case a red surface (Figures 2 and 3).
This first step has the peculiar characteristic of being quantitatively and qualitatively very eloquent in that it already reveals a great deal of information regarding the patterns of behavior of our visual field with regard to the presence of people in it. (We have defined three kinds of archetypal visual information to be found within the urban universe: information in the shape of people, information in the shape of urban space, and information in the shape of nature. This paper only deals with the first kind.) That is, how much surface of our photographically reduced visual field is populated by information in the shape of people. Comparing the colored area to the total of the frame, we then obtain the percentage of visual information in the shape of people for that particular frame. Doing the same operation with every frame
Figure 1
Photographic filmstrips of both longitudinal (bottom) and transversal (top) journeys through the analyzed area.
of a particular street, we then obtain the average percentage of information in the shape of people for
percentage of information in the shape of people for this particular city at the particular time in which the
that particular street. Finally, repeating the same op- video/photographic journeys were done (Figure 4).
eration in all streets gives us as a result the average
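To make the procedure above concrete, the following minimal sketch computes the per-frame percentage of "information in the shape of people" and the per-street and city averages. It assumes the frames have already been rasterised so that people appear as pure red pixels; the file names, the use of the Pillow library and the two-frames-per-segment layout are illustrative choices, not part of the original study.

```python
# Minimal sketch of the per-frame and per-street calculation described above.
# Assumes each frame has already been rasterised so that every pixel belonging
# to a person is pure red (255, 0, 0); file layout and names are illustrative.
from PIL import Image

def people_percentage(frame_path):
    """Percentage of the frame covered by 'information in the shape of people'."""
    img = Image.open(frame_path).convert("RGB")
    pixels = list(img.getdata())
    red = sum(1 for (r, g, b) in pixels if (r, g, b) == (255, 0, 0))
    return 100.0 * red / len(pixels)

def street_average(frame_paths):
    """Average people-percentage over all frames of one street."""
    values = [people_percentage(p) for p in frame_paths]
    return sum(values) / len(values)

# Example: two frames per segment, as in the study described above.
streets = {
    "street_01": ["s01_f01.png", "s01_f02.png"],   # hypothetical file names
    "street_02": ["s02_f01.png", "s02_f02.png"],
}
averages = {name: street_average(frames) for name, frames in streets.items()}
city_average = sum(averages.values()) / len(averages)
```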
Figure 2
Rasterisation and calculus
procedure for visual density
of information in the shape of
people of each frame.
Figure 3
Filmstrips of both, longitudi-
nal (bottom) and transversal
(top) journeys with informa-
tion in the shape of people in
raster form.
Figure 4
Table of quantitative levels
of visual information in the
shape of people per frame
(columns) with averages per
street (rows) in the last column.
Longitudinal journeys bottom,
transversal journeys on top.
Figure 5
Two dimensional syntax of
the levels of people presence
per street.
Polaroid reduces to nil the speculation implicit in most of the urban tools of analysis developed by the Lynch lineage representatives. Indeed, transforming the table numbers into graphs, the result shows a remarkable similarity with space syntax findings (Figure 5).

Red, our chosen color for high visual people density, tends in this case to coincide with the axial map analysis result, which, if applied locally, would show the central streets of the chosen area as the most integrated ones. Hence, although the knowledge obtained via the two approaches is quantitatively akin (as can be readily gathered from the diagrams they yield), they differ dramatically in quality: one showing potential, the other showing actuality; one being of interest only to specialists (particularly transport engineers and property developers), the other to urbanists and citizens in general; one revealing information about the already built city, the other about the city yet to be built.

THEORETICAL CONTRIBUTIONS
One of Jacobs' central declared concerns was to get to know "how a city works". What she called "the
underlying order of cities” (Jacobs, 1961, p. 25). In abstract computer aided analysis. This might prove
line with Luhmann (1995), today we might say that crucial in the cases where the axial map analysis
Jacobs quest was for the discovery of the laws that does not conform to actual reality of a determined
secure the autopoiesis of the urban universe and, as street or area of the city. Moreover, it achieves this
a consequence of this, its perpetuation in time. If, as without the need of people carrying mobile phones
the Goethean maxim goes, theory building derives or any kind of microchips. This makes it less invasive
from direct object contemplation, then, in order to and more citizen friendly.
obtain urban knowledge, all we need to do is to find Future complementary applications include a
out which is the urbanist´s object of study. Jacobs Polaroid of the visually perceived built universe and
was neither explicit nor sure about the answer to another of the natural universe, aspects that might
this question. Yet her main object of study always throw light upon the other two archetypal kinds of
remained people on the streets. visual information in the city and the relationship
Very few shared with her this interest. One of between them. In sum, the Urban Polaroid approach
them was the urbanist Jaime Garretón, author of the offers a portrait, and as such, a reduced view of a
first truly general urban theory, for whom “nothing complex that we have called the “archetypal citizen”,
is definite in a city, except its laws” (Garretón, 1975, according to previous research, the urbanist true ob-
p. 273), laws that, according to him, are essentially ject of study (Araneda, 2008; 2010; 2011).
communicative laws. To be sure, the laws of commu-
nication between people. This said, neither Jacobs ACKNOWLEDGMENTS
nor Garretón developed analytical tools for the This work is part of an ongoing fondecyt project
study of their chosen object of study. To be more (11110450) and was made possible by the enthusi-
precise, neither of them built a systematic corpus of astic collaboration of the students of second year of
study cases and as a result of this, as Hillier would architecture at the Universidad del Bío-Bío in Con-
put it, they remained prescriptively strong but ana- cepción (Unit 4, 2012) and by the expert assistance
lytically weak. of architect and renowned Chilean authorial pho-
By focusing on the study of people in space rath- tographer Nicolás Sáez.
er than on space itself, this paper represents a pri-
meval impulse towards the development of general REFERENCES
analytical rudiments for the further development of Araneda, C 2009, Dis-Information in the Information Age Cit-
the Jacobs´ lineage. Whether the urbanist´s own ob- ies. The Size of the American Block as an Urban Anachro-
ject of study is urban space or people remains too nism, VDM Verlag, Germany/UK/USA.
big a question to be answered in these pages. Araneda, C 2010, Urban Protophenomenon. Introducing
the Notion of Primordial Phenomenon in Urbanism,
CONCLUSIONS Proceedings of the eCAADe Conference, Zurich, Switzer-
Even at these early rudimentary stages, the Urban land, pp. 207-215.
Polaroid technique of urban analysis has demon- Calabrese, F., Kloeckl, K., and Ratti, C. 2007a, ‘WikiCity:
strated to be a most useful as well as didactical com- Real-Time Location- Sensitive Tools for The City’. Pro-
plement to the abstract techniques championed by ceedings of the 10th International Conference on Com-
the Lynch lineage advocates. It does not only allow puters in Urban Planning and Urban Management (CU-
the systematic exploration of a thoroughly under- PUM07). Available at: http://senseable.mit.edu/papers/
studied, parallel universe, to the one by them ex- pdf/2007_Calabrese_et_al_WikiCity_CUPUM.pdf [Ac-
plored. More important still, by being grounded in cessed: 02/05/2011].
experience, it renders unnecessary all speculation Calabrese, F., Kloeckl, K., and Ratti, C. 2007b, ‘WikiCity: Con-
regarding patters of people presence inherent in necting the Tangible And the Virtual Realm of a City’.
GeoInformatics, 10(8), pp. 42--45. Jacobs, J 1961, The Death and Life of Great American Cities.
Calabrese, F., Kloeckl, K., and Ratti, C. 2007c, ‘WikiCity: real- The Failure of Town Planning, Penguin Books, UK.
time location- sensitive tools for the city’. Conference on Luhmann, N: `Social Systems´, Stanford University Press,
Communities and Technologies - Digital cities. Available California, USA, 1995.
at: http://senseable.mit.edu/papers/pdf/2007_Cala- Lynch, K 1960, The Image of the City, MIT Press, Cambridge
brese_et_al_WikiCity_Digital_Cities.p df. [Accessed: MA.
02/05/2011]. Maertens, H 1884, Der Optsiche Masstab in den Bildenden
Calabrese, F., Kloeckl, K., and Ratti, C. 2007d, WikiCity. ‘Real- Kuensten, Wassmuth, 2nd edition, Berlin.
time urban environments’. IEEE Pervasive Computing, Morello, E & Ratti, C 2008, ‘A Digital Image of the City: 3-D
6(3), 52-53. IEEE Computer Society. doi: http://doi. Isovists and a Tribute to Kevin Lynch’. Available at:
ieeecomputersociety.org/10.1109/MPRV.2007.69 http://senseable.mit.edu/papers/pdf/2008_Morello_
Calabrese, F., Colonna, M., Lovisolo, P., Parata, D. and Ratti, Ratti_Environment%20and%20Planning%20B.pdf [Ac-
C. 2007e, ‘Real- Time Urban Monitoring Using Cellular cessed: 02/05/2011].
Phones: a Case-Study in Rome’. Available at: http:// Mumford L & Miller D (ed.), 1986, The Lewis Mumford Reader.
senseable.mit.edu/papers/pdf/2007_Calabrese_et_al_ Pantheon Books, USA.
Rome_Unpub.pdf [Accessed: 02/05/2011]. Ratti C, 2005, ‘The lineage of the line: space syntax parame-
Calabrese, F., Kloeckl, K. and Ratti, C 2008, ‘Wikicity: Real- ters from the analysis of urban DEMs’, Environment and
Time Location-Sensitive Tools for the City’. In Foth, M. Planning B: Planning and Design 32(4) 547 – 566.
(Ed.) Handbook of Research on Urban Informatics: The Batty M, Jiang B, Thurstain-Goodwin M, 1998, ‘Local move-
Practice and Promise of the Real-Time City. Hershey PA, ment: agent-based models of pedestrian flow’’, WP 4,
IGI Global. Centre for Advanced Spatial Analysis, University Col-
Garretón, J 1975. Una Teoría Cibernética de la Ciudad y su Sis- lege London, http://www.casa.ucl.ac.uk
tema. Ediciones Nueva Visión, Buenos Aires, Argentina. Seamon, D and Zajonc, A. (eds.) 1998, Goethe’s Way of Sci-
Hall, E 1969, The Hidden Dimension, Anchor Books, New ence. A Phenomenology of Nature, SUNY, USA.
York. Seamon, D 2012, “Jumping, Joyous Urban Jumble:Jane
Hall, E 1973, The Silent Language, Anchor Books, New York. Jacobs’s Death and Life of Great American Cities as a
Hall, E 1976, Beyond Culture, Anchor Books, New York. Phenomenology of Urban Place”, Space Syntax Journal,
Hillier, B 1996, Space is the Machine, Cambridge University, 3(1), pp.139-149.
UK.
Collaborative and Human Based Performance Analysis
Mathew Schwartz
Advanced Institutes of Convergence Technology, South Korea
www.smart-art.org
mat@cadop.info
Abstract. This research presents methods for simulation and visualization of human
factors. This allows for a performance-based analysis of buildings from the local human
scale to the larger building scale. Technical issues such as computational time and
mathematically describing a building's geometry are discussed. The algorithms presented
are integrated, through a plugin, into 3D modeling software commonly used in design
and architecture.
Keywords. Universal design; human analysis; collaboration; education; disability.
INTRODUCTION
In general, the field of Building Information Modeling (BIM) is related to structural or environmental analysis. This is likely due to the main users of BIM: Architecture, Engineering, and Construction (AEC). By utilizing BIM approaches these fields are able to reduce human error and cost from design to construction (Azhar et al., 2008). While BIM can include many forms of analysis, human factors have not traditionally been included. Additionally, the definition of BIM is still being debated and developed (Eastman, 2008). For this reason, this paper concentrates on the term building performance to discuss the relationship of human factors to overall building analysis. However, this term is only for clarity, and the work presented is well suited to be integrated as a feature of BIM software.

As with BIM, building performance is largely focused on structural and environmental factors that can be quantified. When the human is included in an analysis of building performance it is usually related to the environmental effects of a human, or multiple humans, such as heat transfer or acoustical properties (Mahdavi, 2011). There are a few reasons the ergonomic and physiological aspects of humans are not usually considered necessary to include. For one, many important aspects of a building in regard to humans are regulated in law, for example, the Americans with Disabilities Act (ADA) (Dept. of Justice, 2010). Similarly, many of the components related to humans are standardized, which has been needed as the AEC community is not focused on researching ergonomics or biomechanics, large research fields in themselves. Additionally, excluding emergency situations, the cost of an error in construction is much greater than a door knob being difficult to use. While these reasons are valid, they only represent the minimum of what design can be.

The philosophy of Universal Design brings into question the lack of human factor analysis for building performance. Exclusion of human factors from the design and analysis stage while relying on standards and prescribed law can be stigmatizing and may need to be fixed later, demonstrating the failure of design (Story, 1998). While the ADA is specific to disabilities, Universal Design is meant to benefit all people. Universal Design has become part of many curricula in architecture schools (Vance, 2012); however, understanding the problems with designs
related to human factors can require costly physical experiments or communication with experts in other fields. Schubert et al. (2011) present a method for integrating physical tools with the design workflow, and Maver et al. (2001) demonstrate the ways in which universal design concepts can be explored through virtual reality. Although both of these present alternative methods to real world experimentation, they still require physical space and money.

This paper is meant to function as a reference on the types of quantifiable human factors, an approach to education, and a method for collaboration between the complicated fields of ergonomics and biomechanics with architecture and design. A 3D Manikin plugin is presented with a graphical user interface allowing access to the underlying algorithms that simulate and visualize human factors of the manikin within the space at different scales. Algorithms written in Python are presented for five main areas: Reach, Vision, Zone, Search, and Movement. Aspects of biomechanics software are integrated with the plugin to create a platform for collaboration. To cohesively demonstrate the application of each component, a manikin in a wheelchair is used.

Reach
[...] environment by comparing the reach ability to desired reach locations. The inverse kinematic (IK) system used to control the manikin arm was developed in MAYA; however, additional research into disability reach would greatly improve the simulations. Currently one of the most used and advanced simulation tools for ergonomics uses a reach envelope, visualizing the extent of a person's reach (Blanchonette, 2010). This is generally adequate when a user is able to rotate to one side or another; however, if the user has a specific type of reach ability, a more complete map needs to be simulated.

A voxel volume was used to create a complete 3D map of the reach ability. Each voxel is given a value based on the ability of the IK solver to find a solution for the voxels around the initial voxel, as seen in equation (1).
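The voxel scoring described above can be sketched as follows. The paper evaluates each voxel with the IK solver inside MAYA (equation (1)), which cannot be reproduced here, so a simple arm-length test stands in for the IK query; the shoulder position, arm length, grid size and the function names are assumptions for illustration only.

```python
# Sketch of the voxel reach map described above. The paper scores each voxel
# with an IK solver inside MAYA (equation 1); here a simple arm-length test
# stands in for that IK query, so the numbers are only illustrative.
import numpy as np

def reachable(target, shoulder, arm_length):
    """Stand-in for the IK query: can the hand be placed at 'target'?"""
    return np.linalg.norm(np.asarray(target) - np.asarray(shoulder)) <= arm_length

def reach_map(shoulder, arm_length=0.65, size=1.6, step=0.1):
    """Value per voxel = fraction of its 26 neighbours that are reachable."""
    coords = np.arange(-size, size + step, step)
    values = {}
    for x in coords:
        for y in coords:
            for z in coords:
                centre = np.array(shoulder) + (x, y, z)
                if not reachable(centre, shoulder, arm_length):
                    continue
                neighbours = [centre + (dx, dy, dz)
                              for dx in (-step, 0, step)
                              for dy in (-step, 0, step)
                              for dz in (-step, 0, step)
                              if (dx, dy, dz) != (0, 0, 0)]
                score = sum(reachable(n, shoulder, arm_length)
                            for n in neighbours) / 26.0
                values[tuple(np.round(centre, 3))] = score
    return values

voxels = reach_map(shoulder=(0.0, 1.2, 0.0))   # seated shoulder height, assumed
```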
Vision
Designing with a person's vision in mind requires two aspects: items that should be seen, and items that should not. The former case includes items such as signs and navigational cues, while the latter includes direct lighting. An important aspect of vision is neck mobility. If, for instance, a person is in a wheelchair, the range of movement in the neck may be limited. In this case, as well as for many elderly people, it cannot be assumed that a building occupant will turn their head to view an item. In the case of direct lighting, many times a person will be uncomfortable or, worse, unable to see into the distance, resulting in a dangerous situation.

The importance of vision is well known in architecture and design. In Tilley et al. (2001), a diagram showing many aspects of human vision is used as a reference for designers. The problem with the diagram is the difficulty of understanding and translating its content to one's own design within both practice and education. The diagram can be represented in 3D space by referencing both top and side maximum angles. For displaying the vision regions, each of the four points (referred to as Limit) that describe the region is input at both an initial distance and the distance where the visualization should end. Equation (2) shows the formula to calculate where each point should be placed.

(2)

In order to analyze the building for direct lighting at a specific point, the location of the manikin head as well as its direction is calculated in regard to each light. According to the diagram in Tilley, the disability glare zone is 45 degrees above the person's eye level and the extent a person can see to the side is 94 degrees. The domain is found with two calculations. First, the angle above eye level is found by creating a right triangle between the head position and light position, with the hypotenuse being the distance between the two points. The adjacent side is the horizontal distance and the opposite side is the vertical distance. The algorithm for determining if the light is within the vertical 45 degree angle is seen in (3).

(3)

Second, the location of the light relative to the head must be found. For this, a reference point is placed at any distance in front of the head in the direction it is facing. A triangle is constructed from the head position, reference position, and light position, with the light projected onto the head's up axis, assuming y is the up axis (4).

(4)

where H is the head position, R is the reference position, and L is the light position. Using the law of cosines, the side angle is calculated and checked against the 94 degree limit (5).

(5)

The lights found to be in the disabling glare zone can be marked in the model and the total number can be displayed for the designer.
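Since equations (3)-(5) are not reproduced in this copy, the following hedged sketch shows one plausible reading of the glare test just described: a light is flagged when it lies within 45 degrees above eye level and within 94 degrees to the side of the facing direction, with the side angle obtained via the law of cosines. Function and parameter names are assumptions.

```python
# Hedged sketch of the glare test described above. Equations (3)-(5) are not
# reproduced in this copy, so the exact formulation is assumed: a light is
# flagged when it lies within 45 degrees above eye level (3) and within 94
# degrees to the side of the facing direction, found via the law of cosines
# on the head / reference / projected-light triangle (4)-(5).
import math

def is_disabling_glare(head, facing, light, vertical_limit=45.0, side_limit=94.0):
    hx, hy, hz = head
    lx, ly, lz = light

    # (3) angle above eye level from the right triangle of horizontal and
    # vertical distances between head and light.
    horizontal = math.hypot(lx - hx, lz - hz)
    vertical_angle = math.degrees(math.atan2(ly - hy, horizontal))
    if vertical_angle < 0.0 or vertical_angle > vertical_limit:
        return False

    # (4)-(5) side angle via a reference point in front of the head, with the
    # light projected onto the head's up axis (y). Facing is assumed horizontal.
    rx, rz = hx + facing[0], hz + facing[2]
    a = math.dist((hx, hz), (rx, rz))          # head -> reference (in plan)
    b = math.dist((hx, hz), (lx, lz))          # head -> projected light
    c = math.dist((rx, rz), (lx, lz))          # reference -> projected light
    if b == 0.0:
        return True                            # light straight above the head
    cos_side = (a * a + b * b - c * c) / (2.0 * a * b)
    side_angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_side))))
    return side_angle <= side_limit

# Example: manikin facing +x, light ahead, above and slightly to the side.
print(is_disabling_glare(head=(0, 1.2, 0), facing=(1, 0, 0), light=(3, 2, 1)))
```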
Zone
The space around a person can be classified in multiple ways. There is the space someone needs to be physically present, space that makes someone comfortable, and space in order to complete a task. The latter two are most often related to psychology as well as physiology. The importance of the space around a person can be seen in design reference books (Zelnik and Panero, 1979) as well as in research papers (Lantrip, 1993). When designing for the spatial needs of a wheelchair, the designer must understand the many situations that exist. For example, if a hallway is designed to be wide enough for a wheelchair to pass through, it may not be wide enough for the wheelchair to turn around or, more commonly, for a person coming from the opposite direction to pass by.

The implementation of a spatial zone is relatively easy in a CAD program. The spatial zone needed can be visualized by a transparent cylinder around
the manikin, as seen in the results section. In the case of multiple people, more than one cylinder can be displayed. The coordinates in the mesh are generated the same as in (2), where Limit is defined as the distances from the center of the manikin. As the generated mesh contains a Cartesian domain, it is possible to analyze environmental interference in regard to the generated geometry.
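A minimal sketch of such a zone check follows, assuming the clearance cylinder is reduced to a radius in plan and obstacles to vertical cylinders; the 1.5 m turning-circle figure and all names are illustrative, not taken from the paper.

```python
# Minimal sketch of the zone check, assuming the clearance cylinder is reduced
# to a radius in plan and obstacles to vertical cylinders (x, z, radius). The
# 1.5 m turning-circle diameter and all names are illustrative assumptions.
import math

def zone_conflicts(centre, zone_radius, obstacles):
    """Return the obstacles that intrude into the clearance cylinder."""
    cx, _, cz = centre
    return [name for name, (ox, oz, o_radius) in obstacles.items()
            if math.hypot(ox - cx, oz - cz) < zone_radius + o_radius]

wheelchair_zone = 0.75            # assumed: half of a 1.5 m turning circle
obstacles = {                     # hypothetical plan positions
    "toilet": (0.6, 0.2, 0.30),
    "basin": (2.0, 1.5, 0.25),
}
print(zone_conflicts(centre=(0.0, 1.2, 0.0), zone_radius=wheelchair_zone,
                     obstacles=obstacles))
```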
Search
Out of the many human factors that can be included in building analysis, the distance and way in which someone moves throughout a building is one of the most difficult to integrate, yet it is extremely important. While laws exist for wheelchair ramps and elevators, knowing how a building layout or size affects the occupants is required to design above the minimum. There are many forms of analysis of human factors that are useful at the human scale, including egress and accessibility. Two components are needed to analyze movement throughout a building: mathematically describing the building and creating the path of movement. From this it is possible to run numerous simulations of human factors to analyze the path created. The current implementation refers to the path as the energy required.

The easiest way to describe a building using a node graph is to make landmark areas nodes and create connections, known as edges, between them. When the fastest way between two nodes is desired, a search algorithm calculates the edge values from one point to the other (Dijkstra, 1976). In general, the distance between each landmark is the value given to the connection. However, the distance between each landmark must be the length of the route from start to end, not the Cartesian distance. This creates a problem when attempting to translate a 3D model of a building into a searchable graph without individually measuring each path (Figure 1).

Figure 1
Left: Key locations exist in Cartesian space. Edge values are the distance between them. Right: If the building walls are included it is easy to see the problem with calculating distance directly.

Using the internal raytracing function of MAYA, a technique was developed for mathematically describing a 3D model of a building. The first assumption is that any geometry that should be considered as ground should begin with the naming convention "floor". Anything following that name is irrelevant and can be used for the designer's own organization. The second assumption is that the manikin is placed over any valid ground. If these conditions are satisfied, the algorithm will send a ray from the top of the manikin head to the floor and return the Cartesian position. Each valid ray is checked against the minimum required space, in this case a wheelchair turning radius. The algorithm then expands outward and continues to store valid node locations, as seen in (6).

(6)

The edge values are the key to analyzing the building. As each node is created, so are the edge costs to each of the available nodes surrounding it (7).

(7)
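The node expansion and edge costs of (6) and (7), together with the search, can be sketched as below. The MAYA head-to-floor ray test is replaced by a placeholder, and since the equations themselves are not reproduced here the neighbour expansion and edge costs are written in the obvious way (adjacent grid cells, Euclidean edge length); treat it as an illustration of the approach rather than the plugin's implementation.

```python
# Sketch of the node-graph construction and search described above. The MAYA
# head-to-floor ray test is replaced by the placeholder ray_hits_floor(), and
# since equations (6) and (7) are not reproduced here the neighbour expansion
# and edge costs are written in the obvious way: adjacent grid cells spaced by
# the minimum required clearance, with Euclidean edge lengths.
import heapq
import math

def ray_hits_floor(x, z):
    """Placeholder for the ray cast against geometry named 'floor*'."""
    return True   # assume open, valid ground everywhere for the sketch

def build_graph(start, step=0.75, extent=20.0):
    """Expand outward from the manikin, storing valid nodes and edge costs."""
    adjacency, frontier = {start: []}, [start]
    while frontier:
        x, z = frontier.pop()
        for dx in (-step, 0.0, step):
            for dz in (-step, 0.0, step):
                if (dx, dz) == (0.0, 0.0):
                    continue
                n = (round(x + dx, 3), round(z + dz, 3))
                if abs(n[0]) > extent or abs(n[1]) > extent or not ray_hits_floor(*n):
                    continue
                adjacency[(x, z)].append((n, math.hypot(dx, dz)))   # edge cost
                if n not in adjacency:
                    adjacency[n] = []
                    frontier.append(n)
    return adjacency

def shortest_cost(adjacency, start, goal):
    """Dijkstra over the stored edge costs (the 'energy' of the path)."""
    best, queue = {start: 0.0}, [(0.0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue
        for neighbour, weight in adjacency[node]:
            if cost + weight < best.get(neighbour, float("inf")):
                best[neighbour] = cost + weight
                heapq.heappush(queue, (cost + weight, neighbour))
    return float("inf")

graph = build_graph(start=(0.0, 0.0))
print(shortest_cost(graph, (0.0, 0.0), (6.0, 7.5)))   # metres along the graph
```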
Figure 2
The GUI inside of PyQt. The tabs can be seen on the top.

[...] integrating human factors in design workflows is the extra knowledge of human biomechanics required of designers. As both a teaching and collaboration tool, this research demonstrates a method for integrating the two fields.

Many types of analysis in biomechanics use joint angles. To integrate this workflow, a Python library for creating MATLAB-style graphs (Hunter, 2007) was implemented within the plugin. As some design modeling tools do not have animation features, all time-related elements are stored in comma separated value (CSV) files. The file structure implemented is the same as that of the joint angles exported from biomechanics software.
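A small sketch of this CSV round trip follows, assuming a file whose first column is time and whose remaining columns are joint names; the file name and column layout are hypothetical, while matplotlib is the library cited above (Hunter, 2007).

```python
# Small sketch of the CSV round trip described above: joint angles exported
# from biomechanics software are read from a comma separated value file and
# plotted with matplotlib (Hunter, 2007). The file name and column layout
# (a "time" column followed by one column per joint) are assumptions.
import csv
import matplotlib.pyplot as plt

def load_joint_angles(path):
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        names = [n for n in reader.fieldnames if n != "time"]
    time = [float(r["time"]) for r in rows]
    joints = {n: [float(r[n]) for r in rows] for n in names}
    return time, joints

time, joints = load_joint_angles("reach_trial.csv")   # hypothetical file
for name, angles in joints.items():
    plt.plot(time, angles, label=name)
plt.xlabel("time (s)")
plt.ylabel("joint angle (degrees)")
plt.legend()
plt.show()
```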
Figure 4
Left: Student points out
objects in the way of naviga-
tion. Top-right: the path
from manikin to a location is
drawn. Bottom-right: objects
placed over the ground are
accounted for, creating a path
that avoids the object.
class led by Sean Vance at the University of Michigan lems needed to be addressed. Some of these issues
was documented (Vance, 2012). The students were include obstacles in the way of a wheelchair (Figures
not aware or directly influenced by this research. 4 and 5), the line of sight from a person in a wheel-
Multiple physical experiments were conducted by chair (Figure 6), and items out of reach (Figures 7
the students to simulate the relationship of disabili- and 8).
ties to the built environment around them. While While it is possible to quantify all of the present-
it is very useful for students to have a hands-on ed human factors relative to a building model, the
knowledge it may not always be possible. The tool largest scale quantifiable factor is in navigation. The
presented here is capable of simulating and visual- graph search algorithm is very flexible and can be
izing the same ways in which students felt the prob- used for a variety of situations, one of which is seen
Figure 5
Left: Student points out
problems with the distance
between toilet and wall
for turning radius. Right:
Wheelchair dimensions are
visualized in a 3D model of
a room.
Figure 6
Left: Student draws line of
sight to point out the difficulty
in communication due to the
high partition. Right: Two
different vision cones are visu-
alized. Two perspectives are
shown, one from the building
participant and a normal 3D
model view.
Figure 7
Left: Student shows problems
with the placement of differ-
ent items in a room. Right: The
reach ability map is simulated
and visualized with transpar-
ency and color.
in Figure 4. Another aspect of the graph search is someone to move from one point to another. In the
the ability to analyze the amount of time it takes for case of a wheelchair, depending on the type of dis-
Figure 8
Left: Student drawing angles
of human body to analyze
limits. Right: Motion capture
data of reach is displayed
through a graph.
ability and wheelchair, a person will move between steeper incline, the path generated will reflect the
~50 and ~80 meters per minute (Beekman et al., best method. This situation is ideal for bringing in
1999). Taking data from a subject with paraplegia in a biomechanics expert, or in the case of education,
a standard wheelchair a rate of 75 meters per minute biomechanics students. As the biomechanics side
can be used for analysis. When a designer runs the is able to analyze the movement of a person during
search algorithm the GUI will display the amount of wheelchair propulsion and quantify the results, the
time it takes to move along that path (Figure 9). design of the ground floor can be changed. Using
In addition to the speed of movement it is this alongside the colored visualization shown in
possible to calculate the energy expenditure of a Figure 8 creates an opportunity for informed itera-
movement. While there is a lack of human subject tive design.
testing that can give an accurate simulation for all
terrain, some basic estimations can be made. This CONCLUSION
approach can be used when designing a ramp and This paper presents a variety of functions, algo-
the designer is deciding if the ramp should be short, rithms, and systems that can better integrate human
with the maximum allowed slope, or longer with a factors with the design workflow. Each algorithm
lower slope. As the search algorithm will find the has potential to be both improved with speed and Figure 9
lowest cost method to get to the end point, if the expanded on for functionality. The significance of Left: Student demonstrates
algorithm creates a higher cost of movement for a the work is not limited to BIM and Building Analy- the narrow hallway does not
allow for a wheelchair to turn
around. Student notes the
length of the hallway with no
outlets can cause arm fatigue.
Right: GUI Displays estimated
time to travel using a wheel-
chair. Color coded ground
shows results of simulated
values. Red areas show either
narrow hallways or a steep
slope.
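A tiny sketch of the travel-time estimate shown in the GUI of Figure 9 follows: the length of the searched path divided by the cited propulsion rate of 75 metres per minute. The path coordinates are illustrative.

```python
# Tiny sketch of the travel-time estimate shown in the GUI: the length of the
# searched path divided by the cited propulsion rate of 75 metres per minute
# (Beekman et al., 1999). The path coordinates are illustrative.
import math

def path_length(points):
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def travel_time_minutes(points, speed_m_per_min=75.0):
    return path_length(points) / speed_m_per_min

path = [(0, 0), (12, 0), (12, 18), (30, 18)]   # hypothetical node positions (m)
print(f"travel time: {travel_time_minutes(path):.1f} minutes")
```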
sis, but can bring human factors to the forethought gineering, IEEE Transactions on 8(1), 94-106.
of designers and architects, greatly influencing the Hunter, J 2007, ‘Matplotlib: A 2D Graphics Environment’,
style in which designs are created and simultane- Computing in Science Engineering 9(3), 90-95.
ously benefiting the building occupant. Recognizing Jacquier-Bret, J, Rezzoug, N and Gorce, P 2008, Synergies
the impossibility of designers to be a master of eve- during reach-to-grasp: A comparative study between
ry field related to human ability, and the advantages healthy and C6-C7 quadriplegic subjects, in ‘Engineer-
of expert collaboration, a system has been present- ing in Medicine and Biology Society, 2008. EMBS 2008.
ed in which biomechanics, architecture, and design 30th Annual International Conference of the IEEE’, pp.
can collaborate. Although some schools have been 5366-5369.
able to implement physical experiments to teach Dept. of Justice 2010, 2010 ADA Standards for Accessible De-
students Universal Design, many schools do not sign, Dept. of Justice, USA. http://www.ada.gov/regs20
have the resources for these lengthy and costly ex- 10/2010ADAStandards/2010ADAstandards.htm
periments. Integrating human factor simulations in Lantrip, D B 1993, ‘Environmental Constraint of Human
design programs would allow for students to learn Movement: A New Computer-Aided Approach’, Pro-
the same basic principles. ceedings of the Human Factors and Ergonomics Society
Annual Meeting 37(15), 1043.
ACKNOWLEDGEMENTS Mahdavi, A 2011, The Human Dimension Of Building Per-
I thank Sean Vance for allowing me to document his formance Simulation, Proceedings of Building Simula-
Universal Design Class, as well as Karl Daubmann tion 2011, Sydney, Australia, pp. K16-K33.
and David Lantrip for their valuable feedback. I Maver, T, Harrison, C, Grant, P, de Vries, B, van Leeuwen,
would also like to acknowledge the contribution of J and Achten, H (eds), 2001, Virtual environments for
both Forest Darling and Robert Van Wesep for their special needs - changing the VR paradigm, Springer, pp.
help with programming and algorithms. 151-159.
Schubert, G, Artinger, E, Petzold, F and Klinker, G 2011,
REFERENCES Tangible tools for architectural design : seamless inte-
Azhar, S, Hein, M. and Sketo, B 2008, Building information gration into the architectural workflow, Proceedings of
modeling (BIM): Benefits, risks and challenges, Proceed- ACADIA 2011.
ings of the 44th ASC National Conference. Story, M F 1998, ‘Maximizing Usability: The Principles of Uni-
Beekman, C E, Miller-Porter, L and Schoneberger, M 1999, versal Design’, Assistive Technology 10(1), pp. 4-12.
‘Energy Cost of Propulsion in Standard and Ultralight Summerfield, M 2007, Rapid GUI Programming with Python
Wheelchairs in People With Spinal Cord Injuries’, Physi- and Qt: The Definitive Guide to PyQt Programming, Pear-
cal Therapy, 79(2), 146-158. son Education.
Blanchonette, P 2010, ‘Jack Human Modelling Tool: A Re- Tilley, A and Henry Dreyfuss Associates 2001, The Measure
view’, DEFENCE SCIENCE AND TECHNOLOGY ORGANI- of Man and Woman: Human Factors in Design, Wiley.
SATION VICTORIA (AUSTRALIA) AIR OPERATIONS DIV. Vance, U S (ed) 2012, Equilibrium: Universal Design Primer,
Dijkstra, E 1976, A discipline of programming, Prentice-Hall, MPublishing.
Incorporated. Zacharias, F, Borst, C and Hirzinger, G 2007, Capturing ro-
Eastman, C 2008, BIM Handbook: A Guide to Building Infor- bot workspace structure: representing robot capa-
mation Modeling for Owners, Managers, Designers, Engi- bilities, Intelligent Robots and Systems, 2007. IROS 2007.
neers and Contractors, Wiley. IEEE/RSJ International Conference on, pp. 3229-3236.
Eriksson, J, Ek, A and Johansson, G 2000, ‘Design and evalu- Zelnik, M and Panero, J 1979, Human Dimension and Inte-
ation of a software prototype for participatory plan- rior Space: A Source Book of Design Reference Standards,
ning of environmental adaptations’, Rehabilitation En- Crown Publishing Group.
Visibility Analysis for 3D Urban Environments
Abstract. This paper presents a visibility analysis tool for 3D urban environments and its
possible applications for urban design practice. Literature exists for performing visibility
analysis using various methods and techniques, however, tools that result from such
research are generally not suitable for use by designers in practice. Our visibility analysis
tool resides in Grasshopper, Rhino. It uses a ray casting method to analyze the visibility
of façade surfaces from a given vantage point, and of a given urban setting, in particular,
buildings and roads. The latter analysis provides information on the best visible buildings/
building facades from segments of roads. We established a collaboration with a practicing
architect to work on a design competition together, using this tool. The paper elaborates
on the visibility analysis methods, presents the tool in detail, discusses the results of our
joint work on the competition, and briefly reflects on the evaluation of the use of the tool
by design practitioners.
Keywords. Visibility analysis; pedestrian design; urban space quality; design practice.
INTRODUCTION
This paper presents a visibility analysis tool for 3D methods of analysis using terms such as “visual ab-
urban environments and its possible application for sorption”, “visual corridor” or “visual intrusion” (Lynch,
design practice. Visual perception of space is one of 1976). A view analysis example is an ‘isovist’ analysis
the factors that defines spatial experience and cog- which measures a volume of space that is visible
nition of architectural/urban space. Analyzing the from a single point in space. The term was introduced
impact of design decisions on perception of space by Tandy in 1967 (Tandy, 1967). This research gave
may help to significantly improve the quality of ur- raise to the development of a multitude of methods
ban developments (Bittermann et al., 2008). for quantitative analysis of space perception. Ben-
Many design and architectural researchers inves- edikt was the first who introduced a set of analytic
tigated the relation between urban space morphol- measurements of isovist properties (Benedikt, 1979).
ogy and its experiential qualities as perceived by us- In the field of landscape architecture and planning
ers. Among them are Appleyard et al. (1964), Lynch there is a similar concept called “viewshed” (Turner et
(1960), Benedikt (1979), and Thiel (1961). Kevin Lynch al., 2001), which analyzes the visibility of an environ-
stipulated on the importance of view analysis and mental element from a fixed vantage point.
Quantitative methods for visibility analysis can analysis performed using different computational
be roughly divided into the following categories: a) implementations. The research underlines that dif-
scientific landscape evaluation (LE) provides meth- ferent computational methods tackle different as-
ods for ‘quantitative description of natural landscape pects of spatial analysis and provoke different ways
visual quality or impact prediction’ (these approach- of thinking about a problem. Therefore, a computa-
es do not consider human perception); b) methods tional tool can become a flexible element that sup-
such as ‘isovist’ concentrate on the visibility of an en- ports creative thinking during design process.
vironmental element from a fixed vantage point and Turner et al. (2001) uses visibility graph method,
neglect the landscape resources (He et al., 2005). first introduced in De Floriani et al. (1994), for spa-
The most common examples of utilizing visibil- tial analysis of architectural space. This research in-
ity analysis methods and tools in the field of urban vestigates how visual characteristics of a location
design are analysis of visibility from important (stra- are related and how this can have a potential so-
tegic) points (e.g., large transportation hubs, major cial interpretation. The graph representation that is
public spaces, etc.) to dominants (e.g. tall buildings, used incorporates isovists to derive a visibility graph
monuments, etc.), which can help to improve navi- of mutually visible spots in a given spatial layout
gation of pedestrians in the city. Another case is the (Turner, 2001). This leads to the definition of some
preservation and/or strategic use of views to natu- measures that describe both local and global spatial
ral landscape elements such as a river or park. This properties that may relate to the perception of the
is especially relevant to high-density urban areas built environment.
that are still undergoing an extensive development The literature discussed above presents research
process, such as Moscow, Hong Kong or Singapore. for performing visibility analysis using various meth-
Uncontrolled development in such big cities leads ods and techniques. An issue that arises concerning
to fragmentation or complete blockage of views to the tools that result from such research is that the
valuable landscape resources, which are more de- tools are not suitable for use by designers in prac-
sirable for people than man-made structures (He et tice. Most designers do not have knowledge and
al. 2005). This results in a drop of real estate values skills of programming, or using specialized software.
and deterioration of city fabric. In this context, He et This has several reasons, e.g., time pressure in a de-
al. (2005) presents an approach to visual analysis of sign project. Designers also don’t tend to use spe-
high density urban environments, which quantita- cialized analysis software during the early design
tively integrates human visual perception (analysis phase, because these are difficult to use, and the
from a fixed vantage point) with the visible land- model usually needs to be exported and imported
scape resources (LE), using GIS as database and tech- back and forth between the analysis and modeling
nical platform. This approach can help architects software. Performing analysis on the model in the
to take more informed decisions at an early design familiar modeling environment would increase the
stage regarding the preservation of valuable land- usability of these tools. Furthermore, developing
scape resources and view corridors. Another exam- the tools with their use by designers in mind would
ple is the work described in Fisher-Gewirtzman et al. increase their usability. Our research development
(2005), which compares various coastal urban mor- aims to introduce visibility analysis tools in the ur-
phologies with the variation of density levels and ban design practice.
their influence on the visibility of the water front. The most recent visibility analysis methods that
The assumption is that the morphological results designers and architects use today rely heavily on
can be used as criteria for future urban planning. computing power. Some of the well-known analysis
Do and Gross (1997) present a set of tools for software such as, Ecotect, Space Syntax and ArcGIS
spatial analysis among which are tools for visibility offer methods for visibility analysis. However, these
Figure 1
Analysis of visual pollution by
advertisement billboards.
offer very limited methods for visibility analysis of boards and other large signs create a view pollution
building facades, or as we call it in this paper, anal- of building façades on this street. The definition of
ysis of 3D urban environments. In addition to that, view pollution may be interpreted differently in dif-
all this software are standalone applications that do ferent contexts. For instance, billboards and signs
not support 3D modeling. Every new design ver- characterize Times Square in New York, as these
sion must be imported and analyzed in a modeling form the identity of place in this context. However,
software. This approach does not support dynamic on this pedestrian street in Moscow, uncontrolled
manipulation of the design model and slows down placement of advertisement billboards results in a
the design process. We developed a tool for visibility complete blocking of 18th century historic heritage
analysis in Grasshopper, parametric plug-in for the buildings. Furthermore, the scene created by the
Rhinoceros modeling platform. Rhino is widely used signs do not contribute positively to the identity of
among architects and designers today. Our tool can the place, on the contrary, it diminishes the overall
be used to analyze models directly in Rhino, and quality of public space.
dynamic changes can be made and revised models In our current work we aim to investigate poten-
analyzed by the tool in real time. Our tool uses a ray tial uses of our tool for design practice. Therefore, we
casting method to analyze the visibility of façade established a collaboration with a practicing archi-
surfaces. tect to work on a design competition together, us-
Our tool combines two possibilities, referring to ing the 3D urban settings visibility analysis tool.
the two quantitative methods for visibility analysis This paper elaborates on the visibility analysis
described earlier in this section: a) analysis of visibil- methods, presents the tool in detail, and discusses
ity from a given vantage point and; b) visibility anal- the results of our joint work on the competition. We
ysis of a given urban setting (in particular, buildings end the paper with a brief evaluation on the use of
and roads). The latter analysis provides information the tool by design practitioners, and directions for
on the best visible buildings/building facades and future work.
segments of roads that ‘see’ most of the buildings.
The view pollution analysis became a first case THE VISIBILITY ANALYSIS TOOL
study for the tool (Koltsova et al., 2012). An example This section elaborates on the functionality of the
that we analyzed is one of the pedestrian streets in visibility analysis tool and its development process.
the historic center of Moscow, Russia (Figure 1). Bill- We used Grasshopper, the parametric environment
for Rhinoceros, as the development platform. In Grasshopper it is possible to write your own code in C# .NET or VB .NET and create a custom tool (or component) that performs a specific function. Such custom components require potential users (architects and urban designers) only to know what to feed in as input (curves, points, geometry, etc.) and what the output will be. We developed two custom tools that perform the following functions: visibility analysis of building geometry, and visibility analysis of the road network (Figure 2).

Figure 2
Custom Grasshopper component for visibility analysis. Inputs: road network (N), building geometry (B), mesh tessellation (M), terrain analysis (optional, (T)), max viewing distance (D), max view angle (A).

Visibility analysis uses a ray casting method. The algorithm requires the following inputs:
• building geometry as Breps
• terrain as a mesh surface
• road network as curves or polylines
The algorithm converts the building geometry (Breps) into a mesh. The possibility to define mesh tessellation for building and terrain surface geometry individually is embedded in the tool. This is done due to the difference in scales and analysis precision for the two geometry types.

The road curves are selected automatically by a "Pipeline" component (Figure 2). This is the in-built Grasshopper component that allows for automatic selection of a specified type of geometry by object layer. The road network is split into segments at intersection points. The length of every segment can be defined according to the design scale. The smaller the segment, the more precise the analysis is. The mid points of the segments become visibility nodes. The algorithm generates rays between the mid points of the curves and the mid points of the mesh faces of the building/terrain geometry. Then, the algorithm returns intersection points between the vectors and each face's mid point and checks if there is any obstruction between the viewing point and the façade surface. Depending on the result, it assigns each face a color: a gradient between yellow (best visible) and blue (worst visible), with white for non-visible (Figure 3).

Figure 3
Analysis results (best visible – yellow; non-visible – white); viewing points are distributed along the pedestrian walks with a span of 20 meters.

In order to save calculation time we use the bounding box of building meshes at a first iteration step to check for possible intersections. If a generated ray intersects a bounding box, then the algorithm proceeds to the analysis of the whole mesh. Intersection calculation of the ray and bounding box takes less time than ray-mesh intersection, which helps to considerably reduce calculation time.
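The tool itself runs as a Grasshopper component written in C# or VB .NET against Rhino geometry; the following platform-neutral Python sketch only illustrates the logic just described, with placeholder intersection tests standing in for the RhinoCommon calls and a face's value taken as the number of viewing nodes that see it.

```python
# Platform-neutral sketch of the ray-casting pass. In the tool this logic runs
# inside a Grasshopper C#/VB .NET component against Rhino meshes; here the
# bounding-box filter and the full obstruction test are plain placeholders,
# and a face's value is simply the number of viewing nodes that see it.
import math

def ray_box_overlap(origin, target, box_min, box_max):
    """Cheap first pass: can the segment origin->target touch the box at all?
    The segment's own axis-aligned extents are compared with the box; this is
    conservative (never wrongly rejects), which is all the filter needs."""
    for axis in range(3):
        lo, hi = sorted((origin[axis], target[axis]))
        if hi < box_min[axis] or lo > box_max[axis]:
            return False
    return True

def obstructed(origin, target, obstacle_tests):
    """Placeholder for the full ray-mesh intersection check."""
    return any(test(origin, target) for test in obstacle_tests)

def facade_visibility(view_nodes, face_midpoints, mesh_box, obstacle_tests,
                      max_distance=300.0):
    counts = {i: 0 for i in range(len(face_midpoints))}
    box_min, box_max = mesh_box
    for node in view_nodes:
        for i, face in enumerate(face_midpoints):
            if math.dist(node, face) > max_distance:
                continue
            if not ray_box_overlap(node, face, box_min, box_max):
                continue                       # skip the expensive mesh test
            if not obstructed(node, face, obstacle_tests):
                counts[i] += 1
    return counts   # 0 stays white; higher counts map towards yellow
```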
The main parameters that the tool uses are:
• the view distance from a view point to a façade surface,
• the maximum visual angle (vertical and horizontal), and
• the angle from the view point to a façade surface.
For different design tasks, specific parameters are retrieved by the tool. For example, for the analysis of city dominants (tall buildings or city monuments), the tool solely checks if the object is visible or not from a certain point or path (Figure 4a). Considering factors such as the visibility of city dominants during the design of new public spaces can improve navigation within a city. For pedestrians it is easier to choose the direction of movement if they see a dominant and know its location. Visual connections in the city also help to create better
Figure 4
In red – viewing point,
gradient shows the best/worst
(yellow/blue) visible building
facades:
a) Tool checks for visible/non
visible buildings – true or false
b) Distance to façade surface
is added c) Distance and angle
to façade surface are added
d) Direction of pedestrian
movement and its view angle connected public spaces (network instead of iso- bility tool it is possible to set a starting point and an-
is added lated spots). alyze how far one can get by walk/car or bus within
For the analysis of how detailed pedestrians can a certain time period. In this case destination points
see the facades and which are the most exposed are the mid point of previously generated segments
surfaces, the maximum distance and angle from of the road network (refer to the visibility tool de-
a view point to façade mesh faces is added. An an- scription before). The input parameters for this com-
gle closer to 90 degrees and less distance to façade ponent are:
means better visibility. Gradient illustrates the best/ • max walking distance, or;
average/worst visible façade surfaces (Figure 4b, c). • time and speed by car/walk/public transport
For the moment the influence of distance and angle (in which case max walking distance is calcu-
on the analysis result is 50/50. Naturally, the impor- lated based on these two parameters).
tance of each of the parameters can vary depend- We use the graph component to analyze structure
ing on the design task. Therefore, we plan to further and create topology of the road network (Figure
evaluate the tool with architects and revise it based 5). This information in turn is used by the Dijkstra’s
on their feedback. We have already added additional algorithm to calculate the shortest path between
constraints such as the horizontal and vertical view starting and destination points.
angles to be able to analyze what a person can see Combining the two types of analysis methods
while walking in a specific direction (Figure 4d). It is provides the possibility to analyze how far one can
possible to activate or deactivate the functions de- go within a certain time span and what one can
scribed above by right-clicking the title of the com- see while walking this route (Figure 6). Figure 7(a)
ponent and checking/unchecking them (angle to shows the accessibility analysis results and (b) what
surface, distance to surface, one direction). This is a one can see while walking this path. The resulting
feature that can be programmed by a tool developer path is used for the visibility analysis of best visible
in Grasshopper. façade surfaces from the path. Rays are created be-
In our work we combined two types of urban tween the road segment and building mesh faces. If
Figure 5 analysis: visibility and accessibility. With the accessi- a mesh face is visible from the road segment then
Custom component for the algorithm assigns a segment ID to the mid point
accessibility analysis. Input of the mesh face. The more segments “see” a certain
parameters: network topology mesh face the higher the mesh face’s visibility value
(G), starting point of move- becomes (in terms of color: yellow – best visible,
ment (P), speed (V), duration/ blue – worst visible, white – non-visible).
time of movement (T), max Using our tool it is also possible to analyze best
walking distance (D). visible buildings. In this case the algorithm stores
building IDs instead of individual mesh faces and Figure 6
analyzes what are the buildings that most of the Combination of accessibility
road segments can “see”. The same logic applies to and visibility analysis custom
road segments. The more buildings/mesh faces a components.
road segment can “see” the higher visibility value
(closer to yellow color) is assigned to it (Figure 7b, c).
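The counting scheme just described, where mesh faces accumulate the road segments that "see" them and segments accumulate the buildings they "see", might be sketched as below; the sees() test is a placeholder for the tool's ray casting and the data layout is an assumption.

```python
# Sketch of the counting scheme just described: each mesh face keeps the IDs
# of the road segments that "see" it, and each segment counts the buildings it
# "sees". The sees() test is a placeholder for the tool's ray casting, and the
# data layout is an assumption.
from collections import defaultdict

def sees(segment_midpoint, face_midpoint):
    return True   # placeholder for the ray-cast visibility test

def cross_visibility(segments, faces):
    """segments: {segment_id: midpoint}; faces: {face_id: (building_id, midpoint)}."""
    face_seen_by = defaultdict(set)      # face id -> segment ids that see it
    segment_sees = defaultdict(set)      # segment id -> building ids it sees
    for seg_id, seg_mid in segments.items():
        for face_id, (building_id, face_mid) in faces.items():
            if sees(seg_mid, face_mid):
                face_seen_by[face_id].add(seg_id)
                segment_sees[seg_id].add(building_id)
    # higher counts map to colours closer to yellow; zero stays white
    return ({f: len(s) for f, s in face_seen_by.items()},
            {s: len(b) for s, b in segment_sees.items()})
```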
Using the tool it is possible to analyze the visibil-
ity of a single building and the road segments that
can “see it” (Figure 7d). The algorithm principle is the
same, with the exception that the information of the
road segment is stored as a boolean (True/False).
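The accessibility component described earlier (maximum walking distance derived from speed and time, shortest paths via Dijkstra's algorithm over the segmented road network) might be sketched as follows; the adjacency encoding and the example network are assumptions.

```python
# Sketch of the accessibility component: maximum walking distance = speed x
# time, and Dijkstra shortest-path distances over the segmented road network
# decide which segment mid points are reachable. The adjacency encoding
# {node: [(neighbour, length_in_metres), ...]} and the example are assumptions.
import heapq

def reachable_nodes(adjacency, start, speed_m_per_min, minutes):
    max_distance = speed_m_per_min * minutes
    best, queue = {start: 0.0}, [(0.0, start)]
    while queue:
        dist, node = heapq.heappop(queue)
        if dist > best.get(node, float("inf")):
            continue
        for neighbour, length in adjacency.get(node, []):
            d = dist + length
            if d <= max_distance and d < best.get(neighbour, float("inf")):
                best[neighbour] = d
                heapq.heappush(queue, (d, neighbour))
    return best   # node -> walking distance, only nodes within the budget

# Example: 15 minutes on foot at roughly 80 metres per minute.
network = {"A": [("B", 400.0), ("C", 650.0)], "B": [("C", 300.0)], "C": []}
print(reachable_nodes(network, "A", speed_m_per_min=80.0, minutes=15))
```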
Figure 7
a) Accessibility by walk
within 15 minutes, b) Vis-
ibility – what one can see
walking 15 minutes, and most
visible buildings (from all the
analyzed visibility points)
and road segments that “see”
most of the buildings, c) Most
exposed façade surfaces and
road segments that “see”
the most of the surfaces, d)
Building with index number
56 is analyzed, in gray road
segments that can “see’ the
building, black – not.
really useful as soon as the 3rd dimension comes project, then it becomes quite hard to estimate the
into play. Architects are trained and can estimate visual impact of the new design and its perception
what a person can see on the plan. However, when from different city locations. In his opinion, our tool
elements of context such as a complicated terrain can be used for the projects with, as he called it,
with high-density developments are a part of design “multiple levels and dimensions”. Based on the feed-
Figure 8
Top: project site and design
proposal; right: visibility
analysis from strategic points
(street view, tram stop, bus
stop, point in the city)
back we introduced additional function that allows the two geometry types (Breps – buildings, and sur-
for terrain surface visibility analysis. The meshing face - terrain). The parametric nature of the model
of the terrain surface can be controlled individually allows for an interactive change of the design form
due to the scale difference and analysis precision of in order to improve the visibility.
CONCLUSIONS Benedikt, ML 1979, ‘To take hold of space: isovist and isovist
This paper demonstrates the working process be- fields’, Environment and Planning B, 6(1), pp. 47-65.
tween a research group and a design practitioner. Bittermann, MS and Ciftcioglu, O 2008, ‘Visual Perception
The application of parametric tools for design prac- Model for Architectural Design, Journal of Design Re-
tice has the potential to establish a better commu- search, Vol. 7, pp. 35-60.
nication between design theory and practice, and Do, E Y L and Gross M D, 1997, Tools for visual and spatial
improve the quality of future urban spaces through analysis of CAD models, Computer Assisted Architectur-
better informed design processes. We will proceed al Design Futures, R Junge (ed), pp. 189-202.
with collaborative work with architects in order to Fisher-Gewirtzman, D Shach Pinsly, D Wagner, IA and Burt,
enhance our methods and adapt them to the needs M 2005, ‘View-oriented three-dimensional visual analy-
of the design practice. sis models for the urban environment’, Urban Design
In our future work we also plan to enhance the International, 10, pp 23-37
functionality of the presented tool by introducing He, J Tsou, JY Xue, Y and Chow, B 2005, ‘A Visual Landscape
additional inputs based on architects’ feedback. For Assessment Approach for High-density Urban Devel-
example, it is important to consider in the analysis opment’ Proceedings of the 11th International CAAD
the type of urban space and the type of movement Futures Conference, Austria, pp 125-134
it implies. In more specific terms, square/piazza or a Koltsova, A, Tunçer, B Georgakopoulou, S and Schmitt, G
shopping street implies lingering. The road between 2012, Parametric Tools for Conceptual Design Support
the transportation hub and business district would at the Pedestrian Urban Scale: Towards inverse urban
most probably have linear/directional type of move- design, Achten, H Pavlicek, J Hulin, J Matejdan, D (eds.),
ment. The perception of space by pedestrians large- Digital Physicality - Proceedings of the 30th eCAADe
ly depends on these factors and we will work on the Conference - Volume 1, pp. 279-287
ways to introduce this information into our paramet- Lynch, K 1976, Managing the sense of a region, MIT Press,
ric tools which would result in more accurate results. Cambridge.
Lynch, K 1960, The Image of the City, Cambridge, MIT Press,
ACKNOWLEDGEMENTS Cambridge.
The authors would like to thank architect Michael Tandy, CRV 1967, ‘The isovist method of landscape survey’,
Gueller for his valuable input during our collabora- in Symposium: Methods of Landscape Analysis, HC Mur-
tion and Lukas Kurilla for his support in the tool de- ray (ed), Landscape Research Group.
velopment. Thiel, P 1961, ‘A Squence Experience Notation for Architec-
tural and Urban Space’, Town Planning Review, V 32,
REFERENCES pp. 33-52.
Appleyard, D K Lynch, K and Myer, J 1964, View from the Turner, A Doxa, M O’Sullivan, D and Penn, A 2001, ‘From iso-
Road, MIT Press, Cambridge. vists to visibility graphs: a methodology for the analy-
Batty, M 2001, ‘Exploring isovist fields: space and shape in sis of architectural space’, Environment and Planning B:
architectural and urban morphology’, Environment and Planning and Design, 28 (1), pp. 103-121.
Planning B: Planning and Design, V 28, pp. 123-150.
Human Activity Modelling Performed by Means of Use
Process Ontologies
Armando Trento 1, Antonio Fioravanti 2
1 Sapienza, University of Rome
1 www.armandotrento.it, 2 http://www.dicea.uniroma1.it
1 armando.trento@uniroma1.it, 2 antonio.fioravanti@uniroma1.it
Abstract. Quality, according to Pirsig's universal statements, does not belong to the
object itself, nor to the subject itself, but to both and to their interactions. In architecture
this is especially true, as we have a building object and users that interact with it.
The problem we approach here, renouncing the impossible task of modelling the actor's
free will ("libero arbitrio"), focuses on defining a set of occurrences that dynamically happen
in the built environment. If organized in a proper way, use process knowledge allows
planners/designers to represent usage scenarios, predicting activity inconsistencies and
evaluating the building performance in terms of user experience.
With the aim of improving both the quality of buildings and the user experience, this
research explores a method for linking process and product ontologies, formalized to
support logic synchronization between software for planning functional activities and
software for authoring the design of infrastructures.
Keywords. Design knowledge modelling; process ontology; knowledge management.
…ficiencies of property and infrastructure investments affect the public finances, even if current spending is much more relevant.
Through BIM, accompanied by more efficient information management, the sector may acquire a production quality typical of more mature industries.
The efforts of the community known as the International Alliance for Interoperability (IAI), established by scientific communities in partnership with key players in the commercial sector, have in the last 10 years aimed at establishing BIM standards for the use of object technology in construction and facilities management.
These standards, known as Industry Foundation Classes (IFC), are now contained within the most comprehensive model of design, construction and Facility Management information yet created. All the main software developers in this industry segment worldwide are committed to producing IFC-compliant software.
Studying the IFC structure, we can observe that it has been developed by means of a space-components product approach, successful in terms of data exchange and information interoperability between programs, but not intended for human understanding. This lack of semantics is reflected in the modelled buildings once it is required to simulate their behaviour in terms of usage, safety and comfort.
More specifically, predicting human behaviour in a building during its usage by means of the current standards, tools and technologies is an urgent open problem that has challenged knowledge engineers and building designers for a long time. It also absorbs considerable resources in terms of industrial research and development in the fields of the military and videogames.

FUNCTIONAL PROGRAM VS. BUILDING PRODUCT DESIGN
A shared goal, typical of all AEC industry products, is to functionally facilitate their direct and indirect users' activities while being aesthetically pleasing (Fioravanti et al., 2011a, p. 185). In order to get this overall performance, buildings' and cities' behaviour has to meet various technical and non-technical requirements (physical as well as psychological) placed upon them by owners, users and society at large.
Research in this field will seek to reduce the gap between technology and society and to increase the quality of building production, by means of an open and participatory approach.
In terms of technological solutions, product knowledge has been fairly well studied and a number of modelling techniques have been developed. Most of them are tailored to specific products or specific aspects of the design activities.
Current research on AEC product modelling can be classified in two main categories:
• geometric modelling, used mainly for supporting detailed design, and
• knowledge modelling, aimed at supporting conceptual aspects of designs.
Specifically, regarding the need to govern the symbiosis between a building and its functions, so that computers can support every phase of construction (e.g. the Solibri program), it is necessary to have information models based on an adequate, formally computable knowledge representation.
This kind of knowledge, oriented to solving complex technical problems, cannot avoid qualifying the building product through its relationship with the context and with the actors.
In terms of social contributions, on the other side, we need to clarify roles and identify responsibilities of the actors involved along the building life cycle, starting from the client, through designers, and providing for the participation of users from the early stages of design concepts.
The BIM methodology assumes that there is a client able to formally schedule a process of briefing, design, production and management, for example using "templates" for the programming of functions and activities, and thus reducing the level of ambiguity in the requirements definition.
Clients, especially if they must also manage the constructed facility, are the largest beneficiaries of the
development of process-product models, because their risk-based reasoning approach drives the optimization of contract management.
Designers, challenged to become more aware of product and process models, are the key to the spread and development of the most advanced information systems. An open area of research works on the interface between designer and tool, enabling the former to clearly face pre-defined patterns and then customize them while using the software they are familiar with.
Users, as is well known, generally play the central role in Architecture. The problem we approach here, renouncing the impossible task of modelling the actor's "libero arbitrio" (free, unpredictable will), focuses on defining a set of occurrences that dynamically happen in the built environment.
Planners' traditional approach consists in entering planned processes (expertise, technical regulations, best practices, etc.) into an architectural schema (Wurzer, 2009; Wurzer et al., 2010). However, those processes are correct only if the planner can correctly anticipate and inform the usage of the building by different building user groups.
If organized in a proper way, it is possible to represent usage scenarios, predicting activity inconsistencies and evaluating the performance of the building in terms of user experience.
At the same time, it is possible to design a building use programme if it can be re-modelled during the building design process.
With the aim of improving the quality of user experience, this paper explores a method based on process-product knowledge, formalized to support logic synchronization between the planning of activities and the design of infrastructures.

STATE OF THE ART IN META-PROCESS MODELLING RESEARCH
Many applications use process information, including production scheduling, process planning, workflow, business process reengineering, simulation, process realization, process modelling, and project management.
There are at least two problems with the way all applications typically represent process information:
• They use their own internal representations; therefore communication between them, a growing need for industry, is nearly impossible without some kind of translator.
• The meaning of the representation is captured informally, in documentation and examples, so little automated assistance can be given to the process designer.
In terms of Process Knowledge Modelling, at the state of the art, it is important to refer to some ongoing research at the international level.

NIST CPM
A design repository project at NIST attempts to model three fundamental facets of an artifact representation: the physical layout of the artifact (form), an indication of the overall effect that the artifact creates (function), and a causal account of the operation of the artifact (behaviour).
The NIST Core Product Model (CPM) has been developed to unify and integrate product or assembly information [1]. The CPM provides a base-level product model that is: not tied to any vendor software; open; non-proprietary; expandable; independent of any one product development process; and capable of capturing the engineering context that is most commonly shared in product development activities. The entity-relationship data model influences the model heavily; accordingly, it consists of two sets of classes, called object and relationship, equivalent to the UML class and association class, respectively.

The buildingSMART Standard for Processes
The buildingSMART standard for processes (formerly known as the Information Delivery Manual, or IDM [2]) specifies when certain types of information are required during the construction of a project or the operation of a built asset. It also provides detailed specifications of the information that a particular user (architect, building service engineer, etc.) needs to have at a point in time, and groups together information that is needed
in associated activities: cost assessment, volume of materials and job scheduling are natural partners. Thus the buildingSMART standard for processes offers a common understanding for all the parties: when to exchange information and exactly what is needed.
The linked Model View Definition (MVD) turns the prerequisites and outcomes of the processes for information exchange into a formal statement. Software developers can take the standard and the specific Model View Definitions that derive from it and incorporate them into their applications [3]. The detailed information for this is described in the ISO standard ISO 29481-1:2010 Building information modelling -- Information delivery manual -- Part 1: Methodology and format.
ISO 29481-1:2010 specifies a methodology and format for the development of an Information Delivery Manual (IDM). It specifies a methodology that unites the flow of construction processes with the specification of the information required by this flow, a form in which the information should be specified, and an appropriate way to map and describe the information processes within a construction life cycle.

ASTM Standard Scales
The ASTM standard scales provide a broad-brush, macro-level method, appropriate for strategic, overall decision-making [4]. The scales deal with both demand (occupant requirements) and supply (serviceability of buildings) (McGregor and Then, 1999). They can be used at any time, not just at the start point of a project. In particular, they can be used as part of portfolio management to provide a unit of information for the asset management plan, on the one hand, and for the roll-up of requirements of the business unit, on the other. The ASTM standard scales include two matched, multiple-choice questionnaires and levels. One questionnaire is used for setting workplace requirements for functionality and quality. It describes customer needs—demand—in everyday language, as the core of front-end planning. The other, matching questionnaire is used for assessing the capability of a building to meet those levels of need, which is its serviceability. It rates facilities—supply—in performance language as a first step toward an outline performance specification.
A set of tools was designed to bridge between "functional programs" written in user language on the one side and "outline specifications and evaluations" written in technical performance language on the other. Although it is a standardized approach, it can easily be adapted and tailored to reflect the particular needs of a specific organization.
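As a purely illustrative sketch of the demand/supply matching that such scales support, the following Python fragment compares required levels against rated serviceability per category and flags the gaps; the category names and the 0-9 levels are assumptions for the example, not the calibrated ASTM questionnaire items.

# Hedged sketch: occupant requirements (demand) vs. rated serviceability (supply),
# each expressed as a level per category on an assumed 0-9 scale.
demand = {"thermal_comfort": 7, "acoustic_privacy": 6, "layout_flexibility": 5}
supply = {"thermal_comfort": 6, "acoustic_privacy": 8, "layout_flexibility": 5}

def serviceability_gaps(demand, supply):
    """Return the categories in which the facility rates below the requirement."""
    return {c: supply.get(c, 0) - demand[c]
            for c in demand if supply.get(c, 0) < demand[c]}

print(serviceability_gaps(demand, supply))  # {'thermal_comfort': -1}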
Limits
Building modelling is not an objective process, but rather a subjective one, aimed at very specific purposes that depend, first and foremost, on contractual typology. Around process models there are a lot of misleading quarrels, in the sense that many models have always appeared very reductionist and simplistic in relation to the complexity of the real and the articulation of the reasons of the different actors involved.
Typically, in architecture, when a product design fails, analysts want to insert a design process to fix the bad design. However, a one-size-fits-all design process does not exist. Experience teaches that it is quite hard to force a fixed process, which every actor must follow, on a design team. Every designer has their own unique way of solving design problems. Design domain experts usually argue that bad product design is fixed by hiring good designers, not by adopting a better design process.
There is a need to produce not more models, but environments where it is more easily possible to reformulate the existing process-product models.
Specifically, process models influence Information Modelling much more than drafting-based methods. Each actor instinctively wants to rearrange the software's built-in model, because a single information model cannot meet all the requirements.
Setting up an information modelling process from the briefing phase implies reasoning primarily on the building functions and on the physical-environmental solutions, such as energy modelling or usage planning.
USE PROCESS KNOWLEDGE MODELLING
To provide a reliable, comprehensive and up-to-date knowledge base on the use process, we relied on a general structure for knowledge representation already presented to and discussed among the scientific community by this research group (Carrara et al., 2009; Fioravanti et al., 2011a; 2011b), working to extend its field of application to this specific purpose.
This general process representation model is linked to a specific Building Knowledge Model (BKM) structure, oriented to the formalization and description of each entity composing the design product (spaces, building components, furniture, equipment, etc.).
Each entity is represented in its main features and in its relations with other entities by means of the 'knowledge template' (Carrara et al., 2009) based on the already discussed "Meaning-Properties-Rules" structure.
Starting from this representation model, already applied to represent building design products, the new challenge is to extend it to the representation and evaluation of spatial and technological requirements defined according to user needs.
Specifically, the interdisciplinary processes which the BKM aims to support include the following:
• Design of a Use Functional Program to be performed in an existing infrastructure;
• Design of an infrastructure in accordance with a defined Use Functional Program;
• Design of an infrastructural renovation in accordance with a defined Use Functional Program and/or rescheduling of the activities defined by the Use Functional Program on the basis of the existing infrastructure.

Tetrahedron Of Knowledge
The scenario in which a building project is delineated by means of outlines and guidelines is marked by the four 'poles' of a symbolic Knowledge Tetrahedron that represent the different kinds of knowledge: product, context, actors and procedures (Fioravanti et al., 2011b).
The four 'poles' of knowledge shape what happens during AEC design. Each 'pole' is constituted by a knowledge-based system in its respective domain: the knowledge of the product (the building, with its components and its multidisciplinary aspects), the context (the site, with reference to physical, legal, planning, ecological and climatological aspects), the actors involved (humans - professionals, contractors, customers - and non-humans - agents, intelligent assistants) and the procedures that regulate this process (such as commitment, design phases, economic and financial aspects, administrative and organizational rules). All these 'poles' evolve in time.
This Research Group (RG) has structured and formalized product knowledge through a logical decomposition of the building organism. "Product ontologies" were implemented, starting from IFC standards and developing a method for explicitly modelling the rules that qualify the intrinsic meaning at different levels of aggregation.
The RG has also structured and formalized context knowledge, both physical-environmental and jurisdictive, implementing with the same method the "Context ontology", allowing for ad hoc support during decision-making processes of architectural product design-programming.
In the last few years the RG has been studying the "Actors ontology", approaching the problems related both to modelling the specialist profiles involved in the design-programming process and the profiles involved in the process of use. Some rules governing the objective part of user behaviour have been identified.
This paper reports on early results of a study which explores a method for structuring the "Process ontology". The backbone lemma of this tetrahedral "knowledge realm" is the recognition of the dynamic dimension that characterizes every process model.
The "Tetrahedron of knowledge" finds its most complete application in real AEC problems because, unlike existing knowledge structures, it allows actors to dynamically model process-product structures with explicit semantics.
The BKM system based on the tetrahedral knowledge structure enables actors to intervene in the course of work on the definition of process
entities and rules. The system supports the re-modulation of the constraints and objectives of the process that are bi-univocally related to the functional and behavioural properties of the product.
"Situatedness" of development processes is a key issue in both the software engineering and the method engineering communities, as there is a strongly felt need for process prescriptions to be adapted to the situation at hand.
Specifically, the formalization of the Use Process Ontology qualifies, and is qualified through, a rigorously structured relationship with the product-context-user ontologies.
To model use process entities and rules means governing the integration between product form, function and behaviour, and vice versa.

Use Process Design Knowledge
Use Process Knowledge is represented by means of the Use Process Ontology, a structure based on Use Process Entities, qualified by a system of Use Process Rules. On one hand these process rules govern activity planning and on the other hand they control the relationship with the rest of the knowledge realms: who does what, where, when and how.
Use Process Knowledge can be described by means of process classes, at different levels of aggregation:
Use Process Actions: elementary class entities structuring the Use Process Ontology. They represent the process based on the user's minimum ergonomic function.
Use Process Activities: a set of Use Process Actions structured in time and space, oriented by the functional programme. They qualify the relation between users and building (spaces, components, facilities, equipment, etc.).
Use Process Rationale: aggregation of Use Process Design Activities. The importance of representation for use rationale has been recognized, but it is a more complex issue that extends beyond artifact function: it is a function of social-economical-environmental sustainability. (The Design Structure Matrix (DSM) has been used for modelling design processes (activities) and some related research efforts have been conducted. For example, a web-based prototype system for modelling the product development process using a multi-tiered DSM was developed at MIT. However, few research endeavours have been found on design rationale (Peña-Mora et al., 1993).)
Events: particular process entities, "milestones" that occur in the dynamics of the activities. They are the emergencies necessary to structure the causal and dependency relationships between Use Process entities.

Use Process Requirements, Performance, Behaviour
From a computational point of view, use process requirements can be defined as variables, because they establish a mapping between a set of process entities and a set of values which express some of their qualitative (and quantitative) aspects.
The specific values that satisfy a particular use process requirement in a particular situation (context and objective dependent) can be defined as the use process performance.
The set of all use requirements and performances can be defined as the behaviour of the represented process entity/class in terms of use.

Design Goals Knowledge Structure
Design process goals can be stated as desirable performance measures of the sought solution. Alternatively, they can be stated as a set of constraints that the proposed solution must satisfy.
Each constraint indicates the specific level of performance a design solution should achieve in a particular category, or an acceptable range of performance values.
It can be represented formally using this general notation:
constraint ( value | range )
where the vertical bar stands for 'or'. A constraint can be stated in terms of a specific value it must satisfy or a range of values.
The function of the goals is thus to group a number of related constraints that should all be satisfied together (Carrara and Fioravanti, 2003).
More formally, goals can be represented by this general notation:
goal ( { goal } | { constraint } )
This definition is recursive: a goal can be stated in terms of constraints, or in terms of goals. There is no inherent difference between goals and constraints. Rather, they form a hierarchical structure where terminal nodes represent constraints and intermediate nodes represent goals.
The conditions under which a constraint is considered satisfied must be established, and eventually modified during the design process by the actors, according to the internal and external requirements.
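The recursive notation above maps directly onto a small data structure. The following Python sketch is ours, not the BKM implementation: a constraint carries either a target value or an admissible range, a goal groups constraints and sub-goals, and satisfaction propagates up the hierarchy exactly as described (all names and example values are illustrative assumptions).

# Sketch of constraint ( value | range ) and goal ( { goal } | { constraint } )
class Constraint:
    def __init__(self, name, value=None, rng=None):
        # A constraint holds either an exact target value or a (min, max) range.
        self.name, self.value, self.rng = name, value, rng

    def satisfied(self, measured):
        if self.rng is not None:
            lo, hi = self.rng
            return lo <= measured <= hi
        return measured == self.value

class Goal:
    def __init__(self, name, children):
        # Children may be constraints (terminal nodes) or sub-goals (intermediate nodes).
        self.name, self.children = name, children

    def satisfied(self, measurements):
        # A goal is satisfied when all of its constraints and sub-goals are satisfied.
        return all(c.satisfied(measurements[c.name]) if isinstance(c, Constraint)
                   else c.satisfied(measurements)
                   for c in self.children)

comfort = Goal("visual_comfort", [Constraint("daylight_factor", rng=(2.0, 5.0)),
                                  Constraint("glare_index", rng=(0, 22))])
print(comfort.satisfied({"daylight_factor": 3.1, "glare_index": 19}))  # True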
LOGICAL IMPLEMENTATION PATH
The implementation pipeline is oriented to predicting and evaluating the performance of a building based on (planned or to-be-re-planned) usage scenarios and, vice versa, to modelling scenarios of use in an (existing or to-be-renewed) building.
This work focuses on a multi-model view of process modelling which supports this dynamicity. The approach builds, beside the BKM product representation (geometric and non-geometric), a BKM process representation.
Since the BKM provides a semantic structure and a standard language (XML, OWL), what we are working on is the implementation of a bidirectional synchronization between software for programming and software for authoring space solutions.
The assumption of this process modelling approach is that process prescriptions should be selected according to the actual situation at hand, i.e. dynamically in the course of the process.
To implement this process, the proposed Building Knowledge Model, a formalized extension of actual Building Information Models, includes representation of both the characteristics of the ontology entities and the constraints. By means of Protégé, an ontology editor, we implemented some representative use process design requirements on top of some building ontology entities.
Knowledge Representation allows queries and constraint verifications by means of a proper reasoner and rule formalizations. In order to interrogate Design Solutions, Ontology Rules have been implemented in SWRL and tested on prototype instances of the developed Ontology Classes to check use process-product constraints such as the following (see the sketch after this list):
• space configuration and topological relationships among spaces;
• furniture and equipment dotation for each building unit;
• compatibility of MEP systems, structural elements and space configuration.
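The SWRL rules themselves are not reproduced in the paper; to indicate the kind of verification they perform, here is a hedged plain-Python sketch of the second check (equipment dotation per building unit), with invented unit, activity and equipment names.

# Each building unit declares the activity planned in it; each activity
# requires a minimum equipment dotation (hypothetical data).
required_equipment = {"surgery": {"operating_table", "scrub_sink", "anaesthesia_unit"},
                      "recovery": {"bed", "monitor"}}

units = [{"id": "OR-1", "activity": "surgery",
          "equipment": {"operating_table", "scrub_sink"}},
         {"id": "REC-1", "activity": "recovery",
          "equipment": {"bed", "monitor"}}]

for unit in units:
    missing = required_equipment[unit["activity"]] - unit["equipment"]
    if missing:
        print(f"{unit['id']}: missing {sorted(missing)} for activity '{unit['activity']}'")
# -> OR-1: missing ['anaesthesia_unit'] for activity 'surgery'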
In this specific case study, the process representation is oriented to use programming and design, so as to match the Activity Program, defined by means of traditional project management software, with the design solution of the space configuration.
By means of the BKM general knowledge structure, it is possible to connect a labelled graph of intentions, called a strategy map, as well as its associated flowchart guidelines, to layout solutions.
A critical path diagram of a hospital operating room renovation has been implemented, and we are now working on the actual link to the Process Activities Gantt chart. This map is a navigational structure which supports the dynamic selection of the intention to be achieved next and the appropriate strategy to achieve it.
A set of task guidelines, intended to help in the operationalisation of the selected intention, represents some basic ergonomic rules about the flow of patients, staff, equipment and material.
Once the task of formally representing Use Process and Product Knowledge according to the BKM Knowledge Structure is accomplished, the implementation steps are, namely (Figure 1):
1. Connect Product Design Ontologies and Use Process Ontologies (e.g. expressed in OWL by means of ontology editors such as Protégé);
2. Connect Use Process Ontologies with actual BIM, or IFC (by means of APIs, or using the Beetz et al. (2006; 2010) transcription of IFC into OWL);
Figure 1. Building Entities and Goals Knowledge Modelling.
and instances of the implemented classes in the Protégé Knowledge Structure.
Similarly, this approach has been used to realize the link between the OpenProject software, used to manage the XML-OWL process instances, and the Use Process Knowledge Base in Protégé.
Revit- and OpenProject-represented entities are associated to instances of the BKM Knowledge Structure; data associated to entity Properties can be "extracted" from the BIM model, while other features can be manually specified in Protégé according to the implemented Knowledge Representation Structure.
Linking the databases allows keeping consistency between IDs from the two different environments referring to the same represented concept.

CONCLUSIONS
There is an urgent need for tools able to link and translate business rules and programme-project processes, to check where business processes are not following policies and rules.
A benefit of the proposed knowledge representation is to provide automated assistance for process development by defining the semantics of process entities in a computer-manipulable way. For example, many businesses have rules, policies and space-activity requirements that their processes are supposed to follow.
However, the representation of these typically does not enable tools to check whether they are consistent. The BKM represents rules about processes in the same way as the processes themselves, and uses a formalism that supports automated reasoning.
By introducing and enhancing reasoning mechanisms, it will go beyond the potential of existing commercial tools for supporting decision-making activities.
The proposed knowledge-based system supports process traceability and, consequently, allows the recognition of responsibilities and the collection of re-usable experiences.
The possibility to coordinate the design process between different actors (including clients, final users, etc.) and to evaluate the building quality before its construction will increase the chances for the client to be satisfied and will provide more guarantees of success in terms of future efficiency and performance.

REFERENCES
Beetz, J, van Leeuwen, JP and de Vries, B 2006, 'Towards a topological reasoning service for IFC-based building information models in a semantic web context', Proceedings of the Joint International Conference on Computing and Decision Making in Civil and Building Engineering, Montréal, Canada, pp. 3426-3435.
Beetz, J, de Laat, R, van Berlo, R and van den Helm, P 2010, 'Towards an Open Building Information Model Server - Report on the progress of an open IFC framework', Proceedings of DDSS, P-18, pp. 1-8.
Carrara, G, Fioravanti, A, Loffreda, G and Trento, A 2009, 'An Ontology based Knowledge Representation Model for Cross Disciplinary Building Design. A general Template', in G Çağdaş and B Colakoglu (eds), Computation: the new Realm of Architectural Design, Istanbul, pp. 367-373.
Carrara, G, Fioravanti, A and Nanni, U 2004, 'Knowledge Sharing, not MetaKnowledge. How to Join a Collaborative Design Process and Safely Share One's Knowledge', Proceedings of InterSymp-2004 Special Focus Symposium on Intelligent Software Systems for the New Infostructure, Baden-Baden, Germany, pp. 105-118.
Carrara, G and Fioravanti, A 2003, 'Needs Requirements Performances Vs Goals Constraints Values in Collaborative Architectural Design', Proceedings of SIGraDi Conference, Rosario, Argentina, pp. 253-255.
Fioravanti, A, Loffreda, G and Trento, A 2011a, 'Computing Ontologies to Support AEC Collaborative Design - Towards a Building Organism delicate concept', Proceedings of eCAADe Conference, Ljubljana, Slovenia, pp. 177-186.
Fioravanti, A, Loffreda, G and Trento, A 2011b, 'An innovative comprehensive knowledge model of architectural design process', International Journal of Design Sciences & Technology, 18(1), pp. 1-16.
McGregor, W and Then, D 1999, Facilities Management and the Business of Space, Arnold, London.
Peña-Mora, F, Sriram, RD and Logcher, R 1993, 'SHARED-DRIMS: SHARED Design Recommendation-Intent Management System', Proceedings of Enabling Technologies: Infrastructure for Collaborative Enterprises, Morgantown, WV, IEEE Press, pp. 213-221.
Wurzer, G 2009, 'Systems: Constraining Functions Through Processes (and Vice Versa)', in G Çağdaş and B Colakoglu (eds), Computation: the new Realm of Architectural Design, Istanbul, pp. 659-664.
Wurzer, G, Fioravanti, A, Loffreda, G and Trento, A 2010, 'Function & Action - Verifying a functional program in a game-oriented environment', Proceedings of eCAADe Conference, Zürich, pp. 389-394.

[1] http://www.mel.nist.gov/msid/conferences/talks/rsriram.pdf (last access 30-05-2013)
[2] http://www.buildingsmart.org/standards/idm (last access 30-05-2013)
[3] http://www.iso.org/iso/home/store/catalogue_tc/catalogue_detail.htm?csnumber=45501 (last access 30-05-2013)
[4] http://www.wbdg.org/design/func_oper.php (last access 30-05-2013)
3D Model Performance
Abstract. Various Rapid Prototyping methods have been available for the production
of physical architectural models for a few years. This paper highlights in particular the
advantages of 3D printing for the production of detailed architectural models. In addition,
the current challenges for the creation and transfer of data are explained. Furthermore,
new methods are being developed in order to improve both the technical and economic
boundary conditions for the application of 3DP. This makes the production of models with
very detailed interior rooms possible. The internal details are made visible by dividing
the complex overall model into individual models connected by means of an innovative
plug-in system. Finally, two case studies are shown in which the developed methods are
applied in order to implement detailed architectural models. Additional information about
manufacturing time and costs of the architectural models in the two case studies is given.
Keywords. Architectural model, CAAD, Rapid Prototyping, 3D printing, architectural
detail.
INTRODUCTION
Various Rapid Prototyping (RP) or Additive Manufacturing (AM) technologies, which enable the direct implementation of 3D drafts in models, have already been available for a few years. Today the most popular technologies among these are 3D Printing (3DP) with plaster powder and Fused Layer Modelling (FLM) with plastic filament. A common feature of these technologies is that the models are created directly from the 3D-CAAD data.
The physical 3D models are manufactured generatively, i.e. the models are created layer by layer by adding material (hence the name Additive Manufacturing). The application of these Rapid Prototyping technologies for the production of architectural models provides a number of advantages over conventional model production. For example, it allows models to be created in minimum time with a greater degree of detail. Furthermore, the reproduction and variation of drafts and models are also simplified considerably.
Another advantage in addition to this implementation speed is the low cost of the systems and materials used, resulting in a considerable reduction of the model costs. However, there are
currently still problems with regard to the data transfer and the preparation of the models for 3D printing, which stand in the way of a further expansion of this technology (Sullivan, 2012). These problems are highlighted and dealt with in this paper.

CURRENT CHALLENGES FOR THE PROCESSING OF 3D DATA TRANSFER FOR RAPID PROTOTYPING
With all Rapid Prototyping or Additive Manufacturing technologies, the 3D-CAAD data are imported first and the Rapid Prototyping is prepared as part of pre-processing. System-specific software is available for this purpose. The actual construction of the model in layers then takes place in a Rapid Prototyping device. Finally, the model has to be post-processed, e.g. in order to remove supporting structures or improve the stability of the model.
The 3D-CAAD data may come from different sources. On the one hand, 3D data created by means of a commercial 3D-CAAD or BIM system are usually already available for new projects. On the other hand, often only 2D drawings are available for existing buildings, and often no plans at all are available for historical or even archaeological buildings. 3D scanners are often used nowadays in these cases in order to register the contours of the exterior façades and interior rooms.
The different data sources must be prepared in such a manner as to allow them to be processed by the RP technologies, as shown in Figure 1. In concrete terms, this means that the 3D-CAAD data have to be converted from the original file formats into a format that can be read in by the RP systems. Data based on CAAD are usually complete and consistent. However, there are still some problems with regard to the interfaces from the CAAD or BIM system to the RP software. RP devices only accept a neutral format, notably STL or VRML, but no native formats from individual commercial CAAD system manufacturers.
The simple data format STL only reproduces the surfaces of 3D objects. In doing so, the 3D object is approximated with triangles, allowing the degree of detail and hence the data volume usually to be set. However, this format does not provide information on the colour or texture of surfaces, with the effect that monochrome models are created. The advantage of the VRML format is the opportunity to reproduce not only surfaces but also coloured textures.
With data based on 3D scanners, there are usually no problems with regard to the data format, since Reverse Engineering software often uses the STL format itself. However, the same problem occurs time and again: the data records of point clouds produced by the 3D scanning systems are incomplete, since the scanners, which use optical sensors, find it difficult to register areas in which no light is reflected. These "shaded" areas, such as grooves and recesses, re-appear as "holes" in the data record and have to be removed with complex software operations using Reverse Engineering software.
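To make the geometry-only nature of STL concrete, the short Python sketch below writes a single triangular facet in the ASCII variant of the format; the file name and coordinates are arbitrary, and the absence of any colour or texture field is exactly why monochrome models result.

# Minimal ASCII STL writer for one triangle; the normal is left at 0 0 0,
# which most RP pre-processing tools recompute from the vertex order.
triangle = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]

with open("facade_fragment.stl", "w") as f:
    f.write("solid fragment\n")
    f.write("  facet normal 0 0 0\n    outer loop\n")
    for x, y, z in triangle:
        f.write(f"      vertex {x} {y} {z}\n")
    f.write("    endloop\n  endfacet\nendsolid fragment\n")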
…lar, floors, the roof and adjoining buildings. Industrial buildings can be divided according to their functions, e.g. office and workshop areas, adjoining buildings, supply facilities. This gives the model a structure and allows the individual models to be designed in detail. The design usually comprises the reproduction of flooring and exterior walls. Furthermore, supporting interior walls and non-supporting lightweight walls can be reproduced. The function of the individual walls can be depicted by different wall thicknesses in the models. Interior staircases as well as pillars and supports are also reproduced.
Further details, such as windows and door openings as well as gates and skylights of industrial buildings, can be integrated in the model. This allows the room layout to be recognised. Since the models are set up floor by floor and are open at the top, it is possible for the architect and customer to discuss and assess the design.
The individual models are equipped with connection elements to make it easier to handle them. They are simple plug-in connections which allow the overall model to be quickly assembled or disassembled. The Poka Yoke method is used for this purpose, i.e. the plug-in connections are positioned in such a way that there is only one way of connecting the individual models (Santos et al., 2006). This prevents the individual models from being assembled in the wrong combination. It also reduces the risk of damage due to incorrect assembly considerably.

CASE STUDY 1: SINGLE-FAMILY HOUSE
A single-family house as shown in Figure 2 was 3D-printed in this case study. The building was divided into the individual models: cellar, two living floors, roof and adjoining building.
Each floor was created with a floor plate and side walls. Furthermore, the interior walls and all openings (windows and doors) were reproduced. As needed, the functions of the rooms could be applied in the form of writing to the floor plate to provide the constructor with a better understanding. Details like the grey painting of the oriel at the façade in the front could also be demonstrated.
Since the model was created at a scale of 1:100, the interior and also exterior walls could be reduced to the scale without falling short of the minimum requirements of the 3D printing system (see Figure 3). The individual model parts are joined by means of plug-in connections. This allows the roof and the individual floors to be raised in order to observe and assess the underlying areas. Lines of sight
can be seen in the building due to the open design of the building.

CASE STUDY 2: INDUSTRIAL BUILDING
The 3D printing of an industrial building is implemented in the second case study, as demonstrated in Figure 4. It is used as a test centre for persons and motor vehicles. It consists of a cellar, several test halls and an office building with common rooms. The building is initially divided into storage, test, office and social function areas. However, these areas are so large that some of them need to be divided even further into floors in order to illustrate all necessary details to the constructor.
The scale 1:200 was applied to this model to allow even the largest individual model (the cellar) to fit into the construction space of the 3D printer (204 mm x 253 mm x 204 mm, L x W x H). In this case, the wall thickness of the interior and exterior walls had to be adjusted (i.e. enlarged) in order to adhere to the minimum wall thickness of the 3D printer and, in this way, create a stable, durable model. As shown by the view onto the internal structure of the building in Figure 5, this distorted the scale to a certain degree, since the lengths and heights of the building are true-to-scale, but not the wall thicknesses.
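The adjustment described above amounts to clamping every scaled wall thickness to the printer's minimum printable wall. A minimal Python sketch, assuming a 2 mm minimum (an illustrative value, not the one used by the authors):

MIN_WALL_MM = 2.0      # assumed minimum printable wall thickness of the 3D printer
SCALE = 200            # model scale 1:200

def model_wall_thickness(real_thickness_mm):
    """Scale a real wall thickness and enlarge it if it falls below the printer minimum."""
    scaled = real_thickness_mm / SCALE
    return max(scaled, MIN_WALL_MM)

# A 240 mm wall scales to 1.2 mm and is therefore printed at 2.0 mm,
# which is the kind of scale distortion visible in the internal views.
print(model_wall_thickness(240.0))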
Figure 4. CAAD-Model (left) and physical model of the complete industrial building (right), scale 1:200.
The cellar, which covers all areas, serves as the basis for the overall model. The individual models are plugged onto the cellar either entirely or floor by floor. The office and common areas are particularly challenging, since they comprise numerous details.

COMPARISON OF THE MANUFACTURING TIME AND COSTS
The manufacturing times and also the costs for the two case studies are specified in Table 1. The manufacturing time is based on the actual construction time for the model and the time required for post-processing, which consists of cleaning the model to remove residual powder and the subsequent infiltration with resin. The total manufacturing time for the single-family house is considerably shorter than that for the industrial building due to the lower construction volume, the lower number of individual parts and the lower complexity of the geometry.
The single-family house is completed within a working day and can be printed, for example, overnight. In contrast, printing and reworking the industrial building is expected to take 1.5 working days, although, in this case too, printing can be performed overnight to accelerate the availability of the model. In addition, in both case studies it takes some hours to prepare the data, because only 2D drawings are available.
When comparing the manufacturing costs, a distinction must be made between material and machine costs. The material costs consist of the costs for the polymer plaster powder and binding agent used during the production phase. The costs for ink are negligible in these examples. The material costs also include the costs for the resin used during post-processing for the infiltration and hence the increase of the strength of the models. The machine costs are based on different boundary conditions (e.g. acquisition costs, service life, depreciation, interest) used to calculate the hourly machine rate. The personnel costs are not included in this calculation, since, by experience, they vary considerably.
In both case studies the material costs of the industrial building are more than double the costs of the single-family house. The machine costs of the industrial building are almost triple the
EIGENCHAIR
The project Four Chairs and all the others opens the possibility of an alternative definition of design. Rather than offering yet another thesis in support of linear design development, it emphasises its polysemantic nature by understanding the design process as an open field of possibilities, which not only explores physical limitations of space, but also reacts to contemporary social and cultural phenomena. In order to explain the idea, specific techniques were used to replace simple design concepts with a series of parallel narratives, thus provoking new and unexpected situations. The primary field of interest of this project becomes the intersection of different domains of human knowledge, especially architecture, culture and information sciences.
Eigenchair is a concept that results from the effort to design a chair that refers to the genealogy of chairs, yet carries the potential of all chairs that might ever be created in the future (Figure 1). This is the central subject of the project Four Chairs and all the others - Eigenchair, which is observed and explored as a sum of ideas. The prefix Eigen is commonly used in linear algebra in compounds such as eigenfunction, eigenstate and eigenvector. It comes from the German word eigen, which means "own". The basic tool for the design of the population of chairs - i.e. "all the others" - is the Principal Component Analysis algorithm (Abdi and Williams, 2010). It is a standard tool for contemporary data analysis that has been adapted to various needs, from neuroscience to
computer graphics, and which is now being applied in the field of design (Sirovich and Kirby, 1987; Turk and Pentland, 1991). Principal Component Analysis reduces a given data set to a set of principal components, i.e. eigenvectors. The key feature of this algorithm is the intersection and interconnection of all data, whose result adapts and changes according to the required point of view, i.e. subjective interpretation.
The objective of this project is to show strategies and concepts of designing by using information technologies. What happens with objects when they are abstracted and reduced to a set of data? What are the potentials of data-driven design?

ALTERNATIVE DEFINITION OF DESIGN - DESIGN APPROACH
A radical view of the world and society is today mediated through advanced technological systems. Thanks to – or perhaps due to – such circumstances, design seeks new ways of thinking and conceptualizing, as well as of producing objects and ideas. The informatization of society and computer-aided design are opening a whole range of new ideas about the perception of the time and space we live in. Algorithmic design is now based on new parameters: design of ideas, narratives, procedures, populations, digital production, and a new understanding of materiality. Generative design methods mean the creation and modification of rules and systems, which then generate an abstract machine - or a population of objects. The designer therefore does not manipulate the "artifact" itself, but rather the rules and systems which generate it. The emphasis is no longer on the creation of physical objects, but on conceiving meta-objects in the possibility space.

Recycling Information
The postmodern condition defines a set of critical, strategic and discursive practices which, as their main tools, use concepts such as difference, repetition, simulacrum and hyperreality, in order to destabilize modernist concepts such as identity, linear progress of history, or unambiguity (Aylesworth, 2013). The supermodern condition, on the other hand, is not focused on the creation or identification of the existing "truths", but on the filtration of useful information among the plenitude of new media cultural practices. In order to avoid postmodernist tautological nihilism, the supermodernist paradigm approaches the deafening cacophony of signs in an active manner. This paradigm also operates within the field of design, in which it is no longer the object that is the focus of research, but its characteristics, features, relations, ratios, structures and indexes. The information age enables a redefinition of postmodern techniques such as collage, assemblage or bricolage,
all of which define an object by collecting and reassembling various information and elements. The newly created object is now a fusion of different objects' data, but it is also completely unique and independent in form. This project is an example of digital recycling, which recycles information and data of chairs (Figure 2).

Elitism And Exceptionalism Of Singular Object Vs. Individual Populism Of Generic Objects
So far, the field of design has understood practices which dealt with singular objects, that is, the creation of unique and specific "ideal" objects. Such an approach was closely related to the modernist paradigm. Today, however, the emphasis is moving from the design of an object to the design of an idea. The new paradigm changes the designer's relation to a static object by putting an emphasis on conceptualization, interaction of the components, systems, and processes. What was once the design of a perfect, unique object featuring a specific materiality is today the design of a population of objects featuring any materiality. Instead of a specific object, the designer creates an algorithm. The elitism and exceptionalism contained in the idea of a singular object is replaced by the "individual populism" of generic objects. The key role is taken over by generative systems that offer a methodology and a theoretical world view in a framework that goes beyond dynamic processes. The design process becomes an abstract definition of algorithms. Instead of focusing on a "perfect" chair, a whole population of chairs was designed (Figure 3).

Imposed Materiality
In generative object design, the materiality of an object is not a precondition for its final manifestation. The choice of material has so far served as the basis for determining the design process, defining the expected execution of details, connections and textures. Today, generative system design enables the imposition of materiality on the object. A form uncomplimentary to a certain material can now be attached to it by mere use of intellectual control. Therefore, objects previously described by fixed geometry can now gain relative geometry that can be rendered into reality via 3D printing. Their materiality is the last, almost arbitrary decision made by the designer (Figure 4).

Designing Narratives
By rethinking the notion of "good design", one comes to the conclusion that design is just a tangible fragment of reality, which narrates one of the many stories that surround us. Design never appears in silence. What we call "good design" nowadays is imbued with a series of narratives constructed by different discourses: formal, ideological, psychological, and theoretical. Only one part of the design process is constituted by its material and formal
aspects, while most of it is built upon stories which describe it and the individuals who transfer the stories or identify with them. Therefore, besides designing an object, it is also necessary to design a narrative which defines its meaning.
The research focus of the project Four Chairs and all the others is the design of a chair which does not carry on the heritage of iconic or functional pieces of furniture, but one which contains information about "all chairs ever created", for which the term Eigenchair is used - to describe a sum of ideas. The algorithm database contains a number of "other chairs". Their fusions enable an infinite variety of possible results. In order to achieve a certain control over the results, out of "all other chairs" we have chosen four chairs as a precondition for creating identity and narrative. Fusions of characteristic parts of those four chairs with all the others are defined by user-made maps that define the transformations, upgrade the performance of the Principal Component Analysis tool, and enable control of the result (Figure 5). The project Four Chairs and all the others refers to four iconic chairs: Thonet's Chair No. 14, the Wire Chair by Charles and Ray Eames, the Panton Chair, and Gehry's Wiggle Side Chair (Vegesack et al., 1996). Their main mutual link is the specificity and uniqueness of the material and their respective technological innovation, depending on the context in which they were designed. It is the richness of meaning and historical references of these examples that enables us to further create analogies, stories and narratives, which, in turn, fertilizes the viewer's active participation in the process of visual representation.

MULTIDIMENSIONAL VECTOR - TECHNICAL APPROACH
The project Four Chairs and all the others deals with manipulating data, thereby generating new objects. A whole library of chairs, that is, their geometric and spatial characteristics, along with their historical importance and their narratives, is taken as the starting point of the project. By using open source 3D models of chairs from the Google warehouse, their geometry is appropriated through a set of algorithms, after which the Principal Component Analysis algorithm …
Figure 5. New chair as a fusion of Panton Chair and Wiggle Side Chair.
In order to achieve these goals, Principal Component Analysis computes new variables, called principal components or eigenvectors, which are obtained as linear combinations of the original variables. The first principal component is required to have the largest possible variance. The second component is computed under the constraint of being orthogonal to the first component and thus having the second largest possible variance. The other components are computed likewise (Figure 7).
According to the size of the initial bounding box, a voxel-based space is created. Each voxel receives values from txt files exported in the first step. With the use of Principal Component Analysis we can represent each chair by using only a set of eigenweights, e.g. (-5673, -85184, 50, -25533, 31594). By changing the values of the principal components, i.e. the eigenweights, we are able to achieve linear transformations between all the chairs (Figure 8).
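The project's tools were written in Java; purely as an illustration of the step described above, the following NumPy sketch assumes each chair is already flattened into a voxel-occupancy vector, computes the principal components, and blends the eigenweights of two chairs to obtain a linear morph (array sizes and random data are placeholders).

import numpy as np

# chairs: one flattened voxel-occupancy vector per row (toy data, 4 chairs of 30**3 voxels)
rng = np.random.default_rng(0)
chairs = rng.random((4, 30 * 30 * 30))

mean = chairs.mean(axis=0)
centred = chairs - mean
# Principal components (eigenvectors) via SVD of the centred data matrix.
_, _, components = np.linalg.svd(centred, full_matrices=False)

# Eigenweights: each chair expressed as coordinates in the component basis.
eigenweights = centred @ components.T

# Linear morph between chair 0 and chair 1 at 50 %: blend the weights,
# then map back to voxel space.
w = 0.5 * eigenweights[0] + 0.5 * eigenweights[1]
morphed_voxels = (mean + w @ components).reshape(30, 30, 30)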
The third part is the Algorithm for Mapped Morphing. It is an upgrade from the linear Principal Component Analysis transformation to a nonlinear mapped transformation. An RGB map, in which each color represents a particular chair, is projected onto the voxel-based space. This enables us to define and control the nonlinear transformations and fusions of three different chairs into a new one. Chairs thus created can be used again as input chairs for the second step, achieving a new nonlinear variability (Figure 9).
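Again as a hedged sketch rather than the authors' Java implementation: an RGB map assigns, per voxel, the blend weights of three chairs, which yields the spatially varying (nonlinear) fusion described above; a triangle mesh for printing can then be extracted with a marching cubes implementation such as skimage.measure.marching_cubes.

import numpy as np
from skimage import measure   # provides measure.marching_cubes

n = 30
rng = np.random.default_rng(1)
# Three chairs as voxel occupancy grids (toy data; in the project these come from PCA space).
chairs = rng.random((3, n, n, n))

# RGB map: per-voxel weights for the three chairs, normalised to sum to 1.
rgb = rng.random((n, n, n, 3))
rgb /= rgb.sum(axis=-1, keepdims=True)

fused = rgb[..., 0] * chairs[0] + rgb[..., 1] * chairs[1] + rgb[..., 2] * chairs[2]

# Extract a triangle mesh at the 0.5 iso-level as a candidate surface for 3D printing.
verts, faces, normals, values = measure.marching_cubes(fused, level=0.5)
print(verts.shape, faces.shape)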
The rest of the algorithms served to prepare the data for Principal Component Analysis and to help with their final visualization. An important role was also played by a series of open source libraries, especially the Marching Cubes algorithm (Lorensen and Cline, 1987), responsible for generating watertight mesh objects ready for 3D printing. All code was written in the Java programming language.
Having in mind the referential and recycling discourse, it is important to note that the algorithms used in the project, e.g. the Principal Component Analysis algorithm and the Marching Cubes algorithm, are already in practice. They are thoroughly adapted and functionally redirected - recycled to fit the needs of design.

ARTICULATING INDEXES - THEORETICAL APPROACH

Information
The key term which best describes and corresponds to contemporary society and science is information. Information technologies are entering all spheres of society: from the ways in which we organise our everyday life, to the ways in which we think about natural sciences and humanities. This leads to the conclusion that it is impossible to understand the human environment only in material terms of energy and matter; in order to create a comprehensive world view, the analysis must take into consideration the category of information. At the same time, being surrounded by an excessive amount of information, the analysis requires a stable environment, which enables their observation and use.
Reflection On The Real
It is impossible to comprehend or examine what is "real", because it depends on the quantisation and formalisation of ideas. The hierarchy and the relation between the original and its copies, which was the key concern of the materially oriented society, have become completely irrelevant in an age in which virtual reality dominates human lives. Depending on the ways of our understanding and accepting of the "unfamiliar", we legitimatise and comprehend the real. Brian Massumi perceives this in a multifaceted way, by comparing Baudrillard's interpretation of the reality-simulation, in which there is no division between the real and the virtual, with Deleuze's and Guattari's negation of the linear approach to the real. Such an understanding of reality is supported by the vanishing of boundaries and the influence of the virtual on the real. Simulation is a process that produces the real, and vice versa (Massumi, 1987).

Figure 8. Main EigenChair application interface: left, voxel-based space; right, control board.

Abstraction
The Internet age is exactly such a condition, in which immaterial information is part of what we call reality. In this context, the only way of manipulating information is abstraction, and it can be adequately used only by those who are, in a mass of information, able to define their context as a flexible, adjustable field of possibilities with multiple meanings. The project Four Chairs and all the others considers the abstraction of objects to the degree that enables their manipulation and the creation of new meanings (Figure 10). If objects - chairs, or whole populations of objects - are abstracted to the level of multidimensional vectors, i.e. to a series of numbers in a line - indexes - they become very potent and manipulable (Figure 11). Such abstract objects, i.e.
indexes, are placed in a meta-space which contains the potentials of all objects present there (Figure 12). Governed by the Principal Component Analysis algorithm, the meta-space is able to correlate the indexes of all objects, thus creating an open logistic network, a possibility space. This marks the level of articulation of different elements and the creation of whole populations of objects of the same "kind". By looking at objects through the level of their abstraction, we realise the potency of information (in meaning and shapes, with which we can manipulate), but at the same time its complete emptiness when perceived on the index level alone.

Meaning, Context And Narrative
Post-traditional society offers new perspectives on old concepts, to which we give new meanings or which we judge by creating our own context. The mass of information shapes our world: text, visual representation, music, money. However, the idea that "information does not carry meaning", offered by the information theory pioneer Claude Shannon, has become rather liberating in the academic discourse; information carries unlimited freedom of manipulation. It is important to emphasise that contextualisation and the successive creation of narratives "fill" the systems of information. They gain power by careful selection of the data implanted in them, taking care at the same time that the contextualisation and the creation of the stories which surround them rely on culture and history (Figure 13). It is also important to note that in the process of contextualising the generic before the generation itself, there is a whole scale of possibilities which had been predetermined, but which are also opening the potential for the unexpected. This project shows that design is able to manipulate predetermined potentials, while filling them, at the same time, with narratives. Design is not a part of the endless evolutionary process aimed at creating the next new ideal object, but a part of a defined context and chosen references with their respective genealogies.

Figure 10. Levels of abstraction.

EIGENCHAIR - DATA DRIVEN DESIGN
By using information manipulation and various spatial conceptions, algorithmic design approaches an object in a completely abstract manner, thus separating it from "reality". While making it extremely flexible for different interpretations and contextualisation, this also contributes to the instability of the process as a whole. The object can easily be reduced to a geometry exercise. Therefore, the key feature of design is not only the definition of algorithms, but also the construction of parallel narratives around the object. It is therefore necessary to re/turn to the postulates of the pre-Socratic philosopher Empedocles, who claimed that "nothing comes out of nothing and nothing disappears into nothing". Such a philosophical re/turn marks an effort to observe context and processes as more important factors for defining the object than those implicit in
ism (Terzidis, 2012). The advantage of the processual shift of design’s limits. Finally, the algorithmic design
design in contemporary world is its ability to refer to should adopt strategies and dynamics which deal
the sum of global knowledge and to use it effective- with the creation of narrative and contextualisation.
ly. The result of such turn/over is the creation of new This project tries to show – by conceiving and shap-
perspectives in defining objects, as well as a gradual ing the idea of a chair for the 21st century – the ne-
Figure 12
Meta space - possibility of
interconnection and interrela-
tion of all active data.
cessity of perceiving design through three equally GRAPH Comput. Graph., 21(4), pp. 163-169.
important, interdependent positions: design, theory Massumi, B 1987, ‘Realer than real’, Copyright no.1, pp. 90-
and technology. Design is now data driven. 97.
REFERENCES
Abdi, H and Williams, LJ 2010, 'Principal component analysis', Wiley Interdisciplinary Reviews: Computational Statistics, 2(4), pp. 433-459.
Aylesworth, G 2013, 'Postmodernism', in EN Zalta (ed), The Stanford Encyclopedia of Philosophy.
Lorensen, WE and Cline, HE 1987, 'Marching cubes: A high resolution 3D surface construction algorithm', SIGGRAPH Comput. Graph., 21(4), pp. 163-169.
Massumi, B 1987, 'Realer than real', Copyright, no. 1, pp. 90-97.
Sirovich, L and Kirby, M 1987, 'Low-dimensional procedure for the characterization of human faces', JOSA A, 4(3), pp. 519-524.
Terzidis, K 2012, Algorithmic Architecture, Taylor & Francis.
Turk, M and Pentland, A 1991, 'Eigenfaces for recognition', Journal of Cognitive Neuroscience, 3(1), pp. 71-86.
Vegesack, A, Dunas, P and Schwartz-Clauss, M 1996, 100 Masterpieces from the Vitra Design Museum Collection, The Museum.
Abstract. This study aims to investigate the evolution of graphic representations due to new digital tools and how these affect architects' approach to the design process. In order to do this, issues of Yapı Magazine, published in Turkey since 1973, will be retrieved and data related to the types of architectural design representation used will be recorded. The study will conclude with an evaluation of new representation means, such as 3D renders, other 3D digital products and diagrams, and of how they have influenced a new approach to design.
Keywords. Digital design tool; architectural representation; architectural design thinking.
INTRODUCTION
This study is interested in the transformative reflections of digital design developments at two levels: architectural graphic representation and architectural stand.
The notion of "generic design" proposes that there are great similarities between design acts (Gero and Purcell, 1998) independent of the domain (Zimrig and Caine, 1994). On the contrary, there are also opinions supporting the presence of significant differences depending on design situations (Visser, 2009). Visser (2009) enhances the notion of generic design and states that there are different forms of design. He defines three dimensions as sources of differences in design, consisting of the process, the designer and the artefact.
Here it is hypothesized that as new tools of design are adopted, such as digital tools, a relevant shift in design stand takes place. This study aims to read this shift through a collection of architectural representations.

METHOD
This study will attempt to demonstrate a relation between the shift in graphic representation and the design stand of architects with respect to the architectural press, namely Yapı Magazine. Yapı, the oldest established magazine still in press in Turkey today, will be utilized as a tool to evaluate the chronological period between 1973, the year Yapı was first published, and 2012, in order to understand how digital tools have affected architectural representation and approach. Yapı Magazine, as a tool to navigate through time and a variety of projects, will provide evidence for the claimed mutual evolution between digital design tools and architectural design approach, which is proposed to be read off architectural graphic representation.
…according to their percentage of use. The increase in the variety of colours between 1973 and 2012 represents the variation in the representation media used in presenting architectural design projects (Figure 1).
As may be read from Figure 2, the first 3D render was encountered in 1994, the first 3D digital product (consisting of wireframe, perspective or axonometric views with no intention of photo-realism) in 1997, and the first diagram in 1999. It may also be observed that the use of diagrams is usually aligned with the use of 3D digital products, and that 3D renders are always the most preferred type of representation among these three (Figure 2).

Figure 2. Beginnings of representing with 3D digital media and diagrams.

These data have also been re-interpreted in terms of the design teams by which the projects were produced. The expected frequency of 3D renders, other 3D digital products and diagrams has been much higher in contemporary design projects. However, since the data are acquired from a local magazine, the question arises whether the results would have come up as expected had this study been conducted through a magazine of another nation or an international magazine. Results have shown that the number of international design teams using diagrams is triple the number of local design teams. The number of international design teams using 3D renders is almost equal to that of local design teams. The number of international design teams using other 3D digital products is almost double the number of local design teams utilizing these representation media to present their projects (Figure 3).
The third phase of this study was an evaluation of the qualitative features of the architectural representations. In recent years digital technology has influenced architectural representation and the transmission of design ideas with new methods and tools. With new possibilities of expression in architecture, the transmission of ideas has differentiated from traditional architectural representations. In this direction, the presence of a multi-disciplinary approach, such as graphic techniques based on diagrams and schematic drawing, the use of abstract representations, simpler and even cartoonish drawings, and the presence of simple mathematical expressions, can be found in the architectural milieu. Previously, architectural representation was a language that could be understood merely by architects, planners and related disciplines, but it has now been transformed into a language that can be understood by everyone. Even traditional representations such as plan, section and elevation have been transformed into a simpler and schematic form with a reduced level of detail and a high level of abstraction (Figures 4 and 5).
As Kalay (2004) mentioned, the main mechanism that transforms an idea into a communicable message is abstraction. Abstraction extracts and distills the meaning of the message, focusing attention on its salient characteristics. A higher degree of abstraction makes communication more efficient and helps to focus the receiver's attention on the parts of the message that the sender considers most important. According to the results of the third phase, simple graphical expressions, schematic drawings and diagrams become a more efficient way of representing design ideas, an ideal method of communicating ideas to others (Figure 6).

Figure 5. Evolving representation: elevation, section (Yapı Magazine, 2010).
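A tally of this kind can be kept with a few lines of code; the sketch below is only illustrative, since the paper does not describe its recording tool, and the record fields and sample rows are invented for the example.

```python
from collections import defaultdict

# Hypothetical records: (year, representation type, design-team origin).
# The rows below are illustrative placeholders, not data from the Yapı survey.
records = [
    (1994, "3d_render", "local"),
    (1997, "3d_digital", "international"),
    (1999, "diagram", "international"),
    (2010, "diagram", "local"),
]

first_seen = {}
counts = defaultdict(lambda: defaultdict(int))

for year, rep_type, origin in records:
    # Year in which each representation type first appears in the magazine.
    first_seen[rep_type] = min(year, first_seen.get(rep_type, year))
    # Frequency of each type, broken down by local vs. international teams.
    counts[rep_type][origin] += 1

print(first_seen)
print({k: dict(v) for k, v in counts.items()})
```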
…act as final reports of a process but represent the process itself. They "explore, explain, demonstrate or clarify relationships among parts of a whole" (Kalay, 2004). Similarly, according to Rowe (1987), diagrams are used to explore, analyze and synthesize ideas. Diagrams may be utilized to establish design principles that help the designer reflect on and prepare for subsequent exploration (Rowe, 1987).
Architectural diagrams do not only represent physical elements, but also forces and flows. In the early phases of designing, architects draw diagrams and sketches to develop, explore, and communicate ideas and solutions. Design drawing is an iterative process. It involves externalizing ideas to store them and recognizing functions as well as finding new forms and integrating them into the proposal. Thus drawing is not only a vehicle for communication with others; it also helps designers understand the forms they work with (Edwards, 1979; Do and Gross, 2001).
With a more thorough approach, Oxman (2000) states that diagrams play a role in visual reasoning. Through what this representation medium provides, what Schön (1992) refers to as "reflection in action" and what Lawson (1980) describes as having a "conversation with the drawing" take place and aid the design process.

What 3D Products Tell
According to Lopes (1996), due to techno-cultural changes, pictures are re-emerging. They now play a role in terms of the storage, manipulation and communication of information.
Beginning in 1994, 3D render images have evolved into photo-realistic images in which the design proposal is presented as a finished product. This representation type is specifically chosen for presentation purposes rather than for aiding the design development phase. These images are used to aid those who are not architects or professionals in related fields, but individuals who cannot read construction documents.
Although this representation medium needs to be evaluated differently from diagrams and other 3D digital products, it also serves the same purposes: inclusion and exposition. Similar to diagrams, 3D render images also tell stories. They are used to reveal how the space is experienced during different times of the day or different days of the year. Through photo-montages, they give clues regarding how the spaces may be used and what kind of life will take place once they are inhabited. These images are used to reveal a certain experience provisioned for the designed space.
According to Bares-Brkljac (2009), these images inherit accuracy, realism and abstraction. It is through these features that non-professionals believe in what they see. Accordingly, accuracy helps non-professionals become acquainted with the space. It is related to scale, distances and relations of volumes and spaces (Bares-Brkljac, 2009). It is also related to the chosen viewpoints regarding angle and height. Human eye angles are preferred on purpose so that the viewer can imagine himself in the picture. Realism helps the viewers understand and evaluate the proposal the same way they perceive the environment (Bares-Brkljac, 2009). Abstraction refers to reduced information about the design (Bares-Brkljac, 2009). A high level of abstraction may not sufficiently present the proposal, whereas a low level…
Abstract. This paper discusses the integration of physical and digital models in the
context of building technology teaching. It showcases projects that explore the design
possibilities of a chosen structural system with the use of parametric and behaviour-based
computational modelling. It uses detailed mock-ups as vehicles to study, optimize, and
evaluate the design as well as to provide feedback for student learning and the direction
in which future designers may engage computational design. Finally, it investigates
digital-to-physical design translations, the importance of which becomes more and more
critical in the context of the current, computer-intensive architectural education and
professional practice.
Keywords. BIM; building information modelling; parametric construction details;
construction assemblies.
INTRODUCTION
With digital tools firmly established in professional practice and academia, the question of the continued relevance of physical and traditional methods is often overlooked or unexamined. Certainly, there are passionate statements being formulated on both sides, with analog thinking more and more on the defensive. However, there is a need for closer investigation of the analog-to-digital and digital-to-analog phase changes to further improve the development of computational tools and digitally driven creative processes.
This paper looks at the close integration of physical and digital models in design practice and investigates the ways both design environments inform each other. The goal of this paper, however, is not to justify why we need physical and traditional modes of thinking, but rather to point to needs in the further development of computational design thinking, which in many aspects is still not up to par with the traditional (not digital) design process. The intuitive, even haptic, use of tools; a natural conceptualization framework; and the lack of physical considerations are just some of the issues waiting to be addressed by the computational creative framework.
This paper specifically looks at materiality embedded in architectural models, their physically based behaviour, and the haptic feedback designers and makers receive when interacting with their products. The emerging question is what forms of digital software and interface would provide a comparable level of interactivity: what software features and design interface would facilitate full virtualization of the design process.

PHYSICAL-TO-DIGITAL TRANSLATIONS
To research the topic, students investigated structural systems that actively informed architectural tectonics (form-active structures) and explored their design possibilities with the use of parametric and/or behaviour-based computational modelling. Once the research phase was completed, students developed a number of physical mock-ups of the final designs to compare their behaviour with the computer simulations they had developed earlier for the same design.
This allowed students to reflect on the materiality of digitally designed architecture, to understand the opportunities and limitations various design tools provide, and to visualize structural behaviour in more intuitive and direct ways than available with digital tools alone. The following examples illustrate the process and the discoveries students made.

Adaptive Forms
A number of projects looked at scissor-like mechanisms to develop shading and spatially adaptable systems. The project in Figure 1 investigated a deployable assembly that can be a temporary structure or an adaptive space.
Initially, students researched various relevant precedents that dealt with temporary, portable, and deployable structures. This research gave students insight into different kinetic systems, assemblies, and material applications. Students were particularly interested in the ability of the design form to be contracted into a relatively small volume and to have a low total weight. Immediately, these prerequisites started to point to solutions using light metal framing with a possible fabric enclosure.
In the second stage, students developed a number of conceptual studies that allowed them to apply the researched systems to new spatial configurations and test their appropriateness. After developing a number of designs, both physically and digitally, students focused on a solution that followed the logic of the Hoberman Sphere. Similarly to Hoberman's design, the student structure was capable of folding down to a fraction of its fully deployed size. It also used a version of a scissor mechanism. Instead of a sphere-like configuration, students experimented with a cylindrical form with the ability to expand both vertically and horizontally by increasing the cylinder radius.
After completing the chipboard model and interacting with it, students realized that the proposed structure did not have the desired rigidity and durability. Components had difficulty supporting themselves, resulting in sizable deflections. When expanding and contracting the structure, individual components were subject to twisting in the joints, resulting in kinetic friction and deformations. While this is a rather obvious observation with a model made of chipboard, students also noticed possible issues with the actuation of the kinetic assembly. When displacing only some, not all, joints at the same time, the softness of the structural components was causing the entire system to deform, putting additional stress on connections and causing material fatigue. This was important feedback for students, since it suggested that the scaled-up structure, even when made with higher-grade material, may still have similar rigidity and stability issues.
The subsequent study model introduced a more rigid material (acrylic glass), sized up the cross-section of individual components, and doubled the vertical structural members. A locking mechanism was added to further stabilize the structure by introducing triangulation in the vertical supports (Figure 2).
The second project started as a purely two-dimensional shading system and evolved to incorporate a three-dimensional composition (Figure 3). Students similarly started by analysing various expandable designs that used scissor-like mechanisms. Their focus was on using scissors both as a structural element and as an adaptable enclosure/shading. By testing various scissor joint geometries, they looked at possible shapes and the resulting planar tiling to provide a variety of expressions of a façade shading system.

Figure 3. Façade screen mock-ups.
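The kind of geometric reasoning behind such scissor studies can be illustrated with a small sketch; the unit below is a generic symmetric scissor pair, not the students' actual mock-up, and the arm length and angles are arbitrary example values.

```python
import math

# A minimal geometric sketch (illustrative only): one symmetric scissor unit is
# two straight arms of length L pinned at their midpoints. With theta = angle of
# each arm to the horizontal, the unit spans width = L*cos(theta) and
# height = L*sin(theta); a chain of n units simply repeats the width.

def scissor_chain(arm_length, theta_deg, n_units):
    theta = math.radians(theta_deg)
    unit_w = arm_length * math.cos(theta)
    unit_h = arm_length * math.sin(theta)
    return n_units * unit_w, unit_h          # overall width and height of the chain

# Example: how much a 10-unit chain of 0.30 m arms contracts as it folds.
for theta in (10, 45, 80):                   # degrees; near-flat to near-folded
    w, h = scissor_chain(0.30, theta, 10)
    print(f"theta={theta:2d} deg  width={w:.2f} m  height={h:.2f} m")
```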
The physical and digital explorations revealed a number of intricacies, both technical and geometrical, that were not immediately evident at the beginning of the project. What seemed like a straightforward design quickly became a complex project, particularly when multiple instances of a scissor mechanism were interconnected into larger assemblies (Figure 4). The attachment details became more involved, with diverse rotating and sliding motions occurring within the component connection. The connection had to account for competing movements between various sub-elements. One of the studies employed a three-dimensional version of the scissors mechanism to form a dome-like structure (Figure 5). To accommodate the three-dimensional rotation of the scissor plates, students developed wedge-like adapters to control the curvature of the resultant form. Unlike other groups working on kinetic designs, this team relied heavily on physical models to complement their digital simulations. Students felt that the tactile qualities of physical models gave them valuable feedback about the levels of friction within joints and material resistance. Particularly in situations when digital models were easily over-constrained and locked themselves in a fixed position, physical prototypes, due to their relative imprecision and material flexibility, gave a better indication of the overall assembly behaviour. They were also more informative because they provided a tactile feedback that helped to advance the design. While laser-cut mock-ups allowed for a high level of precision, the initial prototypes were developed in the more forgiving medium of chipboard, as compared to later prototypes made of acrylic glass. This helped to track kinetic movements, particularly registering material fatigue and failures for further design refinements.

Figure 5. Applying the scissors mechanism to a dome-like structure.

While physical and tactile feedback was important to the team, there were also limitations involved in deferring exclusively to physical mock-ups. It was often difficult to distinguish between minuscule kinetic transformations and fabrication tolerances, as well as the material's ability to hide stress through deformations (strain). Whereas material deformations may seem a desirable quality, these deformations may ultimately lead to material fatigue and assembly failure. To address these concerns, the design team used parametric digital models to validate their findings and fine-tune the final set of physical mock-ups. These parametric models allowed for the effective tracking of numeric values and the maintenance of geometrical relationships between various components (Figure 6).
Once students established a general understanding of kinetic system behaviours, they became significantly more efficient in developing variations to parametric models. They were able to implement complex assemblies in more informed and intuitive ways.

Kinetic Movements
Inspired by Theo Jansen's kinetic sculptures, students investigated parametrically defined adaptive structures that mimic skeletal systems. They started with an exact replica, both physical and digital, of Jansen's Strandbeest kinetic mechanism. Then, with parametric models, students investigated how specific component dimensions and radii impact the kinetic behaviour of the entire system (Figure 7).

Figure 7. A study of the kinetic behaviour of the entire assembly. Original Jansen design (left), and student design explorations (centre and right).

Parametric definitions allowed for fluid changes to a digital model and for immediate feedback on its kinetic behaviour. This helped to understand the role individual elements played within the entire assembly and the types of motions these elements were capable of producing.
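A much-reduced sketch of such a parametric kinetic study is given below. It solves a simple crank-driven two-bar dyad by circle intersection and traces the path of the driven joint as the crank turns; it is not Jansen's eleven-bar leg and not the students' Grasshopper definition, but it shows how changing one link length immediately changes, or breaks, the traced motion.

```python
import math

# Simplified parametric kinetic study (illustrative): a crank at the origin
# drives point P; a two-bar dyad (lengths l1, l2) connects P and a fixed pivot
# B, and we trace the path of their common joint Q as the crank rotates.

def circle_intersection(c1, r1, c2, r2):
    """One intersection point of two circles, or None if the linkage cannot close."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return None
    a = (r1**2 - r2**2 + d**2) / (2 * d)
    h = math.sqrt(max(r1**2 - a**2, 0.0))
    xm, ym = c1[0] + a * dx / d, c1[1] + a * dy / d
    return (xm - h * dy / d, ym + h * dx / d)         # one of the two branches

def trace_path(crank=1.0, l1=3.0, l2=2.5, pivot_b=(2.0, 0.0), steps=36):
    path = []
    for i in range(steps):
        ang = 2 * math.pi * i / steps
        p = (crank * math.cos(ang), crank * math.sin(ang))
        q = circle_intersection(p, l1, pivot_b, l2)
        if q is not None:
            path.append(q)
    return path

# Example: compare two link-length variants of the same mechanism.
print(len(trace_path(l1=3.0)), "reachable crank positions for l1 = 3.0")
print(len(trace_path(l1=2.2)), "reachable crank positions for l1 = 2.2")
```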
While these parametric studies became effective tools in understanding how Jansen's kinetic sculptures worked, it became difficult to extrapolate these findings into new meaningful movements. To overcome this issue, students started by changing element proportions and folding ratios, and by adding additional components (Figure 8). These speculative explorations led students to propose and develop an adaptable, vertically climbing mechanism that used the core principles of Jansen's models with changes to the types of constraints and possible motions.
Kinetic designs such as Jansen's sculptures that mimic walking structures, or Hoberman's expanding dome, require a close and detailed understanding of kinetic mechanisms developed over time with multiple prototype reiterations. To shortcut the discovery process, students started with an already resolved design and investigated ways in which the logic of this particular mechanism could be extended to other forms of movement. While a physical working prototype was the ultimate goal for the project (Figure 9), it was easier to experiment with variations of the base mechanism using digital modelling.

Figure 9. Kinetic movements, final assembly.

However, conventional three-dimensional modelling software was not effective for this type of prototyping. The design team turned to parametric software, such as Revit (parametric BIM) and Grasshopper (graphical algorithm editor for Rhino), that was capable of dealing with constraints and passing these constraints between various assembly components. In addition to these two software approaches, parametric and prescriptive, the team briefly looked into VFX packages such as 3ds Max and Maya with inverse kinematics (IK) capabilities. While IK provided ready-made functionalities that could be applied to walking structures, the inability to directly "hack" into the algorithms behind the IK functions became a significant deterrent to using them as exploration tools. Dealing with actual parameters – angle values and component dimensions – allowed students to get more direct confirmation of their initial design propositions and to develop a stronger intuitive feel for the entire mechanism.

CONTINUOUS ENCLOSURES
With adaptive designs, the issue of a continuous, weather-tight exterior enclosure resistant to material fatigue is a major challenge. When elements move or stretch, they wear off connection seals and may cause material failures. To address these issues, students looked at form-active designs, particularly those that deploy tensile (fabric), pneumatic and foldable strategies in conjunction with kinetic assemblies.
One of the approaches looked into rigid origami as continuous yet spatially reconfigurable forms that do not rely on material deformations (Figure 10). While hinged joints provide opportunities for material fatigue, the rigid plates are durable, with all the performance qualities of traditional wall systems, including thermal and structural. Since rigid origami solutions carry a particular design signature, the underlying structural framework would naturally follow the same geometry, both from performance and aesthetic considerations. To some extent this can be seen as a limitation of the system, not only from the visual but also from the occupancy viewpoint, since the origami-generated forms are hard to reconcile with horizontal surfaces such as floors, both because of these forms' flatness and their changing height. However, they can still be effectively employed in other enclosure surfaces.

LEARNING FROM PHYSICALITY
In the discussed cases, students worked with additional constraints defined by a limited number of component and connection types to simplify manufacturing and assembly. These became important design boundaries, focusing students on pursuing optimal solutions and driving questions of component assembly and functionality. While in some cases students did not produce an actual one-to-one mock-up, the scaled-down models became effective in setting the stage for understanding the overall kinetic system behaviour and for speculating on the further development of the design by giving students direct feedback. The haptic feedback included not only the component movements but, more importantly, the levels of material resistance to deformations, joint frictions, and material fatigue. Additionally, physical mock-ups became a lesson in understanding issues of manufacturing precision and design and construction tolerances. These mock-ups allowed students to feel the behaviour of the material and of the entire assembly in addition to visually understanding its movements.
Furthermore, the discrete numericals used in defining computational models do not help to un…
…the Mediterranean area). In this way the similarities and the differences between the two constructive systems have been compared. The two artifacts belong to the same category, domestic buildings, and have a similar size. Both case studies analyzed by the author belong to wider research efforts coordinated by Prof. M.C. Forlani (G. d'Annunzio University, Chieti-Pescara, Italy).

General description
• Loreto (Abruzzo, Italy): The house (Figure 1, left) is located near Loreto, a small village in the countryside of the Abruzzo region, in central Italy. In this region there is a long tradition of raw earth buildings and to date over 800 artifacts have been surveyed (Forlani, 2011). The typology of the artifact is a tower type, like many other raw earth buildings in the region, and it is located in a rural area with no other buildings nearby. The constructive technique is cob (made of clay, sand, straw, water and earth), but other dwellings in the region are also made of rammed earth.
• Figuig (Morocco): The analyzed building (Figure 1, right) is located in the city of Figuig in the eastern area of Morocco, at the border with Algeria. The city, built around an oasis, consists of seven ksour, which are typical fortified villages of north Africa. In a ksar (that is, a single village) there are collective structures such as barns, shops, religious buildings and private homes. The houses have courts and they are all juxtaposed to form a compact urban tissue which facilitates defensive actions. The main constructive material for all the structures is adobe (the composition is similar to cob; the difference is that the dough is shaped into bricks, using frames, and dried in the sun); in some cases it is also possible to find cut stone.

Issues to be faced
• Loreto (Abruzzo, Italy): The artifact is abandoned and in ruin. Due to the climatic conditions (mainly snowfall and rainfall), the artifact is badly damaged: the roof and the floor are partially collapsed and the perimeter walls also present fissures. Moreover, the dwelling is surrounded by vegetation, in particular blackberry bushes. In this situation it was dangerous to enter the building or to move around easily and gather more information and pictures.
• Figuig (Morocco): In this second case the main difficulty was the impossibility of visiting the place personally, together with the necessity of analyzing a non-standard building system.
…to be modeled have been the four perimeter walls, built with the cob technique and tapered upward. Because of the taper, the wall surfaces are inclined both inside and outside the artifact. Three sides of the building are mutually orthogonal, while one side has a different inclination. The influence of this detail on the technical-constructive solutions clearly emerged during the study of the floor. The floor consists of a double frame of wooden beams: a main structure (beams of square section, 15 cm x 15 cm) and a secondary one perpendicular to the first (rafters of rectangular section, 8 cm x 3 cm). The beams are partially embedded in the load-bearing masonry and the rafters are probably embedded in the walls at both ends. The space between the beams is constant; the only exception is the resulting surface between the last rafter parallel with the rest of the secondary structure and the one built into the wall at right angles to the others. On the secondary structural grid is positioned a lattice of rods which constitutes the basis of a layer of clay (about 8-10 cm thick) on which the flooring of rectangular brick tiles (called "pianelle") rests (Figure 2, left). The structure in raw earth did not allow large openings in the walls, which is why the number and size of the windows are limited: there are three windows on the first floor and two on the ground floor. The openings have frames (lintels, jambs and sills) and fixtures (including the shutters) in wood. The sills on the outside of the openings over the windows are in brick tiles ("pianelle"). The pitched roof has a bearing structure with a double wooden grid. On the second structural grid, similarly to the floor, is positioned a lattice of rods that constitutes the base of a mantle in earth-straw. The external layer is made up of clay roof tiles (Figure 2, right).
In order to provide some quantitative data on the number of elements for this house, two examples have been chosen: the tiles, about 310 for the pavement of the ground floor and about 366 for the flooring of the first floor (where they have also been used on the entrance threshold of the house), and the roof tiles, about 1007. Obviously, this numerical and quantitative information is indicative, in the awareness of being in the presence of elements which do not result from manufacturing and which can also vary significantly in size from one another. For both projects a set of textures to communicate the materiality of the technical elements has been created.

The case study of Figuig
The collected basic information includes photographic documentation, technical data on slabs and roofs, and a two-dimensional survey in AutoCAD, which includes plans, elevations, and a section with the description of the materials.
The artifact of study is inserted within a compact urban settlement characterized by courtyard buildings. The walls border with other houses - therefore they are common walls - or with paths, some of which are covered. The building has a courtyard and three levels, two of which are practicable: the ground floor, the first floor and the roof/terrace.
The floor plans in AutoCAD have been cleaned of dimensions, crosshatchings and other non-essential details, and imported into the 3D modeling software, 3D Studio Max. A 3D modeler, rather than CAD software intended for precision drawing like AutoCAD, is the proper tool to model and manage a non-standard artifact composed of a high number of objects on screen. Almost all the basic elements have been created through extrusions; the most relevant exception is the karnef, semi-triangular wooden elements that constitute the base of the palm trees.
This digital reconstruction investigated and documented the constructive aspects of the artifact; for this reason special attention was paid to the analysis of the grid of the slabs. The grid is made of palm wood beams, and the lower closure of the roof in contact with the beams is in karnef, elements that also play a structural function. The span between two beams where the karnef are placed is about 34 cm; the karnef have an average size of 30 cm and are between 3 and 5 cm high. The length of the palm wood beams varies between a minimum of 140 cm and a maximum of 270 cm. This wide margin of difference comes both from the variable dimensions of the rooms and from the irregular thickness of the walls (Figure 3).
In order to cover all the spatial units in a constructively rational way and to understand the technical problems that could arise from this constructive technique, the beams and the karnef have been positioned manually, as if the construction were actually being built in the traditional way (Figure 4). Each beam has been rotated, spaced from the previous one and adjusted in its length to correctly adapt itself to the spaces that have to be covered and to the upper layer of karnef. The position of the karnef on the same row is alternated and each row of karnef is mirrored with respect to the previous row.
The digital model also allows one to calculate, or at least estimate, the quantity of some technical elements necessary to build the intermediate floors between the ground floor and the first floor (both the floors of the court and those of the individual rooms): about 286 palm timbers on which are placed about 4209 karnef (1951 for the ceiling of the court and 2258 for the indoor rooms).
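The order of magnitude of such a take-off can be reproduced with a simple calculation. The sketch below only assumes the spacing and module sizes reported above; the room dimensions are invented for the example, and the counts in the paper follow the manually positioned elements of the model rather than a formula.

```python
import math

# Illustrative quantity take-off. Assumptions: ~34 cm spacing between palm
# beams and ~30 cm karnef modules, as reported above; room sizes are made up.

BEAM_SPACING_CM = 34.0
KARNEF_SIZE_CM = 30.0

def estimate_floor(span_cm, width_cm):
    """Rough count of beams and karnef needed to cover one rectangular room."""
    n_beams = math.ceil(width_cm / BEAM_SPACING_CM) + 1      # beams across the room width
    n_gaps = n_beams - 1                                     # karnef rows sit between beams
    karnef_per_gap = math.ceil(span_cm / KARNEF_SIZE_CM)     # modules along the beam span
    return n_beams, n_gaps * karnef_per_gap

# Example with hypothetical room sizes (cm); these are not survey values.
for span, width in [(270, 300), (200, 250)]:
    beams, karnef = estimate_floor(span, width)
    print(f"room {span}x{width} cm -> ~{beams} beams, ~{karnef} karnef")
```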
REPRESENTATION AND ORGANIZATION OF THE INFORMATION IN WORKSHEETS
In both digital reconstructions all the technical elements have been grouped and divided per layer, according to categories of homogeneity, belonging to a floor plan or to a specific field. The first graphic work prepared is the representation of the technological breakdown: an axonometric exploded view of the different levels of the artifact, from the ground floor to the slab of the roof. The levels are related to re-adapted categories from the UNI Norm 8290 technological breakdown. This scheme clearly communicates the affiliation of each layer to one or more of these categories, and therefore its function. Other graphic works consist of analysis worksheets of the main technical elements. For Figuig the masonry (Figure 5, left) and the slab (Figure 5, right) have been documented. The worksheets contain detailed information about the materials, the constructive techniques, the constructive phases and the performances of materials and technical elements, referred to specific classes of requirements. The methods used to document the different construction phases, called evolutionary characteristics, have been described and analyzed by the author in other publications (Di Mascio, 2012a; 2012b). The obtained information has been reorganized taking into account the possibility of realizing a database to combine all the information gathered and elaborated during the various inspections and research activities.

DISCUSSION OF THE RESULTS AND FINAL REMARKS
The realization of the digital models of the two raw earth dwellings located in Morocco and Italy has allowed an in-depth study of their constructive systems and the formulation of hypotheses about the technical solutions adopted in some critical points which could be difficult to analyze and document in other ways. The produced information is useful for undertaking technically efficient interventions on the built environment without compromising the local architectural and constructive culture. As in other research undertaken by the author in the field of digital media and cultural heritage (Di Mascio, 2009), the three-dimensional model was recreated in 3D Studio Max, because the digital reconstruction required tools able to guarantee better control of the frequent modifications and to quickly visualize various hypotheses on the adopted constructive solutions.
In these cases a certain level of creative interpretation is also necessary, because there are still many information gaps to fill. This methodology is particularly suitable for the study and analysis of non-standard artifacts pertaining to vernacular architectures, as for example Turchinio's trabocco (Di Mascio, 2009). During the digital reconstruction work a major technical problem, due to the high number of 3D elements used for the roof tiles, the karnef and the floor tiles, has also been tackled.
The two case studies, although they have material and technical-constructive characteristics in common, clearly present evident differences, mainly due to matters linked to their geographic location, but also to different life styles: different roofing systems, the presence or absence of windows, different organization of the spaces/environments, different materials used, etc.
Most of the literature available on vernacular architecture consists of descriptive texts, pictures and drawings (sketches and two-dimensional surveys). The use of 3D digital reconstructions is a further step forward in the observation, analysis, documentation and management of these artifacts, because it allows the investigator to assess and monitor additional parameters, such as spatial and quantitative ones, that the methods and tools available so far did not allow to analyze with the same precision and effectiveness. Obviously, the study and testing of these digital technologies must be accompanied by a continuous processing and verification of the theoretical and methodological apparatus. The introduction of new methods and tools always leads to a critical re-evaluation of what has been done so far and to the opening of new avenues of study and research. In the study of vernacular architecture (as in many…
INTRODUCTION
In the book Digital Design Media, Mitchell and McCullough (1995) describe a design studio fully integrating traditional and digital media. Basically, the Building is at the centre and several paths are displayed between the building and its possible forms of representation, Digital Models being one of those forms. According to this well-known scheme, the shortest path between the building and the Digital Model is through electronic surveying. This electronic surveying encompasses what can be designated as 3D Digitization.
There are multiple techniques of 3D Digitization: passive and active (Lillesand et al., 2004), range based and image based (Remondino, 2006), using different kinds of light, etc.
This paper describes an experience of implementing a 3D Digitization course in an Architecture curriculum. The goals of this experience were to make architecture students familiar with 3D digitization techniques and tools, to develop their awareness of the possibility of doing the work themselves, to use freeware and open source software as much as possible throughout the workflow, from data processing to the final models' scaling and orientation, and to propose a theoretical framework for the teaching and learning of this subject among architecture students.
3D Digitization was offered as an optional course at our University. It could be attended by Architecture, Urbanism and Design master students during the fourth or fifth year of their studies, and by PhD Architecture students during their first year.
This paper is structured in six sections: a) a short history and related work, b) the theoretical framework, c) the instrumental framework, d) the practical exercises proposed, e) results of the exercises and discussion, and f) conclusions and further work.

A SHORT HISTORY AND RELATED RESEARCH
Traditionally the survey of the built environment…
…clouds, from a terrestrial laser scanning survey of a building, was given to all students. They were asked to follow a described procedure to align the point clouds, to dissipate the accumulated errors, and to produce a final merged and sub-sampled point cloud model. This contributed to a broader investigation into the development of an expeditious method for dissipating what we designated as a closure error, which arises from the alignment of a closed ring of point clouds.

RESULTS AND DISCUSSION
In this section we describe the workflow that was adopted to carry out the two aforementioned exercises. While describing the steps of the exercises we will also describe the competences that the students were supposed to acquire, as well as the difficulties that were felt.

First exercise - 3D digitization of a small or medium scale object using image based techniques
The objective of the first exercise was to produce a textured mesh model of a small or medium scale object in an architectural context using automatic image based techniques following the structure-from-motion (SFM) principle. First of all it was made clear that this kind of technique has severe limitations when used to record poorly textured or very reflective surfaces, since it relies on the texture of the images to recover 3D information. This fact should be understood as a constraint on the kind of object that should be chosen; richly textured objects were more adequate.
The statement of the exercise consisted of 10 steps.
• Defining a reference frame in the object. In its simpler form this could be set out as a couple of measurements with a measuring tape on a flat rectangular surface, from which one could retrieve the coordinates of at least four control points (CP), as can be seen in Figure 1 (left and centre). Although control points should be widely spread over the object, for the exercise it was allowed to consider them more locally, whilst noting that the first procedure is better since it diminishes the scaling and orientation error.
• Image acquisition. This step involves understanding the SFM principle. Images need to be taken with small base distances, which means that the camera viewpoint in space must always be changing and a large number of images with a high level of redundancy is to be taken. It is noticed here that there is no need to use high resolution images. In fact it is better to use more images with less resolution and to adopt a hierarchical strategy while taking the pictures. This means that several rings of images at different camera/object distances should be considered. This implies another constraint to keep in mind when choosing the object to digitize: the surroundings of the object have to be accessible.
• Image processing is the step where coloured dense point clouds are generated. Although this is done automatically, we believe that the correct approach is to provide to students…
…registration (orientation) errors would increase. This task was divided among all the students.
• Then, the set of cleaned point clouds was oriented in a specific order and according to specific criteria: each point cloud should only be registered with the previous one, and a particular point cloud was set as the reference frame, which means that its position is given by an identity matrix. This way, the final results could be compared. After each registration step, after optimization with the iterative closest point (ICP) algorithm, the student should visually verify the quality of the registration. This can easily be done by inspecting the point clouds with at least two mobile planes with different orientations and analyzing the section that they produce in the model (a sketch of this registration step is given after the list).
• When closing the loop, that is, when orienting the last point cloud (which is simultaneously the first) with the previous one (the last of the set), its matrix position represents the accumulated error. We refer to this as the closure error, as in topography (Casaca et al., 2000).
• If the closure error is found to be under an acceptable tolerance, then we can consider that no gross errors occurred and we can distribute the error through the poses (matrices) of the point clouds, as mentioned above (a sketch of one possible distribution scheme is also given after the list).
• Finally the point clouds were merged and decimated to produce the final model.
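A minimal sketch of the sequential pairwise registration described in the list is given below. It uses the open-source Open3D library as an assumed tool (the course software is not named here), and the scan file names are placeholders.

```python
import numpy as np
import open3d as o3d  # assumption: Open3D stands in for whatever tool the course used

# Sketch of the sequential pairwise registration: each cloud is registered only
# against the previous one, the first cloud acts as the reference frame
# (identity pose), and every pairwise alignment is refined with ICP.

def register_ring(cloud_files, voxel=0.02, max_dist=0.05):
    clouds = [o3d.io.read_point_cloud(f).voxel_down_sample(voxel) for f in cloud_files]
    poses = [np.eye(4)]                               # first cloud = reference frame
    for prev, curr in zip(clouds, clouds[1:]):
        icp = o3d.pipelines.registration.registration_icp(
            curr, prev, max_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPoint())
        # Chain the pairwise result onto the previous pose to get a global pose.
        poses.append(poses[-1] @ icp.transformation)
    return clouds, poses

# files = ["scan_01.ply", "scan_02.ply", ...]   # hypothetical scan names
# clouds, poses = register_ring(files)
```

One possible way to distribute the closure error over the ring, in the spirit of the topographic analogy above, is to express the residual transform as a matrix logarithm and apply a growing fraction of it to each pose. This is only one scheme among several and not necessarily the method developed in the course.

```python
import numpy as np
from scipy.linalg import expm, logm  # assumption: SciPy provides the matrix log/exp

def distribute_closure_error(poses, closure):
    """poses: list of 4x4 global poses; closure: 4x4 residual of the closed loop."""
    n = len(poses)
    err_log = logm(np.linalg.inv(closure))            # correction, as a small 'twist'
    corrected = []
    for i, pose in enumerate(poses):
        frac = (i + 1) / n                            # later clouds absorb more correction
        corrected.append(np.real(expm(frac * err_log)) @ pose)
    return corrected
```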
In Figure 3 we present an example that resulted from the second exercise.
With this exercise it was possible to verify the need to be careful when dealing with multiple point clouds, because if the work is not well controlled it is easy to accumulate large errors or even to make blunders. Students are then told that there are complementary methods, such as topography, to control the overall quality of the process. But it is also noticed that for medium scale objects, such as the church surveyed, it is possible to use TLS as a stand-alone method. As with SFM/MVS, it was also underlined that there are surfaces that are not good candidates for laser scanning recording; among those are glassy surfaces and low reflectance surfaces.

CONCLUSIONS AND FURTHER WORK
The practical results obtained showed that, with a minimum of theoretical framing, satisfactory results can be obtained by agents who are not usually from the field of specialized surveying, and at almost no cost. We notice that, at the PhD level, some of the students were architects who did not even know about the existence of some of the methods and tools discussed. It was also interesting to notice that this fact did not prevent them from accomplishing all the exercises with high quality results. This marks a shift in the paradigm for architectural recording and bridges a gap between fields of knowledge that were traditionally separated. In the end, the idea emerged that new possibilities arise by adding these methods and tools to the architect's toolbox. It was possible to understand that the techniques presented can be used alone, together, or complemented with other techniques, such as topography or manual survey. We also noticed that, in some cases, the lack of…
Abstract. The aim of this paper is to discuss the role three-dimensional models play in addressing performance issues in virtual reconstructions of heritage buildings. Heritage visualisation is considered here as a process of representing knowledge about space, time, behaviour, light, and other elements that constitute cultural environments. The author aims to analyse the process of digital reconstruction of heritage buildings and the impact of the decisions taken during its development on the final performance. Based on examples drawn from practice, various stages of development are discussed and confronted with the principles of the London Charter.
Keywords. Virtual reconstructions; cultural heritage; 3D modelling; London Charter.
BACKGROUND
Information technologies support a number of domains, including - among others - the virtual modelling of built heritage. One of the earliest examples of such projects was a reconstruction of ancient buildings in Bath, carried out as early as 1983 (Dave, 2005). Another example might be Winchester Cathedral, which was modelled in 1984-1986. A decade later the Urban Simulation Team at the University of California, Los Angeles was commissioned to build a real-time visual simulation model of the Forum of Trajan, the largest of the Imperial Fora in the Forum Romanum, for an exhibition at the Getty Center. The project aimed at exploring the historical, cultural, and technological information contained within ancient works of art as well as examining new ideas in archaeology, conservation, scholarship, education, and digital technology (Jepson and Friedman, 1998).
For many years, digital reconstructions have been presented and discussed at the eCAADe conferences. The author also contributed to this subject (Kepczynska-Walczak, 2003). However, the use of 3D modelling in the virtual reconstruction of heritage buildings is no longer a subject of research in itself. It rather opens new fields of research and application. First, it is necessary to indicate approaches that consider a heritage building reconstruction as a data container. For example, Boeykens and Neuckermans (2009) studied the possibility of improving and increasing information by adding supplementary metadata to the 3D model through "metadata enrichment". According to the authors, "this structured information can, in turn, facilitate the retrieval and recovery of such models, when searching or browsing for design information through online architectural repositories". Another interesting project was the use of BIM in the historical reconstruction of the Vinohrady synagogue in Prague (Boeykens et al., 2012). It is worth mentioning here the book devoted to the former Viennese synagogues that were destroyed and disappeared from the city…
It is necessary to stress that all the above-mentioned objects exist, so the process of digital reconstruction required a high quality, realistic representation in accordance with Rule 6 of the LC: "the creation and dissemination of computer-based visualisation should be planned in such a way as to ensure that maximum possible benefits are achieved for the study, understanding, interpretation, preservation and management of cultural heritage (…) The aims, methods and dissemination plans of computer-based visualisation should reflect consideration of how such work can enhance access to cultural heritage that is otherwise inaccessible due to health and safety, disability, economic, political, or environmental reasons, or because the object of the visualisation is lost, endangered, dispersed, or has been destroyed, restored or reconstructed."
Among the principal goals of the analysed cases was education, including the dissemination of Lodz's cultural heritage, allowing access to these magnificent buildings which are not open to the public due to their current state and use. What is more, the Scheiblers' chapel has been listed by the World Monuments Fund as one of the 100 most endangered sites in the world since 2006.
The inventorial measured drawings and photographic documentation were used as the initial material for the digital reconstruction. The inventory was made using a hybrid method that combines traditional analogue and digital techniques of documenting heritage buildings. The range of measurement drawings included not only the shells of the buildings, but also their interiors. A high level of accuracy was obtained, which can be seen in some of the detail drawings. Such comprehensive data therefore enabled the creation of very detailed digital models (Figures 2 and 3). Unfortunately, it was impossible to use 3D scanning due to its high cost. Despite the growing knowledge of this technology among conservators, the financial barrier means that 3D scanning is not widely used in heritage documentation practice in Poland.
Pursuant to Rule 4 of the LC, the goal was clearly defined - to reflect the existing state: "4.4. It should be made clear to users what a computer-based visualisation seeks to represent, for example the existing
state, an evidence-based restoration or a hypothetical reconstruction of a cultural heritage object or site, and the extent and nature of any factual uncertainty".
In the context of the above, the question arises whether - referring to the London Charter principles - a model and subsequent visualisation, made on the basis of the inventory, are sufficiently reliable for the "study, understanding, interpretation, preservation and management of cultural heritage"? On the other hand, however, one of the principles of the LC is that "the costs of implementing such a strategy should be considered in relation to the added intellectual, explanatory and/or economic value of producing outputs that demonstrate a high level of intellectual integrity".
The discussed reconstructions present a high level of detail - not only the exteriors but also the interiors were modelled carefully (Figure 4). Special regard was paid to the issues of lighting and texturing the objects, including, in particular, the problem of texture mapping and the performance of the same texture in different lighting conditions (Figure 5). Texturing turned out to be a very difficult task; many attempts were made to achieve an effect similar to reality. It was impossible to use textures from photographic pictures since, in different lighting conditions, the same material presented a different appearance. Another interesting observation concerned the selection of lighting: when mimicking the actual lighting conditions in a virtual environment, the virtual textures changed their characteristics, unlike what could be observed in reality. What is more, a colour palette of the interior successfully reproduced in one visualisation turned out different from the actual interior appearance in another visualisation.
It is worth confronting these observations with one of the objectives of the London Charter, which "seeks to establish principles for the use of computer-based visualisation methods and outcomes in the
research and communication of cultural heritage in order to (…) ensure that computer-based visualisation processes and outcomes can be properly understood and evaluated by users".
To sum up this section, it is necessary to stress that the ability to confront the results achieved in the process of creating the virtual model with the actual state allowed the ongoing verification of the decisions taken and the introduction of necessary adjustments. It might be argued that the situation was comfortable since the modelled objects existed. Nonetheless, it was impossible to avoid compromises because, as the experience has shown, a reliable digital representation depends not only on the input data.

SUMMARY AND CONCLUDING REMARKS
The considerations put forward in the first part of this paper relate to the reconstruction of non-existent, destroyed objects and to existing structures. The main obstacle of such tasks lies in the limited source materials. On the other hand, it is relatively easy to accept the achieved results, since it is impossible to compare them with the actual building. On the contrary, when an existing object is the subject of modelling, it is perfectly possible to capture its geometry through measuring or scanning. However, there is much stronger pressure on a reliable representation of the real appearance. This is not easy when not just a general impression but the knowledge about the object is to be represented. What is more, the problems associated with modelling existing structures make it clear that reconstructions of non-existent objects may turn out to be extremely imperfect.
Similar problems apply to other fields of art, such as sculpture. For example, replicas made in a different material, although they keep the shapes of the originals, trigger a different aesthetic experience. The topicality of the above-mentioned issues can be proved by
the solution adopted in the Tate Gallery on-line catalogue, in which objects can be seen under different light exposures, allowing their better understanding, including texture and other features (Stanicka-Brzezicka, 2012).
To summarise, the author aimed to analyse the process of digital reconstruction of heritage buildings and the impact of the decisions taken during its development on the final performance. Assuming that the imaging is treated as a visualisation of knowledge, these issues are of particular importance, since contemporary culture is based on visual perception, in which not the intellect but the senses are activated to experience the past (Figure 6). What is more, the image acts as the dominant form of memory. According to Szpocinski (2009), memory visualisation is a phenomenon whose essence is the dominance of visual events in the processes of transmission and perception of the past. Taking the above issues into account, the author believes the paper will contribute to the discussion on the performative values of virtual reconstructions in the cultural heritage domain.

REFERENCES
Albayrak, C and Tunçer, B 2011, 'Performative architecture as a guideline for transformation: Defense Line of Amsterdam', in T Strojan Zupancic, M Juvancic and S Verovsek and A Jutraz (eds), Respecting Fragile Places: Proceedings of the 29th Conference on Education and Research in Computer Aided Architectural Design in Europe, eCAADe / University of Ljubljana, Ljubljana, pp. 501-510.
Alkhoven, P 1992, 'The Reconstruction of the Past: The Application of New Techniques for Visualization and Research in Architectural History', in GN Schmitt (ed), Computer Aided Architectural Design Futures: Education, Research, Applications, Friedrich Vieweg & Sohn…
Abstract. This paper provides a critical overview of some of the fundamental issues
regarding the adoption and integration of BIM – both as a method and as a technology –
in Architectural education. It aims to establish a common ground for the rationale behind
such integration and reflects on the past and present state of the cultural, intellectual,
professional and technological context of Architecture. The paper will introduce the core
issues to be considered in order to succeed in this challenging and transformational
process. It will also introduce a framework for a gradual and progressive adoption of BIM
and integrated design in the architectural curriculum.
Keywords. Architectural education; BIM and integrated design; distributed cognition;
integrated design studio.
INTRODUCTION
The emerging visions for an “Integrated Practice” in the building industry, through BIM (Building Information Modelling), carry the potential to fundamentally transform the way in which architectural education engages with issues of design knowledge, technology, representations and collaboration (Ambrose et al., 2008). In this article we aim to develop a framework for the integration of BIM into architectural education. We also aim to identify the core issues to be considered in order to succeed in this challenging transformational process.
In the UK, the government has set out an ambitious plan to have fully collaborative BIM, with all project and asset information, documentation and data being electronic, on all public sector projects by 2016. The UK programme based on this new BIM strategy is seen as one of the most ambitious and advanced government-led programmes to embed the use of BIM across all centrally procured public construction projects. Through this government-led incentive, the construction industry is getting ready to utilize BIM as a stepping stone in order to be more efficient and effective. So how do these ambitions affect architects and architectural education at large? The RIBA believes that architects have a central role to play in ensuring that the construction industry responds to the opportunities offered by BIM in both public and private sectors, and it has developed a new Plan of Work (launched in May 2013) as an important piece of new guidance for architects and co-professionals [1]. However, there is as yet no guidance or roadmap for architectural schools/institutions as to how they could adapt to the forthcoming challenges in the industry and educate the future architects accordingly.
There are both complementary and contradictory views as to “if” and “how” BIM – either as a software, or as a process, or in any combination – should be integrated into academia’s curriculum structure. Some of the resistance stems from a shared set of concerns which have been outlined by some of the contributors of a recently edited book by Deam-
INTRODUCTION
Performance in architecture
The idea of performance in architecture has been extensively debated during the last years, for example in the “Performative Architecture” symposium organized in 2003 by Kolarevic and Malkawi (2005). Discussion has focused on the “apparent disconnect between geometry and analysis” despite the variety of the available digital tools (Kolarevic and Malkawi, 2005) and on performance perceived as a qualitative criterion in architecture. For the recipient of the built environment and the critical thinker, performance is an objective quality measure, which offers rationale and clarifies the multiplicity of current approaches and phenomena in architectural artefacts.
Performance is an important consideration in many other industries, ranging from education to commerce. For example, terms such as Performance Indicators or Key Performance Indicators – although still industry jargon that lacks a clear definition – are “items of information collected at regular intervals to track the performance of a system” (FitzGibbon and Tymms, 2002). From this perspective, performance is also the success factor in design. The design process and the design object are the two sides of the same coin. Yet, in architecture, arguably
Figure 1: Foci for researching BIM (adapted from Vrijhoef, 2003).

Table 2: Benefits from using BIM per actor of the AEC industry and project phase, in absolute numbers (number of references in the sources). Columns (actors): Owners & FM, Consultants, Contractors, Regulatory Authorities, Architects, Engineers, Managers, Suppliers.
Initiative      53   43   35   22   29   12   26   21
Design         124   78  107   52   64   20   42   30
Construction    54   57   74   29   65   27   27   20
Use             16   27   23   17   15   13   48   16
Table: Correlation between BIM features and SC research perspectives, references in absolute numbers (features include visualisation, clash detection, cost estimation, feasibility tools, quantity take-off, mechanical tools, preliminary massing, facilities management, environmental analysis and construction scheduling).

Table 4: Summary of references (%) on the actors and phases of the AEC industry that benefited from BIM (actors: Owners & FM, Consultants, Contractors, Regulatory Authorities, Architects, Engineers, Managers, Suppliers).
managers have authored 30.30% of the research on BIM. This element strengthens our initial proposition that BIM is considered more as a tool to achieve an occasional high performance, rather than as a permanent project management tool or a process to be used towards the integration of the construction supply chain. At the same time, the fact that not only junior but also a lot of senior researchers are engaging with BIM reveals that they are already convinced about its potential and impact and are committed to putting their expertise into action. The majority of the publications (63.64%) use case studies and experiments to validate their hypotheses.
Table 4 contains the data of Table 2 in percentages and indicates in bold the number of sources where the actors and the phases experience more benefits from the employment of BIM and in italics where the actors and the phases profit less. Apparently (A1) the architects and the engineers are the actors who are either the participants most involved in the research and adoption of BIM – or are simply considered the primary actors – in the BIM literature (Table 4). Surprisingly, construction managers are not equally prominent to these primary actors, as one might have expected, but they are more involved in all phases of a project, while contractors are referred to mostly in the construction phase. Suppliers are also referred to in the construction phase but are limited to peripheral roles in the rest of the building life-cycle. On the other hand, owners and regulatory agencies seek immediate involvement but achieve only fragments of presence, mostly during the initiative and use stages.
Table 5 emphasises in bold the mentions of the most prominent BIM features and in italics those of the most underused. The aim here is to indicate the features of BIM that improve the performance of the AEC industry throughout the whole supply chain (A2). According to the literature review, BIM features such as visualisation, clash detection and collaboration tools are the most researched by far, which on the one hand increases the performance of the building product but on the other hand contributes to the performance of the AEC industry only incidentally. For instance, quantity take-off and facility management tools – employed mostly for facilitating the suppliers and the owners respectively – are either neglected for certain research perspectives or appear in only 6 to 9% of research into BIM. Likewise, while there are tools for the construction field, such as direct fabrication tools (Table 5), they are seemingly not widely applied or reported. Other BIM features mentioned but not included here are laser scanning and tools for safety on the building site.

DISCUSSION AND CONCLUSION

Discussion
The research design answered the research questions sufficiently. Comparing this research to other studies, the most apparent difference is to be found in the methodology. Identifying benefits and quantifying performance via literature review is not the norm in this domain. The present research shares common concerns and limitations with publications based on case study research.
Table 5: Summary of correlations (%) between BIM features and SC research perspectives. The feature columns include Visualisation, Clash detection, Cost estimation, Feasibility tools, Mechanical tools, Quantity take-off, Contracting tools, Specification tools, Collaboration tools, Preliminary massing, Facilities management, Environmental analysis and Construction scheduling; the rows are the SC research perspectives.
Time & cost         16  7 11  3  8 11 14  6  4 17  5 18  6  3
Facility mgmt        3  1  8  3  6  8  6  5  7  3  3  3  0  9
Design process      16 12 23 14 44 29 22  9 14 12  8  9  6  8
Engineering         11  9 16  6 19 19 17 12  8  8  7 12  5  3
Constr. mgmt        10 12 14  4 12 22 12  4  2 11  7 22  6  3
Constr. field/site   6  9 12  4 10 10 10  7  2  8  6 15  5  3
Sustainability       7  0  3  1  5  5  5  1 10  3  1  1  0  3
Bldg product         7  3  9  2  9 11  5  4  9  4  2  3  1  7
HR & roles           4  5  4  2  6 16  6  0  2  2  2  3  1  4
Technology & data    7  6 25  4 20 20 15 10  9  8  7 12  4  7
Comparing it with previous studies, it focused on the performance of the AEC process via the use of BIM rather than “discussing how information systems can further contribute to this research domain” (Merschbrock and Munkvold, 2012). There are again limitations over how exactly to measure performance, a problem already mentioned in other studies (Barlish and Sullivan, 2012). A solution to this problem would be the classification of benefits as having a positive or a negative impact, as suggested in research on case studies (Bryde et al., in press). Apart from sharing common concerns and limitations with existing research, the present study has the dual advantage of including all the involved participants in the AEC industry and referring to all the stages of the AEC process.
Although the findings presented here do not cover the full extent of the research conducted – due to paper length limitations – the main results already suggest concrete directions for further use. From the summarising tables in the results section (Tables 4 and 5) we indicate certain directions that require further attention and investigation (Figure 2). Undoubtedly, with the still rapidly evolving state of the information age, research directions in the field may change day by day. However, this study has shown that there are certain neglected research areas and correlations that arguably explain the low performance of the AEC industry. The content of this paper offers a guide to improving the behaviour of the neglected project phases and actors by integrating the construction supply chain. Concerning the methodology, the research adds to how to conduct literature research with an eye not only on the semantics and external characteristics of the scientific material but also on its overview level. The present method could also be employed in the future by either focusing on a narrower research field or including certain types of publications. Lastly, anticipating the criticism over the credibility of a literature review, we defend the selection of this research design by restating the quality assessment that was incorporated during the conduct of the experiment.
Utilizing parametric BIM components as smart early design tools for large-scale urban planning
Abstract. The paper describes the development of a set of smart BIM components to
facilitate and accelerate the creation of large-scale urban models in the early design
phase in a BIM software environment. The components leverage the analytical,
parametric and modelling capabilities of the BIM environment to support adaptive
parameter-driven building geometry, patterning of different building types, early
numerical and graphical design evaluation, various simulation methods and the
exploration of design alternatives. The toolset consists of the most common building
shapes, but can be extended with additional shapes and their respective area and
volumetric calculations when necessary. The rapid large-scale deployment of the
components has been achieved by diverting existing tools from their intended use.
Keywords. BIM; urban planning; early design; rule-based design; parametric design.
with a lot of potential regarding the adaptability of a large number of objects to varying geometric conditions: Mass surfaces could be rationalized by using the “divide surface” functionality and subsequently be populated with “pattern-based curtain panels”. Revit 2011 saw the introduction of the “adaptive components” functionality: placement point based components that can adapt to varying spatial conditions [5]. Lastly, with the 2013 version came the “repeat and divide” workflow that can be used to create more complex arrays of objects (Dieckmann and Kron, 2012) and facilitate the large-scale distribution of reactive components (“reactors”) as described by Woodbury (2010).
Surely none of these functionalities were designed with large-scale urban planning in mind – most of them are typically used for the creation of curtain wall systems and other building elements – but they can be “abused”. In the context of the project, the aforementioned tools are used as follows:
1. The footprints of city blocks are created as mass surfaces (Figure 1a).
2. These mass surfaces can then be subdivided into lots using the divide surface functionality, creating a grid within the city block. The grid can either be generated automatically (Figure 1b) using a layout algorithm (e.g. number of subdivisions in U/V direction) or manually (Figure 1c) by drawing a number of lines to generate the subdivisions.
The actual toolbox consists of several types of building masses created as pattern-based elements and adaptive components that can be hosted on and rapidly distributed across divided surfaces. Depending on the desired outcome, two separate modelling strategies can be applied for populating the grid with the building masses:
1. For a simple pattern, the divided surface can be assigned a pattern-based component (Figure 2a), essentially distributing instances of the same building block across the entire grid of a block. Exceptions can be defined by selecting individual instances and manually switching their type or altering their instance properties (Figure 2b).
2. More complex patterns of several alternating building types can be created as one or two dimensional arrays by employing the repeat and divide workflow (Figure 2c). In addition, this workflow allows for the rapid deployment of context-aware adaptive components that can, for instance, react to the proximity of other objects in the model (Figure 2d). A common application for this method would be the increase of density towards certain zones in the urban model (see below).

Figure 2: From left to right: a) Divided surface populated with pattern-based components, b) Manual exceptions, c) Patterning with divide & repeat functionality, d) Reactor pattern with context-aware adaptive components.
4a). This is done by hosting all the dimensions on the horizontal work plane of the first placement point. The geometry of a pattern-based component by default inherits the orientation of its host, i.e. the divided surface of the city block. That means that vertical elements created in the lot component would rather orient themselves according to the surface normals of the city block than vertically at their point of placement. By changing the orientation mode of the placement points the lot component geometry can however be forced into a strictly vertical orientation. The placement point location can then be projected upwards by means of vertical rays. On sloped lots, the building may have to be moved up or down so as not to be fully or partly immersed in the terrain. This can be achieved by creating a horizontal datum between the aforementioned rays (Figure 4b) that can be moved by manipulating a parameter that controls the vertical offset of the datum.
The horizontal datum serves as the placement plane for the building component itself. It is subdivided into nine zones by projecting the street offset for all four sides of the lot onto the datum (Figure 4c). These offsets can be controlled by the user through four parameters. In case the street offsets of opposing sides of the lot overlap, the user inputs will be substituted by a “safe” value that is automatically calculated.
The four intersection points of the street offsets form the location for the placement points of the building component (Figure 4d) and also mark the vertices of the central zone that forms the basis for the building footprint calculations (see below). Once a building component is placed here, its type can be controlled by a parameter, making it easy to change the orientation of the component (front, right, back and left side of the lot) as well as the building shape (I, L, U, O). This also allows for the subsequent creation and substitution of other building shapes, essentially making it a modular system. Additionally, all the parameters that control the building shape (building depth for all sides of the lot and building height) are also passed to the subcomponent. As stated above, the building subcomponents merely consist of the building geometry driven by the lot component parameters and thus warrant no further description.
For the purpose of calculating the building footprint and related data like floor space and building volume, the central zone is again subdivided into nine zones, this time by using the building depths for the four sides. Again, the depth for each side is user-controlled, with a safeguard against overlaps as described above for the street offsets. The footprint of each building type can now be calculated as the sum of some of the zone areas (Figure 5), depending on the selected building type, e.g. the footprint of the O-shaped building would be the sum of all zones except for the central zone. The zone areas themselves are calculated on the basis of the reporting parameters (see above) using Heron’s formula and the law of cosines. Subsequently, all other data necessary for evaluation, such as the cubic index, floor area ratio or site occupancy index, can be derived from the building footprint, the number of floors, the floor height and the site area.
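The zone-area and indicator arithmetic described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' actual Revit formulas; in particular, the cubic-index definition used here (building volume divided by site area) is an assumption.

```python
import math

def heron_area(a, b, c):
    """Triangle area from three side lengths (Heron's formula)."""
    s = (a + b + c) / 2.0
    return math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))

def zone_area(a, b, c, d, gamma_ab):
    """Area of a quadrilateral zone given its four side lengths and the angle
    (radians) enclosed by sides a and b: the law of cosines yields a diagonal,
    Heron's formula the areas of the two resulting triangles."""
    diagonal = math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(gamma_ab))
    return heron_area(a, b, diagonal) + heron_area(c, d, diagonal)

def evaluation_data(footprint, floors, floor_height, site_area):
    """Derived indicators; the names follow the text, the definitions are assumed."""
    return {
        "gross_floor_area": footprint * floors,
        "floor_area_ratio": footprint * floors / site_area,
        "site_occupancy_index": footprint / site_area,
        "cubic_index": footprint * floors * floor_height / site_area,  # assumed: volume / site area
    }
```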
In Revit, custom component parameters cannot be scheduled or annotated in the project environment by default. Thus, in order to have the data readily available in the project for evaluation, they need to be declared as so-called “shared” parameters, making them available globally (in the component itself as well as in the project).

Component Variations
The lot component can be used as a template to create further variations. These can either be different building types than the four types described above, more complex parametric components that utilize the lot component as a subcomponent, or a combination of both.

Figure 6: From left to right: a) Solitaire component, b) Reactor component.

The solitaire component (Figure 6a), for instance, makes use of the spatial and parametric framework of the lot component. However, it needs neither the street offset grid nor the majority of parametric relationships that aid with the area calculations for the standard building types (I, L, U, O). Instead, it contains a center point for the free-standing building geometry that can be moved parametrically in U and V direction on the lot surface. The building geometry that is hosted on the point in turn has a rotation parameter to allow for flexible alignment of the building mass.
A reactor component (Figure 6b) as described above can use either the solitaire component or the standard lot component as nested subcomponent. It is basically an adaptive component that sets up rules for the behaviour of its subcomponent. It has one or several additional placement points that act as sensing devices. By hosting these additional placement points on certain fixed points in the project and measuring their distance from each placed
instance of the reactor component, the components gain spatial awareness. This information can then be used to control the geometric properties of each placed subcomponent, e.g. the number of storeys.
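As an illustration of one possible reactor rule – an assumption, not the logic of the authors' Revit families – the storey count of a placed subcomponent could fall off linearly with the measured distance to a fixed attractor point:

```python
def storeys_from_distance(distance, max_storeys=12, min_storeys=3, radius=400.0):
    """Linear falloff of building height with distance to an attractor.
    Units (here metres) and the parameter values are purely illustrative."""
    if distance >= radius:
        return min_storeys
    t = 1.0 - distance / radius            # 1.0 at the attractor, 0.0 at the radius
    return round(min_storeys + t * (max_storeys - min_storeys))

# e.g. a lot whose sensing point lies 100 m from the attractor:
print(storeys_from_distance(100.0))        # -> 10
```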
WORKING WITH THE TOOLKIT
The typical workflow has been, at least in part, described above already: The city blocks are created as mass surfaces and subsequently subdivided into lots. Depending on design intent, several distribution methods (uniform, uniform with exceptions, patterned and reactive/parametric) are available (Figure 1). The component type(s) assigned to a block, a lot or a pattern can be changed and their instance properties can be modified. The shapes of the mass surfaces themselves and the number of their respective subdivisions can also be modified at any time. Moreover, several out-of-the-box functionalities like design options (managing different design alternatives) and phasing (managing the temporal properties of elements, i.e. differentiating between existing and new building blocks) can be utilized to structure and control the design.
The main reason for using a BIM environment for urban design, however, is the ability to create information-rich content and leverage that information to evaluate the design. All the numerical data produced by the placed components can easily be scheduled. Each lot component contains a flag parameter that facilitates the creation of a schedule that only displays the lot components placed in the project and ignores all other site components available in the model. The schedules can utilize conditional formatting to highlight lots that do not meet certain requirements like, for instance, a cubic index that exceeds a certain limit (Figure 7a).
A schedule is, however, just one way of looking at information. The same information can also be visualized in isometric, perspective or plan views, displaying the information in a spatial context. In Revit, model views can be reformatted with so-called view filters. By means of a few view filters a perspective view of the project can be colour-coded according to value ranges of any given parameter like, for instance, the cubic index of each lot, with different colours for different value ranges (Figure 7b).
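The colour-coding logic behind such a set of view filters can be sketched as follows; this is a plain illustration of the value-range bucketing, not Revit API code, and the ranges themselves are hypothetical.

```python
# Map a lot parameter value (e.g. the cubic index) to a colour bucket.
CUBIC_INDEX_RANGES = [   # (upper bound, colour) per view filter, hypothetical values
    (2.0, "green"),
    (4.0, "yellow"),
    (6.0, "orange"),
]

def colour_for(value, ranges=CUBIC_INDEX_RANGES, above="red"):
    for upper, colour in ranges:
        if value <= upper:
            return colour
    return above             # values beyond the last range are flagged

print(colour_for(4.7))       # -> "orange"
```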
Often, the building type has a significant influence on the measurable characteristics of a building. For instance, the energy use of a building depends quite heavily on the activity within that building. There are some statistical resources available for that kind of information, like the Buildings Energy Data Book by the U.S. Department of Energy [6]. However, for the purpose of this paper, the authors have focussed on costing. In a lot of countries, there are statistical data available on the building costs for various building types. For the German market, this data is made available by the BKI Baukosteninformationszentrum (2013). In Revit, external data can be inserted in the form of so-called key schedules, either by inputting it manually or by using third-party applications [7] to import it from Excel. A row of values from a key schedule can be assigned to a placed component by means of a key parameter.
Abstract. Design activity is pervasive, as it is increasingly expanding into all sectors. With current design systems it is increasingly difficult to anticipate the often unpredictable changes resulting from new inventions and from changes in technology, tools, methods and social customs; at the same time we need to preserve and store knowledge and experiences that can help face the aforementioned problems. The present paper illustrates an innovative Rule Layer overlying existing commercial software in order to model Reasoning and Performance verification Rules to be applied to design instances. The authors developed two different prototypes, one on BIM and one on CAD commercial software, in order to validate the proposed approach. The results demonstrate the potential of the general system and open it up to further research, development and deepening.
Keywords. Building ontologies; building design reasoning; BIM/CAD; collaborative
design.
INTERLEAVED WORLD
Comparing our “era” with the past, people in developed countries obviously live in better conditions than before due to the organization of society and technological evolution. On the other hand, sociality has been replaced by competitiveness, mainly as a result of increasing complexity and changing needs that require new approaches in all human activities in order to meet increasing demands (Einstein 2006).
The problems are mainly related to the ‘idea’ of science and of scientific law that we have. In the past, our general conceptual elaborations were based on the Thirties period. At that time all academic institutions in the world completely absorbed what was elaborated by the most advanced scientific and philosophical research: the importance of scientific (also social) facts, of measurable quantities, referred to phenomena expressed in mathematical-analytical formulas.
Hence fundamental science courses were taught, on which ‘objective’ base the subsequent disciplines were set up. This ‘functional’ logic characterized scientific as well as humanistic schools.
In short: the avant-gardes inquired into “first” principles until the First World War; afterwards, the results of these researches were applied to well-limited scopes. It was usual to describe phenomena by
Figure: Database translation workflow – the Revit database (Revit DB) is exported and translated into a Protégé-compatible database format and imported into the Protégé Ontology Editor, where the defined fields are compiled into the Building Knowledge Ontology; reasoning over the ontology checks coherence, congruence, consistency and performance, and the inferred axioms (reasoning results) are linked back to the BIM database through a custom-made linking database.
10. SWRL inferred axioms can be checked and verified by each Specialist Designer;
11. By means of the Protégé “Export to Database” command, an inferred Entity Database is created;
12. The Revit Database is then updated with new values and definitions from the inferred Protégé Entity Database by means of the developed Linking Database.
Due to the proprietary nature of Autodesk Revit, even if Reasoning Rules in Protégé may possibly affect geometrical properties by modifying and/or adding values, Revit does not allow them to be changed because they are System Parameters and it is not possible to edit them outside of Revit itself.
Due to this limitation, in order to validate the proposed system, the authors implemented plug-ins and add-ons for AutoCAD®, which allows interaction with the DXF drawing format.

PROPOSED SYSTEM IMPLEMENTATION WITH CAD

Phase 1: Knowledge and Rules Modeling
The Knowledge Ontology modeled by means of Protégé for the above-mentioned test has been modified in order to allow further prototype tests on the proposed system.
The AreaXY property has been linked to the Product class and its sub-classes, and several has_xn properties (with n from 1 to 8) have been linked to classes in order to specify the 2D geometrical instance definition.
The CAD prototype refers to AutoCAD® for graphical representation and is limited to lines and 8-vertex polylines to suit the modeled knowledge structure.

Phase 2: Building Design Process workflow
The following workflow shows the step-by-step implemented prototype:
1. Launch the Protégé Ontology Editor with classes, properties and rules definition;
2. Query Tab launch: classes list (Figure 3);
3. Class list export to a TXT file;
4. Autodesk AutoCAD launch;
5. Launch of the implemented AutoLISP application for automatic layer creation, with layer names equivalent to class names (Figure 4);
6. Design solution representation by means of 2D lines and/or (at most) 8-vertex polylines;
7. Design solution saved as a DXF format file;
8. A specific piece of software has been implemented in order to parse the saved DXF file and then to create as many CSV files as the layers used (a minimal parsing sketch follows this list). Each CSV file will contain as many rows as there are elements in the corresponding layer of the DXF file, separating the element features with a semicolon, for example:
• Instance type: Line or LwPolyline;
• Handle: unique AutoCAD® ID;
• numVertex: number of instance vertices (only for polyline definition);
• has_xn-has_yn: (with n from 1 to 8) x and
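The DXF-to-CSV step could look roughly like the sketch below, which uses the open-source ezdxf library rather than the authors' implementation; the per-layer output file naming is assumed.

```python
import csv
import ezdxf  # open-source DXF reader, not the tool used by the authors

def dxf_to_layer_csvs(dxf_path):
    """Write one CSV per layer, one row per LINE/LWPOLYLINE entity, with the
    fields listed in the text: instance type, handle, vertex count, vertices."""
    doc = ezdxf.readfile(dxf_path)
    rows_per_layer = {}
    for entity in doc.modelspace().query("LINE LWPOLYLINE"):
        if entity.dxftype() == "LINE":
            points = [entity.dxf.start, entity.dxf.end]
        else:  # LWPOLYLINE
            points = [(p[0], p[1]) for p in entity.get_points()]
        row = [entity.dxftype(), entity.dxf.handle, len(points)]
        for x, y, *_ in points:
            row += [x, y]
        rows_per_layer.setdefault(entity.dxf.layer, []).append(row)
    for layer, rows in rows_per_layer.items():
        with open(f"{layer}.csv", "w", newline="") as f:
            csv.writer(f, delimiter=";").writerows(rows)
```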
bilities. The guidelines document is derived from a regional, localized IPDP (Integrated Project Delivery Protocol) document [6], in which the students need to formulate the different roles in their team, the responsibilities, the information flow and the necessary collaboration activities (deadlines and meetings). We reformulated this document, which was oriented to professional practice, for an educational context, but kept a similar objective. Instead of formulating the document as a static checklist, it was written as a series of open questions, for which the students needed to define an answer within their group.
buildingSMART [7], an organization focusing on interoperability and building process improvements, defines Exchange Requirements (ER) and Model View Definitions (MVD). To prepare the collaboration between design team and external consultant, the engineering students set up ER, including room areas, volumes or the façade area. This forces them to specify the required information precisely and unambiguously, including the correct way to measure certain quantities, e.g. gross or net values and inner or outer dimensions. Based on the ER, the architectural model is filtered to contain only the requested information, rather than transferring a full model. This is in line with the MVD concept. Before the filtered model is exported to IFC, it can be further optimized to improve model exchange, as will be described in the next section.
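A receiving party can read such ER quantities directly from the exported IFC file. The sketch below is an illustration using the open-source IfcOpenShell library, not the tools prescribed by the workflow; the quantity-set name differs between IFC versions ("BaseQuantities" in IFC2x3, "Qto_SpaceBaseQuantities" in IFC4), so both are checked.

```python
import ifcopenshell
import ifcopenshell.util.element as el

def space_floor_areas(ifc_path):
    """Return {space name: net floor area} read from an exported IFC model."""
    model = ifcopenshell.open(ifc_path)
    areas = {}
    for space in model.by_type("IfcSpace"):
        qsets = el.get_psets(space)
        quantities = qsets.get("Qto_SpaceBaseQuantities") or qsets.get("BaseQuantities") or {}
        areas[space.Name] = quantities.get("NetFloorArea")
    return areas
```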
To accompany the exported model with quantitative results, the architects create additional table schedules. Engineering students who receive the
a visual check using an IFC Model viewer.
During the preparation of the learning material, a series of recommendations and known problems were documented, giving the students pointers to problems they might encounter. At first, before the IFC model is exported, the design model in ArchiCAD can be altered to generate more usable output, as summarized in Table 1.
Some aspects were not solvable within the design model, so in those cases manual changes in Revit are required, as described in Table 2.
Even then, certain information was inconsistent and some workarounds were still required. In those cases, students were free to use other means to communicate the missing information.

Didactical learning material and facilities
As part of the didactical vision on the Architectural Computing course, which also comprised other themes (visualization, freeform modeling, digital documentation), it was decided that as many classes as possible would be replaced with video-tutorials. After a general introduction seminar, students follow the provided lessons at their own pace. While the recording of this material was a serious effort, student reactions were mostly positive. They appreciated the freedom to watch the material, rewind where necessary or even skip parts that seem unneeded to them. As the computer skills vary considerably between students, this was a valuable alternative to classroom-based tuition, which has become more difficult over the last few years. After students were given time to assimilate the tutorials, guided consulting sessions were organized, where students were able to pose individual questions based on their own work. This was accompanied by an online forum for questions. An added advantage of video-tutorials is that they can be easily shared. They not only serve our own classes, but also attract many other users worldwide, since they were openly hosted as YouTube playlists [8].

Is it worth the effort?
While some of these aspects might seem rather trivial, they do pose a serious technical challenge to properly set up and we have been lucky to have full
opment and extensive teacher consults. It was explained that the architectural quality of the design would not be part of the grading criteria, because this was already done as part of the original design studio exercise. The quality of the model, however, taking the received feedback into account, and the reporting were effectively graded.

CONCLUSIONS AND FUTURE OUTLOOK
All project partners are convinced that this is a most valuable exercise, providing a huge experience for both students and educators. This approach illustrates BIM as a process, rather than as a tool. Students are stimulated to reflect on both the product (the design) and the process (the design collaboration in a Project Team). However, one should not underestimate the need for continuous evaluation and the increased technical complexity required to facilitate the collaboration. Luckily, the software tools, while not perfect, proved to be adequate and fairly stable. IFC support, while not perfect due to implementation differences between software vendors, is gradually improving and is, indeed, “good enough” to be used at the core of the collaboration process.
It is very important to be clear about the expectations towards students, as the exercise should present them with a positive experience with BIM and not blur this with an overly extensive assignment. The exercise attempts to capitalize on the virtues of BIM: synchronization of representations (all documents), extraction of information (schedules), model-based information exchange (IFC) and model evaluation (model checker). The formulation of the assignments and the final expectations need further refinement, and valuable lessons were learned to improve the assignment for the next semester.
Several adjustments to the curriculum are initiated, to ensure a durable implementation of the collaboration beyond the period covered by project funding. All involved partners are enthusiastic to continue further, precisely as we seemingly are, in our Flemish region, at a turning point, where several professional, academic and commercial parties are increasingly moving towards BIM, which will form the basis for a regional knowledge network.

ACKNOWLEDGMENTS
The project funding from the “Education Development Fund” of the Association KU Leuven, with reference OOF 2011/24, is gratefully acknowledged. The project also builds on experience gathered during a previous related project from the same funding body, with reference OOF 2007/24, focusing on multi-disciplinary collaboration in building teams.

REFERENCES
Ambrose, MA 2012, ‘Agent Provocateur – BIM in the Academic Design Studio’, International Journal of Architectural Computing, 10(1), pp. 53–66.
Anon 2011, NBS Building Information Modelling Report, March 2011.
Bernstein, HM and Jones, SA (eds) 2012, SmartMarket Report: The Business Value of BIM in North America, McGraw-Hill Construction, New York, USA.
Visual characteristics
• Size.
• Habitus (shape variable within species).
• Structure.
• Texture - the period of dormancy.
• Color - foliage, flowers, fruits.
Size of tree
• Small 5-10 m fruit trees, ornamental cultivars.
• Medium 10 (15)-20 m hornbeam, birch, rowan, spruce, pine, fir.
• Large 20-30 (40) m, up to 40 m: Buba, lime, ash, poplar, elm, maple.
Branching structure
• Branching dense, sparse.
• Direct branches, gnarled.
formation modeling is still being developed for buildings mainly - for construction management and facility operations.
That may explain why in our investigation we did not find any indication of the existence of a survey or research in our direction. Firstly we needed to get a deeper knowledge of the term “BIM” and also of the software that supports this system of work.
What we found about BIM could be generalised to this: Building Information Modeling covers more than geometry - it extends the traditional approach to building design (two-dimensional drawings as plans, elevations, sections, etc.) beyond 3D to time as the fourth and cost as the fifth dimension, and the sixth dimension is life cycle management. It also covers spatial relationships, light analysis, geographic information, quantities and properties of building components (for example manufacturers’ details).
There are two major BIM software developing companies, Graphisoft (ArchiCAD) and Autodesk (Revit), that bring their products to our market. Additionally there are also Nemetschek (Allplan Architecture) and Bentley (MicroStation V8i).

LANDSCAPE MODELLING SOFTWARE
We have found several smart “CAD based” programs, but all of them are used for 2D design (the traditional way). One example is ArborCAD, which is special-purpose CAD software for the needs of arborists. It is based on the landscape software LANDWorksCAD. It has a lot of features specially aimed at information about plants, trees and their properties.
Additionally there are also some applications that work with plants as dynamic objects:
• NatFX is a plug-in (for Alias|Wavefront’s Maya) designed for modeling and animating 3D plants (age, season, and scale). Animation is possible either as a realistic effect (with botanical constraints) or as a cartoon effect (bypassing the constraints), using natFX’s built-in plant skeletons. All natFX trees and plants are fully textured, based on scans of actual leaves, stems, and bark. NatFX is currently available as a plug-in.
Lifetime
• Short-living (about 80 years: willows, poplars, alder, birch), fast-growing.
• Average (200-300 years: maple, hornbeam, rowan, spruce, pine), survivor type.
• Long-living (600-800 years: oak, linden, beech, yew).
Environmental requirements
• Temperature: thermophilic (oak, beech, maple), cryophilous (spruce), no particular requirements (pine, birch, sycamore).
• Moisture: xerophilic (rowan tree), mesophilic (ash tree), hydrophilic (willow).
• Light: heliophile (larch, pine, birch), average, heliophobic (yew, hornbeam).
• Nutrients: most of the plants have an average demand for nutrients, but some may demand alkaline soils (addition of Ca) or acidic soils (requiring addition of peat). Some species tolerate salinity – those are suitable for street alleys (acacia).
• EASYnat by Bionatics is an add-in that can be used within AutoCAD or 3ds Max. Currently the trees are parametrically generated from the input data (height, width, age and season) (Figures 1 and 2).
Figure 1: EASYnat examples of renderings of the Horsechestnut tree models.
EASYnat could be suitable for our further research - for implementation into object-oriented software like Revit or ArchiCAD and for the development of their attributes and actions. The plants should have more attributes and, while simulating the growth process, they should react and interact with their surroundings.

UNIFIED MODELLING LANGUAGE
BIM involves representing a design as combinations of ‘objects’ that carry their geometry, relations and attributes. Such objects and relations can be described by UML (Unified Modelling Language). UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. It is a tool for defining the structure of a system through several types of diagrams. UML makes it possible to model an application specifically and independently of a target platform (Fowler and Scott, 2000; Blaha and Rumbaugh, 2004).
UML diagrams represent two different views of a system model: static (structural) and dynamic (behavioral) diagrams (Schmuller, 2001). There are more than ten different types of UML diagrams. In our inquiry we are interested in class diagrams, object diagrams and activity diagrams (Figure 3).
In our research we aim to advance the information model of the vegetation to be comprehensive. This means that we not only model the general attributes of vegetation (for example, for trees: family, genus, species, cultivar, deciduous/evergreen, season), but also the time-dynamic attributes (for example, for trees: height, canopy perimeter, trunk perimeter), the surrounding reaction (for example, for trees: soil requirements, geographic location, moisture), and the interaction (for example, for trees: spread, ecologically close trees, and antagonistic plants).
In real-life coding examples, the difference between inheritance and aggregation can be confusing. In an aggregation relationship, the aggregate (the whole) can access only the PUBLIC functions of the part class. On the other hand, inheritance allows the inheriting class to access both the PUBLIC and PROTECTED functions of the superclass.

DESCRIPTION OF POTENTIAL APPLICATION OF UML DIAGRAMS
We have applied class diagrams to our study case: we needed to state the classes and all the attributes that our classes should have. Then we have created one instance of that class and the relations (Figure 4).
We use these diagrams to map the classes that appear in our application study. While describing the plants as objects for BIM, one has to take into account all their properties and attributes - such as height, perimeter of the canopy, perimeter of the trunk, classification in the sense of family - genus - species - cultivar, whether the plant is deciduous or evergreen, the aspect of the season (mainly to state the existence and color of the leaves), the age of the plant and its habitus, and finally, for the most efficient work with the objects, the detailing and the way of graphic appearance.
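A minimal sketch – an illustrative assumption, not code from the paper – of how these attributes and the inheritance/aggregation distinction discussed above could be expressed in class form:

```python
class Plant:                         # superclass: general vegetation attributes
    def __init__(self, family, genus, species, cultivar, deciduous):
        self.family, self.genus = family, genus
        self.species, self.cultivar = species, cultivar
        self.deciduous = deciduous

class Crown:                         # part class used via aggregation
    def __init__(self, canopy_perimeter, foliage_colour):
        self.canopy_perimeter = canopy_perimeter
        self.foliage_colour = foliage_colour

class Tree(Plant):                   # inheritance: a Tree "is a" Plant
    def __init__(self, family, genus, species, cultivar, deciduous,
                 height, trunk_perimeter, age, crown):
        super().__init__(family, genus, species, cultivar, deciduous)
        self.height = height         # time-dynamic attributes
        self.trunk_perimeter = trunk_perimeter
        self.age = age
        self.crown = crown           # aggregation: a Tree "has a" Crown

lime = Tree("Malvaceae", "Tilia", "cordata", None, deciduous=True,
            height=18.0, trunk_perimeter=1.6, age=60,
            crown=Crown(canopy_perimeter=28.0, foliage_colour="green"))
```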
Figure 5: Diagram of relations between classes.
The same properties and attributes should be stated for the other objects that we would like to involve in our application.
In this preliminary diagram (Figure 5) you can see buildings (which can cover the minor architectural works in the countryside), site equipment (such as benches, trash cans, lighting, and equipment for cyclists - parking places) and infrastructure objects.

DISCUSSION
In the paper we have presented our first findings towards a Landscape Information Model. The work is still in a very preliminary phase, but we hope to have laid some foundations. As trees are dominant visual
Abstract. On behalf of the BBR (German Federal Office for Building and Regional Planning), an Industry Foundation Classes (IFC) based inspection tool was developed as an application on an underlying work-in-progress development framework. By providing a machine-based checking process, the tool ModelCheck was rolled out to meet demands that emerged during pilot projects. Thus it is capable of processing automated compliance checks on quality criteria for the authorities, e.g. the documentation guidelines of the BBR regarding building and real estate documentation, or the building information modeling (BIM) quality criteria formulated for the Humboldt-Forum project – a BIM pilot project managed by the BBR. ModelCheck supports checks on IFC models - formal checks against schemas and logical inspection of alphanumeric content by using XML-based configurable rules.
Keywords. BIM; quality assurance; rule-based model checking; collaboration.
INTRODUCTION
Besides being a promising concept from a general point of view, building information modeling (BIM) in the real world is still confronted with problems in terms of overarching business and process related co-operation on the basis of its models. The results of a market analysis regarding potentials and hindrances of BIM application in Germany identify great prejudice and reticence when it comes to business-overarching transmission and cooperative usage of BIM models (von Both, 2012). In the market, the benefit of BIM in terms of co-operation with project partners is by far worse than other, more operative aspects. In such co-operation activities nowadays one reason, besides inadequate technical interfaces, seems to lie in the insufficient specification of exchange conditions and qualities of model data respectively building information.
Thus, the participants of the survey highly agreed (65%) with the statement that the quality of digital building models in form and content is not adequately standardized yet (Figure 1).
The specification of such defined process interfaces can be mentioned here as an important precondition for simplified and secure contracting and cooperation: by referring to normative descriptions, contracts can be concluded very efficiently and securely between client and planner respectively contractor, as well as among the planners themselves. This becomes very important when the contract partners – like in Germany – are composed newly for each project.
On one side, Germany’s Architecture, Engineering and Construction (AEC) sector addresses this
Along the basic structure of an example rule (Figure 3), the rule components will be introduced. The decomposition of the components, hence the basic structure, was thereby adopted from the Object Constraint Language’s (OCL) main statements [3] in order to promote a convention-driven limitation of the source code used in the user’s rule files by only implementing common templates.
Regardless of the user’s knowledge, a verbally represented question (in the example: “Does every window have a U-value less than 1.2?”) can function as the initial point of the rule development. In a first step, the rule administrator decomposes this concrete question according to the four components of the check workflow. Depending on a basic population of all components embedded in the
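To illustrate what such a decomposition amounts to – under assumed component names (context, property, operator, value) modelled loosely on OCL-style constraints, not ModelCheck's actual XML rule schema – the example question can be expressed and evaluated as follows:

```python
import operator

RULE = {
    "context": "IfcWindow",          # which elements the rule applies to
    "property": "ThermalTransmittance",
    "operator": operator.lt,         # "less than"
    "value": 1.2,
}

def evaluate(rule, elements):
    """elements: iterable of dicts like {"type": ..., "id": ..., "props": {...}}."""
    violations = []
    for e in elements:
        if e["type"] != rule["context"]:
            continue
        actual = e["props"].get(rule["property"])
        if actual is None or not rule["operator"](actual, rule["value"]):
            violations.append((e["id"], actual))
    return violations

windows = [{"type": "IfcWindow", "id": "W1", "props": {"ThermalTransmittance": 1.1}},
           {"type": "IfcWindow", "id": "W2", "props": {"ThermalTransmittance": 1.4}}]
print(evaluate(RULE, windows))   # -> [('W2', 1.4)]
```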
versioning keeps every applied rule, with regards to content, referable to every produced inspection result. This seems important, e.g. it guarantees essential consistency for archiving purposes – older inspection reports are always mapped to the rule valid at the time of inspection. Import and export functionality enables an administrating user to deploy newly finalized rules to users of a standard version that only use production-ready rule sets. This allows team-internal organization of roles within a user group. Together, versioning and a project-specific structuring enable parallel, usage-oriented organization of similar rules in different contexts.
Putting ModelCheck into practice (HUF project) enables iterative evaluation and optimization of the rule logic for the user – whereupon further development of the inspection shell functionality is then based. ModelCheck’s further development also takes advantage of extensions to the underlying framework, which will be briefly introduced hereinafter.

TECHNICAL BASE OF IMPLEMENTATION
As an application-independent central service, the framework provides base functionalities for the model-based data analysis. The base functionalities serve collaborative data handling, e.g. type- and attribute-oriented selections of partial and aspect models, integration of these partial models as well as supplying mechanisms for versioning, change management and transaction control (Figure 4).
Thus a kind of “Meta-Model-Server” is provided for further research and development projects that, in different application scenarios, can be implemented for different kinds of model standards like IFC STEP, CityGML or GAEB. Furthermore it supports model-overarching model operations (Hartmann and von Both, 2011).

OUTLOOK
Further development of the analysis and visualization components in the context of energy efficiency will take place within the science project “EneffBIM” (funded by the German funding program “EnTools” released by the German Ministry of Economy), starting in summer 2013. As seen in Figure 5, especially for the usage of dynamic energetic simulation, the logical, content-regarding model analysis shall be the quality management vehicle for securing the interface from BIM to simulation.
With the involvement of different Fraunhofer institutes (ISE and IBP) as well as the universities RWTH Aachen, UDK Berlin, KIT and buildingSMART on one side, the IFC model will be extended with energy-relevant base types (input parameters) and suitable geometric representation forms. On the other side, regarding energetic simulation, tools for co-simulation in the context of Modelica will be further developed, and a synchronization of existing model libraries is being focused on.
Concerning model checking, aggregated simulation results shall be led back into the BIM model in order to ensure a better re-transition and evaluation of simulation results into the planning process and towards the layer of decision making. ModelCheck then serves as a checking and analysis tool for the evaluation of variations with their different simulation results.
In this case, great importance is attached to the semantic visualization of simulation results (specific constraints of property values), the representation of the range of values that exists in the comprehensive model, and also checks on the characteristics of values.

REFERENCES
von Both, P 2012, ‘Potentials and Barriers for Implementing BIM in the German AEC Market - Results of a Current Market Analysis’, contribution to the 30th eCAADe conference, Prague, 09/2012.
Hartmann, U and von Both, P 2011, ‘Ein Framework zur Definition und Durchführung interdisziplinärer, modellübergreifender Analysen am Beispiel solarer Einstrahlpotentiale im urbanen Kontext’, BauSIM2012 Conference - Gebäudesimulation auf den
Marina Stavrakantonaki
Brussels, Belgium
marina.stavrak@gmail.com
Abstract. The fusion between building assessment and design can lead to better informed
design decisions. Performance oriented design is better supported through the use of
interoperable file formats for data exchange between BIM and non-BIM tools. At the same
time, the parameters that influence the calculation during a performative assessment are
no longer a purely engineering problem, since 3D modeling is of primary importance
in defining the numerical output. The role of the designer along with the selection of
the tools becomes all the more relevant in this direction. A framework is presented hereby,
which can be used for the selection between different BIM tools for daylight assessment.
An insight is also given on the major parameters that can affect the outcome and on the
obstacles that were experienced in four case-studies in relation to data exchange and
information flow.
Keywords. Performance simulations; parameters; interoperability; daylight.
INTRODUCTION
During the last years, there has been an increasing demand for the integration of BPS (Building Performance Simulation) tools in the early design phase (Attia et al., 2012). The interoperability of BIM (Building Information Modeling) and non-BIM tools influences the workflow within the design team, while building practice is progressively oriented towards a more interdisciplinary approach (Augenbroe, 1992). The study presented here was initiated as internal research for the consultants of the company DGMR in the Netherlands, with the task to evaluate the three following daylight performance simulation packages – Design Builder v.3.0.0.105, Ecotect 2011 v.5.60/Radiance and DIVA 2.0 as a plug-in for the Rhinoceros NURBS modeler – and to provide suggestions for future use. All of the examined tools can provide dynamic daylight simulations under given conditions. The problems that consulting with the use of this software faces on a daily basis are related to incompatibility between the architectural 3D model and the simulation software, the long 3D modeling times and the error probability when complex geometries are involved. The aim is to acquire semantic information on the performance of the building over time, in a way that it can be integrated in the design process. The evaluation is based on the following criteria:
and the range of deviations that can be expected during the calculation of the analytical model, and specifies the information that is being lost during the process. The Drawing Exchange Format (DXF) is hereby used as the basic means of design information transfer.

METHODOLOGY
For the needs of the present research, three case-studies were used on the grounds of the following methodology: a base model was prepared and simulated in Design Builder v.3.0.0.105. This model was exported in .dxf format and recalculated in DIVA 2.0 and Ecotect 2011 v.5.60. The Rhinoceros 3D-CAD modeler was used as a complementary tool in order to model the missing export data. All three packages were linked to Radiance and provided output based on the climate data of EnergyPlus IWEC weather data files. The following input data were given for each one of the packages. The settings reflect the effort to use equal input data. Identical input is not possible at the moment due to differences in the software settings (Table 1).
The three case-studies refer to two on-going projects and to a simplified setting:
• Case study 1: a complex geometry (Figure 1).
• Case study 2: a purely orthogonal geometry (Figure 2).
• Case study 3: a simplified setting of a typical office space (Figure 3).

Figure 1: Case study 1 as modeled and imported in the three tools; from left to right, in Design Builder, in Ecotect and in DIVA.

The common feature between the geometries is the linear form. They refer to two on-going projects and one simplified setting that is often met in everyday practice.
The simulation was oriented to a one-variable approach in order to facilitate the comparison between the tools. With regard to precision, the following settings were used: ambient bounces (ab) = 2, ambient accuracy (aa) = 0.1, ambient resolution (ar) = 300, ambient divisions (ad) = 1000, ambient super-samples (as) as default. For the needs of this study, the Daylight Factor was chosen as the main calculation measurement; the prediction of the Daylight Factor under a CIE overcast sky condition is at the moment the dominant approach in evaluating daylight, despite the fact that it provides only a rough estimation of the yearly indoor conditions (Tregenza, 1980). Yet, it is in broad use by
European building regulations and most assessment rating systems, including BREEAM, in order to provide benchmark values for indoor daylight quality. The specification of daylight quality for the presented case-studies lies beyond the interest of this study. The aim is to evaluate the three packages on the selected criteria and to provide a general framework that can optimally facilitate the selection between the numerous daylight performance calculation tools that are at our disposal as open-source or commercial software packages.
Further on, a fourth model was chosen as a separate case study. Its geometry combines circular openings on a circular wall and is part of a project currently under development (Figure 4). The model could not be created in Design Builder, and the geometry was imported in Rhinoceros and Ecotect as an .obj file, which was provided by the architectural team. Importing an appropriate model for daylight calculation via gbXML or a connection with SketchUp in Design Builder also proved problematic. As a result, calculation output could be obtained only in Ecotect and DIVA (Figure 5). This last model was further used to monitor the effect of the input parameters, regarding the architectural form as one of them. The tests were performed in DIVA 2.0 and provided an insight into the deviations that should be expected with the change of specific variables. The most important of the variables that were tested and the resulting output under a CIE overcast sky for the same IWEC weather file are listed in Table 2.
The above listed results are some of the tests that were carried out in order to specify the influence of the precision settings, the grid density and the material properties on the output. Hereby we set as low precision: ab (ambient bounces) = 2, ad (ambient divisions) = 1000, as (ambient super-samples) = 20, ar (ambient resolution) = 300, aa (ambient accuracy) = 0.1, geometric density = 70; and as high precision: ab (ambient bounces) = 3, ad (ambient divisions) = 2048, as (ambient super-samples) = 20, ar (ambient resolution) = 512, aa (ambient accuracy) = 0.2, geometric density = 70.
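The two precision presets translate directly into Radiance ambient parameters. The sketch below shows one way such a batch run could be scripted; this is an assumption for illustration (the octree and sensor-point file names are hypothetical), not part of the study's workflow.

```python
import subprocess

PRESETS = {   # Radiance ambient parameters as reported above
    "low":  {"ab": 2, "ad": 1000, "as": 20, "ar": 300, "aa": 0.1},
    "high": {"ab": 3, "ad": 2048, "as": 20, "ar": 512, "aa": 0.2},
}

def rtrace_command(octree, preset):
    p = PRESETS[preset]
    return ["rtrace", "-h", "-I",                     # irradiance at sensor points
            "-ab", str(p["ab"]), "-ad", str(p["ad"]), "-as", str(p["as"]),
            "-ar", str(p["ar"]), "-aa", str(p["aa"]), octree]

if __name__ == "__main__":
    with open("sensor_points.pts") as pts, open("irradiance.dat", "w") as out:
        subprocess.run(rtrace_command("model.oct", "high"), stdin=pts, stdout=out, check=True)
```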
Figure 3: Case study 3 as modeled in the three tools; from left to right, in Design Builder, Ecotect, and DIVA.
Figure 5: Case study 4 as calculated in Ecotect (left) and DIVA (right).
Table 4: Simulation output, average DF (%), models 1 to 4.
Output av. DF (%)                 Design Builder v.3.0.0.105   Ecotect 2011/Radiance   DIVA 2.0/Rhinoceros
Model 1 (15.993 meshes)
  Ground Floor Zone 1/2           3.916/2.86                   5.385                   5.5
  Second Floor Zone 1/2           4.339/3.248                  4.366                   3.91
Model 2 (23.580 meshes)
  First Floor                     2.194                        2.2                     3.25
  Second Floor                    2.112                        2.34                    3.35
Model 3 (136 meshes)              6.378                        7.66                    7.94
Model 4 (74 meshes)               Not obtained                 0.95                    1.07
the output. The grid density is important, yet a me- building forms as well as the differences in preci-
dium density with a point-to-point distance = 0.1m sion, even when the input settings appear identical.
is enough to provide a reliable output. For double Moreover, the information that we can obtain with
density (point-to-point distance = 0.05m) the effect one and only simulation from each program, varies
on the results was at the range of 1.11%, meaning significantly (Table 3). The analysis of the results of
that for models consisting of a large number of sur- test-model 4 has already provided a ranked list on
faces, an extremely dense grid can be safely avoid- the deviations that we should expect when altera-
ed. The most important material settings are ranked tions in the input parameters occur.
in the following line from the most to the least im- As seen in the results from model 1 (15.993
portant: visible transmittance of glazing, reflectivity meshes), the setting on the principle of zones does
of the walls, reflectivity of the floor, reflectivity of not facilitate the acquisition of direct information
ceiling. on a specific floor. The averages obtained from the
Tables 3 and 4 present the output from the four zones hereby do not provide clear input for the
case studies as simulated in the three software pack- design team. At the same time, the differences be-
ages. The differences in the output are indicative of tween Ecotect/Radiance and DIVA range between
the deviations that can result from the different abil- 2 and 10%. In the second model (23.580 meshes)
ity of the software to simulate detailed and complex the deviations are bigger; The difference between
Abstract. The central theme of the paper is the introduction of hands-on tools showing
the integration of information technology within a postgraduate study program (MAS
LA) for landscape architects. What has already become a part of the discourse in the field
of architecture – generic design – is now also finding more resonance in the context of
large-scale landscape architectural design. The educational backgrounds of landscape
architects, however, often do not match the standard of those of architects. A solid
background in the innovative use of information technology, especially computer-assisted
design and CAD/CAM construction, exists only in a preliminary state at most universities.
The critical arguments for the choice of the selected medium and for building up a
continuous digital chain stand in the forefront here. The aim is not to improve the quality
of the landscape design through the variety of the applied tools, but rather through their
sensible use. Reflections as well as questions of method and
theory stand at the forefront of our efforts.
Keywords. Design tool development; computational design research and teaching; new
design concepts and strategies; parametric and evolutionary design.
BACKGROUND
At the Department of Architecture of ETH Zurich, Chair, Planning of Landscape and Urban Systems of
both students and researchers have the newest Professor Grêt-Regamey (IRL). The LVML is equipped
technical equipment and software at their disposal. with special devices for large-scale data collection,
Optimal networking with professionals in the area of for example a 3D landscape scanner with 2 km
construction as well as CAD/CAM production allow range and a drone. To this end, various software and
us to offer courses in the curriculum that allow ex- hardware solutions are combined experimentally in
perimentation with the newest techniques and ma- order to investigate new boundaries of perception
terials. This excellent infrastructure is supplemented and illustration of the built environment (Figure 1).
with the advanced resources of the Landscape Visu- Professional partnerships to the developers of soft-
alization and Modeling Lab (LVML) founded by the ware and hardware solutions as well as experts in
Chair of Professor Girot in cooperation with the PLUS the areas of landscape and urban planning allow for
a hands-on examination and implementation in the more and more incapable of understanding and vis-
various research areas. The difficulty in teaching lies ualizing complex landscape designs. Moreover, the
not in the lack of equipment, but rather it is seeing level of expertise in the daily use of digital tools is at
this digital overload as a new challenge in a positive a level that is no longer acceptable for an efficiently
sense. organized office. In order to close this gap, the MAS
LA (Master of Advanced Studies in Landscape Archi-
INTRODUCTION tecture) Program of Professor Christophe Girot was
At present, information technologies are an essen- newly conceived in 2009 in terms of content, both
tial component of design and building construction. methodically as well as didactically. To this end the
Contemporary architecture and large-scale land- former design-specific focus was transferred to the
scape architecture as designed by top offices would learning and use of up-to-date tools.
not be thinkable without them. Without computer-
assisted manufacturing and logistics, modern form The MAS LA Set-Up
language and structural solutions would hardly be The course of studies is divided into themed mod-
realizable. Meanwhile, the software and hardware ules, workshops and one concluding synthesis
involved has become so sophisticated that the stu- module. The modular structure allows a concentra-
dents’ generally increased computer skill levels suf- tion on individual themes, which can be combined
fice for an architectural program at a technical uni- within the framework of an individual project as the
versity. At present, a heated discussion on the concluding thesis module. The main focus of the
level of education is taking place especially in the program is not the acquisition of new software skills
field of landscape architecture within Switzerland. but rather the integration of cutting-edge modeling,
In contrast to architecture offices, practicing land- visualization and presentation technologies as de-
scape architecture offices, especially those in Conti- sign tools within the field of landscape architecture
nental Europe, criticize that university graduates are (Figure 2).
Each module begins with a phase where new should be applied and recombined to explore new
techniques are learned. In this phase, individual ex- design methodologies in their final project (Hagan,
ercises are connected to current issues in landscape 2008). The concluding module of the postgradu-
architecture. The students are encouraged to rec- ate program acts as a test case for the questions or
ognize global as well as local economic and socio- agenda, which have been defined throughout the
logical demands and integrate them in their designs teaching year.
using and connecting the learned tools. The achieve-
ment lies in the diligent selection and connection of Parametrism – the Solution for All Prob-
the technology with the environment we live in. The lems in Architecture?
critical debate regarding issues of sustainability in Ever since the 11th Architecture Biennale in Venice
conjunction with large-scale design work in urban in 2008, Patrik Schumacher’s postulated concept
landscapes plays an essential role next to the techni- “Parametricism as Style - Parametricist Manifesto”
cal aspects. In the second part of the module, par- has been all the rage. For many students, complexity equals
ticipants grapple with complex problems, which will quality. The use of parametric tools, e.g. Grasshop-
be discussed during a concluding presentation. The per, is often seen as the solution to conceptual prob-
sequence of modules starts with modeling and CNC lems. Our desire is to show students solutions and
production followed by visualization, programming, approaches for choosing the right tools for
GIS, applied programming and ends with media and the design problem at hand in order to test their
photography. ideas efficiently and unconventionally, realize them,
We challenge the students to go beyond the as well as also later generate suitable formats for the
boundaries of conventional domains and test the construction process. Here the learning of specific
tools in analysis, design, and visualization. The pro- software does not stand in the forefront but rather
grams and different CAAD/CAAM techniques, which the learning of a new way of thinking that under-
the students have become acquainted with in the stands the tools as integrative design tools (Mitchell,
different modules, complement each other and 1990). The sequential structure of the MAS LA pro-
maintained absolute control over the impact of de- ied processing possibilities. The user can fall back on
sign changes. the entire toolset of image manipulation software
The Processing script of MAS LA student Chris- in order to influence the height information. The
tine Baumgartner allows one to manipulate the Processing script itself provides tools like brush-
height of the cones and the terrain lying underneath es for localized work as well as global ones, such as
independently from one another (Bader-Natal, the blur filter. The application allows a view over the
2010). The heights were saved in two separate depth area with an orbit camera. Any current condition can
maps that represent the relative heights through be saved with the pressing of a button as an image
gradations of gray. The height of the cones is always and the model as a DXF to be used later on for gen-
taken relative to the terrain (Figure 3). The sum of erating a visualization (Figure 4).
the cone volumes can be calculated at any time and
in conclusion adapted so that it corresponds with Theoretical Programming
the volume of the excavation material. The simple Going one step further, we introduced in spring
data modeling of the heights in the form of gray- 2013 the first time an intensive 3-day workshop for
scale images allows for simple data saving and var- the MAS LA students called “Theoretical Program-
Figure 4
Using the generated data to
automate the visualization
process (student: Christine
Baumgartner).
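The data model behind the script can be summarised in a short sketch (Python with NumPy rather than the student's actual Processing code; the resolutions and scale factors are assumptions):

    import numpy as np

    # Two grey-scale depth maps (0..255), as described above: terrain heights and
    # cone heights, the latter always taken relative to the terrain.
    terrain = np.zeros((100, 100))
    cones = np.zeros((100, 100))
    cones[40:60, 40:60] = 120          # a hypothetical local edit, e.g. with a brush tool

    SCALE = 0.05                       # assumed metres per grey value
    cone_tops = (terrain + cones) * SCALE   # cone tops follow the terrain

    # total cone volume, to be balanced against the excavation volume
    CELL_AREA = 1.0                    # assumed 1 m x 1 m ground cells
    cone_volume = cones.sum() * SCALE * CELL_AREA
    print(cone_volume, "m3 of cones")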
ming”. The overall objective of this workshop was on the module with the students.
the critical reflection on the implementation possi- The set-up of the workshop was a combination
bilities of programming within the real practice of of lectures and role play in the seclusion of a moun-
a landscape architect. In our experience, these tain hut in the Swiss Alps. Within groups of four, the stu-
projects often fail due to misunderstanding and dents were asked to communicate a complex design
wrong expectations from both sides. problem to a programming consultant. The stu-
After the students found an initial foothold in dents learned different techniques and
programming with the module “Programming methods for bridging and communicating a design
Landscape”, this knowledge was deepened in the to an IT company (Figure 5).
subsequent module “Applied Programming” where The result of this experimental workshop was
first applications were searched for within known surprisingly positive. The students understood
CAD workflows. The results of the past few years, through this playful attempt the problematics and
however, have shown us that the students have no could define potential application fields of pro-
problem creating their own programs; however, they grammed solutions in their design. The feedback
often do not have the fundamental understanding of the students pointed out the importance of such
for their necessity and potential within professional reflections and training for real practice. Over
practice. the three days they learned different techniques, such as the
In order to make a convincing appearance in use of UML, CRC cards, class diagrams and story-
a professional context, it is necessary to be able to boards, to prepare the requirements of the landscape
speak the language of the other profession. To this architects so that an IT company could program
end, we supplement our teaching team with a com- the software.
puter specialist who provides a broader theoreti- The Sandbox Tool
cal background. Programming paradigms of greater The Sandbox Tool
scope, for example object-oriented, automata-based Another aspect that we are investigating at the mo-
and genetic programming will also be presented ment is the simplification of workflows through the
(Hight, 2008). The students become acquainted with unconventional linking of existing software and
concepts such as spring systems, shape grammar, hardware. What happens when traditional manual
Lindenmayer systems and agent systems. techniques are combined with state-of-the-art
At the same time, we take advantage of this CAAD/CAAM technologies that are adapted to the
short but intensive time together in order to reflect workflow? Do the students accept such simple
tools as part of their design workflow? We developed our own software that allows
Within the framework of a case study, an exist- one to generate two-dimensional analyses and visu-
ing topography is created in a milled negative mod- alizations directly. Different analytical methods have
el. The students could use this as the formwork, in been programmed to support the students in analysing
order to create the same point of departure at any their design proposals:
time. The knowledge of this possibility frees the We supply the designer with an elevation map,
students to work in a very experimental way. All including a separation of the gradients above and
manual techniques are allowed in the modeling of beneath the water table. Additional features include
the designs. If the designer wants to capture a state a visualization of the slopes and normalized lighting
of the model, the student takes two photos from conditions for all the models (Figure 7).
different viewpoints, which are then transferred As a result, the students never work on a com-
through photogrammetry into a computer network puter 3D model but rather focus their work directly
model where they can be digitally stored and ana- on design statements in plan. The ‘analog’ sand
lyzed (Figure 6). model thereby maintains its significance and be-
Figure 7
LandscapeAnalysis_flooding:
3 Waterlevels composed with
Contour Curves from different
Layers of the Analysis Tool.
Elective Course Students 2012.
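A sketch of the kind of two-dimensional analysis the tool provides (Python with NumPy; the height field, water level and cell size are assumptions, not the authors' implementation):

    import numpy as np

    elevation = np.random.rand(200, 200) * 4.0   # photogrammetric height field, metres
    water_level = 1.5                            # assumed water table

    above = np.where(elevation >= water_level, elevation - water_level, 0.0)
    below = np.where(elevation < water_level, water_level - elevation, 0.0)

    # simple slope estimate from the height-field gradients (cell size assumed 1 m)
    dy, dx = np.gradient(elevation)
    slope_deg = np.degrees(np.arctan(np.hypot(dx, dy)))
    print(above.max(), below.max(), slope_deg.max())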
GENERAL FRAMEWORK
This paper is focused on the suitability of paramet- ever, they could also see extreme approaches such
ric design tools for the generation of a tower de- as a ready sketch design followed by modeling of the
sign that consists of 150000 PET bottles. The tool is same in the computer, the use of a parametric tool to ac-
taught and used in a 6-day workshop, and is embed- tually generate the design, and the switch back and
ded in the context of an experimental collaborative forth between the two tools. Our report differs in the in-
design studio between two faculties of architecture tegration of physical material tests into the task and
of two different countries. in the tested group, which consists of rather novice students.
The problem lends itself well to a parametric ap- Before the workshop we invited expert users
proach, as it concerns a composition of many similar (colleague teachers of those programs) of various
units and students need to have control over the CAD programs (3DStudio Max, Revit, AutoCAD,
total number of components. However, parametric ArchiCAD, and Rhino) to create an arbitrary three-
tools may not be sufficient and the only method of dimensional structure of 150000 similar objects. The
designing suitable in all phases of this task. We ob- choice of object and composition were left com-
served that our students naturally hand-sketched as pletely open to the experts (Table 1).
well. The precedence we could find in Sanguinetti Manipulation of the designed object failed due
and Abdelmohsen (2007), where the authors suc- to the computer capacity in three of four programs.
cessfully describe integration of sketching and para- From their feedback we selected Rhino in combina-
metric modeling in conceptual design task. How- tion with Grasshopper as the most promising tool in
Second semester: focus on design tools dents could re-use the earlier gained knowledge
In the second semester the project was developed from their colleagues and shift the start point, so
at our faculty only, but we had consultation sessions that we could start designing earlier. We also intensi-
with advisors from the partner university. A different fied the switch between design and material experi-
group of students than in the previous semester took part ence according to the scheme in the table (Table 2).
in this studio. In the first semester we mainly fo- Most of the time during the semester was de-
cused on feasibility of the project and did informal voted to laboratory tests of the PET bottles, where
material tests and based designs on these findings. students got the reliable values of the load capac-
The second semester [7] was supported by a 6 day ity of bottles and pull capacity of the cap, which we
long workshop devoted to information visualization report on at CESB13 conference in Nováková et al.
tool lectures and manufacturing. The results were (2013). They also worked on analysing the possi-
regularly communicated over distance to the advi- bilities of connections using PET materials in a sheet
sors in partner faculty and other parties involved in form. We noticed that the designs of the previous
the project. semester were limited to regular shapes such as
The scheme of teaching varied in both semes- cubes, hexagons or cylinders. In the second semes-
ters. In the first semester we allowed the students ter the students obtained more freedom for their
to design only after analyses were ready: analyses PET bottle designs. Towards the end of the second
– design – fabrication. In the second semester stu- seminar (during the workshop) we encouraged stu-
Figure 2
Final projects from the first
semester.
dents with related ideas to group and work out maximum size 2x2x8 m). We found that they could
one project. Half of the students formed teams of easily change their design towards the initial hand
two or three people (Table 3). sketch without losing the awareness of the num-
During the workshop students took their initial bers. Consequently we allowed the students to play
sketches and tried to model them within Rhino. In with their newly acquainted skill. By allowing them
the first two days they learned the basics of Grasshop- the “play phase” within the teaching hours, the stu-
per and understood parametric thinking, on the dents were more motivated to experiment, deepen-
third day we introduced several ways of generating ing their understanding of parametric modeling and
tower-like objects (Figure 3). got quicker feedback from the teachers. Together
After the initial phase they constructed their vir- with this parametric attitude to the problem, the 4th
tual models with the data constraints (150 000 bot- and 5th days were devoted to building the actual pro-
tles, minimum 20 meter height, one component of totypes of the components. Students not only had to
Figure 3
Examples of different pos-
sible principles for tower
generation. Combination
of geometrical shapes and
mathematical formulas.
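One of the generation principles sketched in Figure 3 can be illustrated as follows (Python; every number except the project constraints of 150,000 bottles and 20 m minimum height is an assumption):

    import math

    BOTTLE_H = 0.10   # assumed vertical spacing per ring, metres
    BOTTLE_W = 0.09   # assumed spacing of bottles along a ring, metres

    def tower(radius_fn, height):
        """Place bottles in horizontal rings around a vertical profile curve."""
        positions, z = [], 0.0
        while z <= height:
            r = radius_fn(z)
            n = max(1, int(2 * math.pi * r / BOTTLE_W))
            for i in range(n):
                a = 2 * math.pi * i / n
                positions.append((r * math.cos(a), r * math.sin(a), z))
            z += BOTTLE_H
        return positions

    # a gently tapering, slightly undulating profile, at least 20 m high
    bottles = tower(lambda z: 3.0 - 0.08 * z + 0.3 * math.sin(z), 22.0)
    print(len(bottles), "bottles")   # to be checked against the 150,000-bottle budget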
collect the bottles, but also tried to set the connection OUTPUT OF THE SECOND SEMESTER
between them according to their previous research. The projects of the first semester did not result in
We could observe students sitting by the screens feasible designs which could be actually construct-
and sketching their technical ideas on paper. This lack of expertise in sophisticated CAD tools.
programming-sketching workflow was efficient for We collected several physical prototypes of
them and it proved to be the fastest (Figure 4). We collected several physical prototypes of
Because the material was experimental, not PETower building modules together with connec-
all assumptions and designs were successful and tion strategies (Figure 5).
students had to change their virtual models again In contrast to the first semester we did see thir-
according to the real scale component prototypes. teen feasible projects in the second semester, which
Students grouped again in this phase of component were consequently communicated to several parties
generation. in Zurich (municipality, festival organizers, potential
Figure 4
Students programming and
sketching designs simultane-
ously.
sponsors). Three projects were highly realistic and computer modeled in the middle phase of design-
one of them was accepted as a realization project ing, but also kept sketching throughout project develop-
(Figure 6). ment, even when sitting at the computer.
Especially in the phase of moving towards construc-
CONCLUSION tion we could see simple drawings of connection
We collected all sketches, made screenshots of the details or patterns of assembled units. The paramet-
developing projects and made documentation of ric tool proved to be very important. Some students
the 1:1 models. We observed that students not only tried to develop the project with other CAD tools,
Figure 6
Plasticienne: 23 m high, 148
000 bottles.
but failed. Thus they responded very enthusiasti- struction methods with PET bottles was on the other
cally to the parametric tool when they saw how the hand done on paper, and the parametric tool turned out
design remained flexible while also keeping numeri- to be unsuitable. Hand sketching helped in developing
cal control. As we observed, teams of two or three initial designs, visualizing partial ideas, generating
people were able to deliver results, which fulfilled all details and clarifying technical solutions. In general,
conditions of the project, while individuals did not we feel that it is necessary to introduce similar work-
progress beyond trials and failures. For all students shops focused on parametric design in the begin-
it was interesting to see how much the parametric ning or middle of the design studio together with
tool enabled them to deal with the real problem. the same focus on hand sketching; we believe these
Furthermore they were able to follow their initial tools have a better impact when used directly in the
hand sketch graphics, giving it exact numerical and design studios during project development.
structural control (Figure 7).
For our experimental design studio and this FUTURE WORK
special task using parametric tools proved to be of In the next semester of the experimental design stu-
critical importance. Also the hand sketch technique dio we shift focus to the design of shelters and small
seems to be crucial. Parametric tools enabled stu- service structures made of PET bottles for the same
dents to experiment with the tower design, while festival event. We would like to observe the direct
also keeping control over the various constraints impact of using parametric tools on final designs
that apply to the project. Exploring various con- and the role of sketching in this process.
Abstract. The research presented in this paper aims at identifying the cognitive
operations implied in the uses of parametric modeling in architectural conception. The
uses of parametric modeling in architectural design remain emergent and marginal.
How can we teach these practices? The identification of the main cognitive operations of
conception allows us to propose accurate pedagogical objectives. This paper presents: the
research methods employed, the results achieved and propositions for pedagogical tools.
Keywords. Parametric modeling; architectural conception; CAAD curriculum;
architecturology.
INTRODUCTION
Parametric modeling has been part of the computer-aided de- to identify the characteristics of the cognitive opera-
sign process in industrial sectors, such as the automo- tions of conception implied in the uses of paramet-
bile or aeronautic industries, for over three decades. For a few ric modeling in architectural conception. We interro-
years now, the architectural sector has adopted parametric gate here architectural “conception”, which we define
modeling. as the cognitive aspect of design activity.
Visual programming languages such as Grasshopper This paper presents: the research methods em-
[1] certainly have something to do with this amazing ployed, the results achieved and propositions for
and growing adoption by architects. Popular among didactic tools.
students and professionals, this plug-in for the Rhinoc- METHODOLOGY
eros 3D modeler enables them to build paramet- METHODOLOGY
ric models without any programming or scripting
knowledge. However, the uses of parametric mod- Context of the research
eling in architectural conception remain emergent Analysis of design practices in architectural con-
and marginal. texts (ours as well as those of Kolarevic, Picon or Lindsey
How is parametric modeling involved in the archi- puter-assisted tasks: complex form finding and rep-
tectural conception process? How can architects be resentation, evaluation, optimization, fabrication,
trained in parametric modeling and visual program- communication, collaboration, etc. We observed
ming languages? These two issues must be clarified.
In order to address these questions, we search communication, collaboration, etc. We observed
The Z translations of the grid points are linked Rajeb had identified some operations implied in
to a set of curves. The curves are linked to the collaborative conception that do not directly give
main ways observed in the existing public measurement to an object. She formalized these op-
place and to the wanted parking places. The erations as “pragmatic” ones (Ben Rajeb, 2012, p.281).
Z altitude of each point is dimensioned to be The following operations that seem to be implicated
close to zero in proximity of a set of curves. in the uses of parametric modelling in architectural
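The dimensioning rule quoted here can be paraphrased in a few lines (Python; a simplified stand-in for the project's Grasshopper definition, with curves reduced to lists of sample points and z_max/influence as assumed parameters):

    import math

    def dist_to_curve(p, curve_points):
        # distance to the nearest sampled point of the curve (a simplification)
        return min(math.dist(p, q) for q in curve_points)

    def z_for_point(p, curves, z_max=2.0, influence=5.0):
        d = min(dist_to_curve(p, c) for c in curves)
        return z_max * min(d / influence, 1.0)   # close to a curve -> close to zero

    ways = [[(0.0, float(y)) for y in range(0, 21)]]            # an assumed "main way"
    grid = [(float(x), float(y)) for x in range(-10, 11) for y in range(0, 21)]
    heights = [z_for_point(p, ways) for p in grid]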
In these different operations we can distin- conception are kinds of pragmatic operations.
guish operations of conception of the parametric Operations of collaboration: In our analysis, we
model from operations of architectural observed the occurrence of two pragmatic opera-
conception. But we can also observe how intricate tions identified by Ben Rajeb: the operation of “pool-
and porous these distinctions are. ing” and the operation of “interpretation”. The
The links and exchanges built between these two “pooling” operation is an operation by which col-
kinds of activity (conceiving the architecture and laborators with different points of view and different
conceiving a parametric model) are interrogated in expertise share information in order to attribute
our research in terms of third operations: the “prag- measurements to an object (Ben Rajeb, 2012 p.286).
matic operations”. This operation occurs in the use of parametric
modelling in architectural conception when differ-
Neither an operation of architectural concep- ent collaborators (for example architects and model
tion nor an operation of parametric manager or parametric design experts, etc.) share
model conception … knowledge about the projects (the architectural in-
“Pragmatic operation”: In her research, Samia Ben tentions, the necessity or potentiality of a paramet-
ric modeller, etc) to link the architectural conception ty, etc.). It operates in parametric modelling among
and the parametric model conception. The “pooling” others, when an architect builds an understanding of
is an operation by which a conceiver translates and the potentiality of the modeller to propose a spe-
negotiates his meaning in order to communicate cific way to conceive a space. It can be observed for
it to a collaborator. example in the imaginary built by Frank O. Gehry
The operation of “interpretation” is an opera- about the parametric modeler CATIA (Lindsey, 2001).
tion by which a conceiver gives a personal mean- Operation of translation in parametric geometry:
ing to their collaborator's discourse and information The translation in parametric geometry is a prag-
(Ben Rajeb, 2012 p.285). It operates for example matic operation specifically observed in the uses of
when an expert of parametric modelling interprets parametric modelling in architectural conception.
the discourse on architectural intention in purpose By this operation, a conceiver shift from one system
to define constraints or parameters of a parametric to another (from an architectural system to a geo-
model. The “interpretation” is an operation by which metric and parametric one and reverse). For exam-
a conceiver gives a personal meaning to information ple, in the case “Topography” previously described,
shared by a collaborator. we can observe a translation from an architectural
Pooling and interpretation are operations aimed relevance “answer with different slopes to the want-
at building some “référenciel opératif commun” (de ed uses” to a relevance for the parametric model “di-
Terssac and Chabaud, 1990) that we can observe in rectly associate the Z positions of the grid points to
shared relevancies and references. a set of curves that position in the space the wanted
Elaboration of cognitive representation of the uses”.
tools: From the case analysis, an operation of elabo- Hugh Whitehead and some of the actors of the
ration of cognitive representation of the tools can be SMG and ARD teams at Foster and Partners talk
identified. These representations are based on inter- about such an activity of “interpretation” (Whitehe-
pretations of a specific tool (its potentiality, difficul- ad, 2009; Freiberger, 2010). We use the term “trans-
tural conception, we can propose some pedagogical sis shows that elementary operations of conception
objectives (Table 1). proposed by architecturology (slicing, dimensioning,
referencing) are accurate to describe the conception
Didactics resources of parametric models. We identify as well some po-
We then defined a pedagogical framework of web rosity between the conception of parametric modeling
resources that architects can exploit during design and architectural conception. These porosities are
process and for training. This framework includes a allowed by third operations: operations of collabora-
general knowledge support and a specific knowl- tion (interpretation and pooling), an elaboration of a
edge support for the visual programming language cognitive representation of tools and a specific op-
Grasshopper. The general knowledge support [2] eration of parametric modeling (translation in para-
provides resources on geometry, computer graphics metric modeling).
and more broadly on applications of computer sci- Thanks to the identification of these character-
ences to architectural design. istics of the uses of parametric modeling in architec-
The Grasshopper resources are gathered into a tural conception, we proposed accurate: -general
library of samples presented with images of possible training objectives, -pedagogical objectives and -di-
produced shapes, a descriptive text with keywords dactic resources.
and obviously the corresponding *.ghx code [4].
This library is proposed as mediation for the use of REFERENCES
Woodbury’s patterns (Woodbury, 2010) by non-ex- Aish, R and Woodbury, R 2005. ‘Multi-level Interaction in
perts in parametric modeling. Parametric Design’. In Proceedings of Smart Graphics, Lec-
ture Notes in Computer Science. Springer, Berlin.
CONCLUSION Ben Rajeb, S 2012. Modélisation de la collaboration distante
This paper presents some of the main results we dans les pratiques de conception architecturale : Caracté-
obtained on the identification of cognitive opera- risation des opérations cognitives en conception collabo-
tions implied in the uses of parametric modeling rative instrumentée. PhD Thesis, Ecole d’Architecture de
in architectural conception. We have interrogated Paris la Villette, Paris.
parametric modeling as an activity of conception in Boudon, Ph Deshayes, Ph Pousin, F and Schatz, F
itself (conception of parametric models). Our analy- 2000. Enseigner la conception architecturale, cours
Günter Barczik
Erfurt School of Architecture, Germany, HMGB Architects, Berlin
fh-erfurt.de/arc/ar/werkschau/master/digitales-gestalten/, hmgb.net
guenter.barczik@fh-erfurt.de, gb@hmgb.net
tal design tools. The models are not just for the fi- projects, and we invite students to bring problems
nal presentations, but also sketch models that as from their more complex projects into the course so
early as possible transfer into the physical what was that they can be discussed and solutions be found.
sketched digitally (Figure 1). This has three main rea-
sons: Firstly, physical models in ‚real‘ 3D space are DESIGN COURSE STRUCTURE AND DE-
much more comprehensible and expose a design’s SIGN TASK SEQUENCE
qualities much better than digital models project- We have structured our design course in three steps.
ed onto a 2D screen. Secondly, even simple sketch In each step, a pavilion has to be designed and pre-
models already start to hint at production chal- sented in two-dimensional representations as well
lenges that become much more important when as in physical models. Before the precise design task
building in 1:1. Today, we think, it is rather easy to is set, we introduce various CAD tools. The design
be seduced by the possibilities of digital tools into tasks themselves then include certain requirements,
conceiving projects that then run into problems conditions and restrictions which invite if not re-
when they come to be realized. Early model-making quire employing the tools just introduced.
makes the students address such possible difficul- In each step, the physical models use less mate-
ties literally at first hand. Thirdly, the transition from rial, but the parts are more laborious to assemble.
digital to physical sometimes comes with mistakes, Where digital models can be made up of geom-
especially when students try out certain techniques etries that are continuous and as large as designers
for the first time, experimenting with production desire, physical models and - even more so - real
tools and materials. Such mistakes can very often buildings have to be assembled from components.
be made productive because the ‚wrong‘ or ‚failed‘ The ever faster development of large scale 3D print-
translations can unintentionally show new aspects ers only partially remedies this, because the printers
of the original that were difficult or even impossible mostly rely on very fine strata which, when viewed
to see there. So a second oscillation occurs between closely, again dissolve the continuities.
the digital and the physical. As software we use Rhino in conjunction with its
A third oscillation is attempted between the lit- Grasshopper Plug-In. Rhino is in the process of be-
tle, simple design tasks in the course and the larger coming the lingua franca for architectural 3D mode-
and longer design projects students undertake in ling, and likewise Grasshopper for simple programming
parallel courses. The tasks we set are aimed to equip of such software.
students with techniques that also serve their larger
MILIEU FOR WORKING AND STUDYING where investigations can be faster and more radical
We chose simple pavilions as the topic for the design within a protected experimental realm isolated from
exercises in order to bridge the gap between exer- various restrictions.
cises dealing with the technical capabilities of the
software and the challenges of architectural de- STEP 1: CURVED FREEFORM SURFACES
sign - context, construction, spatial program, func- AND STRATIFIED MODELS
tionality. Pavilions do incorporate the latter, but to The first pavilion has to have various seating possi-
a degree that can be rather freely chosen by the bilities inside as well as outside, and its roof has to
students, so that there remains space for play and be accessible. It has to be a continuous form, not an
experimentation with the former. We strive to cre- assembly of components: all functional and circula-
ate a playground-like milieu where playfulness, ex- tion elements have to be synthesized and integrated
perimentation and risk-taking are common, so that into one coherent shape. Its physical model has to
students dare to - so to speak - flex the new-found be built from different strata cut manually or with a
muscles they have been equipped with (meaning laser-cutter (Figures 2 and 3).
the new software tools). We encourage the students We introduce free-form modeling tools in two
to make attempts in which they are at first likely to fail. The control points for quick but imprecise shape explo-
learning effect, and the sense of self-satisfaction on ration. Thereafter, surfaces are created from control
the students’ side, seem to us higher this way. For curves - a more laborious but much more precise
the students to be able to easily move conceptually curves - a more laborious but much more precise
we aim to create a ground that is both slippery and and intentional design method. Sculptural and func-
padded so that they can move swiftly and fall eas- tional aspects of the created surfaces are discussed,
ily - but soft. and the relationships between their aesthetical
Our intention is that students transfer the new qualities as objects or public sculptures and their
possibilities explored through the new skills ac- usability as architecture. Categories like ‘furniture’,
quired onto other design projects they are or will ‘house’, ‘wall’, ‘roof’, ‘stair’, ‘ramp’ that appeared fixed
be working on; projects with more numerous and become fluid. A solution space for architectural
realistic requirements in terms of spatial program, design that was compartmentalized becomes a con-
constructability, functionality and relationship to ur- tinuum. The prevalence of purely horizontal surfaces
ban and socio-economic contexts. Our simpler in architecture is questioned and uses for inclined
pavilion designs are intended to serve as test cases, planes found and discussed.
The digital designs are then sectioned into stra- more, students enjoy the possibilities of doubly
ta, stacked vertically or side by side. The stratification curved surfaces.
becomes a design theme in itself: how are the strata We encourage students to see occasional transi-
orientated, and how thick are they, i.e. how many of tion difficulties between digital and physical model-
them are there? Students explore different stratifi- making as ‘happy accidents’ and exploit those as
cations, even non-parallel ones, experimenting with welcome design ideas (Figure 5).
radial and curved arrangements and strata that have
trapezoid instead of rectangular sections (Figure 4). STEP 2: INTERSECTING SPACES AND
Students experience that the stratification can DEVELOPABLE SURFACES
be seen either as an unwelcome tainting of the se- The second pavilion has to be the result of three in-
ductively perfect digital model, or as a means of tersecting shapes. The different source shapes have
structuring the endlessly pliable; making it more dis- to be recognizable in the resultant exterior shape
ciplined and taut. and create different spatial regions inside. These re-
Very often, the resulting designs play with the gions - as opposed to separate rooms - have to be
difference between outside and inside shape and associated with different functions. The hull surface
exhibit rather thick intermediate spaces. Further- has to be developable and built as a shell as thin as
Figure 4
Stratification Studies.
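The stratification step itself reduces to choosing slice levels; a minimal sketch (Python; model height and sheet thickness are assumptions, and the strata could equally be oriented radially or non-parallel):

    # Horizontal slice levels for laser-cut stacking of a digital model, in millimetres.
    def strata_levels(model_height_mm, sheet_thickness_mm):
        return list(range(0, model_height_mm + 1, sheet_thickness_mm))

    levels = strata_levels(model_height_mm=300, sheet_thickness_mm=4)  # 30 cm model, 4 mm sheets
    print(len(levels), "strata, first levels:", levels[:4])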
Figure 6
Intersecting spaces and
developable surfaces pavilion
models.
STEP 3: COMPLEX ROOF AND DIS- simple form are the surface, the geometry to be
SOLVED HULL mapped, the number of u and v separations and the
The third pavilion is more of a roof, i.e. for an archeo- height of the projections. Occasionally, we extend
logical excavation. It has to be a single surface that the definition with more parameters, varying the
changes from convex to concave at least once. In height or leaving the uniform division of the surface
order to fabricate it, the surface, using Grasshopper, behind in favour of more complex patterns.
is populated with a three-dimensional pattern in The de-materialization from Step 1 to Step 2 is
such a way that it is divided into multiple develop- further continued as the resulting surface is perfo-
able surfaces. The population pattern has to include rated so that its holes are larger than its solid parts.
holes so that the resulting populated surface be- The geometrical restraints that were introduced
comes porous (Figure 9). from Step 1 to Step 2 are removed again. The formal
We employ a simple and well-known Grasshop- rated so that its holes are larger than its solid parts.
per definition that divides a surface into a rectangu- struction capabilities from Step 2.
lar grid and maps a given geometry onto the indi- Students study the effects of the geometry of
vidual cells. The parameters in the definition’s most the population modules and the population system
Figure 8
Intersecting spaces and
developable surfaces pavilion
models.
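The described definition amounts to dividing the UV domain into cells and raising a module apex over each cell; a simplified sketch outside Grasshopper (Python; the test surface and the purely vertical apex offset are assumptions):

    import math

    def surface(u, v):                 # assumed doubly curved test surface
        return (u, v, 0.4 * math.sin(3 * u) * math.cos(3 * v))

    def populate(u_div, v_div, height):
        cells = []
        for i in range(u_div):
            for j in range(v_div):
                u0, u1 = i / u_div, (i + 1) / u_div
                v0, v1 = j / v_div, (j + 1) / v_div
                corners = [surface(u0, v0), surface(u1, v0), surface(u1, v1), surface(u0, v1)]
                apex = (sum(p[0] for p in corners) / 4.0,
                        sum(p[1] for p in corners) / 4.0,
                        sum(p[2] for p in corners) / 4.0 + height)
                cells.append((corners, apex))
        return cells

    cells = populate(u_div=12, v_div=8, height=0.15)
    print(len(cells), "cells")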
on the original surface and explore the difficulties are not builders. It is, though, becoming ever more
and possibilities in fabricating modules and surface important as the growing number of design tools
(Figures 10 and 11). Certain combinations of surface and fabrication methods increases the number of
curvature and mapping height easily create self-in- specialists while decreasing the percentage share of
tersections. The definition does not check for those existing skills that any individual can have - thereby
- this would have been too difficult to implement raising the number of specialists and therefore the
within our course structure. need for shared work and communication of goals,
The possibilities of customized mass-production intentions and ideas.
- already hinted at in design step 1 - are explored
and discussed. CONCLUSION AND OUTLOOK
In order to fabricate and assemble the numer- In order to extend existing design skills, we intro-
ous parts that make up the surfaces, the students, duce technical possibilities of CAD software with
after having worked individually in tasks 1 and 2, conceptual and geometrical design tasks. We at-
now form groups of 3-4. So in addition to the CAD tempt 3 oscillations: between technical tools and
techniques, design possibilities and fabrication design possibilities, between digital and physical
methods, teamwork is experimented with: who does models, and between simple architectural designs
what, in which sequence are steps undertaken, how within the design course and the larger design pro-
are communal decisions reached? Such teamwork jects students work on in parallel. These repeated
has always been important in a discipline where, like movements between different modes of working in
with composers but unlike visual artists, designers time weave numerous conceptual strands that be-
Figure 10
Complex roof and dissolved
hull pavilion models.
gin to tie different conceptual regions into a whole. and the work done outside of it, so that the new ter-
In the future, we aim to intensify this weaving, ritories opened up for designing architecture can be
especially of the work done within the design course traversed more naturally.
Anna Pla-Catala
IE University, Spain
http://www.ie.edu/university/studies/academic-programs/bachelor-architecture, http://
www.ie.edu/school-architecture-design/, http://ienudl.wordpress.com
anna.pla@ie.edu
tecture school (programming- fabrication, whether resistance (even fear) towards the reformulation of
hard-coded or graphic-scripting), are the questions design Authorship and what constitutes such notion
and discussions on ‘process-driven’ design that in- today.
evitably and immediately arise. Even if ‘rule-based’ This might explain why implementing DT has
design systems have been mainstream for decades been (in our specific context) reasonably achievable
already in some design contexts even in analogue and successful (by being accepted and willing to im-
form (Eisenman), there still exists an extremely high plement it) in almost every area of the academic CV
Figure 2
NuDL_Fellowship.
but design studio, which is the area where we have initial intuitive hunch by means of the hard-core
found the most resistance. Before jumping into ob- rigor that computational tools entail is such that
vious criticisms however, this fact might have a very the designer must be skilled first, and above all, in
simple explanation. One that lies at the core of the the ‘Logic of Design’ of highly complex systems that
problematics that emerge out of the profound shift comprise -geometric, algebraic and logical- relation-
in architecture-making due to the impact of DT (Fig- ships.
ure 3). As educators, we face a two-fold task ahead of
us: on the one side, to keep up with the fast-paced de-
Instrument vs Method velopment of DTs as intrinsic to themselves (Com-
If structure, construction and representation classes puter Science), and on the other, to focus on the
have welcomed DT’s corpus of knowledge in col- relationship with the corresponding culture of ‘use’
laboration with their own, it is primarily because within Design Practices. What is key, is how to trig-
parametric modeling, programming and digital fab- ger the combination of ‘Intuition and Logic’ both of
rication are mainly valued as ‘Instrument’ and not ‘Ideas and Skills’ in one single but multidimensional
‘Method’. To be more accurate, as an instrument for dynamic ensemble.
improving: a) workflow, b) variable input/output Experience over the past 5-years has proved
and, c) delivery of precise geometric data to be taken that prejudices as to what architectural design
to digital fabrication and/or performance analysis. ‘is’ or ‘ought to be’ still exist. And the introduction
Nonetheless, this fact alone, we argue, merely ‘is’ or ‘ought to be’ still exist. And the introduction
constitutes a slight automation device of otherwise architecture´s education has still to overcome an ex-
traditional and conventional design procedures, by- tensive set of deep-rooted classical values. Most sur-
passing the truly essential foundation of parametric prisingly is the fact however, that these prejudices
and algorithmic thought. do not always come from some of the more estab-
The degree of control necessary to develop an lished layers of the profession (as perhaps expected),
but also, from the collection of ‘a-priori assumptions’ (Code Club co-founder Clare Sutcliffe (Geere, 2012))
that young candidates arrive at architecture school Such initiatives deserve our deepest respect
with... not only about the discipline, but also in re- order to become a Country’s policy for children’s ed-
spect to the digital, and the radical change that ucation; a generation, let’s not forget, that will still
is involved in making a highly ‘strict’ use of what they order to become a Country’s policy for children’s ed-
otherwise have known to be ‘playful’ devices. ucation; a generation, let’s not forget, that will still
At an institutional level and in contrast with the take 10 years approximately to get to Undergradu-
type of architectural education’s resistant attitude ate Schooling.
we have tried to convey, a couple of non-architectur- And this fact alone proves that architecture
al examples are worth noting here. Such projects schools should stop worrying about how to preserve
are born out of a truly honest belief in the capacities traditional disciplinary knowledge modes and cease
of computer code and the new epistemological to have a conservative attitude in order to fully (and
paradigm opened-up by DT. Those are: Code.org rapidly!) embrace programming and fabrication, as
[1] in the USA, and the recent enterprise taken on well as the rest of the vast array of DT.
board by Code Club [2] in the United Kingdom (an Because, to put it very simply: These are our
afterschool voluntary initiative that aims at teaching New Standards. And as such, this is the responsibil-
computer programming to 10-year-old kids). ity of architecture education today (Figure 4).
‘At age 10-11 (on average) children have the
necessary numeracy, literacy and logic skills to learn PART 02_COMPUTATION / PERFOR-
the concepts of coding’ .... ‘Some might argue that MANCE
they have these skills even earlier than that. To be Even if the expected resulting final work to be de-
blunt, ICT lessons today mainly consist of learning livered by an ‘Architect ‘ remains being a physical
Microsoft Office. Are we raising a nation of secretar- structure (a built design), it has indeed become
ies? I sincerely hope not. It’s insulting to children to more than clear that the contemporary architec-
think they can’t handle something more creative, tural model we all participate in (every agent in the
inspiring and powerful than an Excel spreadsheet’ design-to-construction process), is an evolving one
that has become as much cybernetic as material. a clear distinction from the one of the Engineer.
Computation has given the designer an unprec- Hence, computation in relation to performance is
edented degree of Control over the complete spec- evaluated here with an explicit criticism towards sta-
trum of design-build processes. As a design tool, it tistical and self-referential efficiency models as sole
is capable of dynamically defining the global coor- alibis or testing-modes of resulting prototypes.
dinates of a generic continuum, to then yield up to In biology, epigenetics studies how environ-
a specific (intentional) configuration. The criterion for mental factors affect genetic function (genotype).
evaluating which single instance is most suited for a Similarly, ‘rule-based’ design processes have at their
particular design problem is what drives us to the starting point the definition of a robust ‘genotype’
notion of Performance. that can be subsequently refined according to feed-
As a measure of the direct output of a driven back-loops that incorporate further information ex-
process, performance is usually conceptualized as ternal to itself.
the increase or decrease in efficiency of such a pro- consider the Holistic ‘intelligence’ formed by the
cess. Although computation has been incorporated consider the Holistic ‘intelligence’ formed by the
into the discipline of architecture, it has been mainly whole complex set of spatial components (digital,
used for two main tasks: a) to generate complex ge- physical, material, economic, atmospheric, etc.),
ometries that intensify the function of the Formal; computational design ought to develop a model
or, b) instrumentalised as a mere optimization device capable of strategically, tactically and synergetically
without exploiting its ontological/cultural potential relating to its environment. The utilization of Code as
beyond technocracy. design method acquires full meaning only if it dy-
Our mission has been to articulate a digital ex- namicaly integrates the affects of the material con-
pertise for the 21C Architect whose practice is of namically integrates the effects of the material con-
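The genotype-and-feedback idea can be caricatured in a few lines (Python; the evaluation function is a stand-in for any environmental feedback such as a daylight or material score, not a model proposed by the author):

    import random

    def evaluate(genotype):
        # assumed environmental feedback; here simply rewards values near 0.6
        return -sum((g - 0.6) ** 2 for g in genotype)

    genotype = [random.random() for _ in range(6)]
    for step in range(200):
        candidate = [g + random.gauss(0.0, 0.05) for g in genotype]
        if evaluate(candidate) > evaluate(genotype):   # keep changes the feedback favours
            genotype = candidate
    print(genotype)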
Abstract. This paper proposes a complementary approach for the architectural design
studio. By interpreting architecture by means of an interactive (dance) performance as
design task, it combines architectural theoretical examination with the implementation
of new technologies and event realization. This design studio concept integrates
scenography, choreography, sound design and event management, providing workshops
carried out by external and internal experts to give insight into these disciplines and new
tools. The experimental form allows the students to define the specific form within a broad
scope, ranging from a dance performance performed by the students themselves to an
interactive installation. The focus for the students was on dealing with the diverse input
and on the decision-making process and its reflection.
Keywords. Interactive; performance; teaching; collaboration; gesture control.
INTRODUCTION
The prevailing concept of design studios at architec- the essential characteristics into a different medium.
ture schools is that of simulating a design process The concept of this design studio involves a very
in an architectural office. Consequently, a standard broad range of topics so as to incite the students’
task may consist of designing a building or other learning process with a task that represents the
objects in the context of the built environment. By complexity of the design process in architecture. The
contrast, in an alternative approach, several exist- project therefore combines elements of architectur-
ing works combining architecture and dance (Bronet al theory with an extensive number of efficient tools
and Schumacher, 1999; Pekol, 2011) served as a and a basic introduction to choreographic and sceno-
starting point for a new design studio concept. graphic acting. As a collective project, the students in-
In contrast to established concepts, this design volved have to clarify their specific art and technical
studio has an interdisciplinary setting, using new skills and assemble all competences of the group for
media, dance and scenographic elements to trans- a presentable and conclusive stage event.
late architecture into an interactive performance. A The future outcome being an interactive perfor-
central educational aim of the project described in mance – instead of (relatively) theoretic drawings of
this paper was to instruct the students to analyse and an architectural object – fundamentally transforms
abstract architecture in order to be able to transfer the design and working process. Firstly, the students
cisions taken in the different phases of the design The next phase involved introductions to the
process. different hardware and software tools and presen-
As explained above one reason was to collect tations with related topics. While becoming familiar
as much design relevant information as possible. with the different tools and aspects of this project,
Another important aspect – which was communi- the students were given the choice to organise
cated to the students – was to enable the students themselves in three different work groups: costume
to rethink and if necessary to revoke decisions and design, music and technology, and stage design. As
go back to that design phase. themselves in three different work groups: costume
essary coordination themselves and project man-
CREATING AN INTERACTIVE DANCE agement became an important and integral part of
PERFORMANCE this complex work.
The objective of this project was the realization of The main part started after the students had
an interactive dance performance using tools that been introduced to this new area and gotten ac-
would react to the dancer’s movements. The pro- quainted with the tools. The learning process still
ject was based on cooperation with a scenographer played an important role in the students’ work pro-
and a choreographer. In the first phase, the students cess, and it shaped the ideas of what their work
were presented with four different buildings from could look like. Not only were the students intro-
renowned architects. The students had to analyse of interactive dance performances [2,3] that are
the buildings and detect the main architectural of interactive dance performances [2,3] that are
themes. With these themes and metaphors in mind, being developed, but they also had to find their in-
they were asked to find the adequate form in trans- dividual answer in relation to their abilities and the
lating them into a performance (Figure 1). In order selected building.
to facilitate the students’ progress in this new field, The final performances – which will be ex-
we included activities like visiting a final rehearsal plained in more detail below – cover a broad range
at the theatre and a modern dance performance reflecting the different architecture, which they try
[1]. Furthermore, the students were asked to do to translate. They range from the creation of inter-
improvisation exercises in order to develop a basic active illusions (Mies) to an interactive installation
understanding of the language used in dance. based on design methodology (Haller) to a provoca-
tive theatre play (Libeskind) and a colourful dance benefit from dance elements, if the dance elements
performance (SANAA). Some of the works focus on would be performed by themselves or by dancers,
the choreography, others explore the effects, which or if the performance would not include any dance
can be produced on projections via Kinect sensor, elements at all. It was continually made clear to the
and still others play a virtual game creating an aug- students that the choice of tools and elements was
mented reality with the use of markers and specific theirs and that their decisions should be based on
glasses. their evaluation of the adequacy to support the in-
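At its simplest, the interactive element is a mapping from tracked movement to a projection parameter; a sketch (Python, illustration only; the sensor pipeline and value ranges are assumptions):

    def hand_height_to_brightness(hand_y, y_min=0.0, y_max=2.2):
        """Map a tracked hand height (metres, assumed range) to a 0..255 brightness."""
        t = (hand_y - y_min) / (y_max - y_min)
        return int(round(255 * max(0.0, min(1.0, t))))

    print(hand_height_to_brightness(1.1))   # mid-height gives roughly half brightness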
terpretation of architecture with the language of
DESIGN STUDIO ORGANIZATION dance and interactive media.
As the setting and organisation of the design studio The following list presents all tasks that were
is quite complex, it will be explained in detail below. part of the project (Figure 3):
Whereby the emphasis is on the first two phases. • Creation of a performance based on famous
This architectural design studio project involved buildings.
– to a greater or lesser extent – the following partici- • Production of music or music compilation (us-
pants: architecture students (20), scenographers (2), ing Ableton software).
dancers (2), choreographer (1), media artist (1), and • Event management (organization of dance
architecture design tutors (2). performance event, including opening speech,
At the beginning the students were presented catering, designing invitations, stage setting,
with a choice of buildings and were asked to work etc.)
on the “translation” of these buildings in groups of The first phase was a conventional architectural
five. analysis, consisting of research about the history of
The selected buildings where (Figure 2): the building, the architect(s), the most important
• Mies van der Rohe, Barcelona Pavilion, 1929 historic trends and other relevant aspects of that
• Fritz Haller, HTL, Brugg-Windisch, 1966 time. As a kick-off, the professor for architectural
• Daniel Libeskind, Jewish Museum, Berlin, 1999 theory made a presentation in which the buildings
• SANAA, Rolex Learning Center, Lausanne, 2010 were introduced, focusing on the atmosphere.
It should be pointed out that the students where In addition the students were asked to de-
given great freedom in creating their performance. fine the characteristics of the buildings and name
They were free to decide if their concept would three themes and metaphors for each. They were
Figure 3. Organisation of Architectural Design Studio.
CREATION OF PARAMETRIC DANCE COSTUMES
The concept of this interdisciplinary project foresaw the development of the dance costumes using parametric design methods. Therefore, a three-day workshop on Rhino and Grasshopper was held to provide a profound introduction to parametric design. In addition, a broad range of devices was made available to the students in the department's laboratory. This included a 3D printer and a laser cutter; the latter was chosen by the students because of its easy handling and suitability for the task.
As the starting point for developing the costume, the students selected a central element of the choreography, which consisted of the idea that the dancers would open the imaginary boundary, the so-called "fourth wall", between themselves and the audience. The opening up was expressed through the dissolution of the costume, which consisted of two layers. The first layer, a simple fabric band wrapped around the dancer, was slowly unwound. The second layer mainly constituted the costume and was produced with the laser cutter. The pattern was created with Rhino and Grasshopper on the basis of a parametric design pattern.

INTERACTIVE (DANCE) PERFORMANCES
Due to their experimental form, the four student works vary considerably with regard to conception and execution. To expand on this variety in more detail, Table 1 presents a categorization and the following summaries briefly describe the content and differences of the student works.
"MIES in motion" sets its focus on the aspects of materiality, space and perception. Images of the surfaces were projected onto a special screen. The dancer's movement in front of the screen modified the projected graphics. The accompanying music or sounds had been created in analogue form by recording sounds that interpreted the materials used.
The work "Haller interactive" referred not only to the selected building (HTL) but, in response to Fritz Haller's universal design method, focused on the topics of regularity, modularity and order. The students decided that the best representation would be to create an interactive installation resembling a computer game. By playing this interactive game, which was composed of augmented reality and interactive elements, the participants intuitively learned the rules (Figures 4 and 5).
In keeping with its prevalent atmospheric power, the work "Decertatio" interprets Libeskind's Jewish Museum as a theatrical enactment. At the centre of the production stand disorientation, provocation and conflict. Set in a black box, the audience was placed in a square, in the centre of which a seemingly lost dancer danced in a spotlight while an actor recited a text expressing conflicting emotions.
The performance "Colour Feelings" enacts SANAA's Rolex Learning Center as a dance performance with the students themselves as dancers. Each spatial impression is represented by its own scene, characterized by a specific colour and performed by one of the students. Interactive projections that react to the movements reinforce the impression but play a minor role. In each scene one person personifies the visitor, and another person the built environment. The performance ends in a scene where all dancers come together and leave their 'marks', expressed by the specific colours, on their surroundings.

CONCLUSION
Related to the complex task of developing an interactive dance performance, the structure of the design studio was equally complex. As a first evalu-
ation, we can point out that the unexpected combination of theoretical aspects of architecture with the management and organization of a public performance and the integration of modern soft- and hardware applications leads to challenging but also very stimulating tasks for students. Setting up this project therefore activates competences in various ways. On the one hand, it brings students into close, direct and playful contact with software and techniques like rapid prototyping, gesture recognition, sound editing or visual programming. On the other hand, it establishes a strong and intense link to architecture itself and forces students to reduce the intentions of architects and their buildings to very essential statements. Combined with the high motivation of the participants, the management of a public performance requires cooperation on different levels and functions. In this context, the project supports students' skills in cooperation, time management, budgeting and interaction with a group of specialized participants.

REFERENCES
Bronet, F; Schumacher, J 1999, 'Design of the Movement: The Prospects of Interdisciplinary Design', Journal of Architectural Education, 53(2), pp. 97-109.
Forsythe, W 1999/2003, Improvisation Technologies: A Tool for the Analytical Dance Eye, (CD-ROM), ZKM, Karlsruhe.
Laban, R von 1991, Choreutik: Grundlagen der Raumharmonielehre des Tanzes, Noetzel Verlag, Wilhelmshaven.
Pekol, B 2011, 'BodyCAD: Creative Architectural Design Through Digital Re-Embodiment', ISEA 2011.
[1] http://www.kulturverein-tempel.de/index.php?id=355/
[2] http://anarchydancetheatre.org/en/project/seventh-sense/
[3] http://www.wedream.co/interactive-dance-performance-2/
Abstract. Until recently, design teams were constrained by tools and schedule to only be able to generate a few alternatives, and to analyze these from just a few perspectives. The rapid emergence of performance-based design, analysis, and optimization tools gives design teams the ability to construct and analyze far larger design spaces more quickly. This creates new opportunities and challenges in the ways we teach and design. Students and professionals now need to learn to formulate and execute design spaces in efficient and effective ways. This paper describes the curriculum of the course "8803 Multidisciplinary Analysis and Optimization", taught by the authors in the Schools of Architecture and Building Construction at Georgia Tech in spring 2013. We approach design as a multidisciplinary design space formulation and search process that seeks maximum value. To explore design spaces, student designers need to execute several iterative processes: problem formulation, alternative generation, alternative analysis, trade-space visualization, and decision-making. The paper first describes the students' design space exploration experiences, and concludes with our observations of the current challenges and opportunities.
Keywords. Design space exploration; teaching; multidisciplinary; optimization; analysis.
INTRODUCTION
In current practice, the process of designing buildings is rapidly becoming more collaborative and integrated through the use of Computer-Aided Design and Engineering (CAD/CAE) technologies. However, the use of these technologies in the early stage of design is limited due to the time required to formulate and complete design cycles. A new class of technology, involving automated multidisciplinary analysis and design space exploration, is increasing by orders of magnitude the number of alternatives that a design team can generate and analyze (Haymaker, 2011). This creates new challenges in the ways we educate tomorrow's designers and managers in schools of architecture, engineering, and construction. Students and industry professionals must learn to work together to formulate and construct design spaces in order to understand performance trends and trade-offs and to solve issues central to practice.
Georgia Tech's curriculum illustrates an important issue in digital design education. Georgia Tech's Schools of Architecture, Civil Engineering, and Construction offer a variety of courses in design studio, design theory and process, computer-aided design (CAD), building information modeling, parametric design, energy analysis, structural analysis, cost analysis, and decision analysis. However, our Institute lacks integrated courses that help students understand how to work together to systematically formulate, execute, and understand multidisciplinary building design spaces.
Several organizations and associations, such as the American Institute of Architects (AIA) Technology in Architectural Practice [1], the National Council of Architectural Registration Boards (NCARB) award for the integration of practice and education [2], the American Society of Civil Engineers (ASCE) excellence in civil engineering education teaching workshop series [3] and the Associated General Contractors of America (AGC) BIM Education program [4], support the efforts of academic programs to create and implement effective new curricula that bring together students from multiple disciplines, industry professionals, and advanced design technologies to learn to address practical design challenges. To address this need, new curricula are emerging in architecture schools such as Columbia University, Harvard University (Kara and Georgoulias, 2013), the University of Southern California and Stanford University (Gerber and Flager, 2011).
This paper describes new curriculum under development in Georgia Tech's Schools of Architecture and Building Construction that engages architecture, engineering, construction, and computer science students and industry professionals in collaborative multidisciplinary design space construction and exploration processes. The curriculum engages students in a team-based approach to problem formulation, alternative generation, alternative analysis, design space exploration and optimization, and trade-space visualization and decision-making.

METHODOLOGY
The methodology in this course consists of five phases that are described in more detail below: problem formulation, alternative generation, alternative analysis, design space exploration and optimization, and trade-space visualization and decision-making. The students use these phases to construct design spaces for the professional challenges in the semester-long group project.

Problem formulation
In this first phase, we engaged professional designers to present challenges from their own practice that they felt could have benefitted from more exploration if they had been given more time and better tools. Figure 1 and the following text describe the challenges presented by the design teams. The benefits of engaging design teams in this way were twofold. It helped students confront real-world design challenges without needing to spend too much time gathering information about them. It also gave professional designers access to new design space exploration tools and ways of thinking about their challenges.
• Case 1: Cancer treatment center. A new cancer treatment process provides an opportunity to develop a new design methodology. The professional design team found the massing phase challenging because of the very large equipment involved with the new treatment process. Several programming and crane access issues constrained the potential solutions somewhat, but the design team was interested in more systematically exploring the trade-offs of different building massings in terms of their visibility from the highway, energy consumption, daylight factor, sensitivity to adjacent neighbors, and connection to adjacent green space.
• Case 2: Children's hospital. The hospital, located in the Middle East, was conceived to emphasize western healthcare ideas such as patient comfort, equality, and external views. The students were asked to evaluate the current proposal and provide insight into how the geometry and solar shading could be modified to improve solar and daylighting performance, thermal gain, and patient views. The design team focused on the trade-off between designing for solar radiation and daylighting factor; however, other factors contributed to the final evaluation, including total square footage and aesthetic attributes.
• Case 3: Mixed-use tower. The tower, in China, was conceived with the vision of a "Breathing Tower" that uses green energy techniques, including passive lighting and ventilation. The students' goal in analyzing the design for the tower involves optimizing the quality and comfort levels of the occupants. They look at performance criteria such as daylighting, passive ventilation and structural stability, and attempt to preserve the grace and symmetries of the original design aesthetic, while keeping costs at a minimum.
Students first used Wecision's Choosing by Advantages model (Abrams et al., 2013) to model the organizations involved, the goals and constraints they needed to consider, the range of alternatives they wanted to explore, and the preferences on outcomes (Figure 2). They also entered initial estimates of what they believed the outcomes were likely to be, based on intuition.
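As an illustration of what such a problem formulation can capture, the sketch below records stakeholders, objectives, constraints, design variables and intuitive estimates as plain data. It is a hypothetical structure for illustration only, not Wecision's data model or the course material; the class names and the massing-study values are invented.

```python
# Illustrative sketch only (not Wecision's data model): one way to record a
# Choosing-by-Advantages-style problem formulation as plain data, so that
# stakeholders, objectives, constraints, alternatives and intuitive estimates
# are explicit before any simulation is run.

from dataclasses import dataclass, field

@dataclass
class Objective:
    name: str
    unit: str
    minimize: bool            # whether lower values are preferred
    weight: float = 1.0       # relative importance of the advantage

@dataclass
class Formulation:
    stakeholders: list
    objectives: list
    constraints: dict                     # named bounds, e.g. {"site_coverage_max": 0.6}
    design_variables: dict                # name -> (lower bound, upper bound)
    intuitive_estimates: dict = field(default_factory=dict)

massing_study = Formulation(
    stakeholders=["owner", "design team", "neighbours"],
    objectives=[Objective("energy_use", "kWh/m2a", minimize=True),
                Objective("daylight_factor", "%", minimize=False),
                Objective("highway_visibility", "score", minimize=False)],
    constraints={"site_coverage_max": 0.6},
    design_variables={"floors": (3, 8), "footprint_rotation_deg": (0, 90)},
    intuitive_estimates={"energy_use": 120.0},
)
```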
Students then developed a Meta Model (MM) in the Systems Modeling Language (SysML) to describe the structural and behavioral aspects (Reichwein and Paredis, 2011) of their design challenges. The MM is an abstract model of the data of the actual geometric model. It captures the structural aspects of the model, such as domain-specific semantics, attributes and relationships among parts, through block or class definitions. From these definitions, multiple Instance Models (IM) of design alternatives can be generated by changing the parameters. The behavioral aspects of the challenges are captured through activity diagrams that represent the sequence of actions to be performed in order to generate, analyze and select a design alternative, and that describe the generative and analytical systems in their design spaces.
They used SysML Block and Instance diagrams (Figure 3) to describe the alternative's components and relationships that will be important in the analysis, and SysML Activity diagrams (Figure 4) to describe the analysis processes they wish to perform on these models. In these diagrams they explore and communicate the detailed input parameters for analysis tools, as well as the output parameters of the analysis, and whether they are to be minimized or maximized.

Alternative generation
In the second phase, to represent the design alternatives geometrically, students then made associative parametric design models that are driven by the design variables specified in the MM. In some cases custom scripting is also included to enable topological transformations that are difficult to achieve using parametric logic alone. The students tested the parametric model and generated different alternatives by modifying the variable values (Figure 5).
Students used commercial parametric design tools such as Rhino/Grasshopper, Revit, and Digital Project to generate the parametric model. The output of these tools is a set of architectural forms whose geometry and properties are easily modified by changing the parameters.

Alternative analysis
In this third phase, the integration of their parametric model with analysis tools allows students to analyze and evaluate the performances of different alternatives in a design space and compare them based on their performance metrics. To this end, students need to integrate CAD and CAE tools in a way that the data flows between the tools in an automated fashion to reduce design cycle latency. The simulation and analysis tools were selected, based on the performance objectives, inputs, and familiarity, from among available commercial software such as EnergyPlus, Green Building Studio, eQuest, DIVA, and IES VE for energy analysis; SAP2000, GSA Oasys, STAAD, Karamba, and ETABS for structural analysis; and Radiance, Ecotect, DIVA, and Daysim for daylighting simulation. Figure 6 shows student daylight analyses comparing the original design team's design with one of the alternatives generated from their parametric model.
Students were introduced to experimental workflows such as ThermalOpt (Welle et al., 2011) and BiOpt (Flager et al., 2013) that build in data transformations and strategies that help prepare models for fully automated simulations and contain domain-specific knowledge necessary for more efficient optimization. Students were also encouraged to develop their own workflows; for example, students in the high-rise group developed a customized workflow to minimize the total structural weight. The developed workflow is able to calculate the wind pressure on the façade based on ASCE 7-10, calculate the tip deflection at the top of the building, and modify the columns' cross sections until the most efficient sections are achieved (Figure 7). Students in the cancer treatment facility group developed several geometric scripts to analyze designs automatically for visibility from the highway, sensitivities to adjacent buildings, and access to open space (Figure 8).
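The sketch below illustrates the iterate-until-the-limit-is-met idea behind such a sizing workflow. It is not the students' workflow and does not implement ASCE 7-10; wind_pressure() and tip_deflection() are deliberately simplified placeholders, and the H/500 drift limit is just one commonly used serviceability criterion.

```python
# Generic sketch of an iterative member-sizing loop. The load and stiffness
# models below are placeholders standing in for the ASCE 7-10 based wind
# calculation and the structural analysis tool; they are not real code formulas.

def wind_pressure(height_m):
    """Placeholder: pressure increasing with height (kN/m2), purely illustrative."""
    return 1.0 + 0.01 * height_m

def tip_deflection(height_m, column_area_cm2, facade_width_m):
    """Placeholder stiffness model: deflection shrinks as the column area grows."""
    load = wind_pressure(height_m) * facade_width_m * height_m
    return load * height_m / (2000.0 * column_area_cm2)

def size_columns(height_m=120.0, facade_width_m=40.0,
                 area_cm2=400.0, step_cm2=50.0):
    """Grow the column section until a drift limit (here height/500) is met."""
    limit = height_m / 500.0
    while tip_deflection(height_m, area_cm2, facade_width_m) > limit:
        area_cm2 += step_cm2
    return area_cm2

print(size_columns())   # smallest trial section (cm2) satisfying the drift limit
```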
Design space exploration and optimization
Due to the potential size and complexity of building design spaces, analyzing and testing every parametric variation can be impossible. Additionally, many of the design objectives are hard to formalize, and so it is often more fruitful to enable the designer and tool to work iteratively, visualizing and generating aspects of the design space. Hence, in this fourth phase, the students learn to apply computational techniques such as design of experiments and to use optimization and sensitivity algorithms to systematically guide the generation of alternatives. Students used commercial design exploration and optimization tools such as Octopus and Galapagos for Grasshopper, and ModelCenter.

Trade space visualization and decision making
The visualization of performance enables students to engage in computer-based exploration and visualize trade-offs. In this final phase, the students learn how to use Pareto frontiers, performance trends, and sensitivity analyses in order to make informed decisions in guiding the optimization process. They used the built-in tools provided by ModelCenter and Wecision. Figure 10 shows two examples of student approaches to exploring the multidisciplinary design spaces.
At the end of the class, students return to Wecision to identify several prominent alternatives in the design spaces they explored, and to report on the multidisciplinary performance and weigh the importance of the advantages of each alternative. In the end, each alternative is evaluated based on its total advantages.
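A Pareto frontier of the kind the students visualized can be extracted from a table of analyzed alternatives with a few lines of code. The sketch below is independent of ModelCenter and Wecision and assumes each objective has already been oriented so that smaller values are better.

```python
# Minimal sketch of Pareto-frontier extraction from analyzed alternatives,
# assuming all objectives are expressed so that smaller is better.

def dominates(a, b):
    """True if a is at least as good as b on every objective and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_frontier(alternatives):
    """Keep only alternatives that no other alternative dominates."""
    return [a for a in alternatives
            if not any(dominates(b, a) for b in alternatives if b is not a)]

# Each tuple: (energy use, structural weight, negated daylight score) of one alternative.
designs = [(120, 950, -0.61), (135, 900, -0.58), (118, 990, -0.66), (140, 980, -0.55)]
print(pareto_frontier(designs))   # the last alternative is dominated and dropped
```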
Figure 6. Students analyse alternative designs for structural performance, daylight, energy, cost and more.
Figure 8. Students developed a custom process for analysing the project's relationship to adjacent green spaces.
Figure 9. Students developed an optimization process; each team found designs that outperformed the industry-chosen design for the objectives analysed.
and skills to contribute. Each team requires an appropriate mixture and level of domain knowledge in the programs as well as general computer scripting. In future versions of the class we plan simple tutorial exercises early in the class, and to delay choosing teams a few weeks until we have developed a better understanding of student skills and interest.

Separate learning of concepts from applying concepts
We taught students the concepts and tools directly in the context of the industry problems. This was [...]
Figure 11. A final Wecision model that communicates the multidisciplinary advantages of a selection of alternatives in the design space.
Abstract. This article deals with introducing collaborative architectural design into the training of ergonomists at the Master 2 level. The collaborative design workshop aims to confront ergonomists with the difficulties any design project involves, and which challenge architects, designers, engineers and so on: collaboration between people with different skills and different expertise; strong time constraints; the need for their work to converge; working together and/or at a distance; sharing documents; decision-making, etc. The article presents a short review of work carried out in the domains of architecture and design, and of the contribution of ergonomics within architectural projects. We then present the workshop's educational aims, and give details of the way it functioned. Finally, observation results are presented and discussed.
Keywords. Collaborative design; architecture; ergonomics; training workshop.
1. The first is that collaboration in design requires tools to enable designers to develop a broader vision (of the diversity of stakeholders and the plurality of issues involved) and to construct shared references to future, possible human activities, taking account of all the constraints that arise.
2. The second, more exploratory hypothesis seeks to document the way the design collective exists in two distinct situations: when physically present and when working at a distance.

PROPOSED METHODOLOGICAL TOOLS
The design-for-use approach to collaborative design articulates two methodological tools.
The first is a tool for social analysis of the project, used by practitioners of ergonomics to structure their interventions. It details the preoccupations and/or problems expressed by each stakeholder, and their issues; it presents the people likely to be concerned by the preoccupations and/or problems expressed; and it makes explicit relevant elements of the context. Figure 1 provides a schematic illustration.
The second tool is an enlarged method for exploring the possibilities in design, which are structured according to three broad types of contribution:
1. The project management contribution: exploring questions about the will to change and create new things. These address the way the project is piloted and how it develops, considering all the elements deemed relevant: political, strategic, financial, temporal, human;
2. The ownership contribution: exploring questions about how the will behind the project is made concrete in the form of something viable. These guarantee the feasibility of achieving the project on various levels: technical, legal, security, ecological, human;
3. The end users' point of view contribution: exploring questions from the point of view of future, possible activity at the heart of the [...]
Moment 2 – Presence: Sketches taking shape (paper/pencil)
Moment 3 – Presence: Translating drawings and sketches into m2 on the plan (paper/pencil)
Moment 4 – Distance: Abolishing m2 (SketSha – video conference)
Moment 5 – Distance: Reorganizing and finalizing project (SketSha – video conference)
of collective activity, notably under the effect of teachers' interventions, which guided or even reoriented the designers' work:
• Help with the initial definition of the school project: what school? For whom?
• Help with more detailed definition: a school for all; each future user's activity; accessibility;
• Reorientation: from detailed calculation of elements towards the overall meaning of the project: placing sketches in the plan, rather than precise calculations about the size of a staircase.

COLLABORATIVE DESIGN WHEN PHYSICALLY PRESENT, AND AT A DISTANCE
The work of the observers made it possible to follow the collaborative design process and identify the specific aspects of each situation and the role played by different artifacts (Belaitouche et al., 2012; Mateev et al., 2012).
Conflict. When physically present, confrontations relate to the main ideas of the school project and are expressed individually: everyone sets out their arguments and opens them up for debate in the group. New ideas gradually emerge and a consensus forms. When working at a distance, confrontations still relate to the project's main ideas but are expressed by one pair towards the other, situated at a distance.
Withdrawal. When physically present, withdrawal takes the form of a less active role for one of the participants, which has several functions: indicating disagreement, or the wish to start another activity related to the activity going on (e.g. making a drawing while the group progresses with producing ideas). This type of withdrawal turned out to be productive, as it makes it possible to share the drawings which fuel the ideas produced. When working at a distance, withdrawal took the form of disappearing from the camera angle. This was less comprehensible and thus less productive in terms of taking the collective work forward.
Speaking and decision-making. When physically present, the flow of speech enables a certain proliferation of ideas. People occasionally talking over one another can be dealt with in the situation.
Abstract. The rule editor of a parametric shape grammar interpreter is presented. The
problems that arise are discussed along with their solutions.
Keywords. Shape grammar; parametric shape grammar editor; implementation.
INTRODUCTION
Shape grammar implementations, in theory and praxis, have received increasing attention over the last years. Recent projects have been introduced by Yazar and Colakoglu (2007), Trescak et al. (2009), Yue et al. (2009), Keles et al. (2010; 2012), Jowers and Earl (2011) and Grasl (2012). Each project offers some solutions to the general problem, but few implement or describe rule editing capabilities. A complete shape grammar interpreter should support emergence, parametric rules and rule editing via a graphical editor. Here the results of such an effort (Grasl and Economou, forthcoming) are described, with a focus on the peculiarities of the parametric rule editor.
In order to implement a rule editor for a shape grammar interpreter, several difficulties have to be overcome. Mark Tapia (1999) already introduced some ideas concerning the user interface, most of which still hold today. However, the editor described by Tapia did not support parametric rules.
Although ambiguity is sometimes seen as a strength of shape grammars (Stiny, 2006), it is not during rule definition that it should come into play. Here unambiguity is essential. This is perhaps the dilemma of trying to implement shape grammars with binary devices. In the context of a rule editor, ambiguity will result in little more than serendipity. Despite the successes of serendipity, here we will try to restrict it to a minimum. Rather, it is during the match-finding process, while decomposing a shape into its constituent parts, that ambiguity has its place.
As simple as many rules may seem to the human mind when drawn on paper and explained by an accompanying text, it is quite a different matter if a digital computer is the entity trying to understand. It is remarkable how many details the human mind can overlook unbothered, and how many blanks are filled in on the fly in order to understand a rule.
Of course, if one were to sit down and describe a rule in computer code, some ambiguities might fall away; most likely, though, it will take several attempts until the desired result is achieved. In any case this is not the desired solution; rather, the designer should be able to communicate with the computer in a more intuitive language, that of the drawn shape. This is difficult enough for a non-parametric, or rigid, shape, but once parametric matching is allowed things become all the more complicated.
Manual constraints are important to model specific requirements that cannot be covered by a mapping alone. If, for example, a rule should be applicable to rectangles of all proportions, then the topology mapping has to be used in combination with geometric constraints restricting a quadrilateral to a rectangle.
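The sketch below illustrates the kind of formula-based check this implies: a topology match delivers the four corner points of a quadrilateral, and a geometric constraint accepts the match only if the quadrilateral is a rectangle. The helper is hypothetical and is not the editor's actual constraint language.

```python
# Sketch of a geometric constraint restricting a quadrilateral match to rectangles
# of arbitrary proportions, as discussed above (assumed helper, not the editor's code).

def is_rectangle(p1, p2, p3, p4, tol=1e-6):
    """True if the quadrilateral p1-p2-p3-p4 has four right angles."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])
    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]
    corners = [(p4, p1, p2), (p1, p2, p3), (p2, p3, p4), (p3, p4, p1)]
    return all(abs(dot(vec(b, a), vec(b, c))) < tol for a, b, c in corners)

print(is_rectangle((0, 0), (4, 0), (4, 2), (0, 2)))   # True: any proportion passes
print(is_rectangle((0, 0), (4, 0), (5, 2), (1, 2)))   # False: a parallelogram is rejected
```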
CONCLUSION
Implementing a general parametric shape grammar interpreter is not an easy task. Once it is achieved, the question arises of how to feed the interpreter with rules. Of course the rules could be formulated directly in the underlying representation used by the interpreter. In the case at hand this would mean describing the rules as graph grammar rules in the description language provided by the graph grammar engine. While this is indeed a flexible option, and possibly the best approach for some very specific rules, in general it is desirable to be able to specify rules graphically.
The approach presented here builds on the underlying graph representation of shapes. Nevertheless, most of the findings should be applicable to other implementations as well.
Defining the required mapping is not essential, since the same effect can be achieved by manually adding constraints. However, this is tedious, and more often than not a mapping will suffice.
Manual constraints are of course an absolute necessity for a parametric shape grammar interpreter. Visual constraint definition would be nice, but formula-based constraints have proven to be a good and flexible solution. Visual constraints can be added fairly easily if the editor is implemented in a CAD environment that supports such constraints.
Treating rules of the schemas a → b and a → t(a) differently could perhaps be avoided if complex rule definitions based on constructive geometry are possible. Again, things are simplified by offering this additional possibility.

REFERENCES
Grasl, T 2012, 'Transformational Palladians', Environment and Planning B: Planning and Design, 39(1), pp. 83–95.
Grasl, T and Economou, A (forthcoming), 'From topologies to shapes: Parametric shape grammars implemented by graphs', Environment and Planning B: Planning and Design.
Jowers, I and Earl, C 2011, 'Implementation of curved shape grammars', Environment and Planning B: Planning and Design, 38(4), pp. 616–635.
Keles, H Y, Özkar, M and Tari, S 2010, 'Embedding shapes without predefined parts', Environment and Planning B: Planning and Design, 37(4), pp. 664–681.
Keles, H Y, Özkar, M and Tari, S 2012, 'Weighted shapes for embedding perceived wholes', Environment and Planning B: Planning and Design, 39(2), pp. 360–375.
Stiny, G 2006, Shape: Talking about Seeing and Doing, MIT Press.
Tapia, M 1999, 'A visual implementation of a shape grammar system', Environment and Planning B: Planning and Design, 26(1), pp. 59–73.
Trescak, T, Esteva, M and Rodriguez, I 2009, 'General shape grammar interpreter for intelligent designs generations', in B Werner (ed) Proceedings of the Computer Graphics, Imaging and Visualization 2009, pp. 235–240.
Wolter, J D, Woo, T C and Volz, R A 1985, 'Optimal algorithms for symmetry detection in two and three dimensions', The Visual Computer, 1(1), pp. 37–48.
Yazar, T and Colakoglu, B 2007, 'QSHAPER', in J B Kieferle and K Ehlers (eds) Predicting the Future: 25th eCAADe Conference, Frankfurt am Main, Germany, pp. 941-946.
Yue, K, Krishnamurti, R and Grobler, F 2009, 'Computation-friendly shape grammars: Detailed by a sub-framework over parametric 2D rectangular shapes', in T Tidafi and T Dorta (eds) Joining Languages, Cultures and Visions: CAADFutures 2009, pp. 757-770.
Abstract. This paper presents ongoing research whose main goal is to use cork in a customized modular façade system. Cork is used due to its ecological value, renewable character, insulation properties and aesthetic value. The modular system design is bio-inspired by the microscopic cork pattern, and the study aims at reproducing in the façade some of the natural characteristics that make cork suitable for the function it plays in construction. Façades are designed by a generative design process based on a parametric shape grammar, which encodes shape rules and an algorithm to guide the generation. The developed cork modules are part of a back-ventilated façade system which is assembled upon a substructure that reproduces the cork cell structure and enables both the assemblage of the modules to the support wall and the connection between them.
Keywords. Shape grammar; generative design; cork; façade; digital fabrication.
INTRODUCTION
The use of a generative design system enables the generation of multiple solutions based on different scenarios and requirements that would introduce different variables to the system.
Recently, both the fabrication and the design process in architecture are being questioned by the use of digital technologies for the promotion of more efficient buildings. Requirements such as good structural or thermal performance and customization are driving the emergence of new generative processes.
Design assisted by generative processes such as shape grammars allows the customization and optimization of solutions by manipulating parameters. Combining these processes with new digital fabrication techniques enables new products to be designed which are customized, respond to pre-defined requirements and still maintain production costs.
Although architectural quality is not absolutely measurable, there are some specific qualities that are well measurable. Energy efficiency is one of those qualities. The use of materials with good insulation values and the optimization of window openings according to the site's insolation characteristics will improve a specific type of building quality.
Kroes et al. (2008) state that the emphasis upon building performance brings the architecture world much closer to engineering design. According to Gruber (2011), the quality of a final project is defined by the quality of investigation conducted in the important stages of design. The challenge proposed is to combine the expanded vision of the architect with the fulfillment of specific variables, using quantitative criteria rather than just qualitative ones.

FRAMEWORK
REFERENCES
Gil, L 1996, 'Densification of black agglomerate cork boards and study of densified agglomerates', Wood Science and Technology, 30, pp. 217-223, Springer-Verlag.
Kotsopoulos, S.D.; Casalegno, F.; Carra, G.; Graybil, W.; Hsiung, B. 2012, 'A Visual-Performative Language of Façade Patterns for the Connected Sustainable Home-Sotirios', in SimAUD '12: Proceedings of the 2012 Symposium on Simulation for Architecture and Urban Design, pp. 97-108.
Knight, T.W. 2000, Shape Grammars in Education and Practice: History and Prospects, MIT, 14 Sep. 2000. Available at: <http://web.mit.edu/tknight/www/IJDC/>
Kroes, P.; Light, A.; Moore, S.A.; Vermaas, P. 2008, 'Design in Engineering and Architecture: Towards an Integrated Philosophical Understanding', in Vermaas, P.; Kroes, P.; Light, A.; Moore, S.A. (eds), Philosophy and Design: From Engineering to Architecture, Springer, pp. 1-17.
Sousa, J.P. 2010, From Digital to Material: Rethinking Cork in Architecture through the use of CAD/CAM Technologies, PhD thesis, UTL, Instituto Superior Técnico.
Stiny, G.; Gips, J. 1971, 'Shape Grammars and the Generative Specification of Painting and Sculpture', in C.V. Freiman (ed), Proceedings of IFIP Congress 71, North-Holland, Amsterdam, pp. 1460-1465. Republished in O.R. Petrocelli (ed), The Best Computer Papers of 1971, Auerbach, Philadelphia, pp. 125-135.
Velasco, R.; Robles, D. 2011, 'Eco-envolventes: A parametric design approach to generate and evaluate façade configurations for hot and humid climates', in Respecting Fragile Places: 29th eCAADe Conference Proceedings [ISBN 978-9-4912070-1-3], University of Ljubljana, Faculty of Architecture, Slovenia, 21-24 September 2011, pp. 539-548.
INTRODUCTION
A growing number of regulations and standards with which buildings nowadays have to comply has put design performance back on the architectural agenda. This has led to the emergence of a performance-oriented architectural design paradigm in which building performance (regarding sustainability, safety, accessibility, comfort, etc.) becomes equally important as traditional design drivers such as functionality, history or aesthetics (Kalay, 1999). In contrast to the traditional design process, in which performance issues are often dealt with in a post-engineering optimization phase, the performance-based design process takes the performance requirements into account in an early design phase. Recent technological advances in computational design systems allow the integration of performance requirements and have led to an increased productivity of the design process (Petersen et al., 2010). More specifically, added programming possibilities allow the continuous generation and evaluation of parametric variations in order to select (sub-)optimal design solutions (Strobbe et al., 2012).
Such CAD systems are often founded on a geometric representation of the design. However, the current increased emphasis on building performance in architectural design starts to question this central role of geometry in CAD. The designer has to work within the constraints of government rules and regulations to accomplish a good compromise from a wide range of design solutions. Such constraints are diverse in nature and often difficult to translate into a graphical or geometric form. Therefore, parametric modeling so far allows the generation of only quite restricted geometric variations in the model. Furthermore, a geometric representation of design is inappropriate from a computational and automatic reasoning point of view, as it is computationally expensive to identify features to be computed from a set-theoretic representation of a geometric object.
More appropriate in the context of supporting the design process is the combination of a geometric representation and its rule-based representation. This allows the representation of the design through a sequence of design rules that preserve design information which otherwise has to be reconstructed. This representation is more suitable for computer implementation and can allow both (1) automated reasoning on the design and (2) the generation of a design grammar (i.e. a family of designs) that extends restricted parametric variations. These functionalities can support the designer in different types of problem-solving activities by generating alternative or even sub-optimal design solutions within a grammatical paradigm.
Our research investigates to what extent rule-based design systems can provide essential support in the core design activities of architectural designers. This paper describes an implementation of such a rule-based design system, founded on a graph-based shape grammar. One possible graph-based design representation is described, and it is discussed how a grammar of designs can be generated through the application of design rules.

SHAPE GRAMMAR IMPLEMENTATION

Rule production system
A rule-based design system is founded on the collection and management of design 'knowledge' in the form of rules, as is the case in rule production systems. A rule production system generally consists of a database and an inference engine (Figure 1). The database contains all the design knowledge in the form of production rules (knowledge base), together with information about the current state or knowledge (working memory). A production rule is modeled as a transition between a 'before' and 'after' state. The left-hand side of the production rule describes all the preconditions that need to be satisfied, while the right-hand side describes the action of the rule execution. If the production rule's precondition matches the current state of the working memory, the production's action is executed.
The inference engine starts with the facts in the working memory and uses the matched production rules to update, remove or add new knowledge to the working memory. In order to select and execute the production rules, the inference engine contains a pattern matcher, an agenda and an action executer. Firstly, the pattern matcher uses an algorithm to collect production rules whose preconditions are satisfied by the facts in the working memory. The collection of rules resulting from the matching algorithm is called the conflict set. Secondly, the agenda determines the resolution strategy of the conflict set, for example according to priorities assigned to the rules or according to the order in which the rules were written. Thirdly, the action executer performs the production rule's action and removes it from the agenda. This process is continuously iterated, resulting in different production system derivations.
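The following minimal sketch shows how this match-resolve-act cycle can be organized. It is an illustration of the general production system idea, not the system presented in this paper; all names are invented.

```python
# Illustrative sketch of a minimal rule production system with a pattern
# matcher, an agenda and an action executer, as described above.

class Rule:
    def __init__(self, name, precondition, action, priority=0):
        self.name = name
        self.precondition = precondition  # working_memory -> bool
        self.action = action              # working_memory -> None (mutates facts)
        self.priority = priority

def match(rules, working_memory):
    """Pattern matcher: collect all rules whose preconditions hold (the conflict set)."""
    return [r for r in rules if r.precondition(working_memory)]

def resolve(conflict_set):
    """Agenda: pick one rule, here simply by highest priority."""
    return max(conflict_set, key=lambda r: r.priority) if conflict_set else None

def run(rules, working_memory, max_steps=100):
    """Action executer: fire the selected rule and iterate, yielding a derivation."""
    for _ in range(max_steps):
        selected = resolve(match(rules, working_memory))
        if selected is None:
            break
        selected.action(working_memory)
    return working_memory

# Example: a toy rule that keeps doubling a counter while it stays below 10.
wm = {"counter": 1}
double = Rule("double",
              precondition=lambda m: m["counter"] < 10,
              action=lambda m: m.update(counter=m["counter"] * 2))
print(run([double], wm))  # {'counter': 16}
```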
Shape grammar
The use of production systems to provide some form of artificial intelligence has been found useful in several scientific domains. In the domain of architectural [...]
An attributed graph represents (geometric) objects as nodes, and relations between these objects as directed arcs. Several node types are distinguished that represent different objects (vertices, edges and faces) (Table 1). This approach can be extended to an unlimited number of node types that represent different objects (for example: "Solid", "Wall", "Door", "Window", etc.). Similarly, several arc types are distinguished that represent different relations between the geometric objects, for example "hasVertex", "hasEdge", etc. In addition, several attributes are associated with the graph objects in order to store non-topological information: unique IDs are associated with all nodes, coordinate geometry is associated with vertex nodes, etc.
The graph representation contains only the topological information of the shape, which allows the support of parametric shapes. Additional attribute information (e.g. coordinate geometry) is needed to restrict the parametric shapes to specific geometric shapes. Therefore, an attributed graph-based representation ensures a unique mapping between the shape and the graph. An example of the graph representation of a geometric line object is given in Figure 2. The graph consists of two vertex nodes (white), one edge node (black), and two directed arcs from the edge node to the vertex nodes. The line is considered parametric if the coordinate geometry attributes contain parametric values.

Figure 2. Graph representation of a non-parametric line object.
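The sketch below shows one possible encoding of this attributed graph representation of a line: two vertex nodes carrying coordinate attributes, one edge node, and two directed "hasVertex" arcs. The class names are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch (not the paper's code) of the attributed graph representation
# of the line object described above.

from dataclasses import dataclass, field

@dataclass
class Node:
    id: str
    type: str                      # e.g. "Vertex", "Edge", "Face", "Wall", ...
    attributes: dict = field(default_factory=dict)

@dataclass
class Arc:
    type: str                      # e.g. "hasVertex", "hasEdge", ...
    source: str                    # id of the source node
    target: str                    # id of the target node

@dataclass
class Graph:
    nodes: dict = field(default_factory=dict)
    arcs: list = field(default_factory=list)

    def add_node(self, node):
        self.nodes[node.id] = node

    def add_arc(self, arc):
        self.arcs.append(arc)

# Non-parametric line from (0, 0) to (1, 0): the coordinates are concrete values.
# Replacing them with symbolic parameters would make the same graph parametric.
line = Graph()
line.add_node(Node("v1", "Vertex", {"x": 0.0, "y": 0.0}))
line.add_node(Node("v2", "Vertex", {"x": 1.0, "y": 0.0}))
line.add_node(Node("e1", "Edge"))
line.add_arc(Arc("hasVertex", "e1", "v1"))
line.add_arc(Arc("hasVertex", "e1", "v2"))
```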
GRAPH RULES
Parallel to the graph-based representation of the design, the rule set can also be described using the same representation. This graph-based rule representation enables the collection and management of design knowledge with a far greater expressive power than pure data. Once a rule is described, it can be applied in different environments using an inference engine. Graph rules are described as a transition between two graphs, following a typical IF-THEN statement. The left-hand side of the graph rule describes the graph that needs to be matched to the host graph, together with additional conditional statements. These conditions include attribute conditions and negative application conditions (NACs). Attribute conditions define restrictions on the attributes of the graph, while NACs specify requirements for the non-existence of sub-matches. The right-hand side describes the transformation of the host graph. This graph transformation includes deleting or manipulating existing graph objects, creating new objects, and also performing computations on the object attributes.
As an example, the graph-based representation of a shape rule that generates a Koch curve is displayed in Figure 3. The Koch curve is a mathematical fractal curve that is constructed by recursively replacing a line segment with an equilateral triangle. Therefore, the left-hand side of the rule represents a line segment and the right-hand side represents a graph with both modified and new graph objects. The attributes of the new nodes are defined as expressions that take into account the attribute context of the matched host graph (X1, Y1, X2, Y2). Furthermore, an additional attribute condition is implied in order to ensure a minimum length of the generated line segments.
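For readers who prefer to see the rule's effect directly in coordinates, the sketch below applies the Koch replacement geometrically rather than on the graph representation; the minimum-length test plays the role of the attribute condition mentioned above. It is an illustration, not the implementation described here.

```python
# Geometric sketch of the Koch rule: each sufficiently long segment is replaced
# by four segments forming an equilateral-triangle bump; a minimum length acts
# as the attribute condition that stops the rule from applying.
import math

def koch_step(segment, min_length=0.01):
    (x1, y1), (x2, y2) = segment
    if math.hypot(x2 - x1, y2 - y1) < min_length:
        return [segment]                      # condition fails: the rule does not apply
    ax, ay = x1 + (x2 - x1) / 3, y1 + (y2 - y1) / 3
    bx, by = x1 + 2 * (x2 - x1) / 3, y1 + 2 * (y2 - y1) / 3
    # Apex of the equilateral triangle erected on the middle third.
    mx, my = (ax + bx) / 2, (ay + by) / 2
    h = math.sqrt(3) / 6 * math.hypot(x2 - x1, y2 - y1)
    ux, uy = -(y2 - y1), (x2 - x1)            # normal direction
    n = math.hypot(ux, uy)
    px, py = mx + h * ux / n, my + h * uy / n
    return [((x1, y1), (ax, ay)), ((ax, ay), (px, py)),
            ((px, py), (bx, by)), ((bx, by), (x2, y2))]

def derive(segments, steps):
    for _ in range(steps):
        segments = [s for seg in segments for s in koch_step(seg)]
    return segments

print(len(derive([((0.0, 0.0), (1.0, 0.0))], 2)))   # 16 segments after two steps
```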
GRAPH TRANSFORMATION SYSTEM
A graph transformation system is used to allow the stepwise application of graph rules on the original host graph. Among others, AGG is a general graph transformation system written in Java (Rudolf and Taentzer, 1998). Rule application is performed by matching the left-hand side of a graph rule to the host graph and replacing it using a preservation morphism. The AGG system solves the problem of graph matching, i.e. the subgraph isomorphism problem, as a constraint satisfaction problem (CSP). As indicated previously, this feature enables the detection of subshapes, which is an important challenge for shape grammar implementations. Also, the matching conditions are evaluated in order to recognize specific shape features. The preservation morphism describes the mappings of the objects of one graph to those of another, using tags to indicate the mapped objects. If multiple matches are found, all possible morphisms are calculated and stored. The selection of the morphism can happen non-deterministically or using a user-defined sequence (for example through priorities assigned to the rules). The user can go back and forth in this transformation process, and generate multiple alternatives.
In the following example, the initial line shape that is described in Figure 2 and the Koch rule that is described in Figure 3 are used to generate shape grammar transformations. Figure 4 shows several graph transformation steps in the generation process (steps 0, 1, 2 and 10). Rule matching and selection [...]
INTRODUCTION
Research is currently being developed towards the application of the mass customization paradigm to the design and production of ceramic tableware. The experience documented in this paper represents a first mockup of the framework necessary for the implementation of such a paradigm. The ultimate objective is that end users can create their own, highly customized, tableware set.
According to Duarte (2008), the implementation of a mass customization system implies development on three fronts: a design system that encapsulates the stylistic rules of tableware elements, generating the corresponding digital models; a production system that allows those models to be automatically materialized into usable tableware elements; and a computational system that implements the design system and articulates it with the production system.
According to Pine (1993), mass customization can improve a company's competitiveness, allowing it to offer differentiated products to its customers. This research is thus intended to be applied in an industrial context, through collaboration with a local ceramics company.

Methodology
A small-scale implementation of the mass customization paradigm was tested as an exercise in a semester-long course about shape grammars. In this exercise, the shape grammar apparatus, invented by Stiny and Gips (1972), was used as a design system, encoding the rules for the shape generation of tableware [...]
[...] category includes the bowl, the mug and the cup.
Types in the same category were considered formally similar to each other, differing only in terms of dimensions. In the deep type category, if we disregard the handle of the mug and the cup, these types are formally similar to the bowl.
Different types are thus characterized by different sizes, measured both in height and radius. Table 1 illustrates the dimensional relations between the six chosen types within the selected collection, as well as the mentioned distinction between deep and shallow types: in shallow types, the radius is larger than the height, and so the ratio between these dimensions is higher than 1, whereas in deep types it is lower than 1.
The shape similarity among types within the same category is interpreted as a parametric variation of the same entity, justifying the development of a parametric shape grammar.

Functional parts
This first observation has also brought attention to the distinction between functional parts within tableware elements. Let us take the example of the soup plate. Three functions can be identified in its shape: the broad border is used to hold the plate; the soup needs to be contained, and so the dish must have a deeper part for this purpose; and finally, it needs a broad and flat bottom, so it lies steadily on the table. So the three functions are: holding, containing and laying on the table.
The functional parts were analyzed in terms of dimensions, namely height and radius. The relations between dimensions of the different functional parts are described by a functional configuration. The functional configuration for the soup plate, systematizing the example given above, is shown in Table 2.
Different types feature different functional configurations. For example, to be able to contain liquids, the soup plate and the deep types feature a taller containing part than the dinner plate.
Generally, all three functions are present in each type. However, in some types they are assigned to parts other than the main body of the tableware element. For example, in the mug or the cup, the holding function is assigned to the handle. In its current state, this shape grammar only encodes the shape of the main body of the elements. Parts like the handles of the cups and mugs will be addressed in the future.
Table 1. Classification and dimensions of the different types in one collection.

Type           | Charger | Dinner | Soup  | Bowl | Mug   | Cup
Radius (cm)    | 15.50   | 13.50  | 11.50 | 7.00 | 4.50  | 3.75
Height (cm)    | 3.00    | 2.50   | 4.00  | 8.50 | 10.00 | 5.00
Radius/Height  | 5.16    | 5.40   | 2.88  | 0.82 | 0.45  | 0.75
Category       | Shallow types (plates)         | Deep types
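The shallow/deep distinction of Table 1 reduces to a single ratio test, as the small sketch below shows; the helper is hypothetical and uses the table's values only for illustration.

```python
# Small sketch (hypothetical helper, not the grammar implementation) of the
# classification encoded in Table 1: a type is "shallow" when radius/height > 1
# and "deep" otherwise.

TYPES = {            # radius (cm), height (cm), values from Table 1
    "Charger": (15.50, 3.00), "Dinner": (13.50, 2.50), "Soup": (11.50, 4.00),
    "Bowl": (7.00, 8.50), "Mug": (4.50, 10.00), "Cup": (3.75, 5.00),
}

def category(radius, height):
    return "shallow" if radius / height > 1.0 else "deep"

for name, (r, h) in TYPES.items():
    print(f"{name}: ratio {r / h:.2f} -> {category(r, h)}")
```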
Further development of the shape grammar will focus on its extension to other collections. It is expected that, for the same types within other collections, functional configurations, despite featuring some dimensional variation, are somewhat similar, and therefore characteristic of the type. Variations on this configuration are to be registered as other collections are analyzed, and will be properly integrated into the design system.

Initial shape and first derivation steps
This first analysis is incorporated into the first two rules of the shape grammar, which are applied at the beginning of each derivation of a tableware element (Figure 2).
Derivation begins with the initial shape, which is a referential determining the center of the tableware element, as well as the upward direction (h, from height) and the outward direction (w, from width). The general dimensions of the tableware element are introduced as input parameters of rule 1, which creates an object in which the element is inscribed. This object is called the envelope (Figure 3, en). In the two-dimensional view of the profile, the envelope is represented by a rectangle or, more generally speaking, a quadrilateral; since the rectangle will subsequently be distorted, the general term quadrilateral, or quad, is more appropriate.
The initial envelope is subsequently subdivided into the element's functional parts by rule 2, which is parameterized according to the corresponding functional configuration (Figure 3). Rule 2a subdivides the envelope into three functional parts, and can be applied to the shallow types. For the deep types, rule 2b should be applied, which disregards the holding part (pg).
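One possible reading of rules 1 and 2 as parametric operations is sketched below. The functions and the fractions used in the example configuration are assumptions for illustration, not the grammar's actual encoding.

```python
# Sketch of the first two rules as parametric functions (an interpretation, not
# the authors' grammar): rule 1 inscribes the element in an envelope from its
# overall radius and height; rule 2 splits the envelope into functional parts
# according to a functional configuration given as fractions of total height.

def rule1_envelope(radius, height):
    """Return the profile envelope as a rectangle in the (w, h) plane."""
    return {"w": radius, "h": height}

def rule2_partition(envelope, configuration):
    """Split the envelope into functional parts, e.g. laying / containing / holding."""
    parts, base = {}, 0.0
    for name, fraction in configuration.items():
        parts[name] = {"w": envelope["w"],
                       "h_from": base * envelope["h"],
                       "h_to": (base + fraction) * envelope["h"]}
        base += fraction
    return parts

soup_plate = rule1_envelope(radius=11.5, height=4.0)
print(rule2_partition(soup_plate, {"laying": 0.2, "containing": 0.5, "holding": 0.3}))
```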
Derivation of the tableware elements can be split into three phases: initialization (which applies rules 1 and 2 as seen above), base shape definition [...]
Figure 2. First steps of derivation for the soup plate: envelope creation and functional partitioning.
Figure 3. Shape grammar rules for envelope creation and functional partitioning.
Figure 5. Resumed derivation for base shape of soup plate.
Figure 6. Subdivision rule and example of application: subdivision allows for more complex shapes.
Figure 7. Distortion rule and example of recursive application.
Figure 10. Derivation for base shape of the soup plate.
Figure 12. Example of UV mapping: rules are mapped onto surfaces.
Figure 13. Derivation for decoration of soup plate.
[...] operate on the surface's parametric, or UV, space. Rule 10a subdivides the surface into two parts with the same U parameter differential (Figure 14). In the derivation for the selected collection, rule 10a is used recursively to subdivide the plate into eight parametrically equal parts.
Rule 10b subdivides it into three parts, also along U, but in this case the first and third parts have the same U parameter differential, which is different from that of the second, independent part. The parametric relations among the parts are variable. In the selected collection, the second part is larger (Figure 14). However, in the corresponding rule this constraint is not set, so as to allow for a wider range of variation.
Rule 10b uses a label to mark the subsurfaces to which subsequent decoration rules can be applied. Since the use of labels is still under development and lacking consistency, it has not been addressed in this paper.
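The sketch below gives one interpretation of rules 10a and 10b as operations on U intervals, including the label that rule 10b attaches to its middle part. It is an illustration, not the Grasshopper definition used in the project.

```python
# Sketch of the UV-space subdivision idea behind rules 10a and 10b: subsurfaces
# are represented by (u_start, u_end) intervals, and rule 10b labels its middle
# part so decoration rules can later target it.

def rule_10a(interval):
    """Split an interval into two parts with the same U differential."""
    u0, u1 = interval
    mid = (u0 + u1) / 2.0
    return [(u0, mid), (mid, u1)]

def rule_10b(interval, middle_ratio=0.5):
    """Split into three parts; first and third equal, middle independent and labelled."""
    u0, u1 = interval
    side = (1.0 - middle_ratio) / 2.0 * (u1 - u0)
    return [{"uv": (u0, u0 + side), "label": None},
            {"uv": (u0 + side, u1 - side), "label": "decorate"},
            {"uv": (u1 - side, u1), "label": None}]

# Recursive use of rule 10a: the plate split into eight parametrically equal parts.
parts = [(0.0, 1.0)]
for _ in range(3):
    parts = [p for interval in parts for p in rule_10a(interval)]
print(len(parts))        # 8
print(rule_10b(parts[0]))
```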
Similarly to the base shape definition phase, the surfaces resulting from the subdivision operations are to be replaced by more elaborate ones featuring relief-based motifs. For the selected collection, two rules are defined, 11a and 11b, which are to be applied exclusively to shallow and deep types respectively. Both rules apply a slight depression to the target labelled subsurface, a motif which is typical for the selected collection. However, contrary to rule 11b, rule 11a also changes the subsurface's contour (Figure 15).
Recursive application of motif replacement rules to all labelled subsurfaces is the final stage of the derivation, resulting in a design that belongs to the collection's language (Figure 15).

APPLICATIONS OF THE SHAPE GRAMMAR

Parametric modelling
A three-dimensional parametric grammar is difficult to test without some kind of implementation. On the one hand, the combination of several parameters corresponds to a large number of solutions.
Figure 15. Motif replacement rules and application in the soup plate derivation.
On the other hand, some three-dimensional geometric operations, such as the surface mapping operations, are difficult to represent in two-dimensional drawing.
Therefore a computational model was developed in Grasshopper, a visual programming interface that interacts with the geometric modelling software Rhinoceros and allows parametric models to be implemented and therefore evaluated. It should be noted that the Grasshopper model is not considered an implementation of the shape grammar. However, if we consider that the result of the derivation of a parametric shape grammar is a parametric model, then we can argue that we are implementing a derivation. In fact, the Grasshopper model was developed so that rules can be identified as groups of components, in a modular fashion (Figure 16).
With this tool, two derivations of the soup plate were implemented as parametric models. The two derivations differ slightly, having different rules applied in the base shape. Then, for each derivation, two models were generated using different parameter configurations. Therefore, a total of four digital models were generated in Grasshopper (Table 3). These models were to be later produced through rapid prototyping.

3D printing
The digital models generated in the Grasshopper program were materialized through available 3D printing technology. To save material, four quarter-dishes were produced instead of four complete dishes (Figure 17). This was also useful for evaluating the results in terms of their section.
Prototyping the models provided a general first impression of the models being generated, namely in terms of scale and weight. The 3D printed models are especially useful for communication purposes, allowing the project design to be better illustrated, both to faculty members and to potential partners in industry.
Further research will aim at determining if these [...]
Table 3: Digital models of derivations of the shape grammar.
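To illustrate how subdivision rules and parameter configurations of this kind can be treated as modular, composable operations, the sketch below mimics rules 10a and 10b on the plate's U parameter interval in plain Python. It is an illustrative reconstruction under assumed ratios and names, not the authors' Grasshopper definition.

```python
# Illustrative sketch: subdivision rules on the plate's U parameter interval,
# written as composable functions (one function per rule, mirroring the
# "rules as groups of components" organisation). Ratios and names are assumed.

def rule_10a(intervals):
    """Split every U interval into two parts with the same U differential."""
    out = []
    for u0, u1 in intervals:
        mid = (u0 + u1) / 2.0
        out += [(u0, mid), (mid, u1)]
    return out

def rule_10b(intervals, edge_ratio=0.25):
    """Split each U interval into three parts; the first and third share the
    same U differential, the (larger) middle part takes the remainder and is
    labelled for the subsequent decoration rules."""
    out = []
    for u0, u1 in intervals:
        span = u1 - u0
        a, b = u0 + edge_ratio * span, u1 - edge_ratio * span
        out += [(u0, a, None), (a, b, "decorate"), (b, u1, None)]
    return out

# One derivation: rule 10a applied recursively (2 -> 4 -> 8 equal parts),
# then rule 10b labels the subsurfaces that motif rules 11a/11b would replace.
parts = [(0.0, 1.0)]
for _ in range(3):
    parts = rule_10a(parts)
labelled = [seg for seg in rule_10b([(0.0, 1.0)]) if seg[2] == "decorate"]
print(len(parts), labelled)   # 8 equal parts; 1 labelled middle subsurface
```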
CONCLUSIONS
This exercise brought attention to the many questions that should be answered in order for the mass customization paradigm to be applied to ceramic tableware. Focusing particularly on the development of the shape grammar, the manipulation of three-dimensional and predominantly curved shapes poses the main challenge, which needs to be mastered in order to serve as an effective design system.

However, the success of this first mockup is a good indicator for further research, which is planned to develop along the three systems.

Future developments
Some aspects of the shape grammar formalism are to be further addressed to stabilize the design system so it can be extended to other collections and element types, namely the validation of mapping operations, as well as control mechanisms such as labels and parameter intervals. Concerning implementation, although Grasshopper was used in this first mockup, a study must be conducted on the available programming technologies for implementing the design system. Concerning the production system, thorough research on production techniques is to be developed, both for handcrafted and industrial ceramics manufacturing. Last but not least, establishing a partnership with a manufacturer is a key factor for the success of this research.

ACKNOWLEDGEMENTS
Eduardo Castro e Costa is funded by FCT (Fundação para a Ciência e a Tecnologia) with PhD grant SFRH/BD/88040/2012, and by CIAUD (Centro de Investigação em Arquitectura Urbanismo e Design) with financial support for the present publication and corresponding communication.

REFERENCES
Chau, H.H., Chen, X., McKay, A. and Pennington, A., 2004. Evaluation of a 3D Shape Grammar Implementation, in: Gero, J.S. (Ed.), Design Computing and Cognition'04. Kluwer Academic Publishers, Dordrecht, The Netherlands, pp. 357-376.
Duarte, J.P., 2008. Synthesis Lesson - Mass Customization: Models and Algorithms - Aggregation Exams (Agregação). Faculdade de Arquitectura, Universidade Técnica de Lisboa, Lisboa.
Jowers, I. and Earl, C., 2011. Implementation of curved shape grammars. Environment and Planning B: Planning and Design 38, 616-635.
Knight, T.W., 1980. The generation of Hepplewhite-style chair-back designs. Environment and Planning B: Planning and Design 7, 227-238.
McCormack, J.P., Cagan, J. and Vogel, C.M., 2004. Speaking the Buick language: capturing, understanding, and exploring brand identity with shape grammars. Design Studies 25, 1-29.
Pine, B.J., 1993. Mass Customization: The New Frontier in Business Competition. Harvard Business Press.
Pottmann, H., Asperl, A., Hofer, M. and Kilian, A., 2007. Architectural Geometry, 1st ed. Bentley Institute Press.
Stiny, G., 1980. Introduction to shape and shape grammars. Environment and Planning B: Planning and Design 7, 343-351.
Stiny, G. and Gips, J., 1972. Shape Grammars and the Generative Specification of Painting and Sculpture, in: Freiman, C.V. (Ed.), Information Processing 71. North Holland, Amsterdam, pp. 1460-1465.
Abstract. In sports facilities, a grandstand is the structure which provides good sight quality and safe evacuation conditions for the spectators. The grandstand plays important functional and formative roles in sports facilities, especially in large scale stadia. This paper argues that the notion of shape grammar and its computer implementation can solve the difficulties in grandstand design. The authors identify the specific difficulties of grandstand design, then set out the aims of the grammatical computer tool. Afterwards the shape grammar of grandstand design is formulated, and a computer tool is developed based on the grammar. Finally, the paper discusses the application and usage of the grammar and the computer tool in both the early design phase and the design development phase, with a design practice case study of a large scale stadium.
Keywords. Grandstand design; shape grammar; parametric modelling.
INTRODUCTION
In sports facilities, a grandstand is the structure which provides good sight quality and safe evacuation conditions for the spectators. The grandstand plays important functional and formative roles in sports facilities, especially in large scale stadia. Apart from the function and form of the grandstand, the designs of other parts of the stadium, such as the facade surface and the roof, are closely related to the grandstand, and most of the interior rooms are placed under the grandstand. In the very early design phase of a large scale stadium, the design of the grandstand must be considered to accommodate the spectators and the other basic needs of the building. Traditionally, the process of grandstand design tends to be complicated and tedious. Therefore, in the early phase of design practice, architects are likely to reuse an existing grandstand design with similar conditions rather than design a new grandstand for the project. In the design development phase, modification of the grandstand design results in a large amount of documentation having to be remade. Furthermore, the modification of the other parts of the building would be delayed by the grandstand. Three problems are identified in the traditional grandstand approach: How to provide a highly customized grandstand model in the early design phase? How to provide a rapid response to modifications in the design development phase? How to rapidly negotiate the relationship between the grandstand and the other parts of the building?
The contents of Grandstand Grammar
After the identification of the design tasks, the rules can be translated into a shape grammar called Grandstand Grammar (GG). Rules in GG are organized into four groups: rules of row and seat guide curve generation (Figure 1); rules of aisle generation (Figure 2); rules of seat distribution (Figure 3); and rules of elevation calculation and elements translation (Figure 4). Figures 5 to 7 show the process of using GG to generate a single tier grandstand.
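As a schematic illustration of how four rule groups of this kind can be chained in code, the sketch below applies placeholder stages in sequence to a shared grandstand model. The stage contents and parameter values are our own stand-ins, not the actual GG rules, which are only shown in the figures.

```python
# Schematic sketch only: four rule groups chained as stages acting on a shared
# single-tier grandstand model. Stage bodies are placeholders, not the GG rules.

def generate_guide_curves(model, rows, row_depth):
    """Stage 1: row and seat guide curves (here: straight rows offset in plan)."""
    model["rows"] = [{"index": i, "offset": i * row_depth} for i in range(rows)]
    return model

def generate_aisles(model, aisle_every):
    """Stage 2: aisle generation (mark an aisle every n seats)."""
    model["aisle_every"] = aisle_every
    return model

def distribute_seats(model, seats_per_row, seat_width):
    """Stage 3: seat distribution along each row guide curve."""
    for row in model["rows"]:
        row["seats"] = [j * seat_width for j in range(seats_per_row)]
    return model

def calculate_elevations(model, riser):
    """Stage 4: elevation calculation and translation of row elements."""
    for row in model["rows"]:
        row["elevation"] = row["index"] * riser
    return model

# One derivation: apply the four groups in order.
grandstand = {}
for stage in (lambda m: generate_guide_curves(m, rows=20, row_depth=0.85),
              lambda m: generate_aisles(m, aisle_every=14),
              lambda m: distribute_seats(m, seats_per_row=28, seat_width=0.5),
              lambda m: calculate_elevations(m, riser=0.45)):
    grandstand = stage(grandstand)
```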
Figure 3: R15 to R19 are the rules of seat distribution.
Figure 4: R20 and R21 are the rules of elevation calculation and elements translation.
Figure 7: Steps 17 to 19 show the generation of the whole grandstand 3D model.
Figure 8: Main components of the grandstand design tool.
Figure 10: Grandstand design in the schematic design phase (left) and the design development phase (right). Many parameters were changed during the design process; the model can be updated quickly according to the adjustment of the parameters.
Abstract. This paper describes a generative design approach integrating real building data in the process of developing a shape grammar. The goal is to assess to what extent it is feasible to use a reverse engineering procedure to acquire actual building data and what kind of impact it may have on the development of a shape grammar.
The paper describes the use of Terrestrial Laser Scanning (TLS) techniques to acquire
information on the São Vicente de Fora church, then the use of such information to
develop the corresponding shape grammar, and finally the comparison of this grammar
with the grammar of Alberti’s treatise, to determine the grammatical transformations that
occurred between the two grammars.
Keywords. Alberti, shape grammar, shape recognition, design automation,
transformation in design.
INTRODUCTION
This paper is centered on the construction of the shape grammar of a Portuguese church called São Vicente de Fora. For this purpose a point cloud from a TLS survey was used, and part of a church element (a Doric base) was then closely analyzed.

This research is part of a wider project aimed at decoding Alberti's treatise De Re Aedificatoria by inferring the corresponding shape grammar using the computational framework provided by description grammars (Stiny, 1981) and shape grammars (Stiny and Gips, 1972). The goal is to compare the grammar of the treatise with the grammar of actual buildings to determine the extent of Alberti's influence on Portuguese architecture in the counter-reform period (Kruger et al., 2011) and the grammatical transformations (Knight, 1983) that occurred from the original Albertian grammar to the grammars of the actual buildings.

Established in 1147 by King Afonso Henriques, both the monastery and its church of São Vicente de Fora were reformed by King Filipe I in the 16th century. It is believed that these renovations followed drawings of Juan de Herrera, who was in Lisboa in 1580-1583, and the drawings of Filipe Terzi (Soromenho, 1995). The Portuguese architect Baltazar
plot, which goes from point A to point A10. Rule 1 generates recursively a generic structure of the church's main compositional elements in elevation. This grid contains a set of labels A, B, C and D inserted in a horizontal line from the bottom to the highest central line of the barrel vault ceiling. Each label has several sublabels from Kn to Kn-1, with K ∈ {A} and n > 1, n ∈ {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, and K ∈ {B, C, D} and n > 1, n ∈ {1, 2, 4, 6, 7, 8, 9, 10}. The lines containing the point sets {A, D} and {A3, D3} are mirror axes. The equation of this set is L = wr + 4d + 2ic + (3/2)IC + M, where the mirrored part of the main nave is M = 4d + 2ic + (3/2)IC + w1; wr is the width of the church's high chorus, d is the pilaster width, ic is the inter-chapel intercolumn, and IC is the main intercolumn. M relates to a quarter of the church's main nave, measured closer to the transept; w1 is the remaining width, which goes from the pilaster axis to the beginning of the arch. Finally, L is equal to half of the church's main nave plot minus M. In Rule 2 an insertion point (A2) is given to start the generation of a proto pilaster. This point is obtained from the interior of the church structure, both in plan and in section, using the previous rule. Rule 3 calls previously developed grammars to insert a detailed base, shaft and capital. Rule 4 inserts an arch from a lateral chapel and a point B. Rule 5 takes the former arch and inserts a Doric entablature with triglyphs and a point C, using point B as a reference. Rule 6 inserts the barrel vault ceiling on top of the entablature and a point D, using point C as the insertion point. Rule 7 inserts a half chapel, a pilaster and a line with points A3 and C3. Rule 8 mirrors the half chapel using points A3 and C3 to define the mirror axis. Rule 9 inserts a half chapel, a sub-intercolumn (ic) and an axis from point A6 to D6. Rule 10 mirrors two sets of pilasters (with labels A9, A8, A4, A2) using the axis with points A and D as the mirror line. Rule 11 generates the space to accommodate the high chorus using the pilasters with labels A9, A8, A4, A2 and A1'. Rule 12 (e me ma) erases labels. A 3D model of the main nave was generated by applying the grammar rules. The rules are presented in Figure 2.

NEW SHAPES FROM AN OPTIMIZATION PROTOCOL - THOUGHTS ON EMBEDDING
As mentioned above, the experiment described in this paper relates to the extraction of data from the 3D model generated out of the point cloud model
and then the analysis of part of its elements.

There are two main concepts that were taken into consideration in the task of evaluating a line extracted from the point cloud. One is the notion of the LCS system (Figure 3) mentioned in Alberti's De Re Aedificatoria, and the other is George Stiny's notion of embedding.

In Book VII, Chapter VII of De Re Aedificatoria (Alberti, 2011), while describing the Bases and Capitals of the column system, Alberti mentions that these can be constructed from a minimum vocabulary composed of the letters L, C, S, reversed C and reversed S. It is to be noted that Alberti's original treatise edition contains no drawings. The combination of these L, C, S elements and their parametric variation generates the moulds Ovule, Channel, Wave and Gulens. In turn, the combination of these moulds gives different column system elements, such as Pedestal, Base, Column (Shaft), Capital and Entablature. Finally, these might be used to obtain different combinations of Doric, Ionic, Corinthian and Composite style elements. The combinations of these column system elements may produce around 900 different columns. This is the size of the language of columns that can be generated from the LCS system, and which might be recognized using the system shown in the next section. In this way, Alberti was providing a procedure to generate almost all of the column system with a subsystem that was embedded in it. Finally, another function of the LCS system was to provide a location for decorative elements like flowers, leaves and eggs, which are not addressed in this paper.

The column systematization shape grammar that can be developed from the LCS system is a grammar of detail and is identified as a bottom-up shape grammar. The results of the experiment described in this paper support this hypothesis. Apparently the rules from the treatise are to be applied in an almost straightforward fashion. But Alberti established that the designer must use them as pleased in order to achieve "concinnitas", which means, according to him, quantity, proportion, and location.

The rules in use are rules of the type x → x' defined by Stiny, meaning that a shape x is transformed into a similar shape with parametric variations (Stiny, 2011). If we get rid of the parameterization, we may obtain rules of the type x → y, which transform a shape x into another shape y. Both x and y are elements in the index of dimensions i and j, where i ∈ {0d, 1d, 2d, 3d} (that is, points, lines, planes, solids) and j ≥ i. In the LCS vocabulary, C is part of the base that is part of the column, and so on. A general definition of this embedding feature is x → prt(x).

In this particular case, LCS shapes are boundary
elements of the columns. Let us then take the definition rule x → b(x) to encode the transformations that occurred in the design of an element from an original grammar to a transformed grammar.

THE GRAMMATICAL TRANSFORMATIONS - EVALUATION PROCESS AND FEEDBACK
One role of the grammar is to help trace the influence of Alberti's treatise on the design of the São Vicente de Fora church by verifying whether its elements can be obtained from Alberti's rules or some sort of transformation of such rules.

The Transformations in Design framework proposed by Knight (1983, 1994), according to which the transformation of one style into another can be explained by changes of the grammar underlying the first style into the grammar of the second, will be used as the theoretical background.

According to Knight, there are at least four different ways of transforming a grammar, as diagrammed in Figure 4, namely rule addition, rule subtraction and rule changing, which can be designated by the letters A, S, and C, respectively. A fourth transformation type, I, can be added if we consider that a rule can remain unchanged. This transformation I is important for our study because each time such a transformation is used there is strong evidence that the designer was knowledgeable of Alberti's rules, as seen in the Loggia Rucellai shape grammar by Alberti himself (Coutinho et al., 2013). In the São Vicente de Fora grammar presented in this paper, r2, r4, r5 and r7 ∈ I; r6, r10 and r11 ∈ A; r0, r1, r8 and r9 ∈ C; and r3 ∈ S.

As Knight mentions, in rule change transformations C shapes are defined as transposed shapes, that is, as new shapes or as resized and/or repositioned shapes. Rules 2, 4, 5 and 7 are equal to those found in the original Albertian grammar. In Rule 3 the changes verified are in the constituents of the Capital. The disposition of the capitals is similar to those found on the second level of the Palacio Rucellai's façade. In this case, there are simultaneous subtraction and resize-and-reposition transformations. The Shaft, Base and Capital heights are equal to the ones described in the treatise. Rule 6 adds a new element to the grammar, a barrel vault ceiling. It is to be noted that this element does not belong to the column system. Rule 10 adds a new chapel and Rule 11 the high chorus. Rules 8 and 9 apply a mirror but change the axis location. It is to be noted that the treatise does not specify the notion of mirror but of symmetry. Rule 0 and Rule 1 change the location of labels.

These rules manipulate elements that need to be more closely observed. A technique to recognize and analyze sets of curves was used. This technique consists in a Grasshopper code (Figure 5) whose main goal is to extract and compare section lines from an element of the column (a Doric Base torus), and which proceeds automatically in three different steps:

The first step is the extraction of a line section from a mesh surface out of the point cloud.

The second step consists in comparing this
line with a curve previously embedded in the code through the distances between the control points of both lines, which need to be bigger than 0. In the rule x → b(x), b is such that (in this particular experiment) b > 0, and b ∈ B where B ∈ LCS.

Then, the third step analyses the differences in linear distance and rejects the ones that are not in the acceptable range. This last stage is not completely implemented yet. A similar process using canonical representation graphs (Keles et al., 2010) is in use in order to better visually understand the differences and similarities between the topology of different lines. As said above, the control points of the target curve and the points of the section extracted from the cloud are points contained in parallel lines. The distance measured is the segments of such lines. This process is not completely efficient. It might work well for straight lines contained in parallel planes but not in the case of curves in 3D space. It is interesting to note that the process used in this experiment reduces the algebras so that U33 → U12 → U02.

CLOSURE
The contributions of this paper are threefold: first, it is the first Portuguese grammar obtained from Alberti's treatise shape grammar rules; second, it uses a laser survey and the resulting point cloud model (PCM) as a way of transforming the grammar and developing the São Vicente de Fora shape grammar; and third, curves that are sections of parts of the "real" building obtained from the PCM are automatically recognized, analyzed and evaluated, suggesting that this technique might be an efficient way of evaluating large data sets.

The use of such survey data to generate the grammar was of great help, particularly considering the level of accuracy and detail that is possible to achieve with such a method.

The code to automate the shape recognition proved to be helpful, but improvements are necessary, namely the generation of mesh surfaces directly from the PCM in a completely automated way.

The process for choosing the curves from the corpus (that are the models to be merged) needs to be optimized and is not completely defined. So far a linear distance is in use, but the notion of neighborhood (Krishnamurti and Stouffs, 2004) might be of great help in order to understand the kind of transformations that occurred in the application of the rules in different buildings. This task will be the focus of a future research article.

ACKNOWLEDGEMENTS
This work is part of the "Digital Alberti" project funded by Fundação para a Ciência e Tecnologia (FCT), Portugal, and hosted by CES at the University of Coimbra (PTDC/AUR/64384/2006) and by ICIST at the Technical University of Lisbon. The project is coordinated by Mário Krüger. Filipe Coutinho is funded by
Wolfgang E. Lorenz
Vienna University of Technology; Institute of Architectural Sciences; Digital Architecture and Planning
http://www.iemar.tuwien.ac
lorenz@iemar.tuwien.ac.at
Abstract. When Benoît Mandelbrot raised the question about the length of Britain’s
coastline in 1967, this was a major step towards formulating the theory of fractals,
which also led to a new understanding of irregularity in nature. Since then it has become
obvious that fractal geometry is more appropriate for describing complex forms than
traditional Euclidean geometry (not only with regard to natural systems but also in
architecture). This paper provides another view on architectural composition, following
the utilization of fractal analysis. The procedure concerning the exploration of a façade
design is demonstrated step by step on the Roman temple front of the Pantheon by
Appolodorus and its re-interpretation – in the particular case the entrance front of
Il Redentore, a Renaissance church by Palladio. Their level of complexity and range
of scales that offer coherence are visualized by the specific measurement method of
box-counting.
Keywords. Fractal analysis; box-counting method; Pantheon; Il Redentore; Palladio.
INTRODUCTION
This paper has two objectives:
1. The first concerns the description of harmony defined by the appearance of architectural elements of different sizes and scales.
2. The second utilizes the first, introducing an objective comparison method between an architectural design (acting as origin) and its historical followers.
Apart from an analysis concerning the utilization of characteristic architectural elements, the current study focuses on the overall viewpoint specified by a harmonic expression of distributions across different scales. The author uses for the first time a particular fractal analysis method as a measurement of reminiscence, applied to the Roman temple front of the Pantheon (built between 110 and 125 AD by Appolodorus) and the Renaissance temple front of Il Redentore in Venice by Palladio (groundbreaking in 1577). The Pantheon was chosen because Palladio (1984) emphasized the particular importance of that building
Despite differences of overlapping elements, the results nevertheless display a similar range of coherence in comparison to Il Redentore (Table 2 and Figure 2b). Moreover, the medians of the two sets are similar to Il Redentore: the median of set A equals 1.661 (1.677) and for set B it is 1.660 (1.685), with slightly higher interquartile ranges of 2.07 and 1.32 percent. This leads to the conclusion that both façades are characterized by a similar development of architectural elements across a similarly broad range of scales (range of coherence: 1-30 percent with Il Redentore and 1.5-28 percent with the Pantheon). In particular, this means that details of a certain size have their correspondence in both façades, although differences in design are obvious. Il Redentore, for instance, displays not only one but two clearly interrelating Roman temple façades, while the Pantheon consists of two vertically arranged gables. In the case of Il Redentore, niches for statues and the pillars flanking the entrance with their own gables display another additional level.

Concerning the different algorithms, both sets of measurements lead, as is true for Il Redentore, to very similar results (Table 2). The deviation of the data is again low, although this time the minimum R² is slightly lower (0.992 and 0.994) than in the case of Il Redentore (0.996 and 0.997).

Figure 2: Il Redentore and Pantheon: box plot diagram of box-counting dimensions (a) and box size as a percentage of the height of the front view (b).

CONCLUSION
The box-counting method provides an objective comparison method between design solutions, demonstrated here by Il Redentore and the Pantheon. It visualizes the development of roughness across multiple scales and, derived from that, the harmonic relations between the whole and its parts. Both results discussed in this paper show a similar depth of details and a similar level of complexity. Specifically, this means that, even if Palladio changes the
composition of the temple front, the harmonic distribution across all scales is similar to the Pantheon. This proves that, although variations in the reinterpretation occur, Il Redentore nevertheless takes up the same characteristics as its origin of a Roman temple front.

Box-counting reveals similarities and differences between styles with regard to different degrees of roughness and depth of self-similarity. Up to now, the author has analyzed façades, corresponding to a larger distance of the observer. As ornaments are characteristic elements of a building, it would be interesting for future work to deal with a smaller distance as well.

REFERENCES
Bovill, C 1996, Fractal Geometry in Architecture and Design, Birkhäuser, Boston.
Foroutan-pour, K, Dutilleul, P and Smith, DL 1999, 'Advances in the implementation of the box-counting method of fractal dimension estimation', Applied Mathematics and Computation, Volume 105, Issue 2-3, pp. 195-210.
Lorenz, WE 2003, Fractals and Fractal Architecture, Master Thesis, Department of Computer Aided Planning and Architecture, Vienna University of Technology, Vienna.
Lorenz, WE 2009, 'Fractal Geometry of Architecture – Implementation of the Box-Counting Method in a CAD-Software', Proceedings of the eCAADe Conference, Istanbul, Turkey, pp. 697-704.
Lorenz, WE 2011, 'Fractal Geometry of Architecture: Fractal Dimension as a Connection Between Fractal Geometry and Architecture' in P Gruber, D Bruckner, C Hellmich, HB Schmiedmayer, H Stachelberger and IC Gebeshuber (eds), Biomimetics – Materials, Structures and Processes: Examples, Ideas and Case Studies, Springer, Berlin, pp. 179-200.
Lorenz, WE 2012, 'Fractal Geometry of Architecture: Implementation of the Box-Counting Method in a CAD-software', Proceedings of the eCAADe Conference, Prague, Czech Republic, pp. 505-514.
Maertens, H 1884, Der optische Maßstab: oder die Theorie und Praxis des ästhetischen Sehens in den bildenden Künsten, Wasmuth, Berlin.
Mandelbrot, BB 1981, 'Scalebound or scaling shapes: A useful distinction in the visual arts and in the natural sciences', Leonardo, Vol. 14, No. 1, pp. 45-47.
Mandelbrot, BB 1982, The fractal geometry of nature, W.H. Freeman, San Francisco.
Ostwald, MJ, Vaughan, J and Tucker, C 2008, 'Characteristic Visual Complexity: Fractal Dimensions in the Architecture of Frank Lloyd Wright and Le Corbusier', Nexus VII: Architecture and Mathematics, 7, pp. 217-232.
Palladio, A and Beyer, A 1984, Die vier Bücher zur Architektur (I quattro libri dell'architettura, Venice 1570), Verl. für Architektur Artemis, Zürich.
Palladio, A and Davis, MD 2009, Andrea Palladio: L'Antichità di Roma Raccolta brevemente da gli auttori antichi, et moderni; Nuovamente posta in luce (Rom 1554), Universitätsbibliothek der Universität Heidelberg, Heidelberg.
Puppi, L 1994, Andrea Palladio: das Gesamtwerk, DVA, Stuttgart.
Salingaros, NA 2006, A theory of architecture, Umbau-Verlag, Solingen.
INTRODUCTION
The overall goal of the ongoing PhD is the development of a tool able to assist the architect in the rehabilitation design process of the bourgeois house of Oporto, Portugal. The research described in this paper starts with the presentation of a shape grammar simplification focused only on the topology of rehabilitated or in-rehabilitation buildings.

The old center of Oporto should be preserved not only for the knowledge and symbolism present in its built historic heritage, but also for its intrinsic material and economic values.

In this city, Porto Vivo, SRU - Society for Urban Rehabilitation has recently been created, and its mission is to lead the process of urban regeneration. This institution has replaced CRUARB (commission for the urban renewal of Ribeira/Barredo), restructuring its political action.

According to the Management Plan for the Historical Centre of Oporto, a strategic document created in 2010 as a requirement of UNESCO when it revised its classification program of world heritage sites, the historical center consists of 1,796 buildings: 443 in good condition, 649 in average condition, 575 in poor condition and 78 in ruins, with 51 being works in progress. The dominant function is housing, constituting 80% of the buildings (Loza et al. 2010). The bourgeois house is the building type that predominates in this territory.

Among the critical success factors in the report of the 2010 activities of the SRU, we can identify as weaknesses the extent of the territory and the complexity of the task, and as strengths the experience, knowledge and results. The report of the 2011 activities emphasizes the
Figure 1: Street Clérigos (author's photograph).
Figure 2: Street 31 de Janeiro (author's photograph).
Figure 5: First floor existing layout.
Figure 6: First floor proposed layout.
CV Information Processing 71, Amsterdam, pp. 1460-1465. Republished in Petrocelli OR 1972, The Best Computer Papers of 1971, Philadelphia, pp. 125-135.
Stiny, G and Mitchell, WJ 1978, 'The Palladian grammar models', Environment and Planning B, 5, pp. 5-18.
Vieira de Almeida, P (ed) 2008, Apontamentos para uma Teoria da Arquitetura (Notes for a Theory of Architecture), Horizonte, Lisbon.
[1] www.mit.edu/~tknight/IJDC/.
From the treatise to the built work in the design of sacred buildings
Bruno Figueiredo1, José Pinto Duarte2, Mário Krüger3
1School of Architecture, University of Minho, Portugal; 2CIAUD, Faculdade de Arquitectura, Universidade Técnica de Lisboa, Portugal; 3Department of Architecture, Centre for Social Studies, University of Coimbra, Portugal
1http://www.arquitectura.uminho.pt/3751.page-pt, 2http://home.fa.utl.pt/~jduarte/, 3http://woc.uc.pt/darq/person/ppgeral.do?idpessoa=1
1bfigueiredo@arquitectura.uminho.pt, 2jduarte@fa.utl.pt, 3kruger@ci.uc.pt
Abstract. This paper presents research on the use of shape grammars as an analytical tool in the history of architecture. It evolves within a broader project called Digital Alberti, whose goal is to determine the influence of the De re aedificatoria treatise on Portuguese Renaissance architecture, making use of a computational framework (Krüger et al., 2011).
Previous work was concerned with the development of a shape grammar for generating
sacred buildings according to the rules textually described in the treatise. This work
describes the transformation of the treatise grammar into another grammar that can also
account for the generation of Alberti’s built work.
Keywords. Shape grammars; parametric modelling; generative design; Alberti; classical
architecture.
INTRODUCTION
The research described in this paper is part of a larger project called Digital Alberti, whose aim is to determine the influence of Alberti's treatise De re aedificatoria on Portuguese Renaissance architecture, making use of a computational framework (Krüger et al., 2011).

This paper analyses the task of achieving a shape grammar that can contribute to clarifying the influence of Alberti's work on Portuguese architecture of the counter-reform period. Previous work was concerned with the translation of De re aedificatoria's descriptions of sacred buildings into a generative shape grammar (Duarte et al., 2011; Figueiredo et al., 2013). This grammar has shown to be successful in deriving solutions in the same language. However, certain features of Portuguese classical churches are not identifiable in such solutions and, therefore, their source of inspiration remains uncertain.

Several scholars in the history of Portuguese Renaissance architecture report that the Portuguese royal house contracted Italian architects and promoted visits of Portuguese architects to Italy during the 15th and 16th centuries (Moreira 1991, 1995; Soromenho, 1995; Branco, 2008). This fact may have brought architects who worked in Portugal during that period into contact with Alberti's buildings, which were erected in the late 15th century. This fact led us to consider the transformation
Figure 1, which diagrams Sant'Andrea's plan proportional schema. Both analyses revealed three main aspects that differentiate Sant'Andrea's plan from the treatise grammar's generative outcome: (1) cell proportions; (2) the relative proportions between the lateral chapels' openings and the skeleton between them; (3) the rooms that fill the space between lateral chapels. Both the analysis and the subsequent shape rule implications are described below.

(1) Cell proportions
In Book 7, Chapter IV, paragraph two, Alberti describes the principles for defining the proportion of cells in rectangular temples. The rule of the treatise grammar considered these proportions, where cell length (Li) is directly dependent on cell width (Wi):
Li = α Wi ; α ∈ {1, 1 1/3, 1 1/2, 2}.
Sant'Andrea's Li dimension corresponds to 3Wi, resulting in a 3:1 proportion. Although this proportion does not comply with the descriptions in Book 7, it is foreseen in the proportions described by Alberti in Chapters V and VI of Book 9 - Ornament on Private Buildings: "…The method of defining the outline is best taken from those objects in which Nature offers herself to our inspection and admiration as we view and examine them. […] The very same numbers that cause sounds to have that concinnitas, pleasing to the ears, can also fill the eyes and mind with wondrous delight. From musicians therefore who have already examined such numbers thoroughly, or from those objects in which Nature has displayed some evident and noble quality, the whole method of outlining is derived. […]"

In Chapter VI, Alberti refers to and describes in detail the use of musical consonances to determine cell proportions. In synthesis, he defines that the proportions may be either short, long, or intermediate: as short proportions he considers Square (1:1), Sesquialtera (3:2) and Sesquitertia (4:3); as intermediate proportions Double (1:2), Duplicate Sesquialtera (9:4) and Duplicate Sesquitertia (16:9); and finally, as long proportions Triple (3:1), Double Sesquitertia (8:3) and Quadruplus (4:1).

In the same chapter, Alberti describes that concinnitas is reached by the use of musical consonances, but he also considers the use of correspondentiae inatae to establish "certain natural relationships that cannot be defined as numbers, but that may be obtained through roots and powers." Further reading of this chapter enabled the inference of correspondences between certain ratios - (√2:√1), (√3:√2), (√3:√1), (√4:√3) - that can be used to define proportions.

By incorporating the musical consonances in the initial conditions of Rule 1, the grammar will be able to generate a temple with the length Li of Sant'Andrea (Figure 2 left), and with the further integration of the correspondentiae inatae in the set of conditions, further solutions can be achieved by the application of Rule 1:
Li = α Wi ; α ∈ {1, 1 1/3, 1 1/2, 2, 2 1/4, 1 7/9, 3, 2 2/3, 4, √2/√1, √3/√2, √3/√1, √4/√3}.

(2) The proportion of the skeleton between lateral chapels
The proportional relation between the lateral chapels' openings (Wcl) and the walls separating the various chapels (Ws) is described in Chapter IV, Book 7, between paragraphs 4 and 7: "… the bones, that is, of the building, which separate the various openings to the tribunals in the temple - be nowhere less than a fifth of the gap, nowhere more than a third, or, where you want it particularly enclosed, no more than a half."
These parameters and conditions were synthesized in Rule 4 of the treatise grammar by the equation:
Ws = φ' Wcl ; 1/5 ≤ φ' ≤ 1/3 ∨ φ' = 1/2.
The Ws dimension is also dependent on the cell
length Li, which is equal to the sum of the lateral openings plus the widths Ws between them and at the temple's end walls, and it can be deduced by the following function:
Ws = ( Li - Ncl Wcl ) / ( Ncl + 1 ).
Since at Sant'Andrea the proportion Wcl:Ws corresponds to √3:√2 (Table 1), it does not verify the conditions specified for φ' in the initial rule. In a strict understanding of the principles laid out in Book VII, such a non-correspondence could have been considered an error in the Albertian canon. However, several authors (Tavernor, 1985; Kruger, 2011) showed that the use of the proportion √3:√2 to design the chapels' openings and the skeleton could be considered Albertian by introducing the use of correspondentiae inatae in the definition of such a proportion. The subsequent inclusion of such correspondences in the set of conditions of the original Rule 4 (Figure 2 right) results in:
Ws = φ' Wcl ; 1/5 ≤ φ' ≤ 1/3 ∨ φ' ∈ {√2/√1, √3/√2, √3/√1, √4/√3}.

(3) Rooms filling the space between chapels, frontispiece and rear facade
In Sant'Andrea, the spaces in between the row of lateral chapels and the edges of the frontispiece and rear facade form a room connected to the cell that conforms a rectangular plan (Figure 1). This spatial relation was not considered in the treatise shape grammar because it is not described in De Re Aedificatoria. While the addition of one single chapel per facade, as happens in San Sebastiano, results in a relatively evident spatial relation between the lateral chapels and the cell's wall, when several chapels are added to the same facade, such a spatial relation can be configured in several ways. The set of Rules 7 (Figure 3 center) shows the spatial relations translated from the treatise, while Rule 7a' and Rule 7b' (Figure 3 right) show the new spatial relations introduced by reproducing the ones existent in Sant'Andrea.

Sant'Andrea grammar add-ons
According to Terry Knight (1983), to transform a shape grammar, at least one rule addition, rule deletion or rule change has to be performed. Taking into consideration her definition of rule change - "Rule change changes a rule, initial shape, or final state by changing any of its spatial or nonspatial components: spatial relations, spatial labels, or state labels." - the operations performed on Rule 1 and Rule 4 can be considered a rule change because they add new dimensional conditions to the initial ones.
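These admissibility conditions are easy to keep track of in code. The sketch below encodes the proportion sets quoted above, Rule 1 before and after the extension, plus the Ws bookkeeping of Rule 4. It is our own illustrative encoding, not part of the published grammar, and the check at the end only exercises the α = 3 case discussed in the text.

```python
import math

# Rule 1 (cell proportions): Li = alpha * Wi.
# Book 7 admits alpha in {1, 1 1/3, 1 1/2, 2}; the extended rule adds the
# Book 9 consonances and the root ratios (correspondentiae inatae).
ALPHA_BOOK7 = [1, 4/3, 3/2, 2]
ALPHA_EXTENDED = ALPHA_BOOK7 + [
    9/4, 16/9, 3, 8/3, 4,
    math.sqrt(2)/1, math.sqrt(3)/math.sqrt(2), math.sqrt(3)/1, math.sqrt(4)/math.sqrt(3),
]

def alpha_is_admissible(alpha, extended=True, tol=1e-9):
    """Check a cell proportion Li/Wi against Rule 1's admissible set."""
    values = ALPHA_EXTENDED if extended else ALPHA_BOOK7
    return any(abs(alpha - v) <= tol for v in values)

def skeleton_width(li, n_cl, w_cl):
    """Rule 4 bookkeeping: Ws = (Li - Ncl*Wcl) / (Ncl + 1), the wall width left
    between Ncl chapel openings of width Wcl along a cell of length Li."""
    return (li - n_cl * w_cl) / (n_cl + 1)

# Sant'Andrea-style check: Li = 3*Wi is rejected by the Book 7 set but
# accepted once the Book 9 proportions are included.
print(alpha_is_admissible(3, extended=False))  # False
print(alpha_is_admissible(3, extended=True))   # True
```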
Despite the maintenance of the parametric schema, new spatial relations can be achieved by resizing the plan. The addition of a new rule, as in the operation described above for the addition of Rule 7a' and Rule 7b', can also be considered a transformation of the grammar.

SHAPE GRAMMAR TRANSFORMATION WITH A CONSTANT RECURSIVE STRUCTURE
The treatise grammar followed mimetically the order of description of the temples' parts in the treatise. Their morphology is mainly described in Chapters IV and V of Book 7, in which the constituent parts of the temples are treated: cell - the inner space of the temple, defined by the geometry of its area; tribune; lateral chapels and their skeletons; portico, informed by the column system - shaft, base, capital and entablature - and their proportions; pediment; walls; roof; and main openings.

While in the Palladian Villas grammar (Stiny, 1978) the Villas' constant partition features were useful to define the grammar's recursive structure, in Alberti's built work the few examples of designs of sacred buildings, and the typological variety of those examples, do not seem to be the most appropriate basis for setting up a grammar representative of Alberti's sacred buildings. From the reading of the treatise, it was possible to consider a framework for the definition of the morphological parts of the temples and their interrelations, which has been applied to define the recursive structure of the treatise grammar. Since this structure encapsulates the formal and parametric logic of Alberti's buildings, it was decided to maintain the core of its recursive structure during the transformation process. Although the recursive structure of the grammar was kept, several rules were transformed by changing their spatial relations, and other rules were added.

Figure 4 shows a step by step computation, illustrating the different options of derivation at each step, where only one derivation is subsequently transformed by the use of the next set of shape rules.

THE CLASSIC NUMBER VERSUS THE CONTEMPORARY PARAMETRIC MODEL

The 'number' in the algorithmic nature of De re aedificatoria translated by contemporary eyes
In classical philosophy, numbers have a specific meaning prior to their scientific dimension. Alberti systematizes classic architectonic dimensions through considerations on the perfection of numbers, as well as establishing relationships between musical harmonies and proportional systems in architecture (Book 9, V).

At the Nexus 2002 conference, during a round table discussion about the significance of both the quantity
and the quality of numbers in De re aedificatoria, Lionel March, answering Robert Tavernor's question [2] - "Can anyone explain exactly what might be meant by the 'quality' of a number?" - argued for the numbers' dual nature in the treatise: "When Alberti was writing, the words 'quantity' and 'quality' still retained their Aristotelian roots. (…) Thus, from an Aristotelian perspective, in giving shape to an architectural work, Alberti is engaged in qualitative decisions, but in dimensioning the work he is acting quantitatively. (…) A pediment is qualitatively 'triangular', but its dimensions are quantitatively 24 feet long to 5 feet high."

March's argument in the discussion of the significance of number leads to the idea that "a contemporary approach would be computational with respect to 'number' and semiotic with respect to reference and usage."

The treatise grammar inference and its subsequent transformation followed this notion of working simultaneously with a 'contemporary' understanding of 'shapes' and 'numbers'. 'Shapes' configure the essence of the spatial relations of shape rules, while 'numbers' introduce their dynamic dimensional significance.

De re aedificatoria, a pre-digital parametric model
The process of inferring shape rules directly from the reading of De re aedificatoria exposed the algorithmic nature of its content. Alberti's notations on the sacred buildings' parts are described in terms of numerical qualities and quantities defining their proportional and morphological dependency. Thus, the schema presented in Figure 1 and the shape rules illustrated in Figure 2 feature the possibility of assigning different values to their dimensional parameters, and also the interdependencies between the cell and chapel dimensions and locations, resulting in a parametric shape rule. This kind of relation is repeated in other shape rules, resulting in a parametric grammar (Stiny, 1980).

Like the initial grammar, the transformed grammar is still a parametric grammar. Therefore, each derivation of the grammar can potentially generate a family of design solutions, rather than one single solution. A computational parametric model was developed in Grasshopper with the aim of managing the generation of multiple design solutions within the grammar (Grasshopper is a visual programming interface that interacts with the modeling software Rhinoceros. A program written in Grasshopper consists of a combination of interlinked components performing operations on primitives, usually but not necessarily geometrical ones. This programming paradigm allows visually developing parametric geometrical models, whose outputs correspond to a family of solutions). The parametric model encodes the knowledge gathered in the grammar inference and transformation processes. The output depends on the variation of parameters, which correspond to what Alberti prescribes for the number and dimensions of the elements that should, according to the author's theory and practice, conform the temple (Figure 5).

In the last three decades, computational tools
gained an extraordinary importance in the contemporary architectural discourse. Parametric design is one of the computational models that acquired more relevance in this process. Despite their importance, little discussion has been devoted to the use of parametric design in a pre-digital era. The translation of Alberti's work into a shape grammar revealed that De re aedificatoria's descriptions of sacred buildings are a precursor to the use of parametric design to define a set of architectonic principles. Thus, it is inevitable that research on De re aedificatoria today gives rise to its implementation as a computational model.

CONCLUSION
The variety of contexts and the role that Alberti had in the design of his buildings result in very specific knowledge that can be retrieved from them. Thus, the sole analysis of the buildings was not sufficient to set up rules defining a consistent architectural typology. Furthermore, they do not always verify his treatise's principles. Regarding this subject, Tavernor (1996, p. 178) recalls that Alberti (IX, 10, p. 137) made reference to the difficulty of translating his theoretical principles into a successful design: "I can say this of myself: I have often conceived of projects in the mind that seemed quite commendable at the time; but when I translated them into drawings, I found several errors in the very part that delighted me most, and quite serious ones; again, when I return to drawings, and measure the dimensions, I recognize and lament my carelessness; finally, when I pass from the drawings to the model, I sometimes notice further mistakes in the individual parts, even over the numbers."

Despite this incongruity, the analysis of the buildings contributed to the systematization of a coherent body of knowledge of Albertian sacred buildings, because our focus on the buildings was constrained by our concern for the structure of the treatise grammar.

The methodology presented for the inference of transformations of the treatise shape grammar contributed to encoding new knowledge into the grammar. Although the algorithmic nature of the treatise descriptions eased the task of matching building proportions and morphology with the grammar's shape rules, this reinforces the notion that inferring rules from the analysis of a corpus of existing buildings is an adequate tool to reinforce a grammar's capability for generating solutions in accordance with both textual and design descriptions (Mitchell, 1990).

Both the shape grammar and parametric model implementations prove to be effective tools for generating design solutions in the same style. The former introduces a step by step computation that reinforces the visual perception of formal transformations. The latter, by automating the process of generation, emphasizes the variation of the solutions by controlling their parameters. Even though their structures follow different philosophies, they use the same knowledge of the design, resulting in the same corpus of solutions.
Abstract. This paper presents the current state of progress in investigating the possibility of modelling traditional Chinese architecture parametrically, based on the two rule books. It builds on work producing a systematic analysis of both rule books and contributing knowledge from extant buildings. The case study target is the floor plan described in Ying Zao Fa Shi. Discussion and future work are suggested at the end.
Keywords. Parametric modelling, traditional Chinese architecture, Ying Zao Fa Shi,
Kung-ch’eng tso-fa tse-le, floor plan.
INTRODUCTION
When studying traditional Chinese architecture, two references are essential: literary records and extant buildings. China, a country with over 5000 years of history, boasts remarkable architecture from all dynasties and periods. Unfortunately, almost none of the buildings before the Tang Dynasty (618-907) remain, and many buildings from the Song Dynasty (960-1279) to the Ching Dynasty (1616-1912) have been badly damaged or destroyed.

However, two important texts survive: Ying Zao Fa Shi (Building Standards) from the Song Dynasty and Kung-ch'eng tso-fa tse-le (Structural Regulations) from the Ching Dynasty, which are known as the "two text books of Chinese ancient architecture" (Liang, 1985). They are the only remaining classical Chinese literature which deals with architecture and are, in essence, rule books that govern most aspects of the design. As a starting point, the analysis of the two rule books is a key factor in understanding the architecture of this period. This paper looks at the generation of the floor plan using the Ying Zao Fa Shi. A series of rules and hypotheses are reviewed before the generation of the floor plan models using Grasshopper and Rhino. The models are discussed and evaluated and, additionally, the comparison with parallel research on the Shape Grammar approach to the floor plan is discussed.

THE TWO RULE BOOKS
Figure 1 illustrates the chronological diagram indicating a brief history of China and the two dynasties (in the square boxes) in which the two rule books were compiled.

Ying Zao Fa Shi (Li, 1103) was the official building standard as a guidance for design and construction in the Song Dynasty. During the period of the Song Dynasty, an increasing number of different levels and types of buildings were constructed, which led to an urgent requirement for an official instruction. There were three original purposes of this book. First, to set the design guidelines to articulate the social status of feudalism. Second, to establish a unified architectural form and style to guarantee a consistent level of detail and artistic effects. Third, to define
the material choices and quantities as well as the work load to avoid corruption and embezzlement. The first edition was published in 1091, with an extended second edition compiled in 1103 by Li Jie, the court architect of the Hui Zong Emperor.

This book consists of thirty-four volumes. Volumes one and two are the overall introduction to different types and components of the architecture. Volume three is about the foundations, masonry structures and carving of handrails. Volumes four and five introduce the structural carpentry system. Volumes six to eleven introduce the finished carpentry. Volume twelve includes three timber precast methods and a bamboo weave method. Volume thirteen explains tile and cement processing. Volume fourteen focuses on the composition and colour matching of decorative painting. Volume fifteen describes the precasting of bricks and ceramic materials. Volumes sixteen to twenty-five present the work load required in the previous volumes. Volumes twenty-six to twenty-eight outline the material consumption of the components mentioned above. Volumes twenty-nine to thirty-four are the selected diagrams.

The significance of Ying Zao Fa Shi is not "simply for its existing" (Li, 2001). The book is, in general, well organised, logical, systematic and rigorous, which is quite rare in ancient literature. Although some aspects such as the floor plan are relatively lacking in systematic description, the whole book provides readers with a "rule-based and parametric" system (Li, 2001) for the ancient style buildings. Liang (1983) even points out that the Hui Zong Emperor was a naive politician but an excellent artist, while Li Jie was also good at drawing and music. This might be one reason why Li Jie occasionally omitted some important descriptive rules but paid more attention to the architectural style and decoration. Together with Li's research (Li, 2001; 2003) and the ongoing research in the case study on the ting tang section by the authors, it has been shown that Chinese traditional architecture has some parametric characteristics.

As shown in Figure 2, Ying Zao Fa Shi was written in an ancient form of the Chinese language which has no punctuation. The characters, vocabulary, grammar and text direction were all different from contemporary written Chinese, which presents a big problem to modern researchers. In relative terms, Kung-ch'eng tso-fa tse-le (1734) is linguistically more accessible since it was compiled in 1734, more than six hundred years closer to us. Meanwhile, more extant buildings from the Ching Dynasty can be studied as practical evidence. In this book, twenty-seven types of buildings with accurate sizes and dimensions are given as examples, making it useful for the reconstruction of buildings of the period.

RULES FOR THE FLOOR PLAN AND PARAMETRIC APPROACH
In order to understand and recreate floor plans, a description of the ancient floor plan system is necessary. In Ying Zao Fa Shi, the following factors or
parameters can be used to describe a building:
• The building type (such as dian tang or ting tang; here tang means hall).
• The overall dimension (measured in the modular unit):
  • Building width (and bay width)
  • Building depth (and rafters)
• The grade (which is used to calculate the absolute value of the modular unit)

In the most common and formal cases, the floor plan of a single house is rectangular and consists of two major factors: building width and building depth, which determine the dimension and scale of the house. The building width and building depth form the area of a building. This area can be divided into small units (small rectangles), i.e. bays (usually each bay has four columns at the four corners, although not in every case). Each bay is determined by the bay width and bay depth, as shown in Figure 3. The sum of the bay widths or bay depths gives the building width or depth. But in reality, the bay depth is not described in the set of parameters above. Instead, the horizontally projected rafter is used to measure the depth. There are three reasons. First, Ying Zao Fa Shi mentions that for the ting tang type building one bay depth equals two rafters deep, but it does not mention the relationship for the dian tang type building.

Figure 3: Factors of the floor plan.

Therefore, in order to unify the parameters
in the later parametric modelling, the rafter is selected as the depth measurement for both building types. Second, to be consistent with the research of the case study on the section, the rafter is a key parameter in defining the section: the rafter is closely related to the disposition of the columns and beams and the total number of columns. Third, from the workers' experience, they tend to use rafters rather than bay depth. Apart from the rectangular forms, there are also several non-rectangular floor plans, known as non-formal architecture, which include the use of the triangle, circle, sector, octagon, polygon, and the superposition of polygons and the Wan shape. They are widely used in pavilions and gardens, which typically appear in Southern China. But these irregular shape floor plans are not discussed in this paper.

At this point, it is worth describing the measurement units used. Depending on the eight grades of the buildings, a fen can have eight different absolute values (Liang, 1983), measured in cun (a Chinese length unit), as shown in Table 1. Given that 1 cun = 32 mm approximately in the Song Dynasty, the final absolute values of width and depth can be obtained. For example, if the building is of the ting tang type and in Grade Three, 1 fen = 0.5 cun x 32 mm/cun = 16 mm.

In order to build up the parametric logic, there are four more details which need to be clarified: the value of the bay width and rafter, and the number of both (which constitute the building width and building depth). Unfortunately, at this stage, Ying Zao Fa Shi does not provide a systematic definition. Instead, it gives information partially by defining and partially by enumerating. Despite this, the parametric model could still be built up by first making hypotheses based on the information at hand and then evaluating them against the diagrams in the rule book and extant building measurement data. The assumptions here are based on the investigation of the historian Chen (1993). As shown in Table 2, the four parameters are summarised. In particular, the bay width is not given directly. The calculation is as follows: a bay has two sets of dou gong (the bracket joint) that sit on each side of the columns (the black dots in Figure 3) and either one or two sets between the columns (intercolumnar dou gong). The centre-to-centre distance of dou gong is 125 fen ± 25 fen. Thus the bay width with one intercolumnar dou gong is 250 fen ± 50 fen, and with two intercolumnar dou gong it is 375 fen ± 75 fen. Therefore the total range of bay width is 200-450 fen. In addition, the centre bay is often wider, and in most extant examples the two outer bays are often slightly narrower than the others [3]. In the table, the modular unit fen is used rather than the absolute values.

After all the parameters are clarified, the parametric model can now be built. Figure 4 shows the logic diagram for the parametric modelling. In this logic diagram, the ting tang and dian tang types are integrated together since Liu (1984) points out that "although Ying Zao Fa Shi distinct the two types strictly, buildings are slickly dealt with in practice". The rectangular floor plan grid is set first by defining the x-y plane as the base plane. The next step is to define the size and number of bays. In order to achieve this, the four parameters described above are outlined here. As the primary parameters, the
rafter, number of bays and number of rafters can be in Ying Zao Fa Shi (Figure 2 right), they are high-
directly controlled by the corresponding number ly consistent in form. And there is one extant
range listed in Table 2. The bay width is a multiplica- building example—Fuoguang Temple Wenshu
tion of two factors: centre-to-centre distance of dou Dian (Figure 6), which built at 1137, located at
gong and number of intervals between dou gong. Shanxi Province. It is a Grade Two dian tang type
Additionally, the number of intervals between dou building with seven bays. From the paramet-
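As an illustration only (a sketch based on the ranges quoted above, not the authors' code), the bay-width range in fen can be derived from the number of intercolumnar dou gong:

def bay_width_range_fen(intercolumnar_dou_gong):
    # a bay contains (intercolumnar dou gong + 1) centre-to-centre intervals,
    # each 125 fen +/- 25 fen
    intervals = intercolumnar_dou_gong + 1
    return intervals * (125 - 25), intervals * (125 + 25)

print(bay_width_range_fen(1))  # (200, 300): 250 fen +/- 50 fen
print(bay_width_range_fen(2))  # (300, 450): 375 fen +/- 75 fen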
After all the parameters are clarified, the parametric model can be built. Figure 4 shows the logic diagram for the parametric modelling. In this logic diagram, the ting tang and dian tang types are integrated, since Liu (1984) points out that "although Ying Zao Fa Shi distinct the two types strictly, buildings are slickly dealt with in practice". The rectangular floor plan grid is set first by defining the x-y plane as the base plane. The next step is to define the size and number of bays. In order to achieve this, the four parameters described above are used. As the primary parameters, the rafter, the number of bays and the number of rafters can be directly controlled within the corresponding ranges listed in Table 2. The bay width is the product of two factors: the centre-to-centre distance of dou gong and the number of intervals between dou gong. Additionally, the number of intervals between dou gong is equal to the number of intercolumnar dou gong plus one. Since the number of intercolumnar dou gong is known directly, this is the fourth parameter. Thus, overall, there are only four simple parameters that can be controlled, depending on the building type and grade. In addition, there is one judgement in this logic diagram: the building width should always be larger than the building depth; if this holds, the judgement returns true. Under this single set of logic diagrams, the floor plans of both ting tang and dian tang types, all eight grades of building and different numbers of bays and rafters are covered. Figure 5 shows two examples of the model.

Comparing the examples with the diagrams in Ying Zao Fa Shi (Figure 2 right), they are highly consistent in form. There is also one extant building example, Fuoguang Temple Wenshu Dian (Figure 6), which was built in 1137 and is located in Shanxi Province. It is a Grade Two dian tang type building with seven bays. From the parametric model, the minimum building width is 7 x 200 x 0.55 x 32 = 24640 mm = 24.64 m, while the maximum is 7 x 450 x 0.55 x 32 = 55440 mm = 55.44 m. Similarly, the building depth spans from 15.84 m to 21.12 m. Wang (2011) provides measurement data of 31.56 m in width and 17.60 m in depth. As Liu (1984) argues, "there is not such an extant building completed follows Ying Zao Fa Shi found so far"; if the measurement data lies within the range of the parametric model, the two are considered consistent.
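The consistency check for this example can be written out as a small sketch (an illustration of the arithmetic above, not the authors' tool):

# Wenshu Dian: Grade Two (1 fen = 0.55 cun), seven bays, bay width 200-450 fen
FEN_MM = 0.55 * 32                       # 17.6 mm per fen for Grade Two
width_min = 7 * 200 * FEN_MM / 1000      # 24.64 m
width_max = 7 * 450 * FEN_MM / 1000      # 55.44 m
measured_width = 31.56                   # m, Wang (2011)
print(width_min <= measured_width <= width_max)   # True: within the model's range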
DISCUSSION
Parametric design differs from the conventional design mode of adding and removing marks in that the relationships between the parameters are the essence of parametrics. In this case study, the relationships are not based on one specific example but on a systematic description and summary of all the buildings of a typical period, which constitute the rules. The logic diagram of the formal rectangular floor plan is then built up based on these rules. Following this, different outcomes can be generated, which indicates the advantage of the parametric method: different final products can be obtained without a new set of logic diagrams or the removal/addition of individual components. In particular, all the floor plan formats are included in this one set of logic diagrams, covering both building types (dian tang and ting tang), any dimensions and all the grades.

There is parallel research in the Shape Grammar approach to the floor plan (Li, 2001). In that research, Li derives the process with initial symbols and a set of rules; the design rules then act on the initial symbols repeatedly, resulting in a final design. Following each typical set of rules will result in one corresponding final design; thus Shape Grammar generates a language of design, and how and in what sequence the rules are applied makes up the so-called grammar. Compared with Li's research (Li, 2001), the advantage of the parametric method is the ease with which the process can be extended into three-dimensional modelling. For instance, the intersection points could be the column locations when combined with the case study on the section. Then, the two-dimensional representation of architecture through the plan and section will form the three-dimensional parametric model. And indeed, according to Wang (2011), more special proportions (relationships) exist in the elevations, as well as in many other building factors. On the other hand, […]

Figure 6. Fuoguang Temple Wenshu Dian [4] [5].
Abstract. This paper presents the analysis of a bottom-up design system using shape
grammars. This research is part of a larger study that proposes the development of a
generic grammar to improve the quality of site development in social housing plans,
including the improvement of their public spaces. We show the use of shape grammars as
an analytical method to study the design of Belapur social housing development, designed
by Charles Correa in 1983.
Keywords. Design methodology; shape grammar; analytical grammar; low-income
housing.
INTRODUCTION
This research aims at applying shape grammars as a method for generating improved social housing design systems which may contribute to the development of more diversity in external areas and public spaces, creating identity and appropriation by their dwellers. In order to achieve such goals we start by analyzing social housing plans as case studies, inferring design patterns to propose the development of generic grammars, which may enable the generation and management of incremental housing systems with locally captured spatial qualities. In the book A Pattern Language (1977), Alexander and his collaborators define a theory and application instructions for the use of a pattern language at different design scales - from the scale of the city and urban design to the building scale, the garden and the layout of housing units. The main goal of this research is to define a methodology for developing bottom-up housing systems able to be implemented as incremental urban developments based on the progressive addition of housing clusters and associated community areas. The underlying hypothesis is that we can develop a set of generic parallel grammars which allow (and control) the generation of such housing systems.

OBJECTIVES
This paper is part of a larger study that proposes the development of a generic grammar to improve the quality of low-income housing plans, including the improvement of public spaces and community areas. The generic grammar is developed from the analysis of four case studies by capturing the underlying common rules that were used in their design. The four case studies are: the Belapur plan – located in New Bombay, India, designed by Charles Correa in 1986, […]
of Bombay’s low-income profile with a variation more privacy and a sense of neighborhood at the
from 45m2 to 70m2 on house typology. The pro- smaller scale. Three of these clusters combine to
ject demonstrates high densities – 500 inhabitants form a bigger module of 21 houses, surrounding a
per hectare, including external areas, schools, etc community space of 12m x 12m (Figure 1-2).
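The nesting of clusters can be read as a simple geometric series. The sketch below (an illustration, not part of Correa's plan documents) reproduces the two counts given in the text and extrapolates further levels under the assumption that three modules are again grouped at each higher scale.

def houses_per_module(level, grouping=3):
    # level 1: 7 houses around an 8m x 8m courtyard (given in the text)
    # level 2: 3 clusters = 21 houses around a 12m x 12m space (given in the text)
    # level >= 3: assumes the same grouping factor of three per level
    return 7 * grouping ** (level - 1)

print([houses_per_module(k) for k in (1, 2, 3)])   # [7, 21, 63]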
The houses were designed as an evolutionary module, where "units are packed close enough to provide the advantages of high density, yet separate enough to allow for individual identity and growth" – this strategy allows growth from "a single lean-to roof to urban town-houses" (Correa, 1989; 1999), because each dwelling is freestanding and does not share any wall or land with its neighbors, allowing a family to extend its home according to its needs by means of self-construction. Such a policy towards house extension resembles that of the Elemental concept developed by Alejandro Aravena (Aravena and Iacobelli, 2010). The plan clearly expresses Correa's […]
[…] blocks (Figure 7). Note that the labels concern two different spatial relations regarding the orientation of the blocks. The rule RC 09 removes one of the optional lots inside the courtyard (see Rule RC 02) and inserts a portico to isolate a neighborhood unit constituted by three blocks. Thus, the rule allows creating different levels of privacy in accordance with the growth of the scale of public spaces.

After applying the control rules that generate the common spaces of the housing development, it is possible to apply the rules of Grammar B (see rules RB 05 to RB 08), which allow the continuous insertion of blocks and increase the scale of public space (Figure 8).

Grammar B' defines the association between blocks with a corner left for the passage and the association of the square and the block: all rules remove the label ( ) and associate a new block with 7 lots in two distinct spatial relationships (Figure 9).

Figure 4. Derivation of a block.
Figure 5. Derivation of a block with an empty lot.

DISCUSSION
The paper presents the Belapur grammar as a design system, which may contribute to the future development of a generic grammar for social housing plans. The development of a generic grammar for housing intends to contribute to the improvement of public spaces and community areas by the insertion of qualitative requirements as a control mechanism that allows adding public spaces and facilities in a hierarchical structure in accordance with needs.

The bottom-up grammar's structure explains the concepts of incrementality, pluralism and malleability (Correa, 1999, p.109) and can transcend the set of design solutions with a few additional rules that are not part of the initial urban plan proposed by Correa.
Additional rules can also be set to react to pre-existing features such as natural barriers, rivers, topography and empty spaces, among others. Although not formally expressed, such rules are already present in Correa's design.

After analyzing Correa's project and developing a grammar for it, an important issue has emerged: although the grammar allows the incremental growth of the housing development, how can the overall result display such an orderly character? This issue leads us to the question of whether the design is a bottom-up or a top-down process.

To achieve his concepts of incrementality, pluralism and malleability, Correa resorts to a design system which is supposed to be implemented in a bottom-up fashion. Similarly to many natural phenomena where order emerges from a multiplication of local interactions eventually represented by a local rule, Correa's plan intentions are best captured by a bottom-up grammar where local rules provide not just the incremental procedure but also the underlying order, which is always a goal in planning. This can explain the predictability of the result, despite its spontaneous characteristic. With this in mind, we could argue that, although a hypothetical top-down grammar may computationally generate the same shape and order, from the analytical viewpoint it fails to capture the conceptual principles underlying the system, and therefore the two cannot be considered equivalent.

Finally, a subject that needs further attention involves considering how the incremental structure of these grammars deals with the location of neighborhood facilities. Such a theme is present in Correa's plan, but its rules are not formally expressed
and, as a design attitude, it seems that Correa simply decided their location, using them as a compositional item, therefore in a top-down fashion. However, considering that one of the goals involved in […]

Figure 9. Grammar B'.
Abstract. A shape grammar was developed for analyzing the evolution of Maputo's slums, with the strategic objective of capturing the evolution of house types and understanding the social agreements behind the spatial relations of their elementary spaces, in order to reuse such rules for the purpose of rehabilitation. This paper shows preliminary results of the research and aims at developing, based on the resulting grammars, a parametric tool able to execute morphological analyses and simulations and to generate improved design solutions for the qualification of Maputo's informal settlements.
Keywords. Shape grammars; urbanism; computation; regeneration; informal settlements.
INTRODUCTION
This paper introduces a new approach for an urban simulation framework for deteriorating unplanned settlements in the city of Maputo (also known as Caniços), areas that are often regarded as 'slums'. According to UN-Habitat's report (2003), The Challenge of Slums, in 2003, 31.6 per cent of the world's urban population lived in slums or squatter settlements. The 2010 report – The UN State of African Cities – states that Mozambique's urban population will rise from 9 million in 2010 to 15.6 million in 2025, confirming the country's position as having significant growth in urban population for the next few decades.

The difference between an informal settlement, an unplanned settlement, a slum, or a deteriorated urban area is not always easy to define (despite the UN-Habitat (2006) definition). In reality all these areas often overlap in terms of their characteristics, function and appearance. An informal settlement is not always a slum, nor is a slum always created in unplanned areas, but it is fair to say that in most cases slums happen to be informal or unplanned areas that suffer from multiple physical or socio-economic problems (Karimi and Parham, 2012).

RESEARCH PROJECT
This paper shows preliminary results of a PhD research project aimed at developing a parametric tool able to execute morphological analyses and simulations and to generate improved design solutions for the qualification of Maputo's informal settlements. The main goal is the creation of an integrated model that substantiates planning decisions and presents itself as a viable methodology in the search for more sustainable solutions.

This model is based on Stiny's shape grammars (1980) as a means to elaborate plans capable of adapting to changes in premises without losing their urban […]
[…] (Andersen et al., 2012) data, which is part of the broader research program designed by Prof. Paul Jenkins – Home Space Maputo. It is based on what Andersen et al. and Jenkins (2012) define as 'Type A' houses. In Home Space Maputo – Built Environment Study (Andersen et al., 2012) the "many different house plans have been divided into five general house types. It is however important to stress that many of the house types overlap each other; some house types become transformed into other types (…). These five general house types are classified as the most common." (Ibid).

According to this study, the first phase of house building construction is often to start with the most basic type. The Type A house is then the most simple, with only two divisions. "The house is entered from the center of the long façade directly into one slightly larger room and with further direct access to a bedroom. The house has one private room while the 'sala' is for receiving visitors and at times acts partly as a kitchen" (Ibid).

The plot and block shape and size are taken from a sample of the 'bairro' Magoanine B (Figure 1), described in Home Space (Jenkins, 2012) as an 'Unofficial planned area'. This particular area was chosen because unofficial planned areas, "which had community / private planning and sub-division interventions at some period, but were not registered formally in the land cadastre and/or registry" (Ibid), show a certain level of regularity in terms of urban layout that facilitates the definition of the grammar at this stage.

In order to focus on the fundamental components that constitute a typical plot, other elements are considered, such as trees (which can be pre-existent) and the toilets, separated from the house buildings.

Designs are shown bi-dimensionally. The grammar develops by configuring the arrangement of the plots and then placing the basic form of the house (a 7m x 3.5m rectangle) in each one of them. Additionally, the rectangle is divided in two functional zones, as "the most simple house type and in general has two divisions" (Andersen et al., 2012) - private and social. Without any reference for construction timing or order, it is here established that the placing of the outside toilets happens before any other extension of the house is made. The same order issue is presented with the trees, especially with the larger types. It is assumed that most of them existed before any kind of land division. For design purposes, the average treetop diameter of the three most common species is used to determine constraints regarding the placement of houses in relation to the trees' positions.

GRAMMAR
The view of most informal settlements suggests an organic and almost chaotic land occupation. However, Paul Jenkins's (2012) studies identify four
development statuses for land occupation in Maputo city, according to different criteria such as the level of planning, land registry and socio-economic conditions (Figure 2). Those statuses are:
"Officially planned – areas which had state planning and sub-division interventions at some period (…);
Unofficially planned – areas which had community / private planning and sub-division interventions at some period (…);
Upgraded – areas which had been unplanned but had state, community or private planning or sub-division (…);
Un-planned – areas which had no previous (…) planning or sub-division, often referred to as 'spontaneous' or 'informal' areas (…)" (Jenkins, 2012).

The research focuses on the latter three statuses – the ones that show some kind of self-organization despite any level of state/private intervention or planning. The grammar presented here is then the first approach to the most 'regular' status of the three that reveal 'informal' qualities – the 'Unofficially planned'.

The grammar is divided into three general stages. The first one relates the plots in order to create blocks. The second stage places the Type A house inside the plot, and the third configures the house extensions and other components like outside toilets and trees.

Stage 1 - Plots and blocks
The composition starts with a given point (0,0,0) associated with the symbol * (Figure 3). From this initial shape, the location of the first plot is established. The plot is for now represented by the rectangle P. Rectangle P is 28m x 16m (the average plot size from the sample shown in Figure 1) and is composed of the lines l', l'', 4/7 l and e (which define the limits and the entrance side of the plot). Line e contains at its midpoint a triangle symbol for the entrance.

Once the first plot is established, other plots are added in order to create city blocks. Rule 1.2 mirrors P by its 4/7 l line and Rule 1.3 copies the mirrored plots thirteen times until there is a twenty-eight-plot city block. From here the blocks are replicated orthogonally nine meters away from each other (the average street width). This presentation uses three city blocks (eighty-four plots).
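As a purely illustrative sketch of this counting (not the authors' implementation), the Stage 1 block arithmetic can be written as:

# Stage 1 counts: plot P plus its mirrored copy (Rule 1.2) form a pair,
# and Rule 1.3 copies the mirrored pair thirteen times
PAIR = 2
COPIES = 13
plots_per_block = PAIR * (1 + COPIES)     # 28 plots in a city block
blocks_used = 3                           # this presentation uses three blocks
print(plots_per_block, blocks_used * plots_per_block)   # 28 84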
Stage 2 - House in the plot
After the city blocks are generated, Type A houses are placed in each plot (Figure 4). "Three general tendencies were recognized regarding how the houses of this type where located on the plot. The most common situation was the house located in the very far corner of the plot and with two sides of the house connected to the perimeter walls (situation 1). The next common location of the house on the plot is where the short end of the house was connected to the plot boundary and closer to the street (situation 2). Some cases also had their house centrally free standing on the plot (situation 3)" (Andersen et al., 2012). Because this is the only quantitative information for each of the three situations, the probability of occurrence of each situation is established accordingly: situation 1 (the most common) will occur three times in every six cases; situation 2 will occur two times in every six cases; situation 3 will occur once in every six cases. The house is represented by the 7m x 3.5m rectangle H, composed of the lines a', a'', b' and b''. Rectangle H can
be positioned horizontally (if a = 7m then b = 3.5m) or vertically (if a = 3.5m then b = 7m) inside the plot. In no case does the Type A house appear at the front of the plot. One important issue raised was the probable pre-existence of trees. Trees are represented by circumference i, whose diameter corresponds to each species' average treetop diameter. To ensure that no house is placed under a tree, two conditions are established: the first is that circumference i cannot intersect rectangle H, and the second is that circumference i cannot contain rectangle H.

To control the placement of rectangle H (or any other component) inside the plot, to each side of rectangle H (a', a'', b' and b'') is added a dimension arrow d (dh for horizontal arrows and dv for vertical ones) that manages the distances between each side of the house (rectangle H) and the limits of the plot (rectangle P).

Situation 1 (rule 2.1) is the most common location of the house (three in every six cases), where its corner coincides with one of the far corners of the plot. Whether the house is in a vertical or in a horizontal position, the condition is that dv1 = 0 plus dv2 = l - b, and that dh1 = 0 or dh2 = 0 depending on the house being placed in the right or in the left corner of the plot.

In Situation 2 (rule 2.2), where only the short end of the house is connected with the plot boundary (any one but the front), there are different conditions depending on vertical or horizontal positioning. If it is horizontal, then dh1 = 0 or dh2 = 0, depending on the house being placed on the right or on the left side of the plot. Also dv1 ≥ 1 to ensure some space at the back of the house, and dv2 ≥ l/3 (since there is no case where the house is placed at the front of the plot, it was established that the minimum distance to the front end is one third of the plot length – about 9.3 meters). If it is vertical, then dv1 = 0. Also dh1 ≥ 1 or dh2 ≥ 1, depending on the house being placed more to the right or more to the left side of the plot, ensuring some space at the back of the house in any of the cases.

In Situation 3 (rule 2.3) the house has a centered position in the plot. None of its walls touch the boundaries of the plot, which means that no distance d equals 0. So dh1, dh2, dv1 ≥ 1 and dv2 ≥ l/3,
maintaining the same criteria of placing the house away from the front.
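A minimal sketch of this placement step (an illustration under the frequencies and the Situation 1 conditions quoted above, not the authors' tool) could look like this:

import random

L_PLOT = 28.0                     # plot length l in metres (rectangle P is 28m x 16m)

def pick_situation(rng=random):
    # situations 1, 2 and 3 occur three, two and one times in every six cases
    return rng.choices([1, 2, 3], weights=[3, 2, 1])[0]

def situation1_distances(b, left_corner=True):
    # Situation 1 (rule 2.1): far-corner placement, dv1 = 0, dv2 = l - b,
    # and dh1 = 0 or dh2 = 0 depending on the corner used
    dv1, dv2 = 0.0, L_PLOT - b
    dh1, dh2 = (0.0, None) if left_corner else (None, 0.0)   # None: not fixed by the rule
    return dv1, dv2, dh1, dh2

print(pick_situation(), situation1_distances(b=3.5))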
Stage 3 - Extensions and components
"The Home Space study provides evidence that the location of toilets and bathrooms most commonly are in a separate building or a screened off location as far as possible from the main house. This configuration was seen in 74% of the cases" (Andersen et al., 2012). It also shows that the transition from outdoor to indoor toilets corresponds to an upgrade process that seems to be slow due to lacking or insufficient sewage infrastructure. Because the Type A house is the most basic one (associated with lower-income families) and corresponds to the starting stage of house building construction, it is settled that for this type all toilets are outside the house. For the grammar, the toilets are represented by the 1.5m x 1.5m square T, composed of the lines p1, p2, p3 and p4 (Figure 5). Square T is inside rectangle P (plot). Another observation we can make is that none of the examples shown in Home Space have the toilet placed at the front end of the plot. So line e (from rectangle P) cannot contain any line p (from square T). To ensure a considerable distance from the house, it is established that the toilet must be placed on one of the other three limits of the plot and that the minimum distance to the house is six meters. This minimum distance is assured by the placement of an auxiliary circumference c with a six-meter radius. Circumference c is centered on square T. Circumference c cannot intersect or contain rectangle H (house).
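The six-metre clearance can be approximated with a small geometric sketch (an illustration only, measuring from the centre of T rather than enforcing circumference c itself; the toilet position used is hypothetical):

def min_distance_to_rect(px, py, x0, y0, x1, y1):
    # shortest distance from point (px, py) to the axis-aligned rectangle
    # [x0, x1] x [y0, y1]; zero if the point lies inside it
    dx = max(x0 - px, 0.0, px - x1)
    dy = max(y0 - py, 0.0, py - y1)
    return (dx * dx + dy * dy) ** 0.5

house = (0.0, 0.0, 3.5, 7.0)          # rectangle H (3.5m x 7m, vertical, far corner)
toilet_centre = (14.0, 1.0)           # hypothetical centre of square T on a side limit
print(min_distance_to_rect(*toilet_centre, *house) >= 6.0)   # True: clearance respected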
The next stage is to divide the house (rectangle H) into two labeled divisions – bedroom B (B = 3m x 3.5m = 10.5m2) and 'sala' L (L = 4m x 3.5m = 14m2) – and to mark the door label with a triangle. As mentioned above, the main entrance is in the center of the long façade. Because there are two long façades, the door label is placed on the one that has the longer distance d (whether it is a dv or a dh). This condition denies any chance of having a door facing directly at the boundary wall. It is important to stress that the
yard is a scene for "everyday life and a space for socialising, household work and sometimes also a space for economic activities" (Ibid). The toilet door follows similar criteria. In this case the only condition is that it never faces the front side of the plot (line e). In other words, it can only be placed along its horizontal distance dhn or its vertical distance dv1.

"House type A can be extended in various ways" (Ibid). The extensions used for the grammar follow the four examples shown in Home Space. Type A houses are extended with one or two additional rooms of similar size and shape to the existing ones (larger extensions would transform the Type A house into other types). Therefore, five different layouts are created for the Type A house: extensions 0, 1, 2, 3 and 4 (in extension 0 the house keeps its original configuration - Figure 6). In the absence of quantitative information, it is established that all five layouts are applied in the same number (each is applied once in every five cases). The application of these transformation rules implies the elimination of the labels in the house and the insertion of the doors.
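Again as a sketch only (a uniform choice among the five layouts, plus the Extension 3 back-clearance condition described in the next paragraph), this step could be expressed as:

import random

def pick_extension(rng=random):
    # extensions 0-4 are applied with equal frequency (once in every five cases)
    return rng.choice([0, 1, 2, 3, 4])

def back_extension_allowed(d_back):
    # Extension 3 towards the back needs more than 4.5 m behind the house:
    # 3.5 m for the new body plus 1 m for the new back-door passage
    return d_back > 3.5 + 1.0

print(pick_extension(), back_extension_allowed(d_back=5.0))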
Extension 1 adds one room to the front of the house. Extension 2 creates a big living area at the front of the house and a new entrance way to the side of the 'sala' (formally labeled with L). The only condition here is that the distance d on this side must be greater than the opposite d; this grants a larger yard area in front of the new entrance of the house. The only extension that can be made to the back of the house is Extension 3. It basically mirrors the house to the back or to the front, creating a new inner door. If the extension is to the back, the distance d at the back of the house has to be bigger than 4.5 meters (3.5m for the new body plus 1m for the new back door passage). Extension 4 creates a large living room at the front plus a 'veranda'. "The veranda is not only a transition space between outside and inside - which can be used for practical purposes as cooking, storage, or a social space for - but also a way of representing the house in the neighbourhood. Many of the verandas had burglar bars and some of these were richly ornamented" (Andersen et al., 2012). This extension requires that the house have a horizontal position because the "veranda is always facing […]

Figure 6. House extensions.
Abstract. Gulou is a type of building found in ethnic Dong people's settlements in south-west China. It plays a significant role in traditional Dong architecture and shows both social and technical values. In the near future, the technique, as an item of intangible cultural heritage, faces the risk of extinction because of globalization. The paper argues that the use of formal grammar and computer tools could help the preservation and learning of the design knowledge of the Gulou structure and the development of Gulou designs adapted to modern needs. A shape grammar called Gulou Structure Grammar (GSG) and its computer implementation are presented to achieve the goals of capturing the design knowledge of the Gulou structure, generating new Gulou designs and promoting the education of Gulou building techniques.
Keywords. Gulou structure; shape grammar; parametric model; ethnic building technique.
INTRODUCTION
Gulou is a type of building found in ethnic Dong people's settlements in south-west China. A Dong settlement is composed of basic family groups called "Dou", and every Dou should have its own Gulou as a symbol of the family group. There tend to be 2 to 4 Gulou in a village (Figure 1). Gulou plays a significant role in traditional Dong architecture and shows both social and technical values. The wooden tower is the most important public building in a settlement, serving as the village council and the senior statesmen's centre. It also acts as an important place for social activities and communication. For example, children are given their names in the Gulou, and people gather in the Gulou after work to exchange ideas and chat with each other. All the cultural and social activities of the Dong people are related to and influenced by the Gulou; the cultural identity and value system of the Dong people emerge from the activities carried out in it.

Apart from its social importance and value, Gulou is also famous for its unique uprising shape and for excellent building techniques that use no metal parts.

Figure 2. The typical section of Gulou.
As an ancient high-rise wooden structure that is still in use nowadays, Gulou is one of the most outstanding traditional Chinese wooden buildings. There are two kinds of main columns in a Gulou: the inner columns and the outer columns. The inner columns, together with the connecting beams, play a structural role very similar to the core of a modern high-rise tower. The outer columns are connected to the core by beams which support the multi-level uprising structure. The flexible joint design also enhances the anti-earthquake performance. In terms of ecology, the Gulou technique accumulates the long-term low-tech experience of the Dong people in avoiding the impact of the hot and humid weather conditions. For instance, the multi-level roofs promote ventilation while keeping the interior dry from the rains (Figure 2).

Like the other developing regions of China, the old and vernacular building types and techniques are challenged by globalization. Few young men are willing to learn the technique from the old, and in the near future the technique, as an item of intangible cultural heritage, faces the risk of extinction. Unlike the building and construction manuals for official Chinese traditional buildings, the rules of Gulou were not originally organized in any written form or drawing. Instead, they are passed down from generation to generation by pithy formulas. While designing and building a Gulou, the craftsman seldom produces any drawings either. This implicit way of design makes the technique difficult to understand and to learn, which also limits the wider spread of the building culture. Although various studies have managed to uncover the rules of the Gulou building technique, few efforts were made in the area of formal and computational approaches (Cai, 2004).

The paper argues that the use of formal grammar and computer tools could help the preservation and learning of the design knowledge of the Gulou structure and develop Gulou designs which would […]
Figure 4. By controlling the position of the main columns, the number of levels and the height of levels, the facade profile can be adjusted.

[…] of the position of the main columns, the number of levels, the height of levels and the decreasing distance of each level, the facade profile can be adjusted (Figure 4). Equalised decreasing will result in a tilted linear profile, while uneven decreasing will result in a curved profile (Figure 5).

Figure 5. By controlling the decreasing distance of each level, the facade profile can be adjusted. Equalised decreasing will result in a tilted linear profile while uneven decreasing will result in a curved profile. In this case the decreasing is controlled by a Bezier curve.

The top of the Gulou is also called the "honey comb" for its distinctive look (Wu Lin, 2009). It is composed of many layers of overhanging and self-supporting wood pieces. Each layer is subdivided into many segments, so the overall structure of the top is a complicated cell-like system. The composition of the top can be illustrated by the following steps (Table 1); a minimal code sketch of steps 3 to 6 follows the list.
1. Define the plan profile of the base of the top. In this case the profile is an octagon.
2. Offset the profile to get the shape of each level. In this case the top is composed of 6 levels of brackets.
3. Equally divide each edge of the profile into N segments and get the division points. In this case N = 10.
4. Connect the points with odd index i on the odd-numbered levels to the points with the same index on the upper level; connect the points with even index i on the even-numbered levels to the points with the same index on the upper level. For instance, point 3 on the 1st level will be connected to point 3 on the 2nd level; point 6 on the 4th level will be connected to point 6 on the 5th level.
5. Connect the points with odd index i on the odd-numbered levels to the points with index i + 1 on the upper level; connect the points with even index i on the even-numbered levels to the points with index i + 1 on the upper level. For instance, point 3 on the 1st level will be connected to point 4 on the 2nd level; point 6 on the 4th level will be connected to point 7 on the 5th level.
6. Connect the points with odd index i on the odd-numbered levels to the points with index i - 1 on the upper level; connect the points with even index i on the even-numbered levels to the points with index i - 1 on the upper level. For instance, point 3 on the 1st level will be connected to point 2 on the 2nd level; point 6 on the 4th level will be connected to point 5 on the 5th level.
7. All the guide lines for the leaf-shaped brackets are generated in steps 4 to 6. The brackets are attached along the guide lines to form the base part of the honeycomb top.
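A compact sketch of steps 3 to 6 is given below. It is an illustration under simplified assumptions (regular polygon levels scaled by a fixed factor instead of the Bezier-controlled offsets, 0-based point indices instead of the paper's 1-based numbering), not the authors' Grasshopper definition; it generates the division points of each level and the three families of guide lines.

import math

def level_points(n_sides=8, n_seg=10, radius=10.0, levels=6, shrink=0.9):
    """Division points per level: each level is a regular polygon whose edges
    are split into n_seg equal segments; upper levels shrink by a fixed factor
    (step 2's 'offset' is simplified here to uniform scaling)."""
    pts = []
    for lv in range(levels):
        r = radius * shrink ** lv
        corners = [(r * math.cos(2 * math.pi * k / n_sides),
                    r * math.sin(2 * math.pi * k / n_sides)) for k in range(n_sides)]
        ring = []
        for k in range(n_sides):
            (x0, y0), (x1, y1) = corners[k], corners[(k + 1) % n_sides]
            for s in range(n_seg):
                t = s / n_seg
                ring.append((x0 + (x1 - x0) * t, y0 + (y1 - y0) * t, lv))
        pts.append(ring)
    return pts

def guide_lines(pts):
    """Steps 4-6: on alternating levels, every second point is connected to the
    points with index i, i + 1 and i - 1 on the level above (indices wrap)."""
    lines = []
    for lv in range(len(pts) - 1):            # lv is 0-based; level number is lv + 1
        lower, upper, n = pts[lv], pts[lv + 1], len(pts[lv])
        for i in range(n):
            if i % 2 != (lv + 1) % 2:         # parity rule of steps 4-6, simplified
                continue
            for offset in (0, 1, -1):         # steps 4, 5 and 6 respectively
                lines.append((lower[i], upper[(i + offset) % n]))
    return lines

print(len(guide_lines(level_points())))       # 5 level gaps x 40 points x 3 families = 600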
The content of GSG
After the analysis of the compositional rules of Gulou, GSG was formulated. It consists of 3 initial designs and 24 rules. The rules are divided into 3 groups: plan rules (Table 2), body section rules (Table 3) and top rules (Table 4).

THE COMPUTER IMPLEMENTATION OF GSG
A parametric model was built based on GSG.

Table 2. Rules of plan generation: plan initial shape; initial shape: the inner column; R1: get the section guide 1; R2: get the section guide 2; R3: get the outer column; R4: get an end point of beam; R5: add an outer beam; R6: get the centre column; R7: get an inner beam; R11: get a rafter; R12: extend a column to a rafter; R13: get a connecting beam.
Several key parameters were identified: the type of plan, the distance between inner columns, the distance between inner and outer columns, the height of the base, the body level height, the number of body levels, a Bezier curve to control the span decrease of each body level, the top level height, the number of top levels and the increase in span of each top level. Detailed parameters were also defined: the cantilevered distance of beams, the rafter angle and the column lower extension length. Dimension parameters were added to determine the size of the components, such as the radius of the columns and the height of the beams. Grasshopper in Rhino3d was chosen as the platform to develop the parametric model. Both the axis lines and the solid model of the wooden frame pieces can be obtained from the model (Figure 6).
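One way to organise this parameter set (a hypothetical sketch; the field names, units and default values are illustrative and not taken from the authors' Grasshopper definition) is a simple data class:

from dataclasses import dataclass, field

@dataclass
class GulouParameters:
    # key parameters listed above (defaults are illustrative only)
    plan_type: str = "octagon"
    inner_column_spacing: float = 3.0        # m, distance between inner columns
    inner_outer_spacing: float = 2.0         # m, inner to outer columns
    base_height: float = 3.0                 # m
    body_level_height: float = 1.2           # m
    body_levels: int = 5
    span_decrease_curve: list = field(default_factory=list)   # Bezier control values
    top_level_height: float = 0.6            # m
    top_levels: int = 6
    top_span_increase: float = 0.2           # m per top level
    # detailed parameters
    beam_cantilever: float = 0.5             # m
    rafter_angle: float = 30.0               # degrees
    column_lower_extension: float = 0.3      # m

print(GulouParameters())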
Table 4. Rules of top generation (extract): initial shape: the inner columns; R16: get the plan profile; R17: offset the profile; R18: divide the edge by n; R19-R23: connect the division points.

THE APPLICATION OF GSG AND THE PARAMETRIC MODEL
As an ancient building type originating from the Ming dynasty (1368-1644), Gulou is still being built and plays an important role in the life of the Dong people nowadays. It is also widely used in public parks and tourism sites in non-Dong areas for its distinguished
symbolic form and strong landmark effect. In its modern use, Gulou could be identified as a type of contemporary architecture. However, both its design and construction are still based on the old manual approaches. Digital technologies could serve the design and construction of Gulou as a new instrument, so that Gulou could evolve and be adapted to the information age.

The parametric Gulou model was used in the design of a landmark structure in a resort area in Sanjiang, Guangxi province. The famous Dong craftsman Wu Shikang was invited as a design consultant for the project. During the design process, a series of design models were generated with the help of the parametric tool (Figure 7). Wu gave a positive review of the tool. He held the view that the tool could rapidly generate designs according to the rules and parameters, so that communication with the client could be carried out efficiently. The model also provided all the dimensions of the main structural pieces and a spreadsheet of the material use. Work that used to take months could thus be finished in days.

GSG is also used in the teaching of the course Guangxi ethnic buildings in the architecture school of Guangxi University. Gulou is an important topic of the course. GSG and the parametric model are introduced to unveil the design rules and construction process of Gulou. The students can learn the rules from GSG in a graphic and formal way. Then design
experiments can be carried out: students use their own sets of parameters to generate designs with the parametric model. This vivid, digital way of teaching encourages the students to analyse traditional Chinese buildings from a computational point of view, and to explore new designs based on the traditional building techniques.

DISCUSSION AND FUTURE WORK
The shape grammar of the Gulou structure was formulated as shown in the paper, and a computer tool was made to assist the design and education of this ethnic building type. However, the potential of GSG and its computer implementation is not fully explored. The paper serves as a starting point for the long-term […]

Figure 7. Four different designs generated for the project in a tourist site.
eCAADe — the association for Education and research in Computer Aided Architectural Design
in Europe – is a non-profit making association of institutions and individuals with a common
interest in promoting good practice and sharing information in relation to the use of computers
in research and education in architecture and related professions. eCAADe was founded in 1983.
ISBN: 978-94-91207-05-1