DOCTORATE OF THE UNIVERSITÉ DE TOULOUSE
Awarded by
Presented and defended by
Anke M. Brock
on Wednesday, 27 November 2013
Title:
Interactive Maps for Visually Impaired People:
Design, Usability and Spatial Cognition
ED MITT : Image, Information, Hypermedia
Research unit:
Institut de Recherche en Informatique de Toulouse (IRIT) - UMR 5505
Thesis supervisors:
Delphine Picard - Professor, Aix Marseille University, Institut Universitaire de France
Christophe Jouffrais - CNRS Researcher, IRIT, CNRS & University of Toulouse 3, France
Reviewers (rapporteurs):
Susanne Boll - Professor, University of Oldenburg, Germany
Vincent Hayward - Professor, Université Pierre et Marie Curie, Paris, France
Other jury members:
Jean-Pierre Jessel - Professor, IRIT, University of Toulouse 3, France
Miguel Nacenta - Lecturer, University of St Andrews, Scotland
to Fabrice
Acknowledgments
About four years ago I decided to change my life, quit my job, and move to France
with the long harbored dream of doing a PhD. Writing this thesis made me realize how
much I have learnt since that time. Although doing a PhD is a challenging task, it is also
very fulfilling and I am very glad that I had the courage to dare this adventure. I would not
have been able to achieve this goal without the help of the many people who have been
involved, and in this section I wish to express my gratitude to them.
First of all, a successful thesis requires a fruitful collaboration between student and
supervisors. I feel very thankful that I have found supervisors who always cared about my
work and who were willing to spend time and energy on introducing me to science.
Above all, I would like to thank Delphine Picard and Christophe Jouffrais for guiding me
on this journey and for being supportive, both morally and in terms of my work. Many
thanks for your patience and effort in introducing me to cognitive science, for your
availability, and for your confidence in my work! Next, I am obliged to my “unofficial”
supervisors. I am grateful to Philippe Truillet for showing me his support from the very
first day we met during the student research project in the Master IHM. You have opened
up many opportunities for me, regarding my research as well as my teaching, and I
appreciate this very much. Thanks also to Bernard Oriola who introduced me to the world
without sight and who has been a very available and patient beta-tester. Thanks also for
your inimitable sense of humor. Furthermore, I am grateful to all of you for giving me the
space to bring in my own ideas.
This PhD would not have been possible without financial support. I feel very
grateful that I have been granted a “Bourse Ministerielle” (scholarship by the French
ministry) for doing this thesis. I am also indebted for the financial support that part of my
work received from the French National Research Agency (ANR) through the TecSan
program (project NAVIG ANR-08TECS-011) and from the Midi-Pyrénées region through
the APRRTT program. Moreover, I am grateful to Google for supporting part of my thesis
through the Anita Borg Memorial Scholarship EMEA 2012.
Next, I am obliged to my reviewers, Susanne Boll and Vincent Hayward, and the
examiners, Miguel Nacenta and Jean-Pierre Jessel, for immediately accepting the
invitation to participate in the jury of this thesis and for the helpful feedback. I really
appreciate your interest in my work and I hope that this will be the start of a scientific
collaboration.
This work would not have been possible without visually impaired users who
participated in the participatory design cycle. I would very much like to thank all of them
for their time and energy. For reasons of space and privacy I cannot name everyone,
but I’d like to specifically express my gratitude to Isabelle, Léona and Pierre who have
given me a lot of useful advice alongside the official meetings. Many thanks also to the
associations that helped us get in touch with visually impaired people, notably the Institut
des Jeunes Aveugles Toulouse (IJA), Association Valentin Haüy Toulouse, Christian Call at
Radio Occitania and Jean-Michel Ramos-Martins at the Médiathèque José Cabanis. In
particular, I am grateful to Claude Griet at the IJA for her dedication to helping us organize
meetings with visually impaired people over the past years.
I’d also like to thank all those people who contributed to the content of this thesis.
First of all the students that I have supervised during my thesis: Kévin Bergua, Jérémy
Bourdiol, Alexandre Buron, Charly Carrère, Sébastien Dijoux, Julie Ducasse, Alexis
Dunikowski, Hélène Jonin, Ronan Le Morvan, Antoine Lemaire, Céline Ménard, Alexis
Paoleschi and Sophien Razali (in alphabetical order). I am obliged to Mathieu Raynal for
not only letting me use the SVG Viewer for my prototype but also for adapting it to my
needs. Thanks to Stéphane Conversy for helpful advice on participatory design.
Moreover, I am grateful to Stéphane Chatty who let me use the Stantum multi-touch
display during several months.
I am indebted to all those people who helped me proofread my work. First of all
I’d like to thank my friend Laura for the many hours that she spent on the corrections and
feedback on this thesis writing as well as on prior papers. I would like to acknowledge
Kasey Cassells from Wise Words Editorial for proofreading chapter 3 of this thesis.
Thanks also to my friend Militza for editing one of my previous papers. Furthermore, I am
grateful to all those who granted me the right to use photographs and illustrations of their
work within this thesis: Carlo Alberto Avizzano, Christophe Bortolaso, Shaun Kane, Uwe
Laufs, Charles Lenay, Miguel Nacenta, Laurent Notarianni, Grégory Petit, Delphine
Picard, Martin Pielot, Thomas Pietrzak, Nicolas Roussel, Johannes Schöning, Matthieu
Tixier, Philippe Truillet, Jean-Luc Vinot, Gerhard Weber and Limin Zeng (in alphabetical
order). Thanks to Jean-Pierre Baritaud, Marc Macé and Louis-Pierre Bergé for helping me
with the technical organization of my thesis defense.
Working in the Elipse group has been a very enjoyable time. I am grateful to
Antonio Serpa for the technical help with the experiments. I would also like to thank
Olivier Gutierrez and Slim Kammoun with whom I founded the Elipse writing group which
has proved very helpful and motivating. I’d also like to thank Emmanuel Dubois for his
advice regarding my application as Teaching Assistant and my further career path.
Finally, I am obliged to all of my colleagues, whom for lack of space I can’t cite by name,
for integrating me into the group. Keep up the good ambiance!
Teaching was also an essential part of my thesis. I am indebted to all those
people who gave me a chance and who integrated me among the teaching staff of the FSI
(“Faculté des sciences et de l’ingénierie”). Many people have been involved and have
given me useful advice and even though I cannot cite all of them I am very grateful.
Above all, I’d like to thank Martin Strecker who has been my teaching mentor for the last
three years and who continues to be even though the official mentoring period is over. I
am very thankful for all the great advice and discussions about teaching, research and
living in France as a German. Furthermore, I am grateful to all those who contributed to
my application as Teaching Assistant and for the “qualification CNU 27” with their
recommendation letters.
This PhD was also the occasion for my first scientific collaborations. I’d like to
thank Tiago Guerreiro, Shaun Kane and Hugo Nicolau for co-organizing the Special
Interest Group NVI at the CHI conference 2013. I am also obliged to Samuel Lebaz for our
collaboration on the Kintouch project. Furthermore, I’d like to thank Mathieu Simonnet for
the ongoing collaboration on non-visual exploration techniques for interactive maps. I
would like to express my gratitude also to Anicia Peters, Andrea Peer and Susan Dray
who gave me the chance to participate in the Women’s Perspective in HCI Initiative at
CHI 2012. This project gave me the chance to meet very interesting people and led to the
organization of a workshop at CHI 2014.
My journey to the PhD started during the Master2 IHM. I would like to thank all of
its professors for introducing me to Human-Computer Interaction. I am especially grateful
to Yannick Jestin, the heart and soul of the master. Thanks also to my fellow students who
integrated me into their group and made me feel at home in Toulouse very quickly. In
particular, I am indebted to Julien Molas and Jean-Luc Vinot who worked with me on a
student research project. It is from there that my thesis project originated.
Thanks also to Martine Labruyère at the Ecole Doctorale MITT (Graduate school)
for her advice and dedication to the PhD students. Moreover, I would not want to forget
all the administrative and support staff who work hard behind the scenes at IRIT and
the university and without whom our work would not be possible.
Many thanks also to those people who piqued my interest in research. First, to
Peter Drehmann, my former physics teacher and headmaster, for introducing me to
science. Second, I am grateful to Maria Rimini-Döring who has unofficially mentored me
during my time at Bosch and who unofficially keeps doing so.
In 2012 my dancing chorus prepared a performance on interactive maps for
visually impaired people. I am very grateful to my dance professor Sylvie Pochat-
Cottilloux for giving me the chance to approach my thesis from an artistic point of view. I
am also very thankful that my fellow dancers were motivated to participate in this
experience.
Last but not least, I want to thank my family. First, my parents Angela and Werner
because you always believed in me and because it is from you that I learnt to aim high
and pursue my dreams. I am also grateful to you for accepting my need to move far away in order to
do the work I wanted to do. I hope that this thesis makes you as proud as it makes me!
Thanks also to my sister Heike. I have always looked up to you for the ease with
which you succeed in everything you try. Furthermore, I’d like to express my gratitude to
my parents-in-law, Nicole and Michel, for your understanding and support in these past
years, especially for the board and lodging during the thesis writing. Most importantly, I
am grateful to my fiancé Fabrice. Thank you so much for encouraging me to live up to my
dreams and for always believing in me when I didn’t believe in myself. Thanks for your
understanding when I needed to put work first and for supporting this thesis. Without you
I would not be where I am today and therefore I want to dedicate this thesis to you.
Table of Contents¹
ACKNOWLEDGMENTS ......................................................................................................................... 5
TABLE OF CONTENTS......................................................................................................................... 10
I INTRODUCTION .............................................................................................................................. 14
III DESIGNING INTERACTIVE MAPS WITH AND FOR VISUALLY IMPAIRED PEOPLE ........................... 108
IV USABILITY OF INTERACTIVE MAPS AND THEIR IMPACT ON SPATIAL COGNITION ........................ 160
V.1 OBSERVING VISUALLY IMPAIRED USERS' HAPTIC EXPLORATION STRATEGIES ................................. 200
V.2 ENHANCING MAPS THROUGH NON-VISUAL INTERACTION ........................................................................ 216
V.3 CONCLUSION .................................................................................................................................. 239
¹ An extended version is available at the end of the document.
VII APPENDIX .................................................................................................................................. 262
REFERENCES.................................................................................................................................... 323
I Introduction
I.1 Context
Worldwide, 285 million people are visually impaired (WHO, 2012). This
population faces important challenges related to orientation and mobility. Indeed, 56% of
visually impaired people in France declared having problems concerning mobility and
orientation (C2RP, 2005). These problems often mean that visually impaired people travel
less, which influences their personal and professional life and can lead to exclusion from
society (Passini & Proulx, 1988). Therefore this issue presents a social challenge as well
as an important research area.
Recent technological advances have enabled the design of interactive maps with
the aim of overcoming these limitations. Indeed, interactive maps have the potential to
provide a broad spectrum of the population with spatial knowledge, irrespective of age,
impairment, skill level, or other factors (Oviatt, 1997). This thesis aims at providing
answers and solutions concerning design and usability of interactive maps for visually
impaired people.
Interactive maps are accessible and pertinent tools for presenting spatial
knowledge to visually impaired people; they are more usable than raised-line paper
maps; and they can be further enhanced through advanced non-visual interaction.
What is the design space of interactive maps for visually impaired people?
And what is the most suitable design choice for interactive maps for
visually impaired people?
We reply to the first part of this question in chapter II. By design space we
mean the existing solutions for interactive maps for visually impaired people. We
performed an exhaustive review of the literature and analyzed the corpus regarding
non-visual interaction. In the appendix, we provide an additional analysis of the corpus
with regard to terminology, origin of the map projects, timeline, and map content and scale.
Based on this analysis we investigated the second part of the question in chapter
III. More specifically we defined the context as the acquisition of spatial knowledge in a
map-like (allocentric) format prior to traveling. We argue in favor of the design of an
interactive map based on a multi-touch screen with raised-line overlay and speech
output. We present the iterative design process of interactive map prototypes based on
this concept.
This question is investigated in chapter IV. As mentioned before, no study has
compared the usability of interactive maps with that of raised-line maps with braille.
Thus, so far we do not know whether interactive maps are better or worse solutions than
raised-line maps. We reply to this question through an experimental study with 24 blind
participants who compared an interactive map prototype with a classical raised-line map.
We specifically address usability in the context of acquiring spatial knowledge of an
unknown area prior to travelling. Spatial knowledge was part of usability, as effectiveness
was measured as spatial cognition scores that resulted from map exploration. We believe
that it is interesting to take a separate look at spatial cognition, as it is a very rich construct. For
instance, it is interesting to study the different types of spatial knowledge (landmark,
route and survey), as well as short- and long-term memorization.
I.3 Methodology
The methodology of this thesis has been divided into several steps (see Figure I.1).
First we performed an exhaustive study of the literature on background knowledge,
including visual impairment, spatial cognition and (tactile) maps. Furthermore, we
proposed a classification of existing interactive maps. Based on this, we have developed
interactive map prototypes for visually impaired people following a participatory design
process. In this thesis we present how we adapted the process of participatory design to
make it more accessible. Based on the proposition by Mackay (2003), we applied a
participatory design process in four phases: analysis, generation of ideas, prototyping,
and evaluation. The analysis phase allowed us to identify users’ needs as well as technical
constraints. In the second phase, the generation of ideas, we proposed brainstorming and
“Wizard of Oz” techniques for stimulating creativity. In the third phase, prototyping, we
developed prototypes through an iterative process. In the fourth phase, evaluation, we
compared usability of an interactive map prototype with a classical raised-line map. The
outcome of this design cycle has led to preliminary studies on the design of advanced
non-visual interaction with the purpose of further enhancing the interactive maps.
Figure I.1: Methodology of this thesis. The development of interactive map prototypes has
been based on a participatory design cycle adapted for visually impaired users. Several
micro and macro iterations have been conducted.
Chapter II details the background of this thesis. As the research field of interactive
maps for visually impaired people is multidisciplinary, this chapter includes knowledge
from psychology, computer science and human-computer interaction. More precisely we
investigate human spatial cognition. We also look at the impact of visual impairment on
spatial cognition, as well as the role of the other sensory modalities. Then, we present
maps as tools for spatial cognition, and the different types of knowledge related to tactile
maps. Furthermore, we study existing interactive map prototypes and specifically non-
visual interaction in these prototypes. By doing so we intend to reply to Research
Question 1 and open up the design space for accessible interactive maps.
Chapter VI summarizes the work presented in this thesis. We describe how each
research question was answered and reflect on the thesis statement. The chapter also
presents a future work section proposing new avenues for research.
Part of the research work presented in this thesis was presented at peer-reviewed
workshops and published in peer-reviewed conferences and journals. The full list of
publications is available in Appendix VII.1.
II Theoretical Background
The research area of interactive maps for visually impaired people is a
multidisciplinary field. It includes psychology through the study of spatial cognition,
haptic exploration strategies and the impact of visual impairment on these aspects. It
includes computer science, and more precisely human-computer interaction, through the
study of interactive map prototypes and non-visual interaction.
This chapter details the theoretical context of our research in these different fields.
First, we introduce the context of visual impairment, its definition, distribution and
importance for society. We also detail the perception of the different human senses. In a
second step, we present spatial cognition and the impact of visual impairment on spatial
cognition. Then, we detail maps as a tool for spatial cognition. In this section we present
how tactile maps for visually impaired people are designed and produced. We also study
how visually impaired people explore and memorize accessible maps. Furthermore we
introduce a classification of interactive maps for visually impaired people. More
precisely, we investigate terminology, origin of the projects, timeline, map content and
user studies of the different projects. We also present a detailed section on non-visual
interaction technologies, in which we present multimodal interaction, as well as audio,
tactile and gestural interaction techniques that have been used in different accessible
map projects. This classification answers Research Question 1 (What is the design
space for interactive maps for visually impaired people?) as it presents various design
solutions.
² http://www.euroblind.org/resources/information/nr/215 [last accessed August 16th 2013]
³ http://www.faf.asso.fr/article/la-cecite-en-france [last accessed July 16th 2013]
Table II.1: Classification of visual impairment based on visual acuity, as defined by the
WHO (2010).
In the United States, the term “legal blindness” is defined as having a visual
acuity of 20/200 or less, or having a visual field of 20 degrees or less (Lévesque, 2005). It
is therefore possible that people who are referred to as blind still have light perception,
i.e. they can differentiate between a bright and a dark environment.
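To make this notation concrete (a standard reading of the Snellen fraction, added here for clarity rather than taken from the cited sources), the fraction can be rewritten as a decimal acuity:

\text{decimal acuity} = \frac{\text{test distance}}{\text{distance at which a normally sighted eye reads the same line}}, \qquad \frac{20\ \text{ft}}{200\ \text{ft}} = 0.1

That is, a person at the legal-blindness threshold must stand at 20 feet to read what a normally sighted person can read from 200 feet away.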
Another influential factor is the cause of the impairment, which can be genetic,
related to an illness, or accidental. The cause in turn can have an impact on the affective
reaction to the impairment. If the impairment appears suddenly, it is likely to lead to
depression (Thinus-Blanc & Gaunet, 1997). Congenital blindness is also more likely to
lead to autism or stereotyped behaviors. In the appendix (VII.2) we present a glossary of
different eye diseases.
orientation (see Figure II.1). In a report on visually impaired people in France, 56% of
them declared having problems concerning mobility and orientation (C2RP, 2005). These
problems often mean that visually impaired people travel less, which influences their
personal and professional life and can lead to exclusion from society (Passini & Proulx,
1988). Therefore this issue presents a social challenge as well as an important research
area.
Figure II.1: Challenges for a visually impaired traveler in daily life. Displays of grocery
stores and road works are blocking the way. Reprinted from (Brock, 2010a).
For some tasks, vision is not the most adapted sense. An example is sitting in a stationary train
while an adjacent train starts to move off. Our visual sense tricks us into feeling that our
train is moving. Kinesthetic information will tell us that we are static (Ungar, 2000). Most
activities are based on the simultaneous and interactive participation of different senses.
Exchanges between people and their environment are multimodal and must be
integrated through complex processes (Hatwell, 2003).
To better understand this, we will have a separate look at the three senses that are
most commonly employed in interactive systems: vision, audition and touch. As the
senses of smell and taste are so far very rarely employed in assistive technology, we
will exclude them from this presentation.
II.1.3.1 Vision
The sensors for the sense of vision are the eyes. Vision works without direct
contact. Vision allows the simultaneous viewing of a large spatial field. The point of
foveation itself is quite limited, yet other objects are still present in peripheral vision. It is
possible to perceive nearby and far objects (Ungar, 2000). Vision is the only sense that
allows this simultaneous perception and thus gives a global view of an environment. Each
eye perceives a two-dimensional image. It is possible to recreate a 3D image and thus
gain information on distances and depth by integrating information on vergence and
disparity from both eyes, but also on occlusion, shading, light, gradients of color, texture
and size (Hinton, 1993).
Vision excels in different fields. Among them are perception of space, landmarks,
directions, distances, orientation and speed of moving objects. Vision has the greatest
spatial resolution of all senses and is best adapted for coordinating movements in space
(Cattaneo & Vecchi, 2011) and for avoiding obstacles (Hatwell, 2003). Finally, it is also
well adapted for recognizing shape and material of objects (Thinus-Blanc & Gaunet,
1997).
On the downside, vision has poor alerting functions as compared to other senses
(Cattaneo & Vecchi, 2011). Despite its strength in many areas, vision can be tricked. In
addition to the moving train illusion cited above, other illusions that concern the
perception of size, color or symmetry are well known. Equivalence between visual and
haptic illusions has been demonstrated (Gentaz, Baud-Bovy, & Luyat, 2008).
II.1.3.2 Audition
The sensors for audition are the ears. Sound is transmitted as a wave. Its
frequency—measured in Hertz—determines the pitch; its intensity—measured in dB—
determines the loudness (Cattaneo & Vecchi, 2011). As for vision, no direct contact is needed for perception.
Audition is best adapted for the perception of temporal stimuli such as length,
rhythm and speech (Hatwell, 2003). Indeed, speech is a powerful means of communication
that has similar properties to written text (Blattner, Sumikawa, & Greenberg, 1989).
Audition is also well adapted for perceiving distances, as the perceived loudness
decreases with increasing distance. The sound source can be localized using the three
values azimuth, elevation and range (Cattaneo & Vecchi, 2011). The movement of objects
can be recognized through Doppler effects, i.e. changes of frequency (Nasir & Roberts,
2007). Furthermore, objects can be identified based on the specific sound they emit
(Gaunet & Briffault, 2005). Finally, audition has good alerting functions as we
automatically react to unexpected sound (Cattaneo & Vecchi, 2011).
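Two of these cues can be made quantitative. The following relations are standard acoustics approximations added here for illustration; they are not taken from the cited works. In the free field, the sound pressure level drops by about 6 dB per doubling of the distance to the source, and a source moving towards a stationary listener raises the perceived frequency:

L_2 = L_1 - 20 \log_{10}\left(\frac{r_2}{r_1}\right) \qquad\qquad f_{\text{perceived}} = f_{\text{emitted}} \, \frac{c}{c - v_{\text{source}}}, \quad c \approx 343\ \text{m/s in air}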
Cutaneous perception concerns stimulation of the skin of any part of the body
that remains motionless. This is perceived by mechanoreceptors and thermoreceptors
in the skin (Lederman & Klatzky, 2009). Perceptual processing only concerns information
related to the stimuli applied to the skin without exploratory movements. The sensors for
touch are spread over the whole body (Hatwell, 2003). The accuracy of the detection of
the tactile stimulus is higher with greater density of receptors and a reduced size of the
receptive field (Cattaneo & Vecchi, 2011). For this reason, the accuracy of tactile
perception in the fingertips is superior to other body parts.
During tactile perception, direct contact with the perceived environment needs to
be established and perception is limited to the contact zone (Hatwell, 2003). However
objects, such as a white cane, can be used as an extension of one's own body (Gaunet &
Briffault, 2005). Touch perception differs from vision in that there is no peripheral
perception. Besides, in contrast to vision, touch perception is sequential, i.e. we need to
explore an object part by part. In contrast to audition, touch is segmented and the order
of perception is not fixed, i.e. we can decide in which order we explore an object but not
in what order we listen to the notes of a musical piece. Skin sensations cease as soon as
the physical contact ends. The objects not currently examined must be maintained in
memory and no cues are available to draw attention in any particular direction (Ungar,
2000). This implies a high load on working memory, as the person needs to make a
mental effort to integrate a unified representation of an object or environment (Hatwell,
2003). Therefore Hinton (1993) argues that every tactile perception has two components,
a sensory and a memory component.
Lederman and Klatzky (2009) differentiate “what” and “where” systems. The
“What” system deals with perceptual (and memory) functions. It serves the recognition of
objects and is especially used for identification of physical and geometrical properties.
Touch excels in the perception of physical properties of objects, such as shape, texture
(roughness, stickiness, slipperiness or friction), size (measured for instance as total area,
volume, perimeter), compliance (i.e. deformability under force), temperature or weight
(Hatwell, 2003). Familiar objects are thus recognized very quickly. Different exploratory
movements are applied for exploring different characteristics (Lederman & Klatzky, 2009).
For instance, texture is explored via a lateral movement, hardness via a pressure
movement, and global shape via contour following. Picard et al. (2003) demonstrated that
participants were able to associate tactile descriptors with different fabrics experienced
by touch. The “Where” system, on the other hand, provides a description of the spatial
relation of objects in the world. It can identify direction, localization and distances.
Localization can be done with respect to either an egocentric reference frame (distances
and directions are specified relative to one’s own body), or to an allocentric (i.e.,
external) reference frame. Evidence for a “what/where” distinction for the
somatosensory system is derived from fMRI as well as behavioral studies (Lederman &
Klatzky, 2009). Furthermore human tactile perception is considered to have high
temporal acuity. Humans are capable of distinguishing successive pulses with a time gap
as small as 5 ms, whereas it is 25 ms for vision (Choi & Kuchenbecker, 2013). Only
audition is more precise with 0.01 ms.
On the downside, spatial discrimination with the finger is less accurate than with
the eye. Different tests exist for measuring thresholds of tactile spatial acuity (Lederman
& Klatzky, 2009). The results vary between 1 and 4 mm for the fingertip. Tactile spatial
acuity varies significantly across the body surface, with the highest acuity on the
fingertips and lowest on the back. Moreover, as for the visual sense there are also
illusions for the sense of touch. In the radial-tangential illusion, radial lines (lines going
towards the body or from the body outwards) are overestimated compared to tangential
lines (side to side) of the same length. In addition, according to Gentaz et al. (2008) it is
more difficult for humans to understand the characteristics of an oblique line as
compared to a horizontal or vertical line. They argued that people use a vertical and
horizontal frame of reference. Encoding oblique lines thus requires more computation.
Besides, curved lines are perceived as shorter than straight lines (Cattaneo & Vecchi,
2011). Finally, systematic distortions are possible because parts of the body are moving
(e.g. the position of the hand in relation to the body), and because a force is exerted to
pursue an exploration task (Golledge et al., 2005).
answer to this question. Their study showed that congenitally blind people who gained
sight after eye surgery did not immediately succeed in visually recognizing shapes that
they had explored by touch before. However this ability was acquired after a very short
time which suggests a coupling between the representations of both modalities based on
experience. As argued by Cattaneo and Vecchi (2011), vision does not only require
functional eyes and optic nerves, but also functioning brain structures in order to create
mental representations. It was originally assumed that visually impaired people
were incapable of creating mental representations (deficiency theory, see Fletcher,
1980). Today, it is known that mental representations can be created without sight but that
these representations differ from those developed by sighted individuals (difference or
inefficiency theory).
It is interesting to take a closer look at the differences for each of the senses.
Concerning audition, studies have revealed both advantages and disadvantages for blind
people and sometimes results are contradictory. However, there is a general tendency
that especially congenitally blind people outperform sighted people on different auditory
tasks (Cattaneo & Vecchi, 2011). This advantage could even be confirmed in studies when
the prior musical experience of sighted and visually impaired participants was
controlled. Blind people could perceive ultra-fast synthetic speech and pseudo-speech
that sighted people cannot understand. They also showed an improved
memory for auditory and verbal information as well as an improved serial memory. This
improvement of memory appears to depend on an improved stimulus processing and
encoding. Concerning the localization of auditory cues, two theories exist (Cattaneo &
Vecchi, 2011). The deficit theory argues that visual calibration is needed for auditory
localization to work properly. The compensation theory argues that multimodal
mechanisms compensate for the vision loss. Results of different studies are contradictory
and depend on the presentation of the stimulus and on the method of pointing to the
stimulus (Thinus-Blanc & Gaunet, 1997). For instance, Macé, Dramas, and Jouffrais (2012)
observed that performance of blind users did not differ in accuracy from sighted users’
performance in an auditory object localization task. Cattaneo and Vecchi (2011) argue
that localization of auditory cues is possible without vision even if it may be
impoverished. They propose that blind people are able to calibrate localization through
their own body movements like turning the head or moving towards another position.
Finally, early blind people can develop echolocation, which demands a subtle
processing of sound (Thinus-Blanc & Gaunet, 1997). Echolocation is based on sounds
created by the listener, for instance clicking noises with the tongue, which are reflected
from objects (Stockman, 2010). This phenomenon can even serve as an additional sense
of orientation.
Tactile capacities can be evaluated with haptic test batteries (see for instance
Mazella, Picard, & Albaret, 2013). Findings of different studies on tactile capacities are
again not consistent and depend on inter-subject variability and task type. It is also
important to say that different results may be observed depending on how important a
test task is in the daily life of a visually impaired person. Moreover, congenitally blind
participants have proved to be superior to sighted people in temporal-order judgments
(Cattaneo & Vecchi, 2011). It appears that tactile acuity is retained with age in blind
people, but not in sighted people. Studies have also revealed superior tactile acuity for blind
people, independent of braille reading skills, previous visual experience or remaining
light perception. However, sighted people were able to gain tactile acuity after training
(Cattaneo & Vecchi, 2011). The precise relation between visual deprivation, training and
tactile acuity still needs further investigation.
locations and attributes of phenomena in his everyday spatial environment (Downs &
Stea, 1973). Research suggests that cognitive maps are not organized like a cartographic
map in the head (Downs & Stea, 1973; Montello, 2001; Siegel & White, 1975). Cognitive
maps are not unitary integrated representations. They are often fragmented, augmented,
schematized, or distorted. Sometimes they are composed of several separate pieces that
are interlaced, linked or hierarchically related (e.g., France as a smaller entity is part of
Europe as a larger entity). Cognitive maps are part of our daily lives as they are the basis
for all spatial behavior (Downs & Stea, 1973). Primary functions of spatial representations
are facilitation of location and movement in (large-scale) environments, prevention from
getting lost, finding one’s way and providing frames of reference for understanding
spatial information (Siegel & White, 1975). The process of transformation of information
from absolute space to relative space demands operations such as change in scale,
rotation of perspectives, abstraction, and symbolization. The necessary information to
create such a cognitive map is location information (where the object is) and attributive
information (what kind of object it is) (Downs & Stea, 1973). Spatial properties of objects
include size, distance, direction, separation and connection, shape, pattern, and
movement (Montello, 2001).
maintaining devices. It has been shown that they play an important role also for map
reading (Roger, Bonnardel, & Le Bigot, 2011). Landmark knowledge is predominantly of
visual nature for human adults (Siegel & White, 1975). Second, routes are defined as an
ordered sequence of landmarks and represent familiar lines of travel. There are many
possibilities to connect landmarks. Route selection typically accommodates time
constraints, overall distance covered, and ease of access (Magliano et al., 1995). People
must at least identify locations at which they must change direction and take action.
Optionally, they possess knowledge about distances, angles of turn, terrain features or
memories of traversed routes (Thorndyke & Hayes-Roth, 1982). This knowledge type is
predominantly of sensorimotor nature (Siegel & White, 1975). Finally, survey knowledge
(also called configurational knowledge) is a multidimensional representation of spatial
relations involving a set of distinctive environmental features (Magliano et al., 1995). It is
comparable to a map based on gestalt elements. It includes topographic properties of an
environment; location and distance of landmarks relative to each other or to a fixed
coordinate system (Thorndyke & Hayes-Roth, 1982). Survey knowledge is considered
more flexible than route knowledge. When based on route knowledge, travelers are
restricted to the routes they have previously memorized. On the contrary, survey
knowledge provides a global representation of an area and allows flexible travel
(Jacobson, 1996). The type of exposure to spatial knowledge influences what type of
knowledge a person acquires. Thorndyke and Hayes-Roth observed that route
knowledge was normally derived from direct navigation, whereas map exploration led to
the acquisition of survey knowledge. Although it is possible to acquire survey knowledge
from direct experience, it can be obtained more quickly and with less effort from map
reading. Magliano et al. observed that subjects remembered different types of map
knowledge (route or survey knowledge) depending on the instruction given before
exploration of space. They also observed that landmark knowledge was always obtained
first as a basis for the two other knowledge types.
come up with alternative routes. Egocentric representations are updated as the person
moves in space, whereas allocentric representations are not (Cattaneo & Vecchi, 2011).
Both representations coexist in parallel and are combined to support spatial behavior.
Many studies have observed differences of spatial cognition between sighted and
visually impaired people. We will investigate this difference in detail in the following
subsection.
participants perform as well as the sighted. Gaunet and Briffault (2005) underlined a high
variability in spatial skills among the group of visually impaired people. It is still difficult
to validate either the inefficiency or the difference theories, as empirical evidence exists
for both theories (Ungar, 2000). Recent studies favor the difference theory. For instance,
Thinus-Blanc and Gaunet (1997) observed different behavioral strategies. Cattaneo and
Vecchi (2011) proposed that visually impaired people can acquire spatial skills through
compensatory task-related strategies and training. Cornoldi, Fastame, and Vecchi (2003)
argued that congenital absence of visual perception does not prevent the processing of
mental images. However, these mental images are expected to be organized differently.
In a recent study the cortical network of blind and sighted participants during navigation
has been observed with functional magnetic resonance imaging (fMRI). Results suggest
that blind people use the same cortical network— parahippocampus and visual cortex—
for spatial navigation tasks as sighted subjects (Kupers, Chebat, Madsen, Paulson, & Ptito,
2010).
decisions than sighted people (1988). Besides, landmarks used by the sighted may be
irrelevant for visually impaired people (Gaunet & Briffault, 2005).
Quantitative advantages for vision over other senses (precision, greater amount of
available information as described above) appear to introduce qualitative differences in
encoding spatial information (Thinus-Blanc & Gaunet, 1997). In this respect, early-blind
and late-blind people are often regarded separately (see II.1.2.2). Indeed, many studies show
that congenitally blind and early-blind people are more impaired when performing spatial
cognition tasks than late blind and sighted people. This suggests that vision plays an
important role in setting up a spatial-processing mechanism. However, once the
mechanism is established—as is the case for late blind—it can process information from
other modalities (Thinus-Blanc & Gaunet, 1997). Visually impaired people perceive space
sequentially from a body-centered perspective and have difficulties in processing
information simultaneously (Cornoldi et al., 2003; Thinus-Blanc & Gaunet, 1997).
Accordingly, several studies have demonstrated that congenitally and early blind people
tend to code spatial relations in small-scale space in an egocentric rather than allocentric
representation (Ungar, 2000). This perspective even persists after a time delay, whereas
for sighted people a switch from egocentric to allocentric representations happens
(Cattaneo & Vecchi, 2011). In relation to this, cardinal directions and Euclidean distances
can pose difficulties (Gaunet & Briffault, 2005). Yet, congenitally and early-blind
participants do have the potential to use allocentric coding strategies, but these
strategies are more cognitively demanding (Cattaneo & Vecchi, 2011; Thinus-Blanc &
Gaunet, 1997; Ungar, 2000). Most studies agree that early-blind perform as well as late-
blind and sighted people in tasks that involve spatial memory, but less well in tasks
involving spatial inference and mental rotations (Cattaneo & Vecchi, 2011; Thinus-Blanc &
Gaunet, 1997). Early-blind have demonstrated problems updating spatial information
when the perspective changes. This might result from different brain structures, from less
experience with mental rotations—and consequently a lack of adapted strategies—or
from the predominant egocentric representation (Cornoldi et al., 2003; Ungar, 2000).
However, findings are inconsistent, depending on the degree of rotation, complexity of
the environment and individual strategies (Cattaneo & Vecchi, 2011). Cornoldi et al.
compared spatial cognition with or without blindness for configurations of different sizes,
with different length of pathways and in 2D and 3D conditions. Their results demonstrate
that blind and sighted people can generate and process a sequence of positions. However, they
experience more problems in 3D environments and when the length of the pathway is
increased. The authors also observed that the performance of blind participants could be
improved by using experimental material that was more familiar to them.
This leads to the second possible source of discrepancy: the blind represent a
very heterogeneous group with a variety of inter-individual differences, such as age at
onset and duration of blindness, etiology, mobility skills, braille reading skills and use of
assistive technology (Cattaneo & Vecchi, 2011). It is extremely difficult to evaluate and
control the influence of factors related to the subjects’ personal histories (Thinus-Blanc &
Gaunet, 1997). As discussed before, age at onset of blindness seems an important factor.
For instance, Fletcher (1981a) observed that late blind children performed better than
early blind children. However, different studies use different classifications of early blindness
(onset either before age 1 or before age 3), making it difficult to compare results. Congenital
blindness may lead to delayed development of sensorimotor coordination (Thinus-Blanc
& Gaunet, 1997). The etiology of blindness is another important factor. To this end,
Fletcher (1981a) observed lower spatial performance in subjects affected by
retinopathy of prematurity (ROP, see VII.2.11). ROP occurs mostly due to excessive
oxygen given to prematurely born babies. Consequently, Fletcher suggested that the
observed spatial impairment might be caused by overprotective parents who limit their
children’s spatial exploration. Depending on etiology and age at onset of blindness,
psychological reactions to the onset may include depression, autism and
stereotyped behavior patterns (Thinus-Blanc & Gaunet, 1997). Another important factor
concerns differences in locomotion training. Inconsistencies between studies might result from
the fact that people who lost their sight more recently have had access to better training (Cattaneo & Vecchi, 2011).
Fletcher (1981a) also observed a positive influence of intelligence—defined as
brightness, thinking skill and reasoning ability—on spatial results. It can be assumed that
today, in contrast to earlier generations and at least in Western societies, visually
impaired people have better access to education. Finally, socio-cultural factors also
impact results (Cattaneo & Vecchi, 2011).
Bosco et al. (2004) developed a battery of spatial tests. For evaluating different
aspects of spatial orientation, they combined active and passive tests as well as
simultaneous and sequential tests. Landmark tests were based on passive and
simultaneous processing. Survey tasks combined all four types of processes. Route tests
required essentially passive and sequential processing, but the need to update
information also required active processing.
However, many spatial orientation tests are largely based on the visual sense—for
drawing or recognition of scenes—and are therefore not adapted for use with visually
impaired subjects. It is indeed difficult to design memory tasks that are adapted to
measure spatial memory of the blind (Cornoldi et al., 2003). Miao and Weber (2012)
proposed an evaluation method for cognitive maps of blind people. They let users model
a labyrinth layout after exploring it. Then, they analyzed the model based
on four main criteria: number of elements, property of streets, arrangement of streets and
number of errors. They also weighted the criteria according to the importance that the
visually impaired users had attributed to them. However, this method was not adapted for
more complex environments as it did not include knowledge on points of interest, curved
streets and crossings.
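To illustrate how such a weighted evaluation could be computed, the following sketch aggregates the four criteria named above into a single score. The weights, the scoring of each criterion in [0, 1] and the function itself are hypothetical illustrations; they are not taken from Miao and Weber (2012).

# Hypothetical sketch of a weighted cognitive-map score in the spirit of
# Miao and Weber (2012). Each criterion is scored in [0, 1]; the weights
# stand in for the importance ratings given by visually impaired users.
CRITERIA_WEIGHTS = {
    "number of elements": 0.3,       # placeholder weights, not from the study
    "property of streets": 0.2,
    "arrangement of streets": 0.3,
    "number of errors": 0.2,
}

def cognitive_map_score(criterion_scores):
    """Weighted average of the criterion scores, in [0, 1]."""
    total_weight = sum(CRITERIA_WEIGHTS.values())
    weighted = sum(CRITERIA_WEIGHTS[name] * criterion_scores.get(name, 0.0)
                   for name in CRITERIA_WEIGHTS)
    return weighted / total_weight

# Example: a reconstructed labyrinth model with most elements present but
# several errors (here, a higher "number of errors" score means fewer errors).
print(cognitive_map_score({
    "number of elements": 0.9,
    "property of streets": 0.7,
    "arrangement of streets": 0.8,
    "number of errors": 0.5,
}))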
Of the three types of spatial knowledge (landmark, route and survey), Kitchin and Jacobson
only proposed tests for route and survey knowledge. Bosco et al. introduced two sets of landmark
questions: landmark recognition and landmark surrounding recognition. Both were
based on identifying the correct picture within a selection of pictures and were thus
purely visual.
Figure II.2: Diagram of methods for evaluating route knowledge of visually impaired people
as proposed by Kitchin and Jacobson (1997).
Figure II.3: Diagram of methods for evaluating survey knowledge of visually impaired
people as proposed by Kitchin and Jacobson (1997).
A different kind of test is used for self-evaluation of spatial knowledge. The “Santa
Barbara Sense Of Direction Scale” (SBSOD, Hegarty, Richardson, Montello, Lovelace, &
Subbiah, 2002) proved to be internally consistent and had good test–retest reliability. It
consists of 15 questions related to orientation and traveling. Examples are “I very easily
get lost in a new city” or “I have trouble understanding directions”. About half of the
questions are formulated positively and the other half negatively. The SBSOD has been used in
⁴ This method has been classified as a route-based technique by Kitchin and Jacobson (1997). In
contrast, Bosco et al. (2004) classified it as a configurational method. We follow the latter, as we
believe that Euclidean distance, in contrast to functional distance, is part of configurational
knowledge.
previous studies (see for instance Ishikawa et al., 2008; Pielot & Boll, 2010). An
alternative is the “Everyday Spatial Questionnaire” (Skelton, Bukach, Laurance, Thomas,
& Jacobs, 2000). It consists of 13 questions that are very similar to those from the SBSOD.
Examples are “Do you get lost when you go into large buildings for the first time?” or “Do
you stop and ask for directions when you are on your way to some place?” In comparison
with the SBSOD, more of its questions investigate whether the respondent gets lost in an
unknown place. The SBSOD seems to cover a wider variety of content.
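A minimal sketch of how such a self-report scale is typically aggregated is given below: the 15 item ratings are averaged after reverse-coding the items whose agreement indicates a poor sense of direction, so that a higher total always means a better self-reported sense of direction. The 7-point response format and the particular set of reverse-coded item indices are assumptions made for illustration, not the published SBSOD scoring key.

# Minimal sketch of scoring a sense-of-direction questionnaire such as the SBSOD.
# Assumptions (for illustration only): responses are on a 1-7 Likert scale, and
# the items listed in REVERSE_CODED are those whose agreement indicates a poor
# sense of direction (e.g. "I very easily get lost in a new city"), so they are
# reverse-coded before averaging. This is not the published scoring key.
REVERSE_CODED = {0, 2, 4, 7, 9, 11, 13}   # placeholder item indices
SCALE_MAX = 7

def sense_of_direction_score(responses):
    """Mean item score after reverse-coding; higher = better self-reported sense of direction."""
    adjusted = [(SCALE_MAX + 1 - r) if i in REVERSE_CODED else r
                for i, r in enumerate(responses)]
    return sum(adjusted) / len(adjusted)

# Example with 15 responses on the 1-7 scale.
print(sense_of_direction_score([4, 5, 3, 6, 2, 7, 4, 5, 3, 6, 4, 5, 2, 6, 4]))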
Electronic Travel Aids (also called Electronic Mobility Aids) are normally used
during traveling. They can further be distinguished into devices “sensing immediate
environment” —such as electronic white canes—and “navigation” devices—such as GPS
systems (Loomis et al., 2007). As our focus is on acquisition of spatial knowledge prior to
traveling, we will not further detail Electronic Travel Aids in this thesis. Recent reviews
were presented by Kammoun (2013) and Roentgen, Gelderblom, Soede, and de Witte
(2008).
Verbal descriptions can be helpful for the acquisition of mental maps as well as
during travel. In general, verbal descriptions are based on natural language. They
require an automatic system or a guide who has more spatial knowledge of the
environment than the guided person. Information indicated by language is either
qualitative, or quantitative but imprecise (Montello, 2001). Statements about pathway
connections and approximate location are more important than precise distance or
direction estimation. For example, most people would have more problems interpreting
“turn left after 400 m south east direction” than “turn left at the next crossing”.
Concretely, Roger et al. (2011) studied the role of landmark information in speech-based
over-the-phone guidance systems for sighted people. They observed that instructions
containing landmarks enabled more efficient wayfinding, less error-prone direction
estimation and higher navigation performance. Besides, they were preferred by the
participants. Very few studies investigated form, content, and functional information of
route descriptions for blind people. Gaunet and Briffault (2005) proposed functional
specifications for verbally guiding blind pedestrians in unfamiliar areas. They identified
which information and instructions should be given and at which concrete location.
Generally, it is a disadvantage of verbal guidance that the meaning of the verbal
descriptions depends on the speaker as a person, the speaker’s location, the
environmental context, the previous topic of conversation, etc. (Montello, 2001). For
instance, Roger et al. (2009) observed in a study with sighted people that a person
verbally guiding another adapted their instructions to the guided person. Guides provided more
landmark information when guiding a person without prior knowledge of the
environment. Besides, depending on the guide’s spatial abilities the landmark
information was either provided in an egocentric or allocentric reference frame.
Decoding these verbal messages is possible only with highly symbolized intellectual
development (Siegel & White, 1975).
experience. However, the children had not had any training on map exploration prior to
the experiment. A disadvantage of small-scale models is that they are costly to
produce and cumbersome to transport.
Figure II.4: Small-scale model for visually impaired people as presented by (Picard & Pry,
2009). Reprinted with permission.
II.3.1.1 Maps
Maps are representations of the environment that communicate information
(Lloyd, 2000). They are two-dimensional, projective and small-scale (Hatwell & Martinez-
Sarrochi, 2003). Maps are representations of space which are in themselves spatial. The
environment is presented from an allocentric survey perspective. Maps differ from
photographs in that a cartographer has arranged spatial information for communication
(Lloyd, 2000). A map is thus a device for storing spatial information by a cartographer
and a source of knowledge for the map reader.
Scale refers to the actual spatial extent that is represented in a map. Maps can
have different scales, ranging for instance from a room to a representation of the entire
globe (Hatwell & Martinez-Sarrochi, 2003). Another measure is granularity which
characterizes the amount of detail that is offered (Klippel, Hirtle, & Davies, 2010).
Maps can have different purposes. Orientation and mobility maps offer the
possibility of exploring unknown areas, getting an overview of the surroundings of a
landmark, localizing specific landmarks or preparing travel (Heuten, Wichmann, & Boll,
2006). The location of a particular place can be determined in absolute terms—latitude,
longitude—or in relative terms—in comparison to a reference point (Lloyd, 2000).
Besides, maps allow the estimation of distances and directions (Hatwell & Martinez-
Sarrochi, 2003). Thus, map readers can connect objects and structures in their cognitive
maps with objects and structures in the real world (Klippel et al., 2010). Edman (1992)
further differentiated maps that are used for teaching geography, topological maps
(maps based on few selected features, without necessarily respecting distance, scale and
orientation) and thematic maps (maps giving qualitative or quantitative information on a
specific topic). Choropleth maps are specific thematic maps in which the shading of areas
indicates the measurement of a specific variable (Zhao, Plaisant, Shneiderman, & Lazar,
2008).
One specific type of map is the you-are-here (YAH) map. YAH maps are reference
maps showing features in the environment of the map reader (Montello, 2010). They
normally include a symbol representing the location and possibly the orientation of a
person viewing the map. YAH maps are always in situ, i.e. they represent the location in
which they are placed. These maps are intended to solve wayfinding problems for a person in a
navigation situation.
Maps for visually impaired people are traditionally tactile—i.e., raised-line maps
(see II.3.3). On a tactile map, information is presented through relief—raised lines—with
the help of different lines, symbols and textures. Braille is used to add textual information.
Tactile maps have proved to be useful tools for presenting survey knowledge. In this
regard, tactile maps provide at least two important benefits (Ungar, 2000): in the short
term they introduce a person to a particular space; in the long term they improve spatial
cognition. Accordingly, they have been used both as wayfinding aids and as mobility
learning aids in education (Jacobson, 1996). We will investigate this in more detail in a
later section (II.3.2.2).
By learning the patterns on the map, spatial information represented on the map
becomes spatial knowledge (Lloyd, 2000). In order to access the information from the
map, map readers must possess the necessary perceptual and cognitive skills that allow
access to embedded symbolic codes (Hatwell & Martinez-Sarrochi, 2003). On the
perceptual level, they must first discriminate map elements and comprehend the general
spatial structure. On the cognitive level, readers encounter several challenges. First, they
must operate a transformation from 2D to the larger 3D space. Second, map readers have
to project themselves into the map, which includes finding one’s position and rotating
the map if necessary. Third, directions, distances and landmarks have to be extracted
and held in memory. The cognitive mapper thus simplifies the map information, which is
already a simplified representation of the environment (Lloyd, 2000). The set of
operations for encoding and decoding of the map is called the signature (Downs & Stea,
1973). Concretely, Downs and Stea define three operations as part of the signature:
rotation of viewpoints, scale change and abstraction to a set of symbols.
Map reading is especially difficult in cases where rotation needs to be applied.
This happens if the map and the heading of the reader are misaligned, i.e. not facing in
the same direction (Thinus-Blanc & Gaunet, 1997). Conventionally, (YAH) maps are
considered aligned when the “up” direction on a vertically-displayed map or the forward
direction on a horizontally-displayed map corresponds to the direction in which the map
reader faces in the environment (Montello, 2010). (Mental) rotation of maps can result in
orientation errors. The problem arises because spatial perception is “orientation-
dependent” (Montello, 2001). The cost in time, errors and stress resulting from the
misalignment is termed the “alignment effect”. Maps can be misaligned by any
angle, but not all degrees of misalignment produce an equal cost. Also, it appears that
performance can be improved through practice. Concretely, Montello (2010) defines a
four-step strategy that map readers apply in order to resolve misalignment. First, they
must be aware of the misalignment. Second, they must determine the degree of
misalignment. Third, they must determine an approach to either mentally or physically
transform the orientation of the map or their own orientation. Fourth, they need to carry
out this approach successfully.
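As a simple illustration of the first two steps, the following minimal sketch computes the signed angular difference between the map's "up" direction and the reader's heading. The function name and the degree convention are illustrative assumptions and not part of Montello's account.

def misalignment(map_up_deg, heading_deg):
    """Signed angular difference between the map's 'up' direction and the
    reader's heading, in degrees within (-180, 180]."""
    diff = (map_up_deg - heading_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

# A reader facing east (90 degrees) with a north-up map (0 degrees) must
# compensate for 90 degrees of misalignment, mentally or physically.
print(misalignment(0.0, 90.0))   # -90.0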
Recently, studies have compared GPS-based navigation systems with maps and
direct experience. Münzer, Zimmer, Schwalm, Baus, and Aslan (2006) compared
acquisition of spatial knowledge from using a navigation system with map usage.
Consistent with Thorndyke and Hayes-Roth, they observed better survey knowledge
for map readers than for navigation system users. Although route knowledge was good in
both conditions, it was significantly better in the map condition. They hypothesize that
spatial knowledge of navigation system users is poorer, because being guided by a
navigation system does not stimulate users to actively encode and memorize spatial
information. In comparison, map usage encourages active learning as the route has to be
kept in working memory. Ishikawa et al. (2008) compared navigation guided by a GPS-based navigation system, navigation by map, and navigation based on direct experience of walking routes accompanied by a person. Their results show that navigation system usage affects
the user’s wayfinding behavior and spatial knowledge differently than do the maps and
direct experience. GPS-based navigation was less smooth than direct experience and
map learning, i.e. participants made more stops, travelled further distances and took a
longer time. Besides, route knowledge was acquired more precisely through direct
experience than through navigation system usage. However, performance improved
over time both for the navigation system and the map group.
To sum up, there is evidence that map use can improve spatial knowledge. Due to
their allocentric layout, maps especially improve survey knowledge. Survey knowledge
can also be acquired from direct experience but it takes more time and effort to do so.
Finally, navigation systems seem to impoverish the acquisition of spatial knowledge.
5 More recently audio maps or maps combining audio and tactile output emerged. See II.3.4.
In general, it has been observed that short-term memory for the tactile sense is less efficient than for vision. However, this advantage for the visual sense vanished when vision was limited to a small aperture (Picard & Monnier, 2009). Based on this finding it can be hypothesized that the disadvantages of the tactile sense regarding the identification of structures are due to the serial character of touch, as compared to the quasi-parallel character of vision. Wijntjes, Van Lienen, Verstijnen, and Kappers (2008a) observed sighted people exploring raised-line drawings. Participants had memorized the outlines correctly and were able to draw them, but could only identify them visually when seeing their own drawing. They concluded that the mental capacities required for identifying raised-line drawings were inadequate, even though the necessary shape information had been acquired.
Another criterion that influences tactile image recognition is the complexity of the
image (Hinton, 1993). The more complex the drawings, the longer the response times
and the lower the accuracy and response rates (Lebaz, Jouffrais, & Picard, 2012). Raised-
line drawings themselves are two-dimensional but they may represent objects either in a
2D or 3D perspective (Lebaz et al., 2012). It has been observed that raised-line drawings
depicting 3D objects are more difficult to identify than those depicting 2D objects (Picard
& Lebaz, 2012). Participants in a corresponding study were faster at identifying 2D drawings than 3D drawings. This result held independently of visuo-spatial capacities (Lebaz et al., 2012). It might be hypothesized that 3D raised-line images place higher demands on visual imagery. Lebaz et al. (2012), however, argued that 3D raised-line images place higher demands on haptic exploration and on parsing the
drawing into significant representational units. This might be due to the fact that the
strategy of contour following (Lederman & Klatzky, 2009) does not prove effective in the
case of 3D objects because of the perspective included in the drawing.
Furthermore, different studies have shown that tactile picture identification and
haptic memory span improve with age from childhood to adulthood (see for instance
Picard, Albaret, & Mazella, 2013; Picard & Monnier, 2009). The developmental curve for
tactual-spatial learning was found to parallel the developmental curve for visual-spatial capacities (Picard & Monnier, 2009).
Strategies for exploring tactile images are important as they may influence
performance. We present different studies on raised-line map exploration in detail in
subsection V.1.1. In brief, some studies have investigated whether the number of fingers
involved in haptic exploration has an impact on tactile image recognition. Findings
suggested that increasing the perceptual field by using more than one finger improved
raised-line picture identification. Besides, it seems that blind people have an advantage
over sighted people with regard to the use of systematic exploration strategies. It can be
hypothesized that the same holds for raised-line maps, even if this has not yet been investigated. Further studies are needed to confirm that findings on haptic exploration also apply to tactile maps.
In another study, Jacobson and Kitchin (1995) let three visually impaired adults
study a tactile map of the area they lived in. Participants had to respond to three different
tests afterwards. They failed in estimating relative distances between towns. In contrast, they succeeded in locating towns on a partially complete tactile map and in
determining which map out of a set of three was correctly oriented despite the map being
rotated. In a follow-up study, Jacobson (1998b) compared spatial knowledge acquisition
between two groups of visually impaired adults. A first group learnt spatial knowledge by
walking a route with a mobility instructor. A second group explored an audio-tactile map
before walking the route with a mobility instructor. Both groups then had to walk the
route unaided. Afterwards they were asked to draw a sketch map of the route and to
describe the route in as much detail as they remembered. The resulting sketch maps
proved to be more accurate for the group that had explored the audio-tactile map.
Afterwards, participants had to walk the route unaided. As a result, participants who learnt the route from a combination of direct experience and tactile map reading showed a greater gain in spatial knowledge of the environment, both practical and representational, than those in the two other conditions. In a second experiment, Espinosa et al.
investigated whether learning a route from a tactile map before having any direct
experience of the environment facilitates acquisition of spatial knowledge in comparison
with direct experience of the route. Ten visually impaired adults learnt an unfamiliar
route either by direct experience or through map learning. Participants performed just as
well after exploring a tactile map as they did after direct experience in the environment.
This finding demonstrates that tactile maps are an effective means for introducing blind
and visually impaired people to the spatial structure of a novel area.
Caddeo, Fornara, Nenci, and Piroddi (2006) found similar results. They observed
visually impaired adults learning a novel route either by using tactile maps combined with direct experience, or by direct experience combined with verbal descriptions. The participants who had access to the tactile map showed better performance in terms of walking time and deviation from the route. Some studies demonstrated that tactile maps are more suitable tools for presenting spatial knowledge to blind people than verbal descriptions (Cattaneo & Vecchi, 2011). This might be due to a higher working memory load for verbal information.
Similar results have been found when studying tactile map use for visually
impaired children. In a study by Ungar, Blades, Spencer, and Morsley (1994) visually
impaired children, aged between 5 and 11, learnt the outline of familiar toys on the floor
of a large hall either by direct experience or by a tactile map. Totally blind children in
this study learnt the environment more accurately from the map than from direct
exploration. Ungar and Blades (1997) compared tactile map usage by visually impaired
children (again aged 5 to 11 years) and sighted peers for inferring distances. Visually
impaired children's performance was lower than that of sighted controls, but after instruction on how to infer distances from a map, their performance improved. Wright, Harris, and Sticken (2010) compared studies on tactile map learning for visually impaired children. They observed that the studies obtained contradictory findings on the influence of chronological age and the amount of residual vision, but that teaching efficient map reading strategies appeared to be an important factor.
To sum up, tactile maps help visually impaired adults as well as children to
acquire spatial knowledge of familiar and novel environments. Tactile maps preserve
relations between landmarks in space but present those relationships within one or two
hand-spans (Ungar, 2000). Thinus-Blanc and Gaunet argue therefore that exploration of a
tactile map demands a lower workload than exploration of a real environment. Also
they point out that during exploration of manipulatory space it is possible to keep a fixed reference point, whereas during exploration of locomotor space the participant is moving and so is the reference system (one's own body). Besides, the tactile map is simplified in
content and therefore free from noise that is present in the environment (Ungar, 2000).
Ungar (2000) also underlined that exploration of locomotor space may be subject to fears
related to traveling, whereas exploration of maps can be done without danger and
anxiety. Therefore it can be concluded that maps are an important tool for improving the spatial cognition of visually impaired people.
Figure II.6: Different production methods for tactile maps. (a) Vacuum-formed map of
Europe. (b) Swell-paper map of a bus terminal
Vacuum-forming (see Figure II.6 a), also called thermoforming, is a technique that
allows the reproduction of several images from the same master. Concretely, a master can be produced from various materials (Edman, 1992). Once the master is established, a sheet of plastic is placed over it in a dedicated machine and heated while the air is evacuated. The sheet is thus permanently deformed. With this method it is possible to create recessed lines as well as several levels of height, which can improve the readability of the image (Edman, 1992). On the downside, production is costly and special equipment is
needed (Tatham, 1991).
Figure II.7: Production of swell-paper maps. A swell-paper map is passed through the
heater.
Swell paper, also called microcapsule paper or heat-sensitive paper (see Figure II.6 b), works with a normal printer, special paper and a heater (Figure II.7). The map is printed on paper that contains microcapsules of alcohol in its coating. As the black ink absorbs more heat, the capsules of alcohol expand when the paper is passed through the
heater (Edman, 1992). The production method is easy and costs are lower than for
vacuum forming (Lobben & Lawrence, 2012). The printer is a normal inkjet printer or
copying machine and the paper is sold at around 1€ per sheet. Only the heater is specific
hardware which is more costly. Images can be prepared using a computer and it is easy
to reprint the same image. The resulting image is perceivable both by the tactile and the
visual sense, therefore making it possible to share information between a visually
impaired person and a sighted assistant. It is also possible to annotate a printed image
with colors for a person with low-vision. A disadvantage is that tactile images created
with this method are binary, i.e. the surface is either flat or raised and height cannot be
modulated. Also, corners are rounded and it is not possible to produce accurate angles
(Bris, 1999). Repeated touching may damage the relief and images can therefore lose
their readability after a number of uses.
Existing raised-line maps use various symbols and textures, and there are no strict
rules on how to design tactile maps. This lack of standardization makes map reading more challenging for visually impaired people (Lobben & Lawrence, 2012). Attempts at standardization are described in the appendix (VII.4.2.1). In addition to these attempts,
existing guidelines can be helpful for the design of tactile maps and images. Edman
(1992) gives an exhaustive overview of current practices in the design and production of
tactile maps and images. Paladugu et al. (2010) evaluated different tactile symbols with
six blind participants. They measured subjective ratings, accuracy in naming streets and the time needed to find the symbols. Their study led to the proposal of a tactile symbol set. Other
guidelines are specifically based on the perceptive constraints of the haptic sense (Bris,
1999; Picard, 2012; Tatham, 1991). For instance tactile acuity impacts the minimum and
maximum distances between two lines. In the appendix we detail recommendations
concerning lines, symbols and textures (VII.4.2.2). It is especially important to note that
the number of elements on the map must be limited as much as possible, in order to
avoid overloading the map reader with too much information. On one hand this concerns
the total number of map elements. Picard suggested displaying no more than 30 map elements in total (maximum 20 inside the map and 10 outside the map outline). She proposed displaying no more than 2 elements per 2 cm², as this corresponds to the
perceptive space of the fingertip moving laterally. On the other hand, the number of
different symbols and textures that need to be differentiated should be limited. Tatham
proposed to use not more than 8 to 10 different symbols. Bris defined the maximum as 3
to 5 different textures.
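These recommendations can be read as simple design rules. The following minimal sketch (in Python) merely restates the figures cited above as a hypothetical checking function; it is not an established tool.

def check_tactile_map(n_inside, n_outside, map_area_cm2, n_symbols, n_textures):
    """Check a tactile map design against the limits reported above
    (Picard: at most 30 elements and 2 elements per 2 cm2;
    Tatham: 8-10 symbols; Bris: 3-5 textures). Returns a list of warnings."""
    warnings = []
    if n_inside > 20 or n_outside > 10:
        warnings.append("more than 20 elements inside / 10 outside the map outline")
    if n_inside + n_outside > 30:
        warnings.append("more than 30 map elements in total")
    if map_area_cm2 > 0 and n_inside / map_area_cm2 > 1.0:  # 2 elements per 2 cm2
        warnings.append("element density above 2 elements per 2 cm2")
    if n_symbols > 10:
        warnings.append("more than 10 different symbols")
    if n_textures > 5:
        warnings.append("more than 5 different textures")
    return warnings

# A 20 cm x 20 cm map with 25 inner elements violates two of the limits.
print(check_tactile_map(n_inside=25, n_outside=8, map_area_cm2=400,
                        n_symbols=6, n_textures=4))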
II.3.3.1.b Braille
Braille is an embossed, raised-dot alphabet. It was invented by Louis Braille in
1824 and is today the international standard writing system for blind people. It is used in
many countries, for text, music, mathematics and even computer science. Each letter in
the alphabet is a rectangular cell composed of 6 to 8 dots. A braille cell can be entirely
covered by a fingertip (Cattaneo & Vecchi, 2011). Different versions exist such as
contracted braille to speed up the reading process (Tobin, Greaney, & Hill, 2003).
According to Hatwell (2003), braille is remarkably well adapted to the sensory capacities
of the index finger. However, she argues that it is challenging to learn. Difficulties lie in perceptual aspects, spatial aspects (no spatial reference frame exists), as well as phonological and semantic aspects. Tobin et al. (2003) identify three factors that make reading by touch
more challenging. First, there is a higher demand on short-term memory as information
must be memorized until sufficient information is available to permit “closure” and
interpretation of the whole word or phrase. Second, the left to right scanning and location
of the next line demand fine psycho-motor control. Third, the person needs to have
enough motivation to continue reading despite the additional time required for decoding
and assimilating the information. Braille reading speed depends on several factors, such
as the age when braille reading skills are acquired. If braille is learnt at a younger age, a
higher reading speed can be accomplished (Tobin et al., 2003). As a result, only a small
part of the visually impaired population can read braille. In France 15% of blind people
read braille and only 10% of them read and write it (C2RP, 2005). In the United States,
fewer than 10% of the legally blind people are braille readers and only 10% of blind
children are learning it (National Federation of the Blind, 2009).
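To make the structure of a braille cell concrete, the following minimal sketch (Python) represents letters as sets of raised dots, numbered 1 to 3 in the left column and 4 to 6 in the right column. Only a few letters are shown; the dot patterns given are the standard ones.

# A 6-dot braille cell: dots 1-3 form the left column (top to bottom),
# dots 4-6 the right column. Each letter is the set of its raised dots.
BRAILLE_DOTS = {
    "a": {1},
    "b": {1, 2},
    "c": {1, 4},
    "d": {1, 4, 5},
    "e": {1, 5},
}

def cell_to_pins(letter):
    """Return a 3x2 matrix of pin states (1 = raised) for one braille cell."""
    dots = BRAILLE_DOTS[letter]
    return [[int(row + 1 in dots), int(row + 4 in dots)] for row in range(3)]

print(cell_to_pins("d"))   # [[1, 1], [0, 1], [0, 0]]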
Braille is the standard means of annotating tactile maps (Edman, 1992). However,
including braille into tactile maps is challenging (Tatham, 1991). First, braille text needs a
lot of space and therefore dominates what is on the map (Hinton, 1993). Second, it is
inflexible in size, inter-cell spacing and orientation (Tatham, 1991). In comparison, text on visual maps can be written in fonts of different sizes and styles; it can be rotated to squeeze it into open spaces; and upper and lower case as well as color can be used to highlight important text. Braille lacks all of these possibilities. More concretely, the height of braille letters corresponds to a normal font of 26 points in size (Tatham, 1991). Because of the lack of space to
include entire text lines on the map itself, normally a legend is used to display braille text
(see Figure II.8). The use of a legend or key can make a map more readable. Yet,
alternating between reading the map and the legend disrupts the reading process
(Hinton, 1993).
Some criticism concerns the production of the map. Rice, Jacobson, Golledge, and Jones (2005) argued that the creation of tactile maps is very time-consuming. They
suggested that the automatic creation of maps from Geographic Information Systems
(GIS) could speed up map creation. However, GIS often do not contain information that is important for visually impaired people's orientation, such as sidewalks or sonified
traffic lights (Kammoun, Dramas, Oriola, & Jouffrais, 2010).
Other criticism concerns the representation of map content. First, because of the
perceptual limits of the tactile sense, less detail can be represented on a tactile map than
on a visual map. Once the map is printed, its content is static and cannot be adapted
dynamically. Therefore these maps are quickly out of date (Yatani, Banovic, & Truong,
2012). It is also difficult for visually impaired people to access specific information such as
distances. Whereas maps for sighted people normally present a scale in order to indicate
distances, this is more difficult on maps for visually impaired people. Furthermore, as discussed above, the use of braille labels is problematic.
This leads to the question of how new technology can improve access to spatial
information for visually impaired people.
6 https://maps.google.com [last accessed August 13th 2013]
7 http://www.bing.com/maps/ [last accessed August 13th 2013]
8 http://www.openstreetmap.org [last accessed August 13th 2013]
A search of several literature databases (ACM Digital Library, SpringerLink, IEEE Xplore, and Google Scholar) revealed 43 articles published over the past 26 years that matched our inclusion criteria. First, we
only considered interactive maps that aimed at visually impaired people. Second, we
searched for publications in a journal or peer-reviewed conference. Projects that were published only as a PhD or Master's thesis were not considered. Third,
publications that proposed concepts without implementation were also discarded, except
for Parkes (1988) who presented the very first prototype. Fourth, we only considered one
publication for each prototype. Exceptions were made if changes occurred between
versions of different prototypes. For instance, Weir, Sizemore, Henderson, Chakraborty,
and Lazar (2012) based their prototype on the one proposed by Zhao et al. (2008) but
changed the content of the map. Pielot, Henze, Heuten, and Boll (2007) based their map
on a prior prototype (Heuten, Henze, & Boll, 2007) but integrated tangible interaction.
Kaklanis et al. (2013) integrated speech output in their second prototype, whereas their
first prototype (Kaklanis, Votis, Moschonas, & Tzovaras, 2011) was based on non-speech
output. Fifth, we focused on interactive maps and not on guidance systems. Therefore, we
also excluded systems that presented map information purely from an egocentric
perspective. This concerned virtual environments that commonly present environments
from an egocentric perspective, as the user can walk around and perceive the environment
from a traveler’s perspective (see for instance Kammoun, Macé, Oriola, & Jouffrais, 2012;
Lahav & Mioduser, 2008; Merabet, Connors, Halko, & Sánchez, 2012). Some projects on
the transition between egocentric and allocentric perspective were included in our
overview. For instance, Hamid and Edwards (2013) presented a map from a bird’s eye
view. Yet, this map could be turned in order to follow the egocentric perspective of the
observer. The prototype presented by Pielot et al. (2007) was based on 3D sound that
adapted to the rotation of a tangible object. Yet, the object itself was situated in an
allocentric reference frame. Milne, Antle, and Riecke (2011) provided 3D sound that
adapted to the user’s body rotation. However, their prototype also worked with a
tangible object on a touch interface, thus in an allocentric reference frame. The prototype
of Heuten et al. (2007) was included in our classification but not the one of Heuten et al.
(2006). The prototype of Heuten et al. (2007) was an extension of the earlier prototype
including a bird's eye view perspective. Concerning user studies, some additional publications were considered when the studies had been conducted with prototypes from the original corpus.
Note that some prototypes allowed different variations. For instance, Parente and
Bishop (2003) proposed a system that works with different input devices. Other projects
allowed displaying more than one type of map content (e.g., Schmitz & Ertl, 2012). In these
cases our analysis takes into account all variations. For this reason, the number of map
prototypes for each classification criterion is not necessarily equal to 43. It is also important
to underline that in some cases the publications did not give clear information about
certain aspects. For instance, sometimes a map prototype is presented without specifying
which kind of content is displayed on the map.
In the following subsections we will only detail the analysis of this map corpus
regarding non-visual interaction. In the appendix (VII.5) we provide an additional
analysis of the corpus with regard to terminology, origin of the map projects, timeline
and map content and scale. We also present projects that have been developed outside
academia (VII.5.5).
Figure II.9: Overview of modalities employed in the interactive map prototypes for visually
impaired people.
Figure II.9 shows how modalities have been used in existing interactive map
prototypes. We distinguished modalities that relate to the two senses of audition and touch (including cutaneous, kinesthetic and haptic perception). Most systems relied on some sort of touch input, and only a few systems used both touch and audio (speech recognition)
as input (Bahram, 2013; Iglesias et al., 2004; Kane, Morris, et al., 2011; Kane, Frey, &
Wobbrock, 2013; Simonnet, Jacobson, Vieilledent, & Tisseau, 2009). In these prototypes
audio and touch input were assigned to different tasks. Only the system by Bahram (2013)
provided redundant input interaction. Touch and speech input are presented in detail in
subsection II.4.3. When looking at the output modalities, all systems relied on some form
of audio output except the prototype proposed by Levesque et al. (2012). Some
prototypes combine audio output with some form of touch output. Audio and touch output
are presented in detail in subsection II.4.4.
II.4.2 Devices
Figure II.10: Number of publications on interactive maps for visually impaired people
classified by devices. Categories are presented on the left grouped by the main categories
Haptic, Tactile actuators, Touch and Other. The x-axis represents the number of
publications. Bars in dark grey present the total for each category.
It is important to analyze the use of different devices as the user has the most
direct contact with the system’s hardware. Also it has been shown that the type of device
contributes to the mental representation that the user creates through interaction (Buxton,
1986). Additionally, different devices are adapted for the execution of different tasks.
Devices are not identical to interaction techniques, but interaction techniques are closely
linked with the devices. An interaction technique is a way of using a physical device to
perform a task in human-computer interaction (Foley, Van Dam, Feiner, & Hughes, 1996).
More precisely, interaction techniques are defined by the combination of a physical
device and an interaction language (Nigay & Coutaz, 1997). In the case of interactive
maps we judged it more interesting to look at devices that react to or produce haptic
output, as there is a larger variety in haptic than in audio devices (i.e., headphones and
loudspeakers produce the same audio information). We will have a closer look at the
audio input in subsection II.4.3.1 and audio output in subsection II.4.4.1.
It is also possible to distinguish between input and output devices. Input devices
are controlled by the user to communicate information towards the computer
(Dragicevic, 2004). Output devices present information from the system to the user. Some
of the devices in our classification sense input, some also actively provide output. For
instance, haptic devices react to the user's input, but also apply a force in order to render kinesthetic cues to the user. We propose to classify the devices into four categories
according to common principles of sensing input and representation of information:
haptic devices, tactile actuator devices, touch-sensitive devices and other (Figure II.10).
Figure II.11: Photograph of the GRAB interface as presented by (Iglesias et al., 2004).
Reprinted with permission.
First, there are devices that allow interacting in three dimensions through a handle, such as the Geomagic Touch X (formerly the Sensable Phantom Desktop) or the Novint Falcon. Both are tensioned cable systems, i.e. the handle is moved in different
directions by several actuated cables (El Saddik et al., 2011). The resulting force is
perceived by the user grasping the handle. These devices allow a large workspace. The
9 http://geomagic.com/en/products/phantom-desktop/overview [last accessed August 21st 2013]
10 http://www.novint.com/index.php/novintfalcon [last accessed August 21st 2013]
Geomagic Touch X provides six degrees of freedom (DoF), i.e. the possibility to vary
position and orientation along the three spatial axes, as well as 3D force feedback. The
Novint Falcon allows three DoF (El Saddik et al., 2011). Several maps have been
implemented with these devices (De Felice, Renna, Attolico, & Distante, 2007; Kaklanis et
al., 2011, 2013; Lohmann & Habel, 2012; Simonnet et al., 2009). Iglesias et al. (2004)
worked with the “GRAB” interface (see Figure II.11). This device also uses 3D force-
feedback. It was based on two distinct robotic arms placed on two bases in front of a
visualization screen. Both robotic arms possessed six DoF and covered a workspace of
0.6 m width, 0.4 m height and 0.4 m depth. Two fingers, either of the same hand or of both hands, could be placed in two thimbles to which independent force feedback was applied.
Computer mice with force feedback have been used in some projects (Campin,
McCurdy, Brunet, & Siekierska, 2003; Lawrence, Martinelli, & Nehmer, 2009; Parente &
Bishop, 2003; Rice et al., 2005; Tornil & Baptiste-Jessel, 2004). Force feedback mice
function as standard computer mice with the additional capability to produce
programmable haptic sensations (Rice et al., 2005).
Alternatively, gamepads with force feedback have been proposed (Parente &
Bishop, 2003; Schmitz & Ertl, 2010) as well as force feedback joysticks (Parente & Bishop,
2003). Both are affordable and widely employed. They generally possess a small number
of DoF and moderate output forces (El Saddik et al., 2011).
Many devices use mechanical needles or pins that are raised mechanically either
by electromagnetic technologies, piezoelectric crystals, shape memory alloys,
pneumatic systems or heat pump systems (El Saddik et al., 2011). Visually impaired
people commonly know this kind of stimulation, as they often use dynamic braille
displays that are based on a similar principle (Brewster & Brown, 2004). These braille
displays are used for displaying textual information (see Figure II.12). Braille displays are
made of a line of 40 to 80 cells, each with 6 or 8 movable pins that represent the dots of a
braille letter (Brewster & Brown, 2004). The user can read a line of braille text by
touching the raised pins of each cell. As presented by Borodin, Bigham, Dausch, and
Ramakrishnan (2010) braille displays are an interesting alternative to audio output. They
present the same information as audio output in an alternative modality, making
information less fleeting. In contrast to audio output, braille displays give users the
possibility to spend as much time as they want on reading the content and repeat reading
if necessary. Additionally, spelling of words can be made accessible to visually impaired
people. Although this has also been done with speech output (Miele, Landau, & Gilden,
2006), presenting the spelling of words seems easier when written than when spoken.
Finally, the ISO standard 16071 suggests enabling output alternatives in different
modalities (ISO, 2003), so it appears advantageous to provide audio as well as braille
output. The disadvantage of braille displays is their high price. Despite the
above reported advantages, within the corpus of interactive map projects, a braille
display has been used in only one prototype for displaying textual information (Schmitz &
Ertl, 2012). As it has only been used as a complement and not for displaying spatial
information, we have not included it in Figure II.10.
So far, few alternative tactile actuator systems are commercially available. Within the corpus of interactive maps, several prototypes use displays of various sizes composed of actuated pins (Schmitz & Ertl, 2012; Shimada et al., 2010; Zeng & Weber, 2010). These
displays function as a tactile map: the information is displayed as relief and the user
moves the hand across the display for exploring. In addition, it is possible to dynamically
change the content or to highlight elements by dynamically altering raised and recessed
pins. Concretely, Zeng & Weber (2010) used the BrailleDis 9000 tablet which was
composed of 7200 pins driven by piezoelectric actuators and arranged in a 60x120 pin matrix (see Figure II.13 a). Schmitz & Ertl (2012) used the HyperBraille display, a
commercial version of the BrailleDis 9000. Shimada et al. (2010) constructed a display
with 3072 raised pins. This system additionally had a scroll bar that showed which parts of
11 http://www.hyperbraille.de/?lang=en [last accessed August 21st 2013]
the image were outside the displayed area. Both devices contained touch sensors in
order to react to user input.
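The working principle of such pin-matrix displays can be sketched as rasterizing map geometry into a binary matrix of pin states, here with the 60x120 resolution reported for the BrailleDis 9000. The function names and the example street segment below are hypothetical.

ROWS, COLS = 60, 120   # resolution reported for the BrailleDis 9000

def blank_display():
    """A matrix of pin states, 0 = lowered, 1 = raised."""
    return [[0] * COLS for _ in range(ROWS)]

def draw_horizontal_street(pins, row, col_start, col_end):
    """Raise the pins along one horizontal line segment."""
    for col in range(col_start, col_end + 1):
        pins[row][col] = 1

pins = blank_display()
draw_horizontal_street(pins, row=30, col_start=10, col_end=90)
# A device driver would now push up every pin with state 1 and retract the
# others; the built-in touch sensors report finger positions back to the map.
print(sum(map(sum, pins)))   # 81 raised pins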
Figure II.13: Tactile actuator devices. (a) BrailleDis 9000 tablet as used by (Zeng & Weber,
2010). (b) VTPlayer mouse with two 4x4 arrays of pins. Reprinted with permission.
In the laterotactile system proposed by Petit et al. (2008), tactile feedback was
produced by laterally stretching the skin of the finger (Figure II.14 b). This was done by
two assembled devices: the “STReSS2” (Stimulator of Tactile Receptors by Skin Stretch)
and the “Pantograph”. The STReSS2 device contained a matrix of 8x8 piezoelectric
bending motors that moved laterally by approximately 0.1 mm and produced a force of
around 0.15 N (Lévesque & Hayward, 2008). The size of the tactile feedback zone was
12x10.8 mm. The Pantograph allowed movements on a two-dimensional surface. The
STReSS2 mounted on the Pantograph then related tactile feedback to the position on the
2D surface. A revised but functionally equivalent display was used by Lévesque et al.
(2012).
Figure II.14: Tactile actuator devices. (a) Schema of the Tactos device as proposed by
(Tixier et al., 2013) (b) The STReSS2 laterotactile device as used by (Petit et al., 2008).
Reprinted with permission.
Within the corpus, most touch-sensitive maps were based on touch screens, i.e.
static touch-sensitive devices having the size of a regular computer screen (Brock,
Truillet, Oriola, Picard, & Jouffrais, 2012; Campin et al., 2003; Hamid & Edwards, 2013;
Miele et al., 2006; Senette, Buzzi, Buzzi, Leporini, & Martusciello, 2013; Wang, Li,
Hedgpeth, & Haven, 2009; Weir et al., 2012). Some prototypes were based on touchpads
or digitizer tablets (Daunys & Lauruska, 2009; Heuten et al., 2007; Jacobson, 1998a;
Parente & Bishop, 2003; Zhao et al., 2008). These tablets were commercially available before multi-touch surfaces. Recently, tablet PCs (Carroll, Chakraborty, &
Lazar, 2013; Senette et al., 2013; Simonnet, Bothorel, Maximiano, & Thepaut, 2012) and
smartphones (Poppinga, Magnusson, Pielot, & Rassmus-Gröhn, 2011; Su, Rosenzweig,
Goel, de Lara, & Truong, 2010; Yatani et al., 2012) have been used. These devices allow
using an interactive map in mobile situations. A few prototypes were designed for touch
tables, i.e. large touch-sensitive surfaces (Kane, Morris, et al., 2011; Yairi, Takano, Shino,
& Kamata, 2008). Finally, some projects did not specify the type or size of touch device
and might therefore function with different hardware (Bahram, 2013; Parkes, 1988).
Figure II.15: Access Lens application as proposed by (Kane, Frey, et al., 2013). The map
labels are located and recognized by image recognition and then translated into speech
output. The menu on the right displays an index of on-screen items. Reprinted with
permission.
II.4.2.5 Summary
To sum up, Figure II.10 shows that many prototypes are based on
touch-sensitive devices. This is certainly favored by recent technical advancements,
which have led to very affordable touch devices. Haptic devices, such as the Phantom, are
usually more expensive. Displays with tactile actuators appear very interesting, as they
provide refreshable cutaneous information. However, many of the projects are not yet
commercialized or are very expensive. For instance, in 2012 the HyperBraille terminal
cost about 50 000€. The use of tangible interaction has been very little studied and it
might be interesting to further investigate this possibility. Few studies have compared the
use of different devices. Lazar et al. (2013) observed five blind users exploring a map
either with a keyboard, a touchscreen interface or a touchscreen with raised-line overlay.
Their study showed that users preferred the touchscreen with overlay over the
touchscreen alone and the keyboard interface. We therefore conclude that touch-
sensitive devices appear to be affordable and interesting solutions.
On the other hand, the adoption rate for speech input systems is low and users
often report many problems. It has been observed that actual recognition rates of these
systems are often lower than those indicated by suppliers (Feng & Sears, 2009). In
addition confidentiality is problematic, as speech input is public (Bellik, 1995). It is also
subject to interference with the environment as the system needs to distinguish between
input commands and background noise, or the user speaking to another person (Bellik,
1995). Finally speech input cannot be used in all situations, such as a classroom or a
theater.
To reduce recognition problems it is possible to limit the input vocabulary (Feng &
Sears, 2009). It is also important to choose an adapted vocabulary. Short words or words
that sound similar make it more difficult for the system to recognize the input correctly
(Bellik, 1995). Oviatt (1997) suggested guiding users’ language toward simplicity.
Furthermore, the environment influences the recognition rate, for instance the quality of
the microphone or the background noise. Recognition rate can also be improved by
training the system on the voice of the speaker (Bellik, 1995).
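The following minimal sketch illustrates how such a restricted vocabulary could be matched against a recognizer's output. The command phrases and names are hypothetical, and plain string matching stands in for an actual recognition engine.

# Restricted command vocabulary for a hypothetical accessible map prototype.
# Longer, dissimilar-sounding phrases are chosen to ease recognition.
COMMANDS = {
    "zoom in": "ZOOM_IN",
    "zoom out": "ZOOM_OUT",
    "list landmarks": "LIST_LANDMARKS",
    "distance to target": "DISTANCE",
}

def interpret(recognized_phrase):
    """Map a recognized phrase onto a command, or None if out of vocabulary."""
    return COMMANDS.get(recognized_phrase.strip().lower())

print(interpret("List landmarks"))   # LIST_LANDMARKS
print(interpret("hello"))            # None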
Oviatt (1996) compared interactive maps for sighted people based on speech
input with maps based on speech recognition and pen input on a touch surface. She
observed that the time for completing map-based tasks was shorter with multimodal input than with speech input alone. She argued that it was easier, quicker and more precise to designate locations and shapes by touch than by speech. Also, the number of disfluencies was higher in the speech-only condition, due to the difficulty of expressing spatial locations by speech. On the other hand, speech input allowed users to locate objects that were not currently displayed on the screen simply by describing landmarks and streets. Speech also
allowed a dialogue with the map interface, such as asking for navigational assistance or
information about specific landmarks.
Some interactive map projects for visually impaired people used speech input as
a complementary interaction technique. Whereas position information was acquired through touch input, speech recognition allowed access to additional information, such as
distances (Simonnet et al., 2009), directions (Kane, Morris, et al., 2011; Kane, Frey, et al.,
2013; Simonnet et al., 2009) or lists of on-screen or nearby targets (Kane, Morris, et al.,
2011; Kane, Frey, et al., 2013). Speech recognition was also used for centering the map on
a point or for zooming (Bahram, 2013).
II.4.3.2 Touch
As mentioned before, touch is involved in the use of many different devices, such
as computer mice, keyboards, touch surfaces or haptic devices.
The feedback from mouse movement is visual and thus visually impaired people cannot easily orient their
exploratory movements. However, more accessible mice include force-feedback
(Campin et al., 2003; Lawrence et al., 2009; Parente & Bishop, 2003; Rice et al., 2005;
Tornil & Baptiste-Jessel, 2004) or cutaneous feedback via a braille cell (Jansson et al.,
2006). Similarly, laterotactile devices combined a position-sensing device with cutaneous
feedback (Lévesque et al., 2012; Petit et al., 2008). These devices provide input and
output. The output modalities of these devices are discussed in subsection II.4.4.2.
The keyboard is a standard device for both sighted and visually impaired people.
It has been used in different interactive map projects (Bahram, 2013; Parente & Bishop,
2003; Simonnet et al., 2009; Weir et al., 2012; Zhao et al., 2008). While typing on a
keyboard is very well adapted for entering linguistic information, it appears less adapted
to determining locations on a map. Exploring a map by pressing the arrow keys is discrete,
symbolic, and less sensitive to irregularities (Delogu et al., 2010). When interacting with
a keyboard, the user moves the cursor position on the map step by step with every
keystroke, whereas other technologies allow jumping from one point on the map to
another (Lazar et al., 2013). In addition keyboard interaction does not provide any
reference frame (see II.4.4.2.a). Accordingly, keyboard input is used in some interactive
map projects as complementary input rather than for providing the location on the map.
For instance, it has been used to change the map heading (Simonnet et al., 2009) or to
enter commands such as zooming or scrolling (Bahram, 2013). Delogu et al. (2010)
compared navigation in a sonified map with a keyboard or a tablet. Despite the above
reported disadvantages of keyboard interaction, they did not observe any significant
difference in a map recognition task according to the type of input. Both input interactions
led to an effective cognitive map. However, they observed that touch-tablet users
explored the map content in more detail than keyboard users. Also, tablet users changed
the direction of exploration more often. Furthermore, they were faster. Delogu et al.
suggested that due to the missing haptic reference frame, keyboard interaction
demanded more cognitive effort for reconstructing the position after each step.
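As an illustration of this step-by-step character, the following sketch (Python, with a hypothetical map grid) moves a cursor by one cell per keystroke and returns the name of the cell reached, which an actual prototype would then speak aloud.

# A tiny hypothetical map grid; each cell holds the name spoken on arrival.
GRID = [["park", "street", "lake"],
        ["street", "square", "street"],
        ["shop", "street", "station"]]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(position, key):
    """Move the cursor by one cell per keystroke, staying inside the grid."""
    row, col = position
    d_row, d_col = MOVES[key]
    row = min(max(row + d_row, 0), len(GRID) - 1)
    col = min(max(col + d_col, 0), len(GRID[0]) - 1)
    return (row, col)

position = (0, 0)
for key in ["right", "down"]:
    position = step(position, key)
    print(GRID[position[0]][position[1]])   # "street", then "square"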
Several projects within the interactive map corpus have shown that haptic devices can successfully be employed
the exploration of geographic data by visually impaired individuals.
Figure II.16: Tangible interaction as proposed by (Pielot et al., 2007). (a) User interacting
with the tangible object, a toy duck. (b) The setup: camera filming from above. Reprinted
with permission.
In general, few projects have investigated the use of tangible interaction in non-
visual contexts. McGookin, Robertson, & Brewster (2010) developed a tangible prototype
for the non-visual exploration of graphs. This system combined a fixed grid for
orientation and movable tangible objects. User studies with visually impaired people
revealed high accuracy both for constructing and exploring graphs. The evaluation
allowed deriving guidelines for the development of tangible interfaces for visually impaired people. For instance, tangible objects should be physically stable so that users
do not knock them over while exploring the interface. Although first experiments with
non-visual tangible interaction are promising, there is surprisingly little work in this area.
Gestural Interaction
The rise of multi-touch devices has led to an increased use of gestural interaction
in general. However, gestural interaction has so far rarely been used in interactive maps
for visually impaired people. Yet, we believe that this interaction technique is promising.
In this subsection we do not investigate how recognition of gestures is technically
implemented but how this interaction technique can be used for non-visual exploration of
geographic data.
Definitions of gestural interaction commonly require that the movements be intentional and that the data be captured to trigger an event. To sum up, the term gesture denotes dynamic and intentional movements of certain body parts (typically the hands and
arms). Because these movements follow a defined path they can be recognized by a
computer and trigger a reaction.
Some studies focused on the design of usable gestures. Morris, Wobbrock, and
Wilson (2010) studied users’ preferences for gestures that were either created by three
HCI researchers or by end users. They observed that users preferred gestures that have
been designed by a greater number of people. Users also preferred simple gestures, for
instance gestures using one finger instead of the whole hand, or one-handed gestures
over two-handed gestures. They noted that researchers alone tended to develop gestures
that were too complex. Nacenta, Kamber, Qiang, and Kristensson (2013) studied
memorability of user-designed gestures, pre-defined gestures and random gestures.
They observed an advantage of user-defined gestures over pre-defined gestures and a
significant disadvantage of the random gestures in comparison with the two other
conditions. The study also revealed that the difference in memorability often stemmed from association errors between the gesture and the resulting action, and not from
incorrect execution of the gestures. Both studies can be seen as a strong argument in
favor of developing gestural interaction in a participatory design process. Yet, as argued
by Nacenta et al. (2013), it is not always possible to involve users in the creation of gesture sets. For instance, it is technically more challenging to recognize user-created gestures. Designers take care to define gestures that are sufficiently distinct for the gesture recognizer, whereas for user-designed gestures the differences between two gestures might be very small. Also, it may be desirable that gestures are consistent across different applications, in which case defining application-specific gestures is not possible.
As touch screens become widespread, making them more accessible to visually impaired people is an important task. Kane, Morris, et al. (2013) classified touch screen accessibility into three categories. First, there are software-only approaches. They make use of accessible gestural interaction and audio output. This category includes commercial solutions such as the one proposed by Apple. Second, hardware-only
approaches apply hardware modifications to the touchscreen. For instance, some
visually impaired people glue tactile dots and braille labels on the screen (Kane et al.,
2008; S. Xu & Bailey, 2013). Third, hybrid approaches combine the use of hardware
modifications, such as overlays, and audio output. Audio-tactile maps (interactive maps
that are based on raised-line overlays and speech output) are for instance included in this category.
The design goals of the Slide Rule prototype (Kane et al., 2008) were to reduce the demand for selection accuracy, to provide quick browsing and navigation, to
make gestural mapping intuitive and to enable a “return home” function. Basic
interaction techniques that have been developed as a result of this were for instance a
one-finger scan for browsing lists, a second finger to tap and select items, a multi-
directional flick for additional actions and an L-shaped gesture for browsing hierarchical
information. Slide Rule did not have any visual interface but was accessible entirely
through gestural input and audio output. Applications that have been made accessible
with these gestures included a phone book, an email client and a media player. In a user
study with ten blind participants comparing Slide Rule with a Pocket PC device, Slide
Rule proved to be significantly faster. However, users made more errors with the Slide
Rule prototype. Results concerning satisfaction were contradictory.
Kane, Wobbrock, and Ladner (2011) conducted two user studies on the use of
gestural interaction by blind and sighted people. This work followed up on the above-reported studies with sighted people that revealed users' preferences for gestures developed by end users (Morris et al., 2010). In a first study, Kane et al.
(2011) asked ten blind and ten sighted users to invent gestures for specific tasks on a
tablet PC. Gestures created by blind people differed from those created by sighted people
in that blind people showed strong preferences for gestures that included screen
corners, edges and the use of multiple fingers. Also, blind participants often proposed
gestures based on a QWERTY keyboard layout. In a second study with the same
participants, Kane et al. (2011) investigated whether blind and sighted users performed
the same gestures differently. This study revealed that gestures produced by blind
people were larger, slower and varied more in size than those produced by sighted
people. Moreover, they found that location accuracy, form closure and line steadiness were lower for blind people. Furthermore, some blind participants were not able to produce certain gestures, such as letters, numbers and symbols. This led to the
establishment of several guidelines for creating gestures for visually impaired people.
First, symbols and letters from print writing should be avoided. Second, edges and
corners are helpful cues for identifying one’s position on the screen and gestures should
thus be positioned close to them. Third, the expected location accuracy should be low.
This is in line with the guidelines proposed by McGookin et al. (2008). Fourth, gestures
should take into account that blind people may need more time for executing them.
Finally, it can be useful to rely on familiar layouts such as QWERTY keyboards.
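As an illustration of the second and third guidelines, the following sketch classifies a touch point as lying near an edge, near a corner or in the center of the screen, using a deliberately generous tolerance. The threshold value is an arbitrary assumption.

def classify_touch(x, y, width, height, tolerance=60):
    """Classify a touch point as 'corner', 'edge' or 'center', using a
    deliberately generous tolerance (in pixels) for low location accuracy."""
    near_x = x < tolerance or x > width - tolerance
    near_y = y < tolerance or y > height - tolerance
    if near_x and near_y:
        return "corner"
    if near_x or near_y:
        return "edge"
    return "center"

print(classify_touch(15, 20, 1024, 768))    # corner
print(classify_touch(500, 10, 1024, 768))   # edge
print(classify_touch(500, 400, 1024, 768))  # center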
Wolf, Dicke, and Grasset (2011) studied gestural interaction for spatial auditory
interfaces on smartphones. Although they did not specifically aim at visually impaired
people, the study is interesting because it included not only 2D gestures but also 3D gestures.
A different aspect that was evoked in the study by McGookin et al. (2008) was how
to teach gestures to blind people. Schmidt and Weber (2009) suggested three methods
for this purpose: keeping gestures simple enough to describe them verbally, letting a
second person guide the user’s hand, or providing illustrations for each gesture. As these
methods are not necessarily applicable outside laboratory settings, Schmidt and Weber
proposed a technical system for teaching gestures to visually impaired people. This
system was based on the previously mentioned BrailleDis 9000 (see II.4.2). The device
was capable of detecting a hand positioned on the surface and even of identifying fingers
and the size of the hand. Gestures were rendered as static or animated relief patterns
through the dynamic pins of the display. Although this concept is interesting, it is limited
because most blind people do not have access to a raised-pin display. It would therefore
be interesting to develop teaching methods for gestures on multi-touch surfaces.
As stated above, gestural interaction is not yet widespread among interactive
maps for visually impaired people. Zeng and Weber (2010) implemented basic gestures
such as panning and zooming. Yatani et al. (2012) proposed the use of flick gestures for
navigating lists or selecting items. Carroll et al. (2013) suggested using basic gestures
such as pinch-to-zoom or drag but altering the action that resulted from the defined
movement. Performing a two-finger pinch would then not result in zooming the map but
for instance in changing the content. A major contribution has been made by Kane,
Morris, et al. (2011) who proposed three innovative types of gestural interaction for large
touch table maps. They called these accessible interaction techniques “Access Overlays”
because they were technically implemented as transparent windows over an existing
map application. In a first bimanual technique, called “Edge Projection”, a menu was
projected to the edges of the screen. The positions of the points on the map were
projected onto the x- and y-axes in the edge menu. Users could thus quickly browse the
menu for exploring all onscreen targets. If they identified a target they could drag both
fingers from the edge to the interior of the screen to locate the desired landmark (see
Figure II.17). The second technique was called “Neighborhood Browsing”. Its principle
was to increase the target size by claiming the empty area around landmarks. Touching
within the neighborhood announced the target's name. The system then guided the user
to the nearest onscreen target by announcing spoken directions. The third interaction
technique was called “Touch-and-Speak”. This interaction combined touching the screen
with speech recognition, for instance for listing all on-screen targets. Again, it then
guided the user to the target by giving spoken directions. Fourteen blind users compared these
three interaction techniques to a standard implementation of Apple’s VoiceOver. The
results revealed that Touch-and-Speak was the fastest technique, followed by Edge
Projection. Furthermore, there were significantly more incorrect answers with VoiceOver
than with the other techniques. Moreover, users also ranked Edge Projection and Touch-
and-Speak significantly better than VoiceOver. Consequently in a later prototype,
“Access Lens”, Kane, Frey, et al. (2013) implemented an edge menu similar to the one
provided in Access Overlays.
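The core idea of Edge Projection, namely projecting every on-screen target onto the x- and y-axes so that it can be found from the screen edges, can be sketched in a few lines. The landmark names and coordinates below are hypothetical and the sketch does not reproduce the original implementation.

# Hypothetical on-screen landmarks: name -> (x, y) position in pixels.
LANDMARKS = {"station": (120, 340), "museum": (640, 90), "park": (300, 500)}

def edge_menus(landmarks):
    """Project every landmark onto the x-axis (bottom edge) and the y-axis
    (left edge), so that each can be reached by scanning along an edge."""
    x_menu = sorted((x, name) for name, (x, y) in landmarks.items())
    y_menu = sorted((y, name) for name, (x, y) in landmarks.items())
    return x_menu, y_menu

x_menu, y_menu = edge_menus(LANDMARKS)
print(x_menu)   # [(120, 'station'), (300, 'park'), (640, 'museum')]
print(y_menu)   # [(90, 'museum'), (340, 'station'), (500, 'park')]
# Scanning a finger along an edge would announce these entries; dragging two
# fingers inward from the matching x and y positions locates the landmark.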
Figure II.17: Access Overlays as presented by (Kane, Morris, et al., 2011). The image shows
the edge projection technique. The user is exploring the menu on the x- and y-axis.
Reprinted with permission.
II.4.3.3 Summary
In this subsection we presented different input interaction techniques that have
been used in interactive maps for visually impaired people. Speech recognition seems
promising for accessing complementary information or for entering input commands.
However, it seems less well adapted for localizing positions on a map. Different
interaction techniques involve the sense of touch. Touch input on a touch-sensitive device
appears to be especially adapted for exploring geographic maps. It allows a quicker and
more dynamic exploration than for instance keyboard input. The accessibility of
touchscreens per se is limited, but can be improved through software, hardware or
hybrid solutions. We suggest that accessible gestural interaction should be further
investigated. Also, tangible interaction, although rarely used, may provide interesting
possibilities.
II.4.4.1 Audio
Audio output is often associated with speech. Yet, non-verbal output can also be
used to communicate information. We based the analysis of different auditory output
modalities in interactive accessible maps on the schema proposed by Truillet (1999) as
shown in Figure II.18. Indeed, the types of output that we found are speech, spearcons,
auditory icons, earcons and music. Most prototypes use 2D audio output, but sometimes
spatial sound is used.
II.4.4.1.a Speech
Our analysis revealed that speech is the auditory output type that was used the
most often in interactive maps for visually impaired people (Bahram, 2013; Brock,
Truillet, et al., 2012; Campin et al., 2003; Daunys & Lauruska, 2009; De Felice et al., 2007;
Hamid & Edwards, 2013; Iglesias et al., 2004; Jacobson, 1998a; Jansson et al., 2006;
Kaklanis et al., 2013; Kane, Frey, et al., 2013; Kane, Morris, et al., 2011; Krueger & Gilden,
1997; Lohmann & Habel, 2012; Miele et al., 2006; Parente & Bishop, 2003; Parkes, 1988;
Petit et al., 2008; Poppinga et al., 2011; Rice et al., 2005; Schmitz & Ertl, 2010, 2012;
Schneider & Strothotte, 1999; Seisenbacher et al., 2005; Senette et al., 2013; Simonnet et
al., 2012, 2009; Tixier et al., 2013; Tornil & Baptiste-Jessel, 2004; Wang et al., 2009; Weir
et al., 2012; Yairi et al., 2008; Yatani et al., 2012; Zeng & Weber, 2010; Zhao et al., 2008).
Verbal descriptions can transmit information about geographic elements, which on a
visual map is usually represented in the form of printed text, and on a tactile map is
usually represented as braille text. To go even further, verbal descriptions can
communicate global spatial knowledge such as spatial relations of streets and landmarks
(Lohmann & Habel, 2012). Indeed, speech and text have many similar properties as a
means of communication, such as their sequential nature, their dependence on a specific
natural language, and their demand for the user’s full attention (Blattner et al., 1989).
Synthesized speech (text-to-speech, TTS) and digitized (recorded) speech are comparable in terms of transporting the meaning. Digitized speech has the advantage of the voice being
very natural and having a good prosody. In this regard, Petit et al. (2008) compared a
version of their interactive map with TTS and one with recorded speech. They observed
that participants understood the recorded voice slightly better than the synthesized
voice. However, as every message needs to be recorded in advance, digitized speech is
less flexible than synthesized speech. Also digitized speech requires more disk space. It
is therefore best adapted for systems with little speech output that remains stable
over a long time. On the other hand, the flexibility of TTS systems comes with a higher
algorithmic complexity (Truillet, 1999).
Visually impaired users probably represent the largest user group for synthetic
speech systems (Stent, Syrdal, & Mishra, 2011). As we described in II.1.4, the auditory
capacities of visually impaired people differ from those of sighted people. For instance,
they tend to prefer intelligibility of TTS over naturalness (Stent et al., 2011). It is therefore
important to test text-to-speech synthesis specifically with this user group. Asakawa,
Takagi, Ino, and Ifukube (2003) asked 7 legally blind people to evaluate listening speeds
for linearly time-compressed natural speech in Japanese. They observed that novices were able to comprehend almost 100% of the text when the speech was 1.6 times faster than the default rate. Advanced users could still understand about 50% when the speech was 2.6 times faster than the default rate. In a study with 36 early-blind people, Stent et al. (2011)
compared different TTS systems . They observed a significant effect of speaking rate as
performance decreased, when speaking rate increased. Also, accuracy appeared to be
higher for male than for female voices. Stent et al. also observed that participants under
the age of 25 had the highest accuracy, and participants over 51 had the lowest accuracy.
Furthermore, accuracy was positively correlated to frequency of using TTS systems.
Krueger and Gilden (1997) tried to speed up the speech output of their system by
experimenting with audio abbreviations of the names of the states in the USA. They
observed that the first syllables, for instance "Cal" for California, "Kan" for Kansas, and
"Ken" for Kentucky, were not easy to distinguish from each other. However, they found
that it was often faster and more understandable to say two different syllables (for
instance, "rado" for Colorado, and "sota" for Minnesota). These audio abbreviations
appeared quite easy to learn and use for familiar places.
Spearcons present an alternative idea (Walker, Nance, & Lindsay, 2006). They are
spoken phrases which are sped up until they may no longer be recognized as speech.
Thus, they are not simply fast spoken items, but distinct and unique sounds that are
acoustically related to the original text. Spearcons are created automatically by speeding
up the resulting audio clip from a TTS without changing pitch. Each spearcon is unique
due to the specific underlying wording. Walker et al. observed 9 sighted participants
using different auditory interfaces. Spearcons proved to be faster and more accurate than
speech. In a study with 39 sighted participants, Dingler, Lindsay, and Walker (2008)
observed that spearcons were as easy to learn as speech. Despite these promising results,
so far spearcons have been very rarely used in interactive maps for visually impaired
people (Su et al., 2010).
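As an illustration of how a spearcon might be generated, the following sketch time-compresses an existing TTS clip with a phase vocoder, which shortens the signal without changing its pitch. This is a minimal, hypothetical example assuming the Python libraries librosa and soundfile; the file names and the speed-up factor are placeholders and are not taken from the cited studies.

    import librosa
    import soundfile as sf

    def make_spearcon(tts_wav_path, out_path, speedup=2.5):
        # Load the synthesized phrase (e.g., a street or landmark name).
        signal, sample_rate = librosa.load(tts_wav_path, sr=None, mono=True)
        # Phase-vocoder time stretching: the duration shrinks, the pitch is preserved.
        compressed = librosa.effects.time_stretch(signal, rate=speedup)
        sf.write(out_path, compressed, sample_rate)

    # Hypothetical usage: one spearcon is generated offline for each map label.
    # make_spearcon("tts/rue_de_metz.wav", "spearcons/rue_de_metz.wav", speedup=2.5)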
Several interactive map projects have used auditory icons (Campin et al., 2003;
Hamid & Edwards, 2013; Heuten et al., 2007; Jacobson, 1998a; Lawrence et al., 2009;
Milne et al., 2011; Parente & Bishop, 2003; Pielot et al., 2007; Senette et al., 2013; Simonnet
et al., 2009, 2012; Su et al., 2010; Zhao et al., 2008). Often auditory icons are used in
combination with speech output. Only a few projects (Heuten et al., 2007; Lawrence et al., 2009; Milne et al., 2011; Pielot et al., 2007; Su et al., 2010) made use of non-speech output only. Common auditory icons in interactive maps at the scale of a city use the sound of cars to represent streets, splashing water or waves for rivers and oceans, and birds chirping for parks. In maps that represent regions and countries, water can also be
associated with oceans and cars can represent entire cities. This demonstrates that the
meaning of auditory icons is ambiguous and depends on the context.
In some cases, the use of a metaphor for water was perceived as irritating. Lazar et al. (2013) report that many users did not understand the sonification tones and preferred speech. Krueger and
Gilden (1997) argue that many features such as mountains, rivers, lakes, etc. do not have
unambiguous sounds associated with them. These findings are in contrast with other
studies, in which participants have successfully created mental maps from auditory icons
(Heuten et al., 2007; Pielot et al., 2007; Su et al., 2010). These studies report that auditory
icons are a successful means of supporting spatial cognition. Furthermore, haptic cues seem to be harder to identify than auditory icons (Lawrence et al., 2009). This may be explained by the fact that haptic cues, in contrast to auditory icons, cannot benefit as easily from analogies to everyday objects. Delogu et al. (2010) suggested that auditory icons are better suited to exploring and learning the macrostructure of a spatial environment than to learning spatial details. In conclusion, it is possible to create mental representations from auditory icons. Yet, the information conveyed differs from that conveyed by speech, and the meaning of auditory icons has to be learned.
Earcons have been used in numerous interactive maps for visually impaired
people (Carroll et al., 2013; Daunys & Lauruska, 2009; Iglesias et al., 2004; Kaklanis et al.,
2011, 2013; Rice et al., 2005; Su et al., 2010; Tixier et al., 2013; Weir et al., 2012; Zhao et
al., 2008). As with auditory icons, earcons are often used as a complement to speech.
Su et al. (2010) combined the use of auditory icons and earcons, without speech output.
Kaklanis et al. (2011) used only earcons. However, in their later prototype they combined
the use of earcons with speech (Kaklanis et al., 2013). Despite the challenge related to
learning the meaning of earcons, Delogu et al. (2010) have demonstrated that earcons
could be successfully used for creating mental maps both for sighted and for visually
impaired people.
II.4.4.2 Touch
Touch output in interactive map prototypes is mostly used as a complement to audio output. Nevertheless, it plays an important role in presenting graphical information
non-visually (Giudice, Palani, Brenner, & Kramer, 2012). As stated before, touch includes
cutaneous, kinesthetic and haptic perception (see II.1.3.2). The definition of which
modalities can stimulate touch perception is relatively vague. According to El Saddik et
al. (2011), this category includes output from actuators which act as force and position
source on the human body to simulate a tactile sensation. Other devices do not actively
produce a force. Yet, through the manual exploration of a device, the user perceives
kinesthetic information. This information contributes to the mental representation of
space. In the following we present the analysis of the interactive map corpus, regarding
cutaneous, kinesthetic and haptic output modalities.
Not all devices provide a haptic reference frame. Keyboard interaction does not
provide any perceivable relation between the hand movement and the movement on the
map. Delogu et al. (2010) reported that keyboard exploration, being strictly symbolic and discrete, required more cognitive effort to reconstruct the position after each step, and was therefore slower than touch interaction.
Likewise, haptic devices do not provide a reference frame. Some haptic devices
(like the Phantom) even provide 3 or more degrees of freedom. Integrating the third
dimension into their mental representation may be challenging for visually impaired
people.
The movement of a computer mouse is not linearly related to the movement on the map. Thus, differences exist between the perceived local distances and the global
distances (Jetter, Leifert, Gerken, Schubert, & Reiterer, 2012). Also, users often lift the
mouse and move it (Lawrence et al., 2009). Due to this behavior, disorientation can occur
when operating the mouse without vision (Golledge et al., 2005; Pietrzak et al., 2009). In
order to overcome this, Rice et al. (2005) included a haptic grid overlay and a haptic
frame in their map, which was operated by a force feedback mouse. The haptic grating of
the grid within the map produced haptic feedback and allowed users to maintain a sense
of distance, scale, and direction. The haptic frame around the map served as a barrier to
limit the map. Rice et al. (2005) reported that the frame was very helpful. Yet, Lawrence et
al. (2009) observed that even with a similar grid system, spatial updating proved difficult
for most participants.
Tactile actuator displays consisting of one or several braille cells do not provide a
haptic reference frame. With the Tactos device, for instance, tactile feedback was coupled
with the kinesthetic movement, but users could never perceive the whole figure at the
same time (Gapenne, Rovira, Ali Ammar, & Lenay, 2003).
On the other hand, tactile actuator displays with large matrices of pins that are
explored with one or both hands provide a haptic reference frame (Schmitz & Ertl, 2012;
Shimada et al., 2010; Zeng & Weber, 2010). Other displays that typically provide
reference frames are touch-sensitive devices. Both display types provide absolute pointing coordinates in two dimensions and allow continuous finger movements within a fixed frame. The kinesthetic feedback from arms and fingers, combined with the position relative to the outer frame of the touch display, may benefit users' spatial awareness (Zhao et al., 2008). Also, for touch screens that are larger than a typical mouse pad, the intensity of kinesthetic feedback is greater (Jetter et al., 2012). In line with this, Tan, Pausch, Stefanucci, and Proffitt (2002) observed a significant improvement in spatial recall by sighted participants when using a touchscreen in comparison with a computer mouse. They reported that female participants in particular benefitted from the touchscreen. They attributed the improvement to the kinesthetic cues provided by touchscreen interaction as compared to mouse interaction. Of course this
is of limited validity for touch-sensitive devices with smaller display size. A map
displayed on a smartphone is limited in size and it may be necessary to scroll and zoom
in order to explore the whole map, which makes it difficult to construct a mental map.
Indeed, it has been shown for sighted people that spatial memory performance was
higher when using a touch device than when using a mouse, but that this advantage
vanished when panning and zooming were involved in exploration (Jetter et al., 2012). To
our knowledge, there are no studies on spatial representations resulting from zooming
and panning maps with regard to visually impaired people.
We decided not to include the touchplates concept in the corpus of interactive maps for visually impaired people.
Previous studies have identified benefits of raised-line maps for the cognitive
mapping of visually impaired people (see II.3.2.2.c). It may be hypothesized that the
benefits of raised-line maps for cognitive mapping also apply to interactive maps with
raised-line map overlays. However, no conclusive study has been done so far.
Brewster and Brown (2004) proposed tactons (tactile icons) as the tactile equivalent to icons and earcons. As with earcons, the meaning of tactons is not analogical but has to be learnt. Advantages are that tactons are quicker to perceive than
braille text and that they are universal and not bound to a specific language. Numerous
tactons can be robustly distinguished (Choi & Kuchenbecker, 2013). Like earcons, tactons possess different parameters that can be modified (Brewster & Brown, 2004). Concretely, these parameters are frequency (within the perceivable range of 20 to 1000 Hz), amplitude (i.e., intensity), waveform, duration, and rhythm.
Furthermore, Brewster and Brown suggested that the location of the actuator with regard
to the body could be used for coding the tactons. Within the corpus of interactive maps,
vibration is presented at the fingertips of the user. However, other body parts can also
perceive vibration and this has successfully been used in different prototypes (see for
instance Heuten et al., 2008). Burch and Pawluk (2011) proposed to apply vibro-tactile
feedback to multiple fingers in parallel. In a picture recognition task with visually
impaired participants, they observed a higher accuracy and a decrease in response time
when participants used three fingers instead of one if the objects were represented with textures. This is in line with studies on raised-line images, which observed a higher picture
identification accuracy if five fingers rather than one were used in the exploration
process (Klatzky et al., 1993). So far no interactive map project for visually impaired
people has investigated this possibility, although it seems promising.
Within the corpus of interactive maps, user studies demonstrated that cognitive
maps of participants were more accurate after exploring a smartphone with vibro-tactile
and audio feedback than with audio feedback alone (Yatani et al., 2012). Furthermore,
participants stated that it was quicker and easier to integrate information from both
channels. However, in a study on learning non-visual graphical information, a vibro-audio tablet interface proved to be less efficient than raised-line drawings (Giudice et al., 2012). It can therefore be hypothesized that the perception of lines and curves is
harder when presented through vibrations than when printed as raised-line drawing, but
that it can be a helpful complement to audition. There is still a need for further
investigation and comparison of vibro-tactile output with other output modalities.
Their findings revealed that exploration was significantly quicker if the map was
represented with “empty” forms, i.e. forms without texture filling. They observed that
textures were confused with borders and lines. As an alternative, Pietrzak et al. (2009)
proposed to apply the concept of tactons that has been developed for vibro-tactile
displays to raised-line cells. They designed a set of distinguishable static and dynamic
tactons for representing directional information with a VTPlayer tactile mouse (see Figure
II.20). Each pin had two states, up and down. A pattern combined the states of several
pins within a cell at a given time, thus forming static tactons. Beyond that, dynamic
patterns included lists of patterns and durations which are associated with a certain
tempo. They could be either blinking, i.e. alternating, or have wave forms, i.e. represent
the evolution of a shape in space and time. It would certainly be interesting to investigate
whether the concept of raised-pin tactons could be applied to interactive map
exploration.
Figure II.20: Set of tactons for indicating cardinal directions as proposed by Pietrzak et al. (2009). (a) Static tactons. (b) Dynamic tactons. Concatenating the patterns in the columns leads to a wave form tacton.
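The distinction between static and dynamic raised-pin tactons can be captured with very simple data structures. The sketch below is a hypothetical Python illustration, not the implementation of Pietrzak et al.; the 4 x 4 cell size and the example pattern are placeholders.

    from dataclasses import dataclass
    from typing import List, Tuple

    # A static tacton for a 4 x 4 pin cell: True means the pin is raised.
    StaticTacton = Tuple[Tuple[bool, ...], ...]

    # Example: raised pins roughly forming an arrow that points upwards ("north").
    NORTH: StaticTacton = (
        (False, True,  True,  False),
        (True,  True,  True,  True),
        (False, True,  True,  False),
        (False, True,  True,  False),
    )

    @dataclass
    class DynamicTacton:
        # A dynamic tacton is a timed sequence of static patterns: playing the
        # frames in order produces a blinking or wave-like sensation.
        frames: List[StaticTacton]
        frame_duration_s: float  # how long each pattern stays on the pins

        def total_duration(self):
            return len(self.frames) * self.frame_duration_s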
Some of the interactive map prototypes made use of larger displays with actuated
pins (Schmitz & Ertl, 2012; Shimada et al., 2010; Zeng & Weber, 2010). These devices
display more information than smaller ones. The users then explore the display by
moving their hands and arms, and thus perceive information in parallel. They then
integrate the cutaneous with the kinesthetic information. However, the rendering of
information remains challenging as the resolution is lower than for a normal screen (Zeng
& Weber, 2010). Schmitz and Ertl (2012) displayed streets as lines and buildings as
rectangles. They proposed varying line size and filtering out details, in order to avoid
overloading the map reader. Zeng and Weber (2010) proposed a tactile symbol set for
displaying different types of information such as bus stops or buildings. In a second
publication they further extended the tactile symbol set (Zeng & Weber, 2012).
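To make the resolution constraint concrete, the sketch below rasterizes a straight street segment onto a boolean pin matrix using a standard Bresenham line. It is a hypothetical illustration in Python with numpy, not code from the cited prototypes, and the 30 x 60 grid size is an arbitrary placeholder.

    import numpy as np

    def rasterize_street(grid, x0, y0, x1, y1):
        # Raise the pins along a straight street segment (Bresenham line algorithm).
        # grid is a boolean matrix with one entry per pin; True means "pin up".
        dx, dy = abs(x1 - x0), -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            grid[y0, x0] = True
            if x0 == x1 and y0 == y1:
                break
            e2 = 2 * err
            if e2 >= dy:
                err += dy
                x0 += sx
            if e2 <= dx:
                err += dx
                y0 += sy

    # Hypothetical 30 x 60 pin display with two crossing streets rendered as raised lines.
    pins = np.zeros((30, 60), dtype=bool)
    rasterize_street(pins, 5, 5, 55, 20)
    rasterize_street(pins, 10, 25, 50, 2)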
The type of haptic device influences the interaction. For instance, as reported above, some devices allow interaction with only a single contact point while others can send different feedback to two fingers. It can be hypothesized that the integration of
information from a single point of contact into an ensemble of information is cognitively
demanding (Brewster & Brown, 2004). This hypothesis is supported by studies on raised-
line picture reading in which better performance has been observed if more than one
finger was used for exploration (II.3.2.2.b).
Figure II.21: Haptic Tabletop Puck. (a) User exploring a surface with the finger placed on the rod. (b) View of the inside of the puck with its different actuators, taken from below. Reprinted with permission.
Haptic devices such as the Phantom have been criticized for their low
accessibility, high price and fragility (Simonnet et al., 2012). Consequently, the BATS
project (Parente & Bishop, 2003) aimed to use consumer grade haptic devices with lower
costs. They allowed a variety of devices, including mice, trackballs, joysticks, and
gamepads capable of providing force feedback. Marquardt et al. (2009) introduced the
“haptic tabletop puck” (HTP) as a device for rendering haptic feedback on a multi-touch
table (see Figure II.21). The HTP consisted of a block within which a rod could be
adjusted in height by a servo motor. The rod served at the same time as input (through a pressure sensor) and as output for rendering haptic feedback. Furthermore, a servo motor could push a rubber plate against the surface in order to introduce friction. The position was tracked by the multi-touch table by following fiducial markers on the HTP. Dynamic sinusoidal vibration with varying frequency or amplitude could be
used for transporting information. The different sensations that could be perceived by
making use of the combination of actuators included height, texture, softness and friction.
Each HTP only had one single rod and thus only one contact point, but it was possible to
use several devices at once. The HTP has been accompanied by a haptic touch toolkit
(Ledo, Nacenta, Marquardt, Boring, & Greenberg, 2012). This toolkit provided an easy
programming interface with access to the hardware, a behavioral layer and a graphical
layer. The HTP has been successfully used in geographical map applications, where the
topographical relief has been mapped to the height of the rod, different types of terrain to
different textures and ocean temperature to vibration frequencies (Marquardt et al.,
2009). To our knowledge this device has only been evaluated with sighted people without
blindfold, with haptic feedback being a complement to vision. It would certainly be
interesting to test whether visually impaired people could create a mental representation
based on this device. Furthermore, it would be interesting to study whether the use of
multiple devices instead of one positively impacts spatial cognition, as research on tactile
image reading suggests (see II.3.2.2.a).
Several researchers have investigated the design of haptic effects that could be
created with different devices. Within the corpus of interactive maps for visually impaired
people, cues that were used in the BATS project included bumps at the boundaries of
countries and states and constant vibrations on cities (Parente & Bishop, 2003). Lawrence
et al. (2009) presented information via vibration with a force-feedback mouse. In a user
study they compared haptic feedback to auditory icons. They observed that the haptic
symbols were less easily understood than the auditory icons. Some users even
experienced the haptic feedback as annoying. Lawrence et al. argue that this might be
related to the limited range of available haptic cues in comparison to auditory cues.
Differences between haptic signals were subtle and it was therefore more difficult to
differentiate between different symbols. Golledge et al. (2005) presented different haptic
effects that could be created with force-feedback devices. For instance, shapes could be
defined as "virtual walls". Passing through a virtual wall with the device would demand extra force from the user. On the other hand, following the outline of the virtual wall would reveal
the shape. Golledge et al. suggested using this effect for boundaries of buildings.
Multiple virtual walls can be used to build an effect of texture. This could be used to
differentiate geographic regions. A “gravity well” effect could draw a cursor to the center
of an object. Golledge et al. proposed using this effect for guiding the user to different regions on the map. "Rubber banding" makes it easier to remain on an object. This effect could be used to facilitate following roads or borders. However, Golledge et al. also stated that identifying irregular polygons just by following the outline with the tactile sense is difficult. Furthermore, they suggested representing textures through vibrations (see subsection II.4.4.2.c). Golledge et al. also reported that users tended to get lost during map exploration with haptic devices due to the missing haptic reference frame. Consequently, Rice et al. (2005) proposed to compensate for this lack of orientation by
introducing a frame with haptic feedback around the map. They reported that this frame
helped users reorient themselves when they got lost during map exploration. Pietrzak,
Martin, and Pecci (2005) studied haptic pulses with a Phantom device. They varied
duration, amplitude and direction of the pulses. Their study revealed that directions were
easier to identify than differences in amplitudes. The concept behind their study was to
combine direction and amplitude alternation, in order to create haptic icons, as an
equivalent to the previously mentioned earcons and tactons. It appears that there is a need for further investigation on how to represent map information with tactile feedback.
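As a concrete illustration of one of these effects, the sketch below computes a "gravity well" force that draws a force-feedback cursor toward the center of a map object, in the spirit of the effect described by Golledge et al. It is a hypothetical, device-independent computation in Python with numpy, not code from any of the cited systems, and the parameter values are placeholders.

    import numpy as np

    def gravity_well_force(cursor, center, radius, stiffness):
        # Spring-like force pulling the haptic cursor toward an object's center.
        # Within the capture radius the force acts like a spring anchored at the
        # center, so the cursor is drawn to and held on the object; outside the
        # radius no force is applied.
        offset = center - cursor
        distance = np.linalg.norm(offset)
        if distance >= radius:
            return np.zeros(2)
        return stiffness * offset  # points toward the center, zero exactly at it

    # Hypothetical usage inside a force-feedback update loop (units are arbitrary):
    cursor_position = np.array([12.0, 7.5])
    poi_center = np.array([10.0, 8.0])
    force = gravity_well_force(cursor_position, poi_center, radius=5.0, stiffness=0.2)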
II.4.4.3 Summary
In this subsection we presented different output interaction techniques that have
been used in interactive maps for visually impaired people. We divided the techniques into two categories: audio and touch.
Audio output includes verbal and non-verbal modalities. We suggest that speech
output is the most powerful means for communicating information as some information
(e.g., names) can only be transmitted by speech or text. Yet, non-verbal output has also
been used in interactive maps, with mixed results. While it has been successfully used in some projects, other studies report difficulties in learning and recognizing the sounds. Earcons demand more learning time than ambient sound. We suggest using
ambient sound and earcons to provide complementary information to speech. It also
appears interesting to investigate the use of musical output. Finally, spatial information
can be presented through spatial sound.
Touch output can be of cutaneous, kinesthetic and haptic nature. We suggest that
for the exploration of spatial information, it is especially important to provide a fixed
haptic reference frame. Vibro-tactile output seems to be a helpful complement for audio
information. Yet, vibrations are often not spatially located and they proved less efficient
for communicating graphical information than classical raised-line drawings. Raised-pin
displays seem promising, especially if the display is large enough to be explored with
both hands and provides a haptic reference frame. Yet, these devices are costly. A new
possibility is provided by laterotactile displays. Although few studies have investigated
laterotactile output for accessible interactive maps, the existing findings suggest that this
research direction is promising. Many studies have investigated how (map) information
can be displayed with haptic devices. Unfortunately these devices do not provide a fixed
haptic reference frame. In conclusion, we argue that raised-line map overlays are well suited to presenting spatial information to visually impaired people. First, they rely on
previously acquired map reading skills. Second, they also provide a fixed reference
frame.
We suggest that non-visual interactive maps should at least provide speech output, ideally combined with some sort of haptic feedback. More precisely, we suggest using raised-line drawings. The speech output could also be combined with non-verbal
audio output.
Several differences exist between studies. A first difference is whether the studies
are of qualitative or quantitative nature. Some experiments report precise variables, a
precise protocol and statistically analyzed results. However, many papers report only vague information on the experimental protocol and the observations and measures that were taken, and results are often of a qualitative nature (for instance "participants expressed
positive feedback about the prototype”).
Second, studies have various objectives. Many studies aim at evaluating the
overall usability of the prototype, whereas others focus specifically on the spatial
cognition resulting from its use. It has to be noted that usability can be assessed with
quantitative measures for effectiveness, efficiency and satisfaction (ISO, 2010), but that
these quantitative measures are rarely taken. Most studies observe only qualitatively
whether users can successfully interact with the prototype. This is what we call “overall
usability”. Furthermore, spatial cognition can be part of usability as the spatial cognition
that results from map exploration is a measure of its effectiveness. We believe that it is
interesting to have a closer look at spatial cognition, for instance at the different types of
spatial knowledge (landmark, route and survey). Therefore, in this analysis we regard "overall usability" and "spatial cognition" separately. Furthermore, we also classified experimental results regarding differences between populations.
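To illustrate what such quantitative measures could look like, the sketch below computes effectiveness as a task completion rate, efficiency as the mean time on successfully completed tasks, and satisfaction as a mean questionnaire rating. It is a hypothetical Python example with invented data, not results from any of the cited studies.

    from statistics import mean

    def usability_measures(tasks, ratings):
        # tasks: list of (completed, time_in_seconds) tuples, one per task attempt.
        # ratings: satisfaction ratings on a 1-5 Likert scale.
        effectiveness = sum(1 for done, _ in tasks if done) / len(tasks)   # completion rate
        successful_times = [t for done, t in tasks if done]
        efficiency = mean(successful_times) if successful_times else None  # mean time on success
        satisfaction = mean(ratings)                                       # mean rating
        return effectiveness, efficiency, satisfaction

    # Hypothetical data for one participant exploring an interactive map:
    tasks = [(True, 42.0), (True, 58.5), (False, 120.0), (True, 37.2)]
    ratings = [4, 5, 3]
    print(usability_measures(tasks, ratings))  # (0.75, 45.9, 4.0)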
Also, studies differ in the number of participants. Within the 30 papers that have
reported experimental results the number of participants varied between 1 and 60, with
the median being 8 participants.
Furthermore, some studies are done with blindfolded sighted people, others with
visually impaired people. As reported before, spatial representations, behaviors as well
as perceptual capacities differ depending on the visual capacities (see II.1.4 and II.2.2).
We classified the corpus with regard to these different aspects. However it has to
be noted that the comparison of user evaluations of different interactive maps is limited, as prototypes vary a lot in content, devices used, and interaction techniques.
Schmitz and Ertl (2010) asked four sighted blindfolded people to use their
gamepad based map application. Participants succeeded in using the interface, but
performance depended on expertise with using gamepads. It might be hypothesized that
visually impaired people are less experienced than sighted peers, as there are few
accessible videogames based on gamepad devices. It would therefore be necessary to evaluate the concept with visually impaired people in order to know whether it is of interest.
Participants in another study were able to complete the tasks and were enthusiastic about the application. It therefore appears that image recognition can be successfully used in interactive maps. Krueger and Gilden
(1997) observed five blind people using their application with auditory feedback and a
tactile grid. All subjects were able to use the interface. Beyond that, they took basic
observations of spatial cognition: two users could produce acceptable drawings of the
environment (yet the term “acceptable drawing” remains vague). The haptic-audio
prototype proposed by De Felice et al. (2007) was evaluated by 24 visually impaired
people. Users appreciated the control to trigger vocal messages on demand. All but two
participants were able to explore the entire range of application. Kaklanis, Votis, and
Tzovaras (2013) observed 9 partially sighted, 5 blind and 18 normally sighted users
testing their system with haptic and auditory feedback. Most participants agreed that the
concept was innovative but needed adjustment. 58% of the participants declared their
willingness to use the prototype for learning a map. Zhao et al. (2008) took quantitative
measures. They let seven blind people evaluate their prototype which was based on
keyboard input and auditory output. They found that participants successfully
completed 67% of the training tasks and 90% of the tasks on the following day. To sum
up, these findings suggest that different devices can successfully be used in accessible
interactive prototypes. However, the validity of most results remains limited due to small
sample sizes, or the absence of quantitative measures and systematic analysis.
First we report the experiments with sighted people. Poppinga et al. (2011) asked
eight sighted users to explore their smartphone application with vibration and audio
output and draw a map of the perceived environment during the exploration phase. They
compared two zoom levels and observed that the “zoom in” condition resulted in a more
accurate drawing than the more distant zoom condition. Participants correctly perceived
basic information and relations. Also, Poppinga et al. hypothesized that the task was
cognitively demanding, as participants needed up to 15 minutes to draw the map, which was not large. Lohmann and Habel (2012) let 24 sighted blindfolded people
compare two map conditions of a haptic prototype with speech output. In a first condition
only names of landmarks and routes were indicated. In a second condition,
supplementary information on the relation of geographic objects was provided. In the
second condition, participants were able to answer more questions on spatial knowledge
correctly. However, the result also depended on the type of spatial knowledge.
Landmark scores showed a larger difference between the two conditions than route
scores. Two studies have observed spatial cognition that resulted from tangible
interaction. Pielot et al. (2007) evaluated a tangible user interface with spatial audio
output with 8 blindfolded sighted participants. Their study showed that the tangible
interface could be used to determine the position and orientation of the virtual listener
(represented by the tangible object) with only small deviations from the real orientation.
Furthermore, they discovered that users spontaneously rotated the tangible object to
determine the spatial location of objects. How well the rotation could be used for determining orientation depended on the type of source. Milne et al. (2011) compared a
version of their prototype with tangible interaction and a version with body-rotation with
5 blindfolded sighted participants. They observed problems related to the shifting of
reference frames between egocentric and allocentric perspectives. As tangible
interaction has rarely been proposed for visually impaired people it would indeed be
interesting to reproduce both studies with visually impaired participants.
Other studies were done with blind participants. Simonnet et al. (2012) let one
blind user explore an interactive map prototype—a tablet application with auditory and
vibrational feedback—and draw a map of the explored environment. They observed that
the resulting map drawing was relatively precise. However, the result is of course of
limited validity due to the small sample size. In another study, Simonnet et al. (2009) let
two blind sailors learn a maritime environment with a haptic device and then navigate at
sea in the same environment. Their study revealed that getting lost in egocentric mode
during exploration of the virtual environment forces the blind sailor to coordinate his
current view with a more allocentric view. It was therefore advantageous for constructing
an allocentric representation. If this is the case, virtual environments can indeed be used
as training environments. Getting lost and then exploring the environment with the objective of finding one's way back seems less stressful in a virtual than in a real environment.
Heuten et al. (2007) let eleven blind users explore their application, a purely auditory map, with the objective of understanding spatial relations and distances between objects. Users perceived the exploration as easy, except for the identification of objects' shapes. Also, confusions were observed when two similar objects were close, because of the similar auditory feedback they emitted. Yairi et al. (2008) asked four blind people to
explore their application with musical feedback and then walk the route unaided. All
participants reached the goal, even if one was unsure about it. Even if people made
wrong turns at cross points or felt lost, they were able to correct their route. Both studies
show that non-verbal audio feedback can effectively be used for creating mental maps.
Yatani et al. (2012) let ten blind and two low-vision participants explore their
application and then asked them to draw sketches. The application was compared in two
conditions: a smartphone with auditory output alone, and a smartphone with auditory output plus feedback through 9 vibration motors. Even if all participants could
understand the spatial tactile feedback, drawings were more accurate after audio and
haptic feedback than after audio feedback alone. This study is in line with Giudice et al.
(2013) who reported that different sensory sources led to composite representations that
were insensitive to the origins of the knowledge. Beyond that, it suggests that the mental
representations created from multiple sensory sources may be more complete than those
created from one source alone.
Jacobson (1998a) asked five visually impaired and five blind people to evaluate
his application—a touchpad application with auditory feedback. He analyzed verbal
descriptions, map drawings and qualitative feedback and observed that all users
successfully created mental representations from the map usage. Besides, they found the interface simple, satisfying and fun. In another study, Jacobson (1996) compared route
learning with a mobility instructor versus with a touch-sensitive interactive map and
audio beacons. Verbal descriptions, sketch recall, distances by ratio-scaling, tactile
scanning assessment and talk aloud protocol revealed that both groups were able to
complete the route and verbally describe it. All sketches showed a high degree of
completion and correctness but the group who had explored the interactive map was
more accurate. Participants also provided positive qualitative feedback. This is in line
with studies that revealed an advantage of tactile map use over direct exploration (see
II.3.2.2.c). It can therefore be hypothesized that interactive maps as well as tactile maps
are helpful tools for acquiring spatial cognition. However, so far no study directly
compared these two map types.
In another study, participants, including children, liked the device and performed well. The authors did not observe any difference between sighted and visually impaired people regarding exploration time, but perceived easiness differed, with an advantage for visually impaired people. Furthermore,
children were faster than adults. Delogu et al. (2010) observed 10 congenitally blind, 10
late blind, and 15 blindfolded sighted participants while exploring an interactive map based on the use of earcons. Their study did not reveal any significant difference between sighted and blind people in the accuracy of the resulting mental map. Yet, blind subjects reported being helped more than sighted subjects by the fact that the sonification was spatial. Furthermore, both early and late blind participants rated the interface as easier than sighted users did. According to Delogu et al., both findings could be explained by the greater experience of visually impaired people with non-visual interfaces.
II.4.5.4 Summary
Numerous interactive map projects exist and different evaluations have been done
with interactive maps. We classified these evaluations according to overall usability,
spatial cognition, and comparison of different populations. As there is a large variety
among the map projects regarding devices, interaction techniques and content, it is hard
to compare the results.
To sum up, the different studies demonstrate that various types of devices and
interaction techniques seem advantageous for acquiring spatial cognition. No study
systematically compared different map types, so it is not known whether certain types of maps are more advantageous than others. Even if it is possible to create mental images from one sensory source (e.g., audio output), results suggest that feedback from more than one sensory modality may enhance the mental representation. Yet, further studies would be necessary. Also, studies suggest that interactive maps could have the same positive impact on spatial cognition as tactile maps. Again, there is a
need for further investigation.
It has to be noted that many studies only report qualitative results and that the experimental setup and protocol often remain vague. Additionally, some studies have
limited validity due to the low number of participants. Moreover, some studies are done
with sighted and others with visually impaired people. Comparing the results between
both groups is delicate, as it is known that perception and spatial cognition depend on
visual capacities. However, as it is difficult to recruit visually impaired participants,
testing with blindfolded sighted people remains an interesting means to evaluate
concepts and open up design space. In general a lot of positive feedback has been
reported on interactive maps, and it appears that they are a promising solution for presenting spatial information to visually impaired people.
II.5 Conclusion
In this chapter we presented the broad theoretical context that concerns
interactive maps for visually impaired people.
In a third part, we focused on maps as tools for acquiring spatial cognition without
vision. We underlined that maps can be helpful tools for improving orientation skills. We
compared maps to other tools such as verbal guidance, small scale models and GPS-
based navigation systems. Then we investigated the cognitive mapping related to map
reading both for sighted and visually impaired map readers. Traditionally, maps for
visually impaired people are tactile maps. We demonstrated that different haptic
exploration strategies influence the result of raised-line image exploration. Furthermore, different studies that have demonstrated the value of tactile maps for visually impaired people have been presented. Then we detailed the methods related to drawing and
producing tactile maps. Although tactile maps are efficient means for the acquisition of
spatial knowledge, several limitations and problems are associated with them. We therefore argued that interactive maps are an interesting means of improving the usability of maps for visually impaired people.
The acquired knowledge from this chapter will serve as a basis for the following
chapters.
Chapter III - Designing Interactive Maps with and for Visually Impaired People
This chapter also answers Research Question 2 (How to involve visually impaired
people in a participatory design process?). Participatory design includes users during
the whole design process, but many of its methods are based on the use of the visual
sense. During this PhD work, we faced many situations where the classic participatory methods were not adequate. Here, we first introduce this process and then report how we adapted it. We detail our contribution to each of the four
steps of this design cycle: analysis, creation of ideas, prototyping and evaluation. As
participatory design is an iterative process, several implementations of prototypes have
been developed and continuously evaluated with visually impaired users.
This chapter also presents the material and methods for the user study in the
following chapters IV and V.
When developing for users with special needs, such as visually impaired people, the notion of usability becomes even more important. Accessibility is defined as
usability of a product, service, environment or facility by people with the widest range of
capabilities (ISO, 2003). Accessibility is about designing user interfaces that more people
can use effectively, efficiently and with satisfaction, in more situations (Henry, 2007). The
ISO standard (2003) specifically states that the concept of accessibility addresses the full
range of user capabilities and is not limited to users who are disabled. People without impairments can be temporarily handicapped, for instance when forgetting one's glasses, or situationally impaired, for instance when sunlight reflects on a display (Henry, 2007). Accessibility comprises these situations. Therefore, for
Stephanidis (2009) accessibility is a prerequisite for usability.
Universal design is not the process of creating products that everyone can use but of ensuring that designs
can be used by as many people as is practical. In comparison, Newell and Gregor (2000)
distinguish between “mainstream design” (design for everyone except people with
impairments), “assistive technology” (only for people with impairments) and “design for
all" (including all users). The latter is a very difficult, if not often impossible, task.
Providing access to people with certain types of disability can make the product
significantly more difficult to use by people without disabilities, and often impossible to
use by people with a different type of disability. Products designed for all must be
capable of accommodating individual user requirements in different contexts of use,
independent of location, target-machine and runtime environment (Emiliani, 2009). Even
if elderly and disabled persons are included in the mainstream design process, it is not
possible to design all products and devices so that they are usable by all people (G. C.
Vanderheiden, 2012). One of the challenges in universal design is thus to recognize the
full range of users who might be interested in using a particular system (Petrie & Bevan,
2009).
Including impaired users in the design process raises specific challenges (Newell
& Gregor, 2000). Impaired users may have very specialized and little known
requirements. It may be difficult to get informed consent from some users. Besides,
participation in the development of new assistive or design-for-all technology may lead to disappointment among participants. Newell and Gregor underlined that some research
may show that certain techniques are not successful, and that even if the research is
successful, the user who was involved in the research may never personally benefit from
the outcomes of the research. They also emphasize the importance of including users but
not in a dominant role, i.e. leaving the establishment of the research agenda to the
researcher. Finally, Lévesque (2005) argued that while it is useful to include blind people in the design process, designers of innovative devices must be ready to face a natural
opposition to changes that contravene existing conventions. Yet, going against accepted
ideas may lead to interesting and innovative results.
Concretely, our design process was iterative and composed of four phases. The
analysis phase allowed us to identify users' needs for orientation and mobility, the challenges that they face when traveling, and how they read tactile maps. We also analyzed the
literature on different design concepts and technical aspects. In the second phase, the
generation of ideas, we proposed brainstorming and “Wizard of Oz” techniques as
means for stimulating creativity. In the third phase, prototyping, we iteratively developed
prototypes. In the fourth phase, evaluation, we developed evaluation methods for a user
study. The experiment itself will be presented in chapter IV.
Figure III.1: Participatory design cycle as applied in our project. The cycle was iterative and composed of four phases (analysis, design, prototyping and evaluation). Translated and reprinted from (Brock, Molas, & Vinot, 2010).
III.2.1 General
The design process for our interactive map prototype was based on participatory
design. As stated above, different approaches exist for participatory design. As depicted
in Figure III.1, we chose the four steps proposed by Mackay (2003): 1) analysis phase
(observe use), 2) design phase (generate ideas), 3) prototype design, 4) evaluate system.
In this regard, our approach was also similar to that proposed by Sánchez (2008).
III.2.2 Participants
Participatory design demands close collaboration with users. Our aim was not
“design for all”, but developing assistive technology for visually impaired people
through an accessible design process. One of the challenges was identifying the appropriate user group. There is a large variety of visual impairments. Achieving usability for blind
people does not mean achieving usability for people with low vision (Henry, 2007).
Likewise, the ISO standard 16071:2003 (ISO, 2003) gives different recommendations for
working with people with low vision and people with blindness. In our project we
focused on legally blind people.
Some criteria for selecting participants apply to all types of user groups, such as
age or gender. When working with visually impaired people, additional criteria are
important. Examples are the degree of visual impairment, the proportion of lifetime with
blindness or the age at onset of blindness, autonomy in everyday life, braille reading
skills or use of assistive technology (Cattaneo & Vecchi, 2011). We applied these criteria
in the selection of our participants.
Another aspect concerns working with experts versus working with novices.
According to Henry (2007) it is advantageous to include experts in the beginning of the
design process as it is possible to learn a lot from them. Novices can then evaluate the
prototype. From our experience, it is difficult to recruit visually impaired people and there is not always a choice concerning users' characteristics. Visually impaired users
that volunteer for participating in an experiment on new technologies are often rather
experienced with using them, which means that recruiting novices is especially
complicated. Furthermore, we have noticed that due to participating in the experiments
some of the visually impaired users have become even more experienced. For instance,
in the beginning we observed quite a high anxiety concerning multi-touch displays.
Recently, some visually impaired users in our user group have acquired smartphones.
The fact that visually impaired people are getting used to multi-touch interaction in their daily lives makes it even more interesting to include this interaction in assistive technology. In any case, we benefitted from the fact that one of the members of our
project is himself visually impaired. He was involved throughout the whole project and
contributed with a lot of ideas. He was also always available to evaluate the accessibility
of technology before inviting participants for user studies.
Participants in our user studies were all blind with at most light perception, i.e.
classified in the categories 3 to 5 as defined by the WHO (2010). This decision was made
as the diversity of visual impairment is very large (see II.1.2.1) and thus hard to study. For
brainstorming sessions and pretests we also accepted people with residual vision if no
other participants could be recruited. Etiology of the impairment was diverse, resulting
either from congenital disease, accidental disease, or trauma. The user group was also
diverse concerning the age at onset of blindness. Most of the participants were either
working or in training, which is not surprising given the close contact with the Institute of
the Young Blind. In early phases of the project we also included locomotion trainers of the
Institute for the Young Blind to obtain information on orientation and mobility training for
visually impaired people.
We observed that participants in our study got attached to the researchers they
knew and felt more confident with them. It was perceived as negative if researchers
changed regularly (for instance students that left when their project ended). It seems at
least necessary to explain to the participants why their contact person changed. We
observed that for some people it is more difficult to share details on their impairment with
a stranger (i.e. the researcher) than for others. Whereas some of our participants did not
mind explaining details about their impairment, others would question why we would
need this information. This is delicate, as the type and onset of impairment may impact
spatial cognition (see II.2). Our strategy was to explain the need for this information and
to reassure users that the information would be kept anonymous.
III.2.2.2 Logistics
Working with impaired users requires adapting rooms and buildings. Henry
(2007) gives some recommendations. With regard to the facilities it is important to
describe the outline of the room and not to move objects without telling the user. In our
user group several participants had a guide dog and it was necessary to provide space to
seat the dog during the sessions.
As stated by Lazar et al. (2013) users can arrange their own transport but need to
do so in advance. This means that sessions cannot be scheduled spontaneously. Our sessions were planned in advance and took place either in the Institute of the Young Blind in the center of Toulouse or in the IRIT research laboratory on the campus of the
University of Toulouse. Both were easily reachable by public transportation and if
necessary we guided participants from the metro station to the building. If participants
preferred, we provided alternative transportation using the “Mobibus service”22, a local
transportation service for people with special needs. Costs for transportation were
reimbursed.
22 http://www.tisseomobibus.com/ [last accessed June 14th 2013]
It is important to clearly inform the user about these probable outcomes before getting
involved in the experiment. We also feel that this message needs to be repeated during
the design process, as users get more and more engaged and motivated for the research
project as its design advances.
According to Newell and Gregor (2000) users are not very good at explicitly
stating their needs concerning a technology which does not yet exist. Their needs can be
unconscious and people may not be able or willing to articulate them, or they might come
up with solutions that are not optimal. Therefore, methods are needed for facilitating the
observation phase. ISO 16982 proposes different methods for analyzing the context
including user observation, questionnaires, interviews, or the study of available
documents. Petrie and Bevan (2009) present an overview of different accessibility
guidelines that can be helpful. However they state several problems with using
guidelines. First, they demand substantial effort to learn and apply appropriately;
second, the evaluation of a function in a system is time consuming; and third, there may
be circumstances where guidelines conflict or do not apply. Zimmermann and
Vanderheiden (2007) suggest capturing accessibility requirements through use cases
and personas. In the following subsections we present in detail how we implemented the
analysis phase in our project.
In section II.1 we have discussed that visually impaired people can compensate for
the absence of vision by making use of the other senses. However, the visual sense is
best adapted for certain tasks, such as spatial orientation, and hence the compensation
can only be partial. We accompanied a blind user traveling in the city (Brock, 2010a).
During this travel, we were wearing a blindfold ourselves and carried a white cane. Even
if sighted blindfolded people cannot compensate for the absence of vision as efficiently as visually impaired people, it gave us an impression of how the other senses suddenly become more actively engaged.
In section II.2 we have outlined that visually impaired people are capable of
creating mental representations of their environment. However, these mental
representations differ from those of sighted people. Orientation and mobility therefore
present challenges. Several meetings with users allowed us to assess their experiences
and challenges regarding mobility. For instance, we followed users travelling on an
unknown itinerary in the city center with either a white cane or a guide dog (Brock,
2010a). This allowed us to observe challenges they face during travel (for instance road
works that block the side walk, see Figure II.1). We also organized several brainstorming
meetings on the subject of orientation and mobility. The aim of these sessions was to identify challenges, but also the information that blind people use as cues for orientation (different types of points of interest that can be perceived with the tactile or auditory sense) and criteria for choosing an itinerary (for instance traffic lights with auditory signals, avoiding large open spaces, etc.). These sessions also helped us identify that guidance during
travel is not sufficient. Users stated their need to explore a part of the city with all the
information available (landmarks, routes and survey information) before travelling to
gain a global understanding of the area.
In subsection II.2.2.5 we showed that maps are a helpful tool for improving the
mental representations of an environment. Different concepts of maps for visually
impaired people have been presented. Two users in our project collected tactile maps,
which allowed us to see the various existing map types. As an example Figure III.2 (a)
shows a plastic map produced with vacuum forming. This map contains relief information,
and additionally different map elements are colored and text is readable for a sighted
person. Thus this map is intended for people with residual vision. In comparison, the map
in Figure III.2 (b) is produced on swell paper. It is black and white and text is
represented as braille abbreviations. This map is accessible for blind people and people
with severe visual impairment. A legend as shown in Figure III.2 (c) accompanies braille
maps to give more precise information on the abbreviations and textures.
Figure III.2: Different types of maps for visually impaired people. (a) Colored map for
people with residual vision, produced by vacuum forming. (b) Braille map for blind
people, produced on swell paper. (c) Legend accompanying a braille map.
In section II.2 we also presented the impact of haptic exploration strategies on the
understanding of tactile maps. We observed users while exploring tactile maps (Brock,
2010a). We noticed that all but one user in a group of five people used both hands and
several fingers for exploring the map. The person who used only one hand and one finger perceived the map reading as more difficult than the other participants did. This is in line with studies on haptic map reading (II.3.2.2.b). However, our observations were of a qualitative nature and needed further investigation.
One participant had tried out the ABAplans prototype (see VII.5.5), a map
prototype based on a raised-line map placed as an overlay on a touchscreen with auditory output. He reported very positive feedback on this experience. To study the interest of
this concept for visually impaired people, we implemented a simple version of an
interactive map (Brock, Molas, et al., 2010). This simple interactive map was limited in
precision and functionality but got very positive feedback from our user group. This
encouraged us to pursue our idea of developing an interactive map prototype.
Another important aspect that emerged from sessions with our participants was
that they wanted not only a tool that helped them with orientation and mobility, but also
one that was comfortable and ludic to use. This notion of “User Experience” is
indeed often forgotten when developing assistive technology.
The second question concerned the interactive devices used in the prototype (see
II.4.2). Zhao et al. (2008) compared interactive prototypes based on different input
devices. They observed that navigating a map with a keyboard was more difficult for
visually impaired users than with a touch screen. Whereas the touchscreen allows users
to change position on the map quickly (for instance to “jump” from one side to the other),
the keyboard only allows linear exploration of the map (Lazar et al., 2013). Also, the
recall of objects in space improved when using a touch screen compared to the same task
with a computer mouse (Tan et al., 2002). Accordingly, using a touch screen increased
the users’ awareness of the external frame of reference and of the position of their body
relative to that frame (see II.4.4.2 Haptic Reference Frame). Users would then more
easily keep track of their position on the map and evaluate relations between different
map elements. When using a pointing device (e.g. a force-feedback mouse with a single
moving cursor), it is more difficult to keep the reference frame in mind (Rice et al., 2005).
The third question concerned the different modalities to use. The ISO standard
advises against using tactile output alone (ISO, 2003). Almost all prototypes in the
classification possessed some sort of audio output, either speech or non-speech (see
II.4.4.1). Some of them had additional haptic feedback, which appears to facilitate
learning (see II.4.4.2). As an example, adding simple auditory cues to a haptic interface
improved the identification of shapes (Golledge et al., 2005). In another study,
externalizations of mental representations were more accurate after using a prototype
with audio and haptic feedback than after using a prototype with audio feedback alone
(Yatani et al., 2012). When visually impaired people used a touch screen-based system with audio
output, either with or without a raised-line overlay, they made
fewer errors and were quicker when using the interface with the overlay (McGookin et
al., 2008). They also expressed a preference for the overlay interface. Similarly, Weir et
al. (2012) observed that users preferred exploring a sonified interactive map application
when a raised-line overlay was placed on the touchscreen. Furthermore, tactile and audio
modalities have complementary functions when presenting spatial information (Rice et
al., 2005). For example, speech output can replace braille text. Representing information
through different modalities makes it easier to avoid overloading one modality with too
much information. We therefore decided to use both auditory and tactile output
modalities.
Finally, the question was how to represent information through the tactile modality.
Giudice, Palani, Brenner, and Kramer (2012) evaluated a vibro-audio tablet interface
against a raised-line drawing for learning non-visual graphical information, both with
blind and blindfolded participants. The vibro-audio tablet interface synchronously
triggered vibration patterns and auditory information when the users touched an on-
screen element. The tactile graphic was embossed and did not offer any
feedback other than the embossed relief. They observed that learning time with the interactive
prototype was up to four times longer than with the paper diagram. Giudice et al. (2012)
suggested that lines and curves are harder to perceive when indicated by vibrations than
when printed in relief. As most blind users have learnt how to explore raised-line maps at
school, using an interactive prototype based on a raised-line map relies on their
previously acquired skills and is thus probably easier to manage. Furthermore, when
using a raised-line map, it is possible to add tactile cues (e.g. outlines of the map) for
facilitating users’ mental orientation.
Taken together, these research projects show that the combined use of audio and
tactile feedback is especially helpful when presenting geographic information. It seems
to be easier to use a raised-line map overlay than vibration patterns. The previous studies
also show that using a touchscreen may be more appropriate for map exploration than
using a mouse or a keyboard. All these arguments led us to the design choice of an
interactive map prototype based on a touchscreen, a raised-line map overlay and audio
output (see Table III.1).
Figure III.3: Geotact tablet by E.O. Guidage. This device offered a 16*16 matrix of
rectangular interactive zones, thus providing 256 touchable zones. Reprinted from (Brock,
Molas, et al., 2010).
In our first attempt at creating an interactive map during previous projects (Brock,
Molas, et al., 2010), we had worked with a “Geotact” tablet by the French company E.O.
Guidage23 (see Figure III.3). This device offered a 16*16 matrix of rectangular interactive
23 http://eo-guidage.com/ [last accessed June 10th 2013]
zones. The tablet was connected to a computer over a serial port so that each zone could
be associated with a sound output file. Obviously, the resolution of this technology was very
limited as only 256 touch input zones could be differentiated. Feedback from users
underlined this problem. It was therefore important to choose a multi-touch screen with
higher resolution.
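As an illustration of this zone-based principle, a minimal sketch (in Java; the normalized coordinates, the file naming and the playback code are our own assumptions, not the original Geotact software) could map each touch to one of the 256 cells and play the associated sound:

```java
import java.io.File;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.Clip;

/** Hypothetical sketch of the zone-to-sound principle of the Geotact tablet. */
public class GeotactZoneSketch {
    static final int GRID = 16;                                   // 16*16 = 256 rectangular zones
    static final String[] soundFiles = new String[GRID * GRID];   // one sound file name per zone

    /** Maps a touch position (normalized to the 0..1 range) to a zone and plays its sound. */
    static void onTouch(double nx, double ny) throws Exception {
        int col = Math.min((int) (nx * GRID), GRID - 1);
        int row = Math.min((int) (ny * GRID), GRID - 1);
        String file = soundFiles[row * GRID + col];
        if (file == null) return;                                  // zone without an associated sound
        Clip clip = AudioSystem.getClip();
        clip.open(AudioSystem.getAudioInputStream(new File(file)));
        clip.start();
    }
}
```

With only 16 cells per axis, two map elements falling into the same cell cannot be distinguished, which is exactly the resolution problem reported by the users.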
The choice and purchase of a more elaborate multi-touch table was preceded by
the comparison of different multi-touch devices. Largillier et al. (2010) proposed a
classification of tactile performances for multi-touch panels based on three categories:
features (performances related to quantity and richness of tactile information provided by
the touch panel), transparency (performances that make the users feel like they interact
seamlessly with the user interface) and trustworthiness (measurable performances that
impact the confidence a user may have in the touch device). Buxton (2007) proposed
distinctions for differentiating multi-touch technology that served as a basis for our
analysis. These criteria included for instance size, orientation, precision and the number
of inputs. Beyond that, we proposed more precise criteria that are important for
designing an interactive map prototype based on a multi-touch display with raised-line
map overlay (Brock, Truillet, Oriola, & Jouffrais, 2010). A detailed overview of available
technology at the time of the study compared to these criteria is presented in (Brock,
2010b). The following subsections detail these criteria.
In order to find a suitable technology, we made several tests with touch
interfaces relying on different technologies. This subsection only details information on
touch technology that was available at the time we performed the tests24 and relevant to
the size and type of display we were searching for. It is not meant to be exhaustive.
24 At the time (spring/summer 2010) the available multi-touch technology was much more limited than it is nowadays, and we therefore did not have much choice concerning technology. As a comparison, the first iPad was released on April 3, 2010. Note also that technologies like strain gauges, which are used in vending machines but not in consumer displays, are not further investigated in this thesis.
Resistive technology (as used in the Stantum SMK-15.4 Multi-Touch display in our
previous prototype) is based on two layers of conductive material that are separated by
an insulating layer, usually made of tiny silicon dots or simply air (Schöning et al., 2008).
When the user touches the screen, the two layers are pressed together and establish an
electric current. This current is measured in the horizontal and vertical directions to
determine the exact touch position. As contact is established by pressure, it does not
depend on the material of the touching object. A paper overlay thus does not block touch
interactions. This type of screen also presents the advantage of low power consumption.
On the downside, most resistive displays provide only mono-touch input, are slower than
capacitive screens and are also more fragile. As an exception, the Stantum touch display
offered multi-touch possibilities.
25 http://www.apple.com/ipad/ [last accessed May 22nd 2013]
corner of the screen. This technique, however, has low resolution and does not work with
a paper overlay.
26 http://bit.ly/M2256PW [last accessed May 22nd 2013]
with the map overlay. Surface Wave touch technology is based on a similar principle, with
ultrasonic waves instead of infrared rays (Schöning et al., 2008).
Figure III.5: General setup of a FTIR system by Tim Roth, reprinted from (Schöning et al.,
2008) with permission.
Among the infrared technology, there is also Diffused Illumination (Schöning et al.,
2008). Unlike FTIR technology, infrared rays are projected onto the touch surface by
diodes located below it (see Figure III.6). Thus, the infrared rays are not contained in the
surface, but they are projected evenly across the screen. Objects in proximity and
objects touching the screen reflect the infrared light and can be perceived by the
camera. When the user touches the surface, the contact area prevents propagation of the
rays and thus reflects a certain amount of infrared light. An infrared camera located
below the surface transmits the video stream to a computer which determines the
position with image processing algorithms. We tested the ILight table from Immersion27
which uses the Diffused Illumination technology. Diffused Illumination can recognize
objects other than a finger. It also works with transparent material between the screen
and the user. However, a nontransparent paper placed on top of the surface occludes the
view for the camera, and thus this technology is not suitable for our project.
Figure III.6: Inside of the ILight Table: infrared rays are projected onto the touch surface by
rows of diodes located below it. An infrared camera located below the surface transmits
the video stream to a computer, which determines the touch position with image processing
algorithms.
In-plane technology, for instance the Digital Vision Touch technology as used in
the Smart Boards28, is another optical technology (based on infrared or regular color
cameras). Two or more cameras are situated in the corners of the display and observe the
touch input. The touch position can be calculated by triangulation. We were able to
successfully test this technology with our raised-line map.
In optical “out-of-plane” technology, a camera is placed above the surface and
films down onto it. Touch is detected by tracking the fingers in the video stream.
This technology has successfully been used for interacting with raised-line maps in
different projects (Kane, Frey, et al., 2013; Krueger & Gilden, 1997; Schneider &
Strothotte, 1999; Seisenbacher et al., 2005).
27 http://ilight-3i.com/en/ [last accessed May 22nd 2013]
28 http://smarttech.com/ [last accessed May 22nd 2013]
Recently, depth-sensing cameras have been introduced, such as the Microsoft
KINECT29 or the Leap Motion30. This technology can be used for touch interaction (Wilson,
2010a). However, at the time we analyzed the multi-touch devices, this technology was
not yet commercially available.
Conclusion
Technology that corresponded to our requirement of placing a map overlay on top
of the screen included resistive technology (Stantum SMK-15.4 Multi-Touch Device),
projected capacitive technology (3M M2256PW and Apple iPad), Digital Vision Touch
Technology (SmartTable) and optical out-of-plane technology. As this condition is crucial,
in the following sections we limit our analysis to these five devices and
technologies.
For our project we did not need multi-person interaction. Yet, we considered it
important that a tactile device offered multi-finger and multi-hand characteristics. In the
best case we wanted it to react to at least 10 inputs in parallel for two reasons. First,
visually impaired people usually explore tactile maps with both hands and multiple
fingers (Wijntjes, van Lienen, Verstijnen, & Kappers, 2008b). Designing accessible and
usable interaction for map exploration might therefore demand more than one touch
input. Second, a multi-touch table with at least 10 touch inputs also makes it possible to track and
register finger movements during map exploration.
29 http://www.xbox.com/en-US/kinect [last accessed May 23rd 2013]
30 https://www.leapmotion.com/ [last accessed May 23rd 2013]
Among the touch displays that had been successfully evaluated for compatibility
with the paper map overlay, the 3M M2256PW offered multi-finger interaction for up to 20
cursors. Similarly, the Stantum device and the Apple iPad provided real multi-finger
input. Beyond that, the SmartBoard even provided multi-user capacities. For out-of-plane
technology, the number of tracked fingers depends on the implementation. It is
possible to implement recognition of multiple touch inputs.
Figure III.7: Users working collaboratively on a large touch table. Reprinted with
permission from (Bortolaso, 2012).
As has been stated by Buxton (2007), the size of a multi-touch display is important.
Size largely determines what muscle groups are used for interaction. It also determines
how many hands (of one or several users) and how many fingers can be active on the
surface. The types of gestures that are suited for the device depend on its size. Finally,
the adapted size depends on the kind of information that is to be displayed on the screen.
Tatham (1991) proposed that maps for visually impaired people should not exceed a
certain size (two hand spans, 450 mm). This size makes it possible to use one finger as an
anchor point and to relate other map elements to it (regarding distance and
direction, for instance). Concerning the memorization of spatial knowledge, it can be
more challenging to use large-scale touch tables, such as those used in
collaborative multi-touch applications. On the other hand, it is also difficult to present
tactile maps in a very small format (e.g., the size of a smartphone screen), as there is little
space to present all map details. In previous projects (Brock, Molas, et al., 2010) we
observed that the size of maps should be at least A4 format, but users preferred maps in
A3 format. We therefore concluded that the size of the iPad was too small for our map
project, whereas the size of the SmartTable was too large. The sizes of the 3M Display and
the Stantum display were well adapted. The size of the map for out-of-plane technology
depended on the implementation.
III.2.3.3.d Accuracy
To associate sound output with map elements, it must be possible to identify the
exact position of a finger. As presented by Power (2006), inaccuracy of the finger position
can lead to errors in the sound output of interactive maps. Largillier et al. (2010)
differentiated between pointing accuracy (the precision of a stationary contact)
and tracking precision (the precision of a path following a moving finger). They stated that
reaching pixel resolution with an average user’s fingertip was utopian. We propose that
pointing precision should be better than the size of one finger. Tracking accuracy not
only depends on spatial resolution but also on the interpolation method and acquisition rate
(Largillier et al., 2010). A slow technology (scanning rate at 40 to 60 Hz) will fail to deliver
accurate tracking of fast movements. They also observed that the precision varied
depending on the position on the screen surface. Finally the precision indicated in the
technical datasheets of a screen did not always correspond to the actual precision. For
the 3M M2256PW a precision of 0.28 mm was indicated. The tactile precision of the
Stantum SMK-15.4 Multi-Touch Device was indicated as <0.5 mm. Both resolutions proved
sufficient during our pretests. We were unable to find any information on the precision of
the SmartTable and the iPad. The precision of out-of-plane technology depends on the
camera used as well as the hardware setup (positioning of the camera, etc.).
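As a rough sanity check of these orders of magnitude (our own illustration; we assume that the stated 0.28 mm roughly corresponds to the pixel pitch of the 22-inch 1680*1050 panel and that a fingertip contact is about 10 mm wide), the following sketch compares pixel pitch and fingertip size:

```java
/** Back-of-the-envelope comparison of datasheet precision, pixel pitch and fingertip size. */
public class PrecisionCheck {
    public static void main(String[] args) {
        double diagonalMm = 22 * 25.4;           // 22-inch diagonal of the 3M M2256PW
        int pxW = 1680, pxH = 1050;              // native resolution
        double aspect = (double) pxW / pxH;
        double heightMm = diagonalMm / Math.sqrt(1 + aspect * aspect);
        double widthMm = heightMm * aspect;
        double pixelPitchMm = widthMm / pxW;     // roughly 0.28 mm
        double fingertipMm = 10.0;               // assumed typical width of a fingertip contact
        System.out.printf("pixel pitch: %.2f mm, fingertip: %.0f mm%n", pixelPitchMm, fingertipMm);
        // Pointing precision only needs to beat the fingertip size, which the candidate
        // devices achieved by a wide margin (0.28 mm and <0.5 mm respectively).
    }
}
```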
III.2.3.3.e Orientation
As stated by Buxton (2007), the orientation of a multi-touch device (vertical versus
horizontal) is important. On a horizontal display users might rest their hands on the
surface during exploration. In different sources this situation is called unintentional,
accidental or unintended multi-touch input, i.e. hand or finger movements that do not
convey meaningful information (Pavlovic et al., 1997). This can lead to confusing
situations in which the user does not understand what is causing the output. If
the number of touch inputs is limited, accidental touch input can even prevent the system
from recognizing the intended touch interaction from users’ fingers. Vertical surfaces do
not cause this problem. However, physical fatigue may be a problem in these systems as
users are forced to hold their arms in front of them for interaction. Another factor to take
into consideration is the application type. Visually impaired users normally explore
raised-line maps on a horizontal surface rather than fixed to a wall31. To be as close as
31 Exceptions are raised-line maps in public spaces, public transportation or tourist sights, which are often fixed to the walls of buildings in a vertical position.
possible to their habitual tools, we decided to set our map in a horizontal plane. All of the
devices that we investigated offered this possibility.
III.2.3.3.f Summary
Table III.2 shows a summary of the evaluation of selected multi-touch devices
according to the criteria that have been described above in detail. Based on this
comparison, we chose the Stantum SMK-15.4 Multi-Touch Device and the 3M Multi-touch
Display M2256PW as possible solutions.
The Stantum display has a physical dimension of 20.5 * 33 cm. The default display
resolution is 1280*800. It provided real multi-touch functionality, i.e. recognition of more
than one touch input (see Figure III.8). The 3M Multi-touch Display M2256PW is a 22 inch
display and has a 1680 * 1050 resolution. It recognizes 20 simultaneous touch inputs. It is
specified for less than 6 ms response time and it has a touch precision of 0.28 mm.
Orientation can be fixed in either horizontal or vertical position.
Figure III.8: Photograph of different multi-touch devices. (a) The Stantum SMK-15.4 Multi-
Touch Display. (b) The 3M™ Multi-touch Display M2256PW.
III.2.4.1 Brainstorming
There are variations and more specialized methods, such as the "Group Elicitation
Method" (Boy, 1997), which proposes "brainwriting", a written variant of brainstorming.
However, these methods are not usable with visually impaired people because the
methods of sharing ideas are often based on vision. The dependence on vision concerns
two aspects: the content that is created and exchanged as well as non-verbal
communication between participants (Pölzer, Schnelle-Walka, Pöll, Heumader, &
Miesenberger, 2013).
Regarding the first aspect: when brainstorming with sighted people, all ideas
created during the session are written down on paper or a whiteboard. This allows
participants to access ideas at any time. It facilitates group dynamics such as the sharing,
selection and structuring of ideas. The methods of visual notation used in brainstorming
also structure information spatially and organize it along multidimensional criteria (groups,
connections, graphical tags). Very little work has yet been done to provide access to
this content for visually impaired people. Most recently, Pölzer et al. (2013) proposed a tool for
sharing mind maps between visually impaired and sighted users. This tool uses different
input and output devices, such as a multi-touch table, camera-based gesture recognition
and a braille display. The mind map is made accessible by means of the braille display
and a screen reader connected to the multi-touch device. However, at the time of
our studies this tool was not yet available. Therefore, in our study a sighted facilitator was
in charge of making the content accessible.
and originality of ideas. The facilitator focuses on the overall approach and time
management. Blind people miss all this gestural information and are forced to request
information about the intentions of their interlocutors. To manage a brainstorming session with
blind people, the facilitator must therefore manage the turn-taking
between speakers much more actively. He should guide and mediate the communication of the group by
distributing speaking turns, so as to avoid silence and simultaneous speech. Also, he can verbalize
if he perceives intentions of turn taking (for instance “I think Jean wants to say
something”).
Figure III.9: Brainstorming session with visually impaired users. A user is taking notes with
a BrailleNote device. Reprinted from (Brock, 2010a).
participants while one blind person took notes using a BrailleNote32 portable device (see
Figure III.9). Finally, each scenario was presented to the whole group.
The “Wizard of Oz” method aims to evaluate a system by simulating some or all of
the functionalities of a prototype, while users believe that they are interacting with the
real system (Kelley, 1984). Often the visual display is simulated and thus the method is
inaccessible to visually impaired people. Therefore, an adaptation of this method is
required for working with visually impaired people. It is possible to adapt it using non-
visual, i.e. mainly auditory or tactile, modalities. Klemmer et al. (2000) offered Wizard of
Oz software to facilitate the simulation for the design of voice interfaces. Their software is
designed for systems using auditory modality for input and output. Serrano and Nigay
(2009) proposed "Open Wizard", a Wizard of Oz software for multimodal systems. It
simulates the input but does not simulate output modalities. Only recently, studies on the
"Wizard of Oz" method directly addressed visually impaired people. Miao, Köhlmann,
Schiewe, and Weber (2009) proposed the use of tactile paper prototypes for evaluating
their design with visually impaired people. Their envisioned system was a display with
raised pins, thus having both tactile input and output.
We used the Wizard of Oz method for a low-fidelity evaluation of our system and, at
the same time, for generating ideas. We had two objectives for the Wizard of Oz sessions: 1)
to test whether the concept of an interactive map prototype was usable and enjoyable for the
visually impaired users, and 2) to stimulate the creativity of the participants.
32 http://www.humanware.com/en-usa/products/blindness/braillenotes [last accessed August 22nd 2013]
Figure III.10: Experimenter’s instructions for the simulated speech output (in French).
We organized informal Wizard of Oz sessions with four users. The map used in this
evaluation is presented in subsection III.2.5.1. A visual map containing names of map
elements in French was prepared to help the experimenter (Figure III.10). Users were
invited to individual sessions. Participants were chosen among those we had met in
previous brainstorming sessions according to three criteria: 1) their interest in map
exploration, 2) creativity during previous sessions and 3) never having explored an
interactive map. As none of the users was familiar with the Wizard of Oz method, they
were first introduced to the methodology. Then, they were asked to freely explore the
tactile map (Figure III.11). When users required audio information, they touched specific
positions on the map. The experimenter then read out the name of the map element as
indicated on the map instructions (Figure III.10). The map contained specific
“interactive” symbols for public transportation and points of interest. All other elements
(streets, parks, rivers) were not located with specific markers. They were simulated as
“entirely interactive”. Users were encouraged to express any comments or errors during
map exploration.
interact with the experimenter by asking questions during map exploration and did not
stick to the Wizard of Oz protocol. This might be because the method was new to the
users or because the experimenter did not impose the formal character of the session.
Nevertheless, the sessions were very productive in terms of useful feedback and
the creation of ideas.
Figure III.11: Wizard of Oz Session. A visually impaired user is exploring a tactile map
while the speech output is simulated.
III.2.5 Prototyping
Prototypes should be based on the ideas generated collectively in the previous
step. Prototyping can be done with low- and high-fidelity prototypes. Prototypes allow
users to evaluate, validate or refute concepts or interactions, and to select or propose
new ideas. Several methods are available to produce these prototypes. Again,
these methods are often based on the use of visual content. For instance, Rettig (1994)
and Snyder (2003) demonstrate the use of paper prototyping, in which drawings or
collages serve as low-fidelity prototypes. The same visual constraint exists for the use of
video prototyping (Mackay & Fayard, 1999).
Maps were designed with the Open Source Inkscape software33 in SVG (Scalable
Vector Graphics) format. SVG is convenient to provide both a topographic view of a
geographical place and a textual description (based on XML34) of the included elements.
Therefore, many projects use the SVG format for the design of interactive maps (Campin
et al., 2003; Daunys & Lauruska, 2009; Miele et al., 2006; Tornil & Baptiste-Jessel, 2004;
Wang et al., 2009). The SVG format allowed us to design maps with an image editor and to
print a raised-line map from this topographic view, but also to add labels to map elements
in the textual view. As the visual view is vector-based, it can be displayed at various
output resolutions (Tornil & Baptiste-Jessel, 2004). Furthermore, the textual view can be
parsed easily by a computer program. Another advantage of SVG is that it can be easily
created from existing Geographic Information Systems. For instance, OpenStreetMap
provides the possibility to export SVG files.
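To illustrate how such a textual view can be parsed, a minimal sketch using the standard Java XML API is shown below; the assumption that each labelled element carries an "id" attribute and an SVG <title> child, as well as the file name, are ours and do not describe the exact structure of our map files.

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

/** Sketch: list the labelled elements of an SVG map by reading its textual (XML) view. */
public class SvgLabelDump {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("map.svg"));   // hypothetical file name
        NodeList titles = doc.getElementsByTagName("title");
        for (int i = 0; i < titles.getLength(); i++) {
            Element title = (Element) titles.item(i);
            Element owner = (Element) title.getParentNode();        // the labelled map element
            System.out.println(owner.getAttribute("id") + " -> " + title.getTextContent());
        }
    }
}
```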
Figure III.12: Map drawing used in the Wizard of Oz sessions. The different "interactive"
geographic elements are highlighted. Points of interest are represented by circle symbols,
bus stops by triangles, parks and rivers by different textures. Reprinted from (Brock, 2010a).
The objective of this design step was to design a first map and to evaluate how
readable the chosen representation was for the visually impaired users (Brock, 2010a).
We chose to display unfamiliar content, as we wanted to observe whether users could
understand an unknown city area based on the map drawing. We based the map on the
outlines of the city center of Montauban, France (Figure III.12). We used a visual map
extracted from OpenStreetMap as a model to draw the outlines of roads and buildings.
However we slightly altered the content so that it included different elements such as
roundabouts, parks and public transportation. The objective was to test different tactile
symbols for representing various elements. This map design was then used in Wizard of
Oz sessions (see subsection III.2.4.2). Users found some ambiguities concerning the
representation of the map. For instance, all users found it difficult to distinguish between
streets and open places. However, this information was important for them. Users also
confused the streets with the exterior outline of the map. These findings were taken into
account to improve the map layout for the next version of the raised-line map.
Figure III.13: Map drawing used for the second interactive map prototype. Buildings were
represented as black elements, whereas open spaces were represented in white.
In this step, we decided to work on the area around the Institute of the Young Blind
which was familiar to users. The objective was to see if the represented map
corresponded to the participants’ mental representations. As people were familiar with
this area, they wanted the map content to be as close to reality as possible. However, this
was a difficult task as the area has some very small streets, green areas and rivers that
are hard to represent without making the map cluttered. It was therefore sometimes
necessary to alter the representation of distances, sizes and directions in order to fit in all
required map content (Tatham, 1991). We chose a compromise between realistic
representation and readability. Another challenge was that many of the
pedestrian zones were not represented in the Geographic Information Systems that we
used as a basis35 (Kammoun et al., 2010). This required additional manual modifications to
the map. This design step made us understand that designing realistic maps that can be
used as a real orientation aid is very challenging. As our long-term aim was the design of
an experimental map, we did not further investigate how these design steps could be
improved.
Three blind users explored the map and reported problems. Only one user
preferred the map with black elements for buildings. They also found that the map was
cluttered as we had tried to represent all the small parks and rivers at a high zoom level.
As a consequence, in the following maps, we abandoned the idea of drawing buildings as
black elements and we concentrated on the representation of a less complex
environment for experimental purposes.
35 We checked the map content of Google Maps and OpenStreetMap. At the time, OpenStreetMap contained more pedestrian information than Google Maps. Since then, efforts have been undertaken for both Geographic Information Systems to include further pedestrian information.
Communication between the modules relied on the Ivy software bus, a text-based message system. Software agents send text messages to the bus, and each
agent specifies which types of messages it wants to receive. For implementing an Ivy-
based application, it is therefore necessary to define precise regular expressions for the
different messages (see appendix VII.1).
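The sketch below illustrates this principle in Java, assuming the Java version of the Ivy library (package fr.dgac.ivy); the "TOUCH ..." message grammar shown here is purely illustrative and is not the actual set of regular expressions defined in appendix VII.1.

```java
import fr.dgac.ivy.Ivy;
import fr.dgac.ivy.IvyClient;
import fr.dgac.ivy.IvyException;
import fr.dgac.ivy.IvyMessageListener;

/** Sketch of Ivy messaging between two agents; the message grammar is purely illustrative. */
public class IvyTouchSketch {
    public static void main(String[] args) throws IvyException {
        Ivy bus = new Ivy("TouchDemo", "TouchDemo ready", null);
        bus.start(null);   // join the default Ivy domain

        // Subscriber side: capture id, x and y from any matching touch-down message.
        bus.bindMsg("^TOUCH down id=(\\d+) x=(\\d+) y=(\\d+)$", new IvyMessageListener() {
            public void receive(IvyClient client, String[] regexpArgs) {
                System.out.println("finger " + regexpArgs[0]
                        + " at " + regexpArgs[1] + "," + regexpArgs[2]);
            }
        });

        // Publisher side: e.g. the touch detection module would send such messages.
        bus.sendMsg("TOUCH down id=3 x=512 y=384");
    }
}
```

Each module thus only needs to agree on the textual message format; the Ivy bus takes care of distributing matching messages to the subscribed agents.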
Our prototype was based on four different modules: a touch detection module, a
viewer module, the interactive map module and the TTS (text-to-speech) module. Figure
III.14 shows the architecture of our prototype.
Viewer Module
The objective of this module was to display the map information as well as to
determine which map element had been picked via the specified touch interaction. We
used the “SVG Viewer”, a module that had been developed in our research team
beforehand36, which provided an interface to the Ivy bus. Using the touch position, it
determined the element within an SVG37 drawing that had been touched. It then sent a
message with the element’s ID on the Ivy bus. The SVG Viewer was coded in Java.
36 The author of the “SVG Viewer” is Mathieu Raynal.
37 http://www.w3.org/Graphics/SVG/ [last accessed May 14th 2013]
The API should provide some basic gestures and the possibility to develop new ones. The API needed to handle true multi-touch with input of at least 10 fingers. Finally, all the known problems with the APIs should be identified and discussed.
Four APIs appeared the most promising: Multi-touch for Java (MT4J, Laufs, Ruff, & Weisbecker, 2010), Sparsh UI
(Ramanahally, Gilbert, Niedzielski, Velazquez, & Anagnost, 2009), Gesture Works39 and
WPF-Breeze40. Table III.3 summarizes the comparison of these APIs. All of them function
under different operating systems, are compatible with different multi-touch screens and
provide real multi-touch capacities as long as the multi-touch hardware does. These
criteria therefore did not influence our choice. All but Breeze provided the possibility of
defining one’s own gestures. Finally, we opted for MT4J as it was distributed under a free
license, the Java programming language seemed a good choice and, most importantly, it
had the greatest variety of predefined gestures.
Table III.3: Comparison of different multi-touch APIs based on the criteria for our map
application.
Operating systems: MT4J: Windows 7, XP, Vista, Ubuntu Linux & Mac OSX 10.6; Sparsh UI: Windows, Linux & Mac OSX; Gesture Works: Windows 7, XP, Vista; Breeze: Windows 7 (.NET 4.0).
Programming language: MT4J: Java; Sparsh UI: Java / C++; Gesture Works: Adobe Flash & Flex; Breeze: C#.
Compatible multi-touch screens: all, for all four APIs.
Predefined gestures: MT4J: Tap, Double Tap, Drag, Rotation, Resize, Zoom, Move (2 fingers), Lasso; Sparsh UI: Tap, Drag, Zoom, Rotation; Gesture Works: Click, Zoom, Drag, Rotation; Breeze: Click, Zoom, Drag, Rotation.
Creation of own gestures: MT4J, Sparsh UI and Gesture Works: yes; Breeze: no.
Real multi-touch: depending on the hardware, for all four APIs.
Known problems: MT4J: none; Sparsh UI: demands a gesture handling server and a multi-touch simulator; Gesture Works: no free licence; Breeze: none.
39 http://gestureworks.com/ [last accessed June 6th 2013]
40 http://code.google.com/p/breezemultitouch/ [last accessed June 6th 2013]
Pretests of the prototype with three blind users clearly demonstrated why it is always
important to test prototypes with the specific user group, as demanded by the ISO 9241-
210 standard (ISO, 2010). Although the single tap worked fine with sighted users, it did
not work with blind users. We observed that visually impaired users explore tactile
maps with several fingers (Brock, 2010a). When multiple fingers were simultaneously
applied on the display, many sound outputs were produced. In addition, for most visually
impaired people, one or several fingers serve as reference points and stay in fixed
positions. Although the touch surface was supposed to produce touch input events only
when a finger touched the surface for the first time, it sometimes also produced touch input
for a resting finger. We suppose that the touch surface was sensitive to tiny movements of
the resting finger. The blind users who tested the system were then unable to
understand which finger had caused the sound output. Similarly, McGookin et al. (2008)
observed accidental speech output for single tap interaction. We could suppress some of
these messages by implementing a state machine that prevented repetition of the same
message within a 5-second interval. Yet, the messages resulting from the use of multiple
fingers remained a problem.
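A minimal sketch of such a repetition filter (our own illustrative reconstruction in Java, not the actual prototype code; the class and method names are hypothetical) could look as follows:

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative filter: drop a speech message if the same one was produced less than 5 s ago. */
public class RepetitionFilter {
    private static final long MIN_INTERVAL_MS = 5000;
    private final Map<String, Long> lastSpoken = new HashMap<String, Long>();

    /** Returns true if the message should be forwarded to the TTS module. */
    public synchronized boolean accept(String message, long nowMs) {
        Long last = lastSpoken.get(message);
        if (last != null && nowMs - last < MIN_INTERVAL_MS) {
            return false;                 // same message too recently: suppress it
        }
        lastSpoken.put(message, nowMs);
        return true;
    }
}
```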
Two possibilities existed for handling this situation. First, it would have been
possible to adapt the user to the interface by forcing users to rely on a single finger while
exploring the map. However, as we wanted map exploration to be as natural as possible,
we discarded this possibility. The second possibility was to adapt interaction to user
behavior. In this regard, we had several ideas on adapting the interaction techniques. First,
the current map prototype was interactive over the whole map area. Limiting the interaction
to smaller zones would decrease the risk of multiple interactions. However, as the user
could always by chance touch several marks at the same time, this step alone was not the
optimal solution. Second, measuring the pressure of the touch input could allow
differentiating exploratory movements from touch input. However, most touch devices do
not indicate the force of input pressure. As a third possibility, we imagined introducing a
button on the map. For activating speech output, the user then had to tap with one finger
on the button and with a second finger on the requested map element (Brock, 2010a).
However, this type of interaction interrupted the natural map exploration strategies.
Finally, another possibility was implementing a double tap rather than a single tap,
because a double tap is less likely to occur by chance (Yatani et al., 2012).
Our pretests showed that this double tap technique was efficient. However, some
participants needed time to become familiar with the new interaction technique.
Unintended double tap interaction occurred, mainly because of the palms of the hand
placed on the map during exploration (Buxton, 2007). We therefore asked users to wear
mittens during map exploration, which minimized the occurrence of unintended touch
inputs (Figure III.17).
Several possibilities existed for accessing touch information with the Stantum
device (see Brock, 2010 for details). We drew inspiration from the “Stantum Tuio Bridge”
(Hoste, 2010), which converted the touch events from the Stantum touch device into the TUIO
protocol.
41 http://users.polytech.unice.fr/~helen/SERVER_SI_VOX/pages/index.php?page=Accueil [last accessed June 10th 2013]
42 http://tcts.fpms.ac.be/synthesis/ [last accessed June 10th 2013]
43 http://www.stantum.com/en/ [last accessed June 10th 2013]. Stéphane Chatty (ENAC LII Lab) generously lent us the Stantum SMK-15.4 Multi-Touch Development Kit during the development of the first prototype.
Figure III.16: Visually impaired user exploring the interactive map prototype
The original “Stantum Tuio Bridge” had been implemented for a Linux system. We
implemented the driver in C++ and adapted it for the Windows operating system. The driver
received touch input directly from the hardware and sent it to the Ivy bus (see subsection
III.2.5.2). We faced some challenges concerning the resolution of the screen, as the
original TUIO Bridge had been used with a 480*800 pixel device. To facilitate conversion
between different screen resolutions, we used the display with a 1360*768 resolution
instead of the default resolution. This conversion resulted in a small imprecision of the
calculated position that we compensated for in the map drawing.
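The conversion itself is a simple scaling between coordinate spaces, as the following sketch illustrates (our own illustration; the assumption that the bridge reported positions in a 480*800 coordinate space follows from the device it originally targeted):

```java
/** Illustrative conversion of raw touch coordinates into display coordinates. */
public class TouchScaler {
    // Coordinate space in which the original TUIO bridge reported positions (assumption).
    static final int SRC_W = 480, SRC_H = 800;
    // Resolution at which we ran the display (instead of the default 1280*800).
    static final int DST_W = 1360, DST_H = 768;

    static int[] toDisplay(int xRaw, int yRaw) {
        int x = Math.round(xRaw * (float) DST_W / SRC_W);
        int y = Math.round(yRaw * (float) DST_H / SRC_H);
        return new int[] { x, y };
    }

    public static void main(String[] args) {
        int[] p = toDisplay(240, 400);            // center of the source space
        System.out.println(p[0] + "," + p[1]);    // 680,384: center of the display
        // The rounding causes a small imprecision, compensated for in the map drawing.
    }
}
```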
Our objective was to distinguish exploratory finger movements (i.e., touching the
screen for following the raised-lines on the map) from touch interaction (i.e., pressing the
screen). The hardware provided the possibility to differentiate “touch down” events
(finger pressing), “move” events (finger moving while touching the screen) and “up”
events (finger contact leaves the screen). For each cursor the normal event flow should
be “down”, optionally “move” and then “up”. In the application we treated only “touch
down” events as these are the events at the origin of each touch interaction.
The pretests of the prototype with three blind users, as reported in section III.2.5.2.b,
made us recognize the need to adapt the interaction technique from single tap
to double tap interaction.
Figure III.17: Photograph of a user exploring the interactive map. The raised-line map
overlay is attached on top of the touch screen. The user is wearing mittens to prevent
unintended touch input from the palms.
As we changed the touch display, we needed to adapt the Touch Detection Module.
This module, which handles the touch input, was coded in C. We used the touch screen’s low-level
driver, as we wanted to access precise touch information directly. For each touch event,
we obtained an ID, coordinates and a timestamp. This information was then sent to the
second module (Viewer Module). The Interactive Map Module was coded in Java as
before and received messages from both modules. It implemented the state machine for
the double tap interaction.
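A minimal sketch of such a double tap detector is shown below (our own simplified illustration in Java, not the actual module code; the spatial tolerance is an assumed parameter). The delay between the two taps was initially 500 ms and was later adapted to 700 ms after the pretests (see IV.2.3.1).

```java
/** Illustrative double-tap state machine: only "touch down" events are fed in;
 *  a second tap close in time and space to the first one triggers the speech output. */
public class DoubleTapDetector {
    private final long maxDelayMs;        // 500 ms initially, adapted to 700 ms after pretests
    private final double maxDistancePx;   // assumed spatial tolerance between the two taps
    private boolean hasPending = false;
    private long lastTime;
    private double lastX, lastY;

    public DoubleTapDetector(long maxDelayMs, double maxDistancePx) {
        this.maxDelayMs = maxDelayMs;
        this.maxDistancePx = maxDistancePx;
    }

    /** Feed every "touch down" event; returns true when it completes a double tap. */
    public boolean onTouchDown(double x, double y, long timeMs) {
        boolean isDoubleTap = hasPending
                && (timeMs - lastTime) <= maxDelayMs
                && Math.hypot(x - lastX, y - lastY) <= maxDistancePx;
        if (isDoubleTap) {
            hasPending = false;            // a third tap starts a new sequence
        } else {
            hasPending = true;
            lastTime = timeMs;
            lastX = x;
            lastY = y;
        }
        return isDoubleTap;
    }
}
```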
We checked with a blind subject that the double tap interaction was efficient, that
speech output was intelligible, and that the voice, volume and pace were adapted and
comfortable. The usability of the interactive map and the raised-line map with braille
were then compared in a usability study with 24 blind users (see chapter IV).
own battery. In our study, we evaluated landmark knowledge as the storage of names in
memory.
Figure III.18: Methods for evaluating route-based knowledge with visually impaired people
integrated into the classification by Kitchin and Jacobson (1997). Red filling: methods
included in our test battery. Blue outline: method that we added to the classification.
According to Kitchin and
Jacobson (1997), these methods might have more utility for measuring visually impaired
people’s distance knowledge than magnitude or ratio estimation techniques, because
they require only categorization rather than more precise scaling estimates. 2) Route
recognition (“R-R”): a route between two points was described and participants had to
decide whether the description was correct or not. This test was originally proposed by
Kitchin and Jacobson (1997) as part of the tests on survey knowledge but not on route
knowledge. Yet, it was included in Bosco’s battery of spatial orientation tests (Bosco et
al., 2004). 3) Wayfinding (“R-W”): A starting point and a destination were provided and
the participants had to describe the shortest route between these two points. Whereas
Bosco et al. (2004) indicated the route description and participants had to determine the
arrival point, in our test battery participants had to determine the route description. This
test set was the verbal description variant of the reproduction of route method (Kitchin &
Jacobson, 1997).
Figure III.19: Methods for evaluating survey knowledge with visually impaired people
integrated into the classification by Kitchin and Jacobson (1997). Red filling: methods
included in our test battery. Blue outline: method that we added to the classification.
The SUS (System Usability Scale, Brooke, 1996) is a questionnaire designed for
assessing user satisfaction. It is composed of 10 questions on a 5-point Likert scale.
Positively and negatively worded items are alternated for counterbalancing. The SUS is free to
use.
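For reference, the standard SUS scoring procedure (Brooke, 1996) converts the ten responses into a single score between 0 and 100; the short sketch below illustrates it (our own illustration, added for completeness).

```java
/** Standard SUS scoring: odd items contribute (response - 1), even items (5 - response);
 *  the sum is multiplied by 2.5, giving a score between 0 and 100. */
public class SusScore {
    public static double compute(int[] responses) {   // ten responses, each between 1 and 5
        if (responses.length != 10) throw new IllegalArgumentException("10 items expected");
        int sum = 0;
        for (int i = 0; i < 10; i++) {
            sum += (i % 2 == 0) ? responses[i] - 1      // items 1, 3, 5, 7, 9 (positively worded)
                                : 5 - responses[i];     // items 2, 4, 6, 8, 10 (negatively worded)
        }
        return sum * 2.5;
    }

    public static void main(String[] args) {
        System.out.println(compute(new int[] {5, 1, 5, 1, 5, 1, 5, 1, 5, 1}));   // prints 100.0
    }
}
```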
Lewis (2009) presented four different questionnaires used at IBM: ASQ, PSQ,
PSSUQ and CSUQ. The ASQ (after scenario questionnaire) is a 7-point Likert scale,
composed of three questions on ease of task completion, time for completing a task and
usefulness of help. It is mainly used for scenario-based evaluation. The PSQ (printer
scenario questionnaire) is an older version of the ASQ on a 5-point Likert scale. The
PSSUQ (post study system usability questionnaire) is a 7-point Likert scale composed of
19 questions. Four different scales can be calculated: overall satisfaction, system
usefulness, information quality and interface quality. The CSUQ (computer system
usability questionnaire) is a version of the PSSUQ used for field testing. The questions are
similar to those of the PSSUQ, with the difference that they do not relate to specific tasks but
to the work in general. The IBM questionnaires are free to use.
The SUMI (Software Usability Measurement Inventory, Kirakowski & Corbett, 1993)
is composed of 50 questions with three possible responses (“agree”, “don’t know”,
“disagree”). In comparison to the preceding questionnaires, the SUMI is distributed
commercially (although free licenses can be obtained for teaching). It exists in different
languages, with the different versions validated by native speakers.
III.3 Conclusion
In this chapter we presented our contribution to the development of interactive
maps for visually impaired people.
We responded to Research Question 1 (What is the most suitable design choice for
interactive maps for visually impaired people?). As it is impossible to define a universal
best solution, we defined a precise context of use. Concretely our aim was to develop a
prototype that allowed a visually impaired person to explore an unknown geographic
area. We did not aim at providing a prototype for mobile interaction, but for exploring a
map at home, at school or in another “immobile” context. Based on the analysis of the
design space of interactive maps for visually impaired people (see II.4), we opted for an
interactive map design composed of a multi-touch screen, a raised-line map overlay and
speech output. This design choice was also based on visually impaired users’ contribution
to the design process. We presented different versions of each map component that have
been developed through an iterative process. We also presented the experimental
prototype that was used in a subsequent user study (see chapter IV). Based on the
experience of designing interactive maps we can propose guidelines for the different
map components (see VII.6.3).
Chapter IV - Usability of Interactive Maps and their Impact on Spatial Cognition
To sum up, these results show that interactivity is promising for improving
usability and accessibility of geographic maps for visually impaired people. The design
of non-visual interactions that promote essential spatial tasks (e.g. retrieving distances
and directions) may greatly enhance accessibility.
44 Note: a paper presenting this chapter is currently under submission. Part of the results has been published in (Brock, Truillet, et al., 2012).
Few studies have compared the usability of different systems for visually impaired people.
As an example, Giudice et al. (2012) evaluated a vibro-audio tablet interface against a
tactile image for learning non-visual graphical information, both with blind and
blindfolded participants. The vibro-audio tablet interface synchronously triggered
vibration patterns and auditory information when the users touched an on-screen
element. The tactile graphic was embossed and did not offer any feedback other
than the embossed relief. In this study, learning time with the interactive prototype was
up to four times longer than with the paper diagram. Users also had more problems
following lines and curves when indicated by vibrations than when printed in relief. This
finding raises the question of whether interactive devices are less usable than dedicated tools.
Wang et al. (2012) compared an interactive map (with raised-line map and audio
output) against a tactile map without any textual information. Users preferred the
interactive map. Furthermore, they observed that the interactive map was quicker in 64%
of all cases for identifying start and end points but not for route exploration. This
comparison is limited, as in the interactive condition users spent most of the time listening to
the audio output, whereas the tactile map did not contain any braille information.
Consequently, in order to fill this gap, this chapter presents a study on the
usability of two different geographic map types for visually impaired people: a classical
raised-line map compared to an accessible interactive map as described in subsection
III.2.5.4.b.
We hypothesized that users would perceive the interactive map as more accessible and ludic. Moreover,
we hypothesized that users who encountered difficulties with braille reading would prefer
audio output.
IV.2.1 Material
We tested the same raised-line maps under two different conditions (“map type”):
the paper map (PM) condition corresponded to a regular raised-line map with braille
legend; the interactive map (IM) condition corresponded to a touch screen with a raised-
line map overlay (without any braille text) and audio feedback. The interactive map was
functionally comparable to a regular tactile paper map. Users could explore the raised-
line map on top of the screen with both hands, i.e. ten fingers, exactly the same way that
they would explore a paper map. Exploratory movements did not produce any speech
output. The braille legend was replaced by audio output that was triggered through a
double tap on the markers. No further input or output interaction was provided to ensure
functional equivalence with the paper map. The design of the interactive map is
presented in subsection III.2.5.4.b.
As with the previously mentioned maps (III.2.5.1), the maps were designed as
SVG files with Inkscape. A dashed line (line width 1.4 mm; miter join; miter limit 4.1;
butt cap; no start, mid or end markers) presented the outer limits of the map. Streets and
buildings were separated by a solid line (line width 1.4 mm; miter join; miter limit 4.0;
butt cap; no start, mid or end markers). A texture represented a river (texture “wavy”).
Points of interest (POI) were represented by circles (width and height 12.4 mm, line
width 1.4 mm). An arrow on the upper left side of the map indicated the north direction.
To prevent newly learnt spatial knowledge from interfering with previous
knowledge (Thinus-Blanc & Gaunet, 1997), we chose to represent an unknown
environment. We designed a first map (see Figure IV.1) representing a fictional city
center with six streets, six buildings, six points of interest (for example museum,
restaurant, and public transportation) plus a hotel in the center of the map as well as one
geographic element (a river). A second map was then created with the same map
elements that were rotated and translated, so that both map contents were equivalent.
The additional central point of interest in the middle of the map (hotel) was common for
both maps. Pretests with a visually impaired user ensured that the maps were readable.
We also ensured that they were not too easy or too difficult to memorize, in order to avoid
a ceiling or floor effect concerning the observed variables.
(a) Map content 1, interactive (b) Map content 1, braille with legend
(c) Map content 2, interactive (d) Map content 2, braille with legend
Figure IV.1: Four different variants of the map existed in total. Two different map
contents are depicted in (a, b) and (c, d). They are based on the same geographic
elements, which were rotated and translated. Both map contents exist with braille (b, d)
and in interactive format (a, c). Circles are points of interest (either interactive or
accompanied by a braille abbreviation). The marks composed of three dots are
interactive elements to access street names.
Street and POI names were selected by controlling their word frequency (occurrences per million) as well as their number of syllables. The first point was considered important
because of the word frequency effect: in general more frequent words are easier to
memorize than less frequent words (Grainger, 1990). The second was considered
important because shorter words are better recalled than longer words (Baddeley,
Thomson, & Buchanan, 1975). Another constraint was that words on each map had to
begin with different letters so that each braille abbreviation was unique. Detailed tables
with the lexical map content can be found in appendix VII.6.2. To sum up, all street
names were composed of two syllables and were low-frequency words, i.e. words with
fewer than 20 occurrences per million. In addition, we used categories for the names: on
each map two streets were named after birds, two after precious stones and two after
flowers. On each map there were 6 points of interest (POI) with counterbalanced
frequencies and numbers of syllables. In addition to these six POIs, we added a reference
point on both maps, which was the hotel. The word “hotel” had the highest usage
frequency among all POIs that we selected.
In our map all street name legends began with the word “rue” (French translation
for “street”) followed by the name of the street (note: in French an article between both
words is required). The corresponding abbreviation was the letter “r” followed by the
initial of the street name. For example “rue des saphirs” (Sapphire street) was
abbreviated “rs”. POIs were abbreviated with the initial of their name (for example
“museum” was abbreviated with the letter “m”). The braille legend was printed on a
separate A4 sheet of paper which was placed next to the map. Text was written in
uncontracted braille with the font “Braillenew” (font size 32 and line spacing 125%).
IV.2.2 Participants
As stated in the previous chapter (III.2.2), we recruited visually impaired
participants from different associations, through a local radio broadcast for visually
impaired people as well as by word-of-mouth. All participants gave informed consent to
participate in the whole experiment over the duration of three weeks. They received a
gift certificate after completion of the study. Costs for transportation were also
reimbursed. None of the participants had seen or had knowledge of the experimental
setup, or been informed about the experimental purposes before the experiment.
Figure IV.2: Description of the visually impaired participants in our study. Means and SDs
have been omitted from this table. *When blindness was progressive, two values are
reported (the second value indicates the age at which blindness actually impaired the
subjects' life).
Figure IV.2 shows the list of participants with some of their personal
characteristics. Further characteristics (education, travel aids and use of new
technology) are reported in appendix VII.8.1. Henry (2007) recommended recruiting
users with varying characteristics within the actual target audience. Twenty-four legally blind
participants (12 women, 12 men) took part in the study. Chronological age varied
from 21 to 64 years (mean chronological age = 42 years, SD = 13.15). The age at onset of
blindness varied from 0 to 27 (M = 8.71, SD = 8.51). We used the proportion of lifetime
without visual experience (Lebaz et al., 2010) as a measure in our study. This value varied
from 0.24 (meaning that the participant spent 24% of his life without visual experience) to
1 (meaning that the participant was born blind). The mean value was 0.87 (SD = 0.23).
The blindness of the subjects had different etiologies, including different illnesses
(genetic diseases, infectious diseases, or more specifically iritis, optic neuritis,
retinitis pigmentosa, optic atrophy, retrolental fibroplasia, retinal detachment,
retinoblastoma and glaucoma) as well as accidents (see VII.2 for definitions of the eye
diseases). Some participants could perceive light or large objects when very close,
but denied being able to use this residual vision in any form of spatial behavior. None of
the participants had a known neurological or motor dysfunction in association with the
visual impairment. As stated by Thinus-Blanc and Gaunet (1997), the role of these factors
is extremely difficult to evaluate and control. Some individuals show affective reactions to
their impairment, such as depression, autism or stereotyped behavior patterns, whereas
others deal with their impairment very well. Congenital blindness might also lead to a
delayed development of sensorimotor coordination, which might then impact spatial
cognition. It is of course very difficult to evaluate these criteria in a study with visually
impaired participants.
Sociocultural level, i.e. education level and current position, is used as a classification criterion in some studies with visually impaired users (Thinus-Blanc &
Gaunet, 1997). Participants in our study had varied occupations, such as student,
administrative occupation, telephone operator, assistant secretary, front office employee,
teacher, physiotherapist, engineer, software developer, lawyer, translator, furniture
manufacturer, beautician, songwriter and pianist. Most participants were employed,
some were retired. Education level varied from vocational education to university
(including PhD).
Hand predominance is another factor considered in some studies with visually impaired users (Thinus-Blanc & Gaunet, 1997). We examined handedness with the Edinburgh Handedness Inventory (Oldfield, 1971). As the users’ native language was French, we translated it into French. In addition, we slightly adapted it to better match the context of visual impairment (some of the proposed activities are never or rarely performed by visually impaired people) and shortened it to 10 questions. The resulting questionnaire can be found as part of the user study questionnaire in appendix VII.8.1. The scores ranged between 10 and 30 (1 to 3 points per question).
IV.2.3 Procedure
In the following section, we first describe the pretests for verifying the correct
functioning of the material and the procedure before the actual experiment. Then we
describe the main experiment that was composed of a short-term and a long-term study.
The short-term study aimed at comparing the usability of the two map types (paper map
and interactive map). The aim of the long-term study was to compare the memorization of
spatial knowledge from each map two weeks after exposure.
IV.2.3.1 Pretests
Pretests, also called pilot tests, are all the more important when working with impaired users, as more aspects are new to the researchers and could go wrong during the study (Henry, 2007). We organized pretests with 2 blind users (1 man, 1 woman). These
pretests were done on a fully functional prototype. The test had three objectives: 1) to
validate the interaction techniques, 2) to verify the comprehensibility of the tactile maps
(i.e., to test that the different marks and textures used for the map were distinguishable
and readable), 3) to verify the experimental protocol of the main study and to work out
timing issues as proposed by Henry (2007). As a result of this pretest we adapted the
speed of the double tap interaction to 700 ms between the two taps, because users had
encountered problems with the initial delay of 500 ms. Regarding the experimental
protocol we observed that users mixed up some information from the familiarization map
with information from the experimental map that they explored immediately afterwards.
One reason was that we had presented the experimental map directly after the
familiarization map. In order to minimize the confusion between the two maps during the
main study, we introduced an interview between these two steps. Another reason was
that street names on both maps were similar so that it was difficult for subjects to
differentiate the names of the elements on different maps. As a consequence we chose
abstract names for the familiarization map: streets and points of interest (POI) were
numbered (street 1 to street 4 and POI 1 to POI 4).
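For illustration, the adjusted double-tap timing could be implemented along the lines of the following minimal Python sketch; the 700 ms value is the one adopted after the pretests, whereas the class structure, the names and the spatial tolerance are assumptions and not the prototype's actual code.

import time

DOUBLE_TAP_MAX_DELAY = 0.700   # seconds between the two taps (value adopted after the pretests)
DOUBLE_TAP_MAX_DISTANCE = 20   # px; hypothetical tolerance for finger displacement between taps

class DoubleTapDetector:
    """Detects a double tap from successive touch-down events."""

    def __init__(self):
        self._last_tap = None  # (timestamp, x, y) of the previous tap

    def on_tap(self, x, y, timestamp=None):
        """Return True if this tap completes a double tap."""
        t = time.monotonic() if timestamp is None else timestamp
        is_double = False
        if self._last_tap is not None:
            dt = t - self._last_tap[0]
            dx = abs(x - self._last_tap[1])
            dy = abs(y - self._last_tap[2])
            if dt <= DOUBLE_TAP_MAX_DELAY and dx <= DOUBLE_TAP_MAX_DISTANCE and dy <= DOUBLE_TAP_MAX_DISTANCE:
                is_double = True
        # A completed double tap resets the state; otherwise remember this tap.
        self._last_tap = None if is_double else (t, x, y)
        return is_double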
IV.2.3.3 Protocol
The experimental protocol included a short-term and a long-term study that were each composed of two sessions (see Figure IV.3). There was a delay of one week between
each of the four sessions, so that it took three weeks for each participant to complete the
whole experiment.
Figure IV.3: Experimental design of the study. The experiment was composed of a short-term and a long-term study. In the following, the color code orange will be used for the short-term study and blue for the long-term study.
conducted. Then, we asked subjects to explore and learn the first map (either IM or PM depending on the group) with both accuracy and time constraints (“as quickly and as accurately as possible”). Participants were informed that they would have to answer questions afterwards without having access to the map. In order to motivate them to memorize the map, we prepared a scenario: users were asked to imagine preparing holidays in an unknown city, and we invited them to memorize the map in order to fully enjoy the trip.
Magliano et al. (1995) observed that subjects remember different types of map
knowledge (landmark, route or survey knowledge) depending on the instruction given
before exploration. Thus, in order to motivate users to memorize all types of spatial
information, we did not provide any cue on the kind of map knowledge that they should
retain. Subjects were free to explore until they felt like they had memorized the map.
When they stopped, we measured the learning time and removed the map. Subjects then
answered a questionnaire for assessing the three types of spatial learning (landmark,
route, survey). The second session took place one week later and started with a
familiarization phase followed by an interview on the Santa Barbara Sense of Direction
Scale. The subjects then explored the second map type (either PM or IM depending on
the group of subjects) and responded to the questions on spatial knowledge. We finally
assessed their satisfaction regarding the two different map types.
participant factor). We did not expect the map content to have any effect on the results. Nevertheless, to ensure the validity of the results, the order of presentation of the two different—but equivalent—map contents (1 and 2) was counterbalanced. The experiment was therefore based on four groups with the following conditions: PM1-IM2, PM2-IM1, IM1-PM2, IM2-PM1. The third independent variable was the type of spatial knowledge—landmark, route and survey knowledge (Siegel & White, 1975)—as a within-participant factor. For the long-term study, time was introduced as a within-participants factor.
Efficiency
As stated before, efficiency is defined as the resources expended in relation to the
accuracy and completeness with which users achieve goals. We measured efficiency as
learning time, the time users needed for acquiring the map knowledge. Users were
asked to “learn as quickly as possible” and therefore determined themselves when the
learning process ended.
Effectiveness
Effectiveness—the accuracy and completeness with which users achieve specified
goals—was measured as spatial knowledge acquired from map exploration. More
specifically we wanted to assess the three types of spatial knowledge: landmark, route
and survey (Siegel & White, 1975).
We chose tests for evaluating spatial cognition as discussed in subsection III.2.6.1. For assessing landmark knowledge, we asked participants to list the six street names (task called “L-S”) and the six points of interest (“L-POI”) presented on the map. The order of the L-S and L-POI questions was counterbalanced across subjects. After completion of the landmark (L) related questions, we read out the complete list of streets and POIs without giving any information concerning their locations on the map. This was done so that failures in the subsequent spatial tests (route and survey) could not be attributed to failures in short-term memory. Questions related to route (R) and survey (S)
knowledge were each divided into three blocks of four questions. The order of
presentation of the blocks was counterbalanced, but the order of the four questions within
each block was maintained. Figure IV.4 depicts the structure of the questions.
Figure IV.4: Structure of the spatial questions in the study. Questions were separated into three
categories: landmark, route and survey. Within each category questions were
counterbalanced. L = landmark, L-POI = landmark-points of interest, L-S = landmark -
street names, R = route, R-DE = route distance estimation, R-R = route recognition, R-W =
route wayfinding, S = survey, S-Dir = survey direction estimation, S-Loc = survey location
estimation, S-Dist = survey distance estimation.
The three blocks of R type questions (each containing four questions) were: 1) Route distance estimation (“R-DE”): two pairs of POI were proposed (e.g. museum - spa
vs. railway station - obelisk) and participants had to select the two points separated by
the longest route when following the roads (also called functional distance in Ungar,
2000); 2) Route recognition (“R-R”): a route between two points was described and
participants had to decide whether the description was correct or not; 3) Wayfinding (“R-
W”): a starting point and a destination were provided. Then the participants had to
describe the shortest route between these two points.
The three blocks of S type questions (each containing four questions) were: 1)
Direction estimation (“S-Dir”): a starting point and a goal were given and participants
had to indicate the direction to the goal using a clock system (e.g. three o’clock for
direction east); 2) Location estimation (“S-Loc”): the map was divided into four equivalent
parts (northeast, northwest, southeast, southwest), and participants had to decide for a
map element in which part it was located; 3) Survey distance estimation (“S-Dist”): two pairs of POI were proposed (e.g. museum - railway station vs. spa - obelisk), and
participants had to decide which distance was the longest in a straight line (Euclidean distance).
Over the whole test, each subject could obtain a maximum of 36 correct answers (12 for L, 12 for R and 12 for S). The set of questions can be found in appendix VII.8.5.
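As a minimal illustration of this scoring scheme, the sums per category could be computed as in the Python sketch below; the block names follow the abbreviations introduced above, while the data layout and the function name are assumptions.

# Number of questions per block: two landmark blocks of six items each,
# and three blocks of four items each for route and for survey knowledge.
BLOCKS = {
    "L": {"L-S": 6, "L-POI": 6},
    "R": {"R-DE": 4, "R-R": 4, "R-W": 4},
    "S": {"S-Dir": 4, "S-Loc": 4, "S-Dist": 4},
}

def category_scores(answers):
    """answers maps a block name (e.g. 'R-DE') to the number of correct responses.
    Returns the L, R and S sums (each out of 12) and the total (out of 36)."""
    scores = {}
    for category, blocks in BLOCKS.items():
        scores[category] = sum(min(answers.get(name, 0), maximum)
                               for name, maximum in blocks.items())
    scores["total"] = scores["L"] + scores["R"] + scores["S"]
    return scores

# Example: a participant with 5 correct street names, all 6 POIs, and so on.
print(category_scores({"L-S": 5, "L-POI": 6, "R-DE": 3, "R-R": 2,
                       "R-W": 1, "S-Dir": 2, "S-Loc": 3, "S-Dist": 2}))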
Satisfaction
Satisfaction—positive attitudes towards the use of the product—was evaluated with
quantitative (SUS) and qualitative questionnaires.
For the SUS, we replaced the word “cumbersome” with “awkward” to make question 8 easier to understand (Bangor, Kortum, & Miller, 2008). In an earlier study we had observed negative reactions to question 7, which reads “I would imagine that most people would learn to use this product very quickly.”
Users had stated that “most people” would not use a product for visually impaired
people. Therefore, we changed the wording to “I think that most visually impaired
people would learn to use this product very quickly.” We then translated the
questionnaire into French (see appendix VII.8.3). As subjective questions we asked users
which of the two map prototypes they had preferred and why.
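For reference, the standard SUS scoring formula is illustrated in the short sketch below (each odd item contributes its response minus one, each even item five minus its response, and the sum is multiplied by 2.5); this is the generic computation, not code taken from our study.

def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from the ten item
    responses, each given on a 1-5 Likert scale in questionnaire order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5

# Example: answering every item neutrally (3) yields a score of 50.
print(sus_score([3] * 10))  # -> 50.0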
Confidence
Finally, we introduced another set of dependent variables: the users’ confidence in their responses to the spatial questions. In the first presentation of an interactive map, Parkes (1988) had raised the question of whether access to an interactive map could increase users’ confidence in map reading. To date, this question has not been answered.
Participants evaluated confidence on a scale from 1 (not confident at all) to 5 (very
confident). The question was systematically asked after each of the eight blocks of spatial
questions.
IV.3 Results
IV.3.1 Participants
We recorded several personal characteristics, including age, braille reading, reading of tactile images, use of new technologies and orientation skills. For the subjective estimation of these personal characteristics we used a scale of 1 (low) to 5 (high), except for age, the Santa Barbara Sense of Direction Scale and the Edinburgh Handedness Inventory.
All participants were braille readers, as this was a prerequisite for participating in the study. Braille reading experience varied from 5 to 58 years (M = 32 years, SD = 14.8). Most subjects read braille bimanually. We also assessed the frequency of braille reading (M = 4.5, SD = 1.1) as well as braille reading expertise (M = 4, SD = 1.0). These results show that the participants estimated themselves as experienced and frequent braille readers.
All users except one had prior experience in reading tactile images. The one user
who did not have prior experience grew up in Morocco where the education system for
visually impaired users differs from the French education system. We examined
frequency of using tactile images (M = 2.2, SD = 1.2)—including figurative images, maps
and diagrams—as well as expertise of reading tactile images (M = 3.3, SD = 1.1).
Obviously users estimated themselves as less experienced in tactile image reading than
in braille reading.
Most subjects were right-handed (19 participants, with scores between 23 and 30 in the Edinburgh Handedness Inventory); a few were left-handed (three participants, scores between 10 and 15) and two were ambidextrous (scores between 20 and 22).
map. An alpha level of .05 was used for statistical significance in every test. Error bars in
the diagrams indicate 95% confidence intervals.
Figure IV.5: Learning Time (mean values measured in minutes) for the paper map (left) as
compared to the interactive map (right). The Learning Time for the interactive map was
significantly lower than for the paper map (lower is better). In other words, efficiency of the
interactive map was significantly higher. Error bars show 95% confidence intervals in this
and all following figures. * p < .05. In the remainder of this chapter we will use the color
code red for the paper map and green for the interactive map.
Learning Time varied from 5 to 24 minutes with a mean value of 10.1 minutes (SD = 4.4). The observed time values were not normally distributed (Shapiro-Wilk W = 0.89, p < .001), but their logarithms conformed to a normal distribution (Shapiro-Wilk W = 0.96, p = .086). The logarithm of Learning Time was then compared across map type and order of map presentation in a 2 (map type, within-participants factor) x 2 (order of presentation, between-participants factor) analysis of variance (ANOVA). A significant effect of the map type emerged (F(1,22) = 4.59, p = .04), as depicted in Figure IV.5. Learning Time was significantly shorter for the interactive map (M = 8.71, SD = 3.36) than for the paper map (M = 11.54, SD = 4.88). We did not observe any effect of the order of presentation (F(1,22) = 0.24, p = .63). Finally, there were no significant interactions between variables. We verified that there was no learning effect between the first and the second map that
subjects explored. We also verified that there was no significant difference between the
two different map contents.
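For readers who wish to reproduce this type of analysis, the sketch below outlines the normality check on the log-transformed learning times and the 2 (map type) x 2 (order of presentation) mixed-design ANOVA. It assumes the scipy and pingouin Python packages and uses simulated data; it is not the analysis script of the thesis.

import numpy as np
import pandas as pd
from scipy.stats import shapiro
import pingouin as pg

# Hypothetical long-format data: one row per participant and map type.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "participant": np.repeat(np.arange(24), 2),
    "map_type": ["PM", "IM"] * 24,
    "order": np.repeat(np.where(np.arange(24) % 2 == 0, "PM_first", "IM_first"), 2),
    "minutes": rng.lognormal(mean=2.3, sigma=0.4, size=48),
})

# Raw learning times may deviate from normality; their logarithms are then tested instead.
print(shapiro(df["minutes"]))
df["log_minutes"] = np.log(df["minutes"])
print(shapiro(df["log_minutes"]))

# 2 (map type, within-participants) x 2 (order of presentation, between-participants) ANOVA.
print(pg.mixed_anova(data=df, dv="log_minutes", within="map_type",
                     between="order", subject="participant"))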
Figure IV.6: Mean spatial scores for responses to landmark, route and survey questions
(paper and interactive map summed up). Mean scores for the landmark tasks were
significantly higher than those for the route and survey tasks. There was no significant
difference between R and S questions. *** p< .001
The sums of the scores (i.e., L, R and S tasks summed up for each map) varied from 8 to 36 and were distributed normally (Shapiro-Wilk W = 0.96, p = .089). They were compared across map type and order of map presentation in a 2 (map type) x 2 (order of presentation) analysis of variance. Although the scores for the interactive map were slightly higher (M = 25.6, SD = 6.8) than for the paper map (M = 24.9, SD = 6.8), the effect of map type was not significant (F(1,22) = 0.45, p = .51). There was no effect of the order of presentation (F(1,22) = 0.08, p = .79). We did not observe any significant interaction either (F(1,22) = 1.25, p = .28). We verified that there was no learning effect between the first and the second map that subjects explored. We also verified that there was no significant effect between the two different map contents.
and R (M = 7.5, SD = 2.9) was significant (N = 45, Z = 5.20, p < .001) as well as the
difference between L and S (M = 7.7, SD = 2.7) questions (N = 43, Z = 5.06, p < .001).
There was no significant difference between R and S questions (N = 41, Z = 0.41, p = .68).
Figure IV.7: Effect of order of presentation on landmark scores. The mean scores for L
questions were significantly higher when the interactive map was presented before the
paper map. ** p < .01
We also verified for each spatial test that results were above the level of chance
(see Figure IV.8). Tests L-POI (mean = 5.48) and L-S (mean = 4.6) concerned the recall of
names. Coming up with the correct names cannot be done randomly and therefore the
chance level was zero for both tests. In R-DE users had to choose which one out of two itineraries was longer. There was a 50% chance of guessing the correct answer; out of a maximum of 4 points, the chance level was thus 2 points. The mean value was above chance level. A Mann-Whitney U test revealed a significant difference between obtained scores (mean = 2.92) and chance level (U = 384, n1 = n2 = 48, p < .001). In R-R tests, users
had to identify whether a proposed itinerary was true or false. Again, there was a 50% chance of guessing the correct answer, thus 2 out of 4 points. The mean value (mean = 2.98) was above chance level. A Mann-Whitney U test revealed a significant difference between obtained scores and chance level (U = 576, n1 = n2 = 48, p < .001). In R-W tests, users had
to name the streets that composed a certain itinerary, without knowing how many streets
composed this itinerary. There was almost zero chance to guess the number and correct
names of streets (mean = 1.6). In S-Dir tests users had to choose a direction using the clock method. As there are twelve possible hour positions (we did not require more precision than one hour), the chance of guessing correctly was 1/12 (= 0.083). The mean value (mean = 2.15) was above chance level. A Mann-Whitney U test revealed a significant difference between obtained scores and chance level (U = 240, n1 = n2 = 48, p < .001). In the S-Loc test users had to guess in which one of the four parts of the map a point of interest was situated. There was therefore a 25% chance of random guessing, i.e. 1 out of 4 points maximum. The mean value (mean = 2.7) was above chance level. A Mann-Whitney U test revealed a significant difference between obtained scores and chance level (U = 384, n1 = n2 = 48, p < .001). Finally, for the S-Dist test, users had to guess which one of two trajectories was the longer one. There was a 50% chance of random guessing, i.e. two out of four points maximum. The mean value (mean = 2.83) was above chance level. A Mann-Whitney U test revealed a significant difference between obtained scores and chance level (U = 480, n1 = n2 = 48, p < .001).
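As an illustration, such a comparison against chance level could be run as in the following sketch (here for the R-DE block, using scipy); the data are simulated and purely hypothetical.

import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

# Hypothetical observed R-DE scores for the 48 map explorations (0 to 4 points each).
observed = rng.integers(1, 5, size=48)

# Chance-level scores: four two-alternative questions answered by guessing (expected value 2 of 4).
chance = rng.binomial(n=4, p=0.5, size=48)

# One-sided test: are the observed scores higher than what guessing would produce?
u, p = mannwhitneyu(observed, chance, alternative="greater")
print(f"U = {u}, p = {p:.4f}")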
Figure IV.8: Mean value (violet bar) and chance level (yellow box) for each spatial cognition
test. If no yellow box is reported, the chance level is equivalent to 0. (a) L Scores have
maximum values of 6, (b) R and S scores have maximum values of 4.
Figure IV.9: For each map type the number of participants who preferred this map type over
the other is reported. One user had no preference.
The SUS questionnaire provides scores between 0 and 100. In our study, SUS scores for both maps taken together varied between 45 and 100 with a mean value of 83.8 (SD = 13.9). Scores were not normally distributed (Shapiro-Wilk W = 0.85, p < .001).
They were marginally better for the interactive map (M = 86.6, SD = 13.7) than for the
paper map (M = 81.0, SD = 13.9), without being statistically significant (Wilcoxon signed-
rank test, N = 22, Z = 1.9, p = .058). Yet, when asked which map they preferred, more
users answered in favor of the interactive map (Figure IV.9). Of a total of 24 users, six
users preferred the paper map, 17 preferred the interactive map and one had no
preference.
Most users quickly learnt the double-tap, whereas the user who gave a SUS score
of 45 encountered problems using the double-tap. This user (female, aged 64) had prior experience with paper maps with braille legends and almost 60 years of experience in braille reading. She mentioned that she enjoyed reading braille and that she had been surprised by the use of an interactive map.
The six users who preferred the paper map were interviewed about which aspect they had liked or disliked most about the map. Two users cited the ease of memorizing written information. One user mentioned interaction problems with the interactive map, more precisely that there was too much audio output. One user stated that she preferred braille over speech, while another one mentioned the ease of use. Finally, one user said that the legend of the paper map was helpful because it presented a list of all the map elements that the user should find during exploration.
We asked the 17 users who preferred the interactive map which aspect they had liked or disliked most about the map. Seven users preferred speech output over braille
text. Four users enjoyed that there was no need to read a legend. Three users enjoyed
the ease of use of the interactive map. One user cited the ease of memorizing spoken text; one user said that the interactive map was fun to use. Finally, one user appreciated the possibility of adding supplementary content (such as opening hours) to the interactive map without overloading the tactile drawing. This would not be possible on a raised-line map with braille, where the amount of information is limited by the available space.
During the discussion after the interviews, users mentioned other explanations for
their preferences. Some users who preferred the tactile map stated that the tactile map
with braille can be more easily transported and that the tactile map was cheaper in terms
of production. Also, one user liked that the paper map gave access to the spelling of a
word. On the other hand, several users stated that the interactive map was quicker to
read, and some also mentioned that it was quicker to memorize. Interestingly, several users with good braille reading skills stated that the interactive map would be interesting for someone who does not read braille. One user preferred the interactive map because it felt less like assistive technology than the tactile map with braille. Finally, several users
mentioned that they had been surprised by the novelty of the interactive map.
reveals a similar distribution of mean confidence and mean spatial scores. No significant
difference emerged between confidence concerning R and S tasks (N = 39, Z = 1.56, p =
.12). We did not observe any significant interaction.
Figure IV.10: Mean spatial scores (depicted as graphs, right axis) and mean confidence
(depicted as bar chart, left axis) for responses to landmark, route and survey questions
(paper and interactive map summed up). Mean scores for the landmark tasks were
significantly higher than those for the route and survey tasks. There was no significant
difference between R and S questions. Confidence was significantly higher for L than for R
or S questions. Besides, the figure reveals a similar distribution of mean confidence and
mean spatial scores. *** p< .001
IV.3.2.5 Correlations
We analyzed linear correlations between users’ personal characteristics and
dependent variables. Figure IV.11 illustrates the many significant correlations. In order to facilitate the visualization of the correlations, we divided the values into three groups: age-related factors, other personal characteristics and dependent variables. In this subsection
we only discuss relevant correlations.
Figure IV.11: Significant correlations for dependent variables, age-related factors and
personal characteristics. The size of the lines between nodes increases with the strength of
the correlation (r value). Nodes have been manually positioned for better readability.
Abbreviations: exp = expertise, freq = frequency, IM = interactive map, PM = paper map,
SBSOD = Santa Barbara Sense of Direction Scale. * p < .05, ** p < .01, *** p < .001. The
diagram was created with the Gephi software (Bastian, Heymann, & Jacomy, 2009).
using interactive maps (Confidence_IM, r = .7, p < .001). Similarly, effectiveness and
satisfaction of using paper maps (Satisfaction_PM) were correlated (r = .44, p = .033), as
well as effectiveness and satisfaction of using interactive maps (Satisfaction_IM, r = .41, p
= .044). High performers perceived a higher satisfaction than low performers. The
satisfaction also depended on efficiency: satisfaction for the interactive maps was
negatively correlated with the learning times both for paper maps (LearningTime_PM, r =
-.55, p = .005) and interactive maps (LearningTime_IM, r = -.5, p = .13). Both learning
times were correlated (r = .63, p = .001).
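A minimal sketch of how such pairwise correlations can be screened is given below; the variable names mirror the abbreviations of Figure IV.11, but the data are simulated and the layout is an assumption.

from itertools import combinations
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical table: one row per participant, one column per characteristic or dependent variable.
rng = np.random.default_rng(2)
variables = ["Age", "Braille_exp", "SBSOD", "LearningTime_PM", "LearningTime_IM",
             "Satisfaction_PM", "Satisfaction_IM", "Confidence_IM"]
df = pd.DataFrame(rng.normal(size=(24, len(variables))), columns=variables)

significant = []
for a, b in combinations(variables, 2):
    r, p = pearsonr(df[a], df[b])
    if p < .05:
        significant.append((a, b, round(r, 2), round(p, 3)))

# The resulting edge list can then be laid out with a tool such as Gephi.
for edge in significant:
    print(edge)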
A main effect of time clearly emerged (Wilcoxon signed rank test, N = 45, Z = 5.84, p <
.001). Short-term scores for both maps varied from 8 to 36 with a mean of 25.75 (SD =
6.55). Long-term scores varied from 0 to 35 with a mean of 15.73 (SD = 8.35). The long-
term scores for the paper map varied between 4 and 34 with a mean value of 16.54 (SD =
7.99). For the interactive map spatial scores varied between 0 and 35 with a mean value
of 14.92 (SD = 8.78). These values were not distributed normally (Shapiro-Wilk W = 0.92,
p = .004). Although the mean score for the paper map was slightly higher, there was no
significant effect of the map type (Wilcoxon signed rank test, N = 24, Z = 0.96, p = .34).
There was no effect of order of presentation of maps on long-term scores (Mann-Whitney
U test, U = 226.5, n1 = n2 = 24, p = .21). We did not observe any significant interactions.
Figure IV.12 shows a comparison between scores for L, R and S questions at short-
and long-term. We observed a significant effect of time on each spatial task. Short-term
scores for L questions corresponded to 84% of the maximum score (M = 10.08, SD = 2.04)
and long-term scores corresponded to 39% of the maximum score (M = 4.71, SD = 3.64).
The decrease was 45 percentage points. A Wilcoxon signed rank test revealed a significant difference (N = 42, Z = 3.25, p = .001). The difference between R and S scores was also significant (N = 35, Z = 3.01, p = .003). There was no significant difference between L and S scores (N = 41, Z = 1.67, p = .09). In addition, there was a significant effect of time on each score (Figure IV.13), with short-term scores being higher for the L task (Wilcoxon signed rank test, N = 43, Z = 5.71, p < .001), the R task (N = 44, Z = 5.43, p < .001) and the S task (N = 45, Z = 4.87, p < .001).
Figure IV.12: Mean scores for landmark, route and survey responses at short-term (immediate) and long-term (delayed). A significant effect of time is observed: all scores were lower two weeks after exposure. The difference was most important for landmark scores. The right axis depicts the percentage of the scores. LT: long-term, ST: short-term, *** p < .001.
Figure IV.13: Mean confidence (left y-axis) for landmark, route and survey knowledge
summed up for the paper and the interactive map both at short- and at long-term. A
significant effect of time is observed: confidence is lower two weeks after map exploration.
The difference is most important for landmark questions. Besides, the figure reveals a
strong correlation between confidence and spatial scores (orange) at short-term but not at
long-term (blue). The right axis applies to spatial scores. Abbreviations: ST = short-term,
LT = long-term. ** p < .01, *** p < .001.
IV.4 Discussion
process (Hinton, 1993). Similarly, Wang et al. (2012) observed that most users were quicker using an interactive map than a tactile paper map, as they could identify map locations more easily.
Yet, not all studies comparing an interactive prototype with a tactile diagram demonstrate an advantage for the interactive device. Giudice et al. (2012) compared a vibro-audio tablet interface against hardcopy tactile stimuli for learning non-visual graphical information, both with blind and blindfolded participants. The vibro-audio tablet interface synchronously triggered vibration patterns and auditory information when the users touched an on-screen element. The hardcopy tactile graphics were embossed and did not offer any feedback beyond the embossed relief. They observed that learning time with the interactive prototype was up to four times longer than with the paper diagram. Giudice et al. suggested that lines and curves are harder to perceive when indicated by vibrations than when printed in relief. Because of the presence of the embossed map in our interactive map prototype, we did not face the same issue in this study. On the contrary, learning time with the interactive map was significantly shorter than with the paper map. This clearly shows that reading the interactive map—composed of the raised-line map and audio information—is more efficient and does not rely on additional training.
We observed higher satisfaction for the interactive map, with 17 out of 24 users stating that they preferred it. The three most frequently cited reasons were the use of speech output instead of braille, the fact that there is no legend, and the ease of using the prototype. Bangor et al. (2008) proposed adjective descriptions corresponding to given SUS scores. Scores of 100 are “best imaginable”, around 85 “excellent”, around 73
“good”, around 52 “OK”, around 38 “poor” and below 25 “worst imaginable”. In our
study mean SUS scores for both map types were in the range of “excellent” scores. This is
not surprising as both maps were simple maps with few details, and thus rather easy to
read. In addition, our users evaluated themselves as experienced in mobility and
orientation and expressed their interest in map reading. Except one participant, all had
prior experience in reading tactile maps. Bangor et al. also associated ranges of
acceptability with SUS scores. Scores above 70 were classified as acceptable, scores
between 50 and 70 as marginally acceptable and scores below 50 as unacceptable.
Specifically looking at the satisfaction for the paper map, 18 scores were in the range of
acceptable and six scores in the range of marginally acceptable (varying from 52.5 to
70). Concerning the interactive map, 21 scores were in the range of acceptable, two in
the range of marginally acceptable (57.5 and 67.5, respectively) and one in the range of
unacceptable (45). Bangor et al. stated that SUS scores were sometimes related to
participants’ performance (meaning that low performers gave low SUS scores and high
performers gave high SUS scores). This is probably the explanation for some of the low
SUS scores observed in our experiment. Indeed, the only participant without prior map
reading experience scored both maps in the range of marginally acceptable. Most
probably, map reading was more difficult for him than for other participants in the study,
and he simply did not enjoy exploring maps in general. The participant who gave the
lowest score (45) for the interactive map gave a high score for the paper map (90). This
user (female, aged 64) possessed almost 60 years of experience in braille reading. She
described herself as a very frequent braille reader with extremely good braille reading
skills. She had been visually impaired since birth. We suppose that her above-average
braille experience and reading skills as well as the high proportion of lifetime with visual
impairment were the reasons why she clearly preferred the tactile paper map (Brock,
Truillet, et al., 2012). This explanation is supported by the fact that, for all users, SUS
scores for the paper map were positively correlated with braille reading experience as
well as the proportion of lifetime with visual impairment. SUS scores for the interactive
map were not correlated with braille reading experience or any age-related factor. This
means that interactive maps are perceived as accessible even for participants with low
braille reading skills. We confirmed this assumption with a blind person not included in
the user group of this study. This blind person was 84 years old and had lost sight when
he was 66 years old. He had learnt braille late in life and had limited braille reading skills. A
standard raised-line map with braille text was not accessible for him, unless we printed
the braille with large spacing between letters. In contrast, he could immediately use the
interactive map and gave an excellent score of 87.5 points in the SUS questionnaire. The
interactive map provided him with access to spatial information that he could not have
obtained with a regular paper map.
It is not surprising that the Santa Barbara Sense of Direction Scale and the self-
reported sense of direction were correlated as both measure the same abilities. A higher
expertise with traveling leads to a higher self-reported sense of direction. Interestingly, the sense of direction was also correlated with the expertise in reading tactile images. As tactile images include tactile maps, this suggests that a better sense of direction can improve expertise in reading tactile maps and vice versa. This observation potentially reflects similar cognitive processes underlying haptic map exploration and whole-body navigation. Furthermore, the expertise in using new technologies was positively
correlated with the Santa Barbara Sense of Direction Scale. We suggest two possible
interpretations. People with a well-developed sense of direction might have benefited
from using personal navigation and orientation devices (e.g. Trekker45 or Kapten46). We
do not favor this explanation as it has been shown that electronic travel aids (such as GPS devices) are less effective for cognitive mapping (Ishikawa et al., 2008; Münzer et al., 2006). On the other hand, people who are experts in using new technologies are probably
better connected to the internet. They might benefit from this by being more involved in
social life. New technologies also provide them with the possibility to prepare trips. Thus
these people might be less anxious about traveling, travel more, and consequently develop a better sense of direction. Finally, we did not observe any correlations with travel frequency. As our participants generally estimated themselves as frequent travelers, we suppose that a ceiling effect occurred that prevented us from observing any
correlations.
45 http://www.nanopac.com/GPS%20Trekker.htm [last accessed July 10th 2013]
46 http://www.kapsys.com/fr/produits/kapten-mobility [last accessed July 10th 2013]
We also observed that ease of travel was negatively correlated with age and proportion of lifetime with blindness. Older participants and those with a longer duration of visual impairment lose confidence in navigation tasks. This suggests that they are less used to traveling and thus risk being excluded from social life. It is therefore important to propose solutions for this part of the population. Our study also revealed that the proportion of lifetime with blindness was positively correlated with the frequency of using new technology. This suggests that early blind people are used to, and probably benefit considerably from, new technologies. This opens up a new perspective: using new technology can
provide the elderly and early blind with a chance to improve space-related knowledge
and skills. The use of interactive maps is promising for making geographic information
accessible, for improving elderly users’ mobility and orientation skills, and for reducing
stress and fear related to travel.
explanation is that visually impaired people who volunteer for a study concerning
mobility and orientation are highly autonomous—they have to travel to the lab—and feel
proud and confident regarding traveling. The characteristics of our users may have
impacted the results concerning spatial cognition.
Limits may also lie in the questionnaires themselves. It has to be noted that the French translation of the SBSOD questionnaire is not standardized. However, there is no equivalent questionnaire that is standardized in the French language. When working with non-English speaking participants, this problem is hard to avoid. Besides, Ishikawa et al. (2008) potentially detected limits of the SBSOD questionnaire: they did not observe any significant relationship between participants’ SBSOD score and wayfinding performance, except for one of the groups in their study. As for the SBSOD, the French translation of the SUS questionnaire is not standardized. Further investigation is therefore necessary to ensure that the methods are valid and well adapted.
IV.5 Conclusion
In this chapter we presented a study with 24 blind users. This study was composed
of two parts: a short-term and a long-term study. The objective of the short-term study
was to compare the usability of an interactive map and a paper map, both designed for
visually impaired people. Our hypothesis was a higher usability for the interactive map
and thus a better spatial learning (effectiveness), shorter learning time (efficiency) and
higher user satisfaction. This hypothesis was partially confirmed: learning time was
significantly shorter for the interactive map and more users preferred the interactive map
over the paper map. Concerning spatial learning, however, we did not observe any differences depending on the map type. Differences were, however, observed between the three types of spatial knowledge (landmark, route, survey). We observed that landmark knowledge shortly after exploration was significantly superior to route and survey knowledge. Furthermore, personal characteristics influenced the resulting spatial knowledge. For instance, we observed correlations between effectiveness and navigation skills. Interestingly, the learning of landmarks was improved if the interactive map was presented before the paper map. We suggest that first exploring an interactive map might remove apprehension, increase map learning skills, and thus help read any kind of map at a later moment. The absence of a significant difference in effectiveness between the two maps is probably related to the small number of elements that were presented on the maps. We suggest that more complex maps might benefit even more from interactivity.
To sum up, these results provide an answer to Research Question 3 (How usable is
an interactive map in comparison with a tactile paper map?). Interactivity appears to be
promising for improving usability and accessibility of geographic maps for visually
impaired people. We observed another significant advantage for interactive maps: the
improved accessibility for people with low braille reading skills. More precisely,
interactive maps seem to be a valid means for improving spatial cognition of visually
impaired people. Specifically, they seem to favor survey knowledge, which is important
for creating a cognitive map that facilitates flexible wayfinding.
Chapter V - Designing Non-Visual Interaction to Enhance Tactile Map Exploration
A first aspect concerns the number of fingers and hands used in the exploration process. Symmons and Richardson (2000) observed that sighted adults with little familiarity with the processing of tactile pictures preferentially used a single finger when exploring tactile pictures.
Recently, Vinter, Fernandes, Orlandi, and Morgan (2012) investigated how visual status and exploration strategies relate to drawing performance after the exploration of non-figurative 2D patterns. They coded the following exploratory procedures: contour following, enclosure of the global shape, enclosure of local shapes, pinch procedure, surface
sweeping, static contact and symmetrical movements. They observed that visually
impaired children demonstrated a greater expertise during haptic exploration than
sighted children. This was demonstrated through a more frequent use of bimanual
exploration and a greater number of different procedures. Yet, some of the exploratory
strategies employed by blind children, such as surface sweeping, led to poorly
recognizable drawings. Furthermore, they revealed strong relationships between the exploratory procedures and the subsequent drawing performance. The resemblance between model and drawing was closer when a greater number of strategies was employed.
(Jacobson, 1996). Ungar (2000) proposed that two-handed exploration facilitates learning
through the relative simultaneity of input—in line with the observations of Wijntjes et al.
(2008a). Jacobson (1996) argued that different haptic exploration strategies also influence
the understanding of maps. Yet, to confirm the validity of findings on haptic exploration for tactile maps, further studies are needed.
To sum up, current research suggests that spatial cognition can be improved by
employing systematic exploration strategies. It can also be concluded that increasing the perceptual field by using more than one finger improves raised-line picture identification. It appears that visually impaired people possess an advantage over their sighted peers in employing systematic exploration patterns. However, the precise nature
of tactile exploratory modes and the relations between exploratory strategy and
performance level remain obscure and call for further investigation.
V.1.2.1 Motivation
The above reported studies motivated us to further investigate haptic exploration
strategies of visually impaired people. We identified three areas in which a better
understanding of these exploration strategies would be useful. First, a better
understanding of exploration strategies may help to identify and solve specific problems
with the map itself. For instance the strategies might reveal ambiguous lines or symbols
that are hard to identify by touch. Second, the knowledge could be used to improve
guidelines on how to teach map exploration. For instance, if it were demonstrated that
using one finger as a fixed reference point during the exploration enables better
performance, then visually impaired students should be told to explore maps in this way.
Third, this knowledge would enhance the design of adapted interaction techniques.
Exploration strategies may highlight preferences for interacting with the map, for
example, the role of different digits in the exploration process. This knowledge would
give important insight into whether interaction techniques should make use of one or
multiple fingers, and how to employ them.
In order to guard against errors, two or more researchers often code the same videos and then compare their results. To make the process more efficient, our goal was to design a system for automatically tracking and identifying the fingers used when exploring the tactile image.
V.1.2.2 Concept
In a first step we needed to identify a technology for tracking fingers. Many
previous projects have focused on finger tracking. The majority were based on automatic
recognition in video, tracking either bare hands (Kane, Frey, et al., 2013; Krueger &
Gilden, 1997) or markers of different kinds. For instance, Schneider and Strothotte (1999)
and Seisenbacher et al. (2005) used color markers for tracking fingers during the
exploration of tactile maps. More recently depth cameras have been used (Harrison &
Wilson, 2011; Wilson, 2010a). Other projects used optical multi-touch surfaces for finger
tracking and identification (Dang, Straub, & André, 2009). In contrast to electric touch tables, optical touch tables allow recognizing not only the positions of the different fingers but also their orientation and sometimes even the shape of the hand. However, as reported in section III.2.3.3, it is not possible to use an optical multi-touch table with a raised-line overlay. A different technology has been employed in a proof-of-concept prototype
(Munkres, Gardner, Mundra, Mccraw, & Chang, 2009). In addition to a classical finger
tracking in a video image based on color markers, Munkres et al. developed a new
concept. It was based on tracking finger position with a digital device attached to the
finger. However, this device was cumbersome and it was only possible to track one
finger with this apparatus.
In response to the analysis of literature we came up with the idea to make use of
the previously developed interactive map prototype for the tracking of fingers. However,
the 3M projected capacitive multi-touch screen M2256PW has three important limitations.
First, it can only track the position of fingers that actually touch the surface. Yet, the goal
for this prototype was to track all finger movement, including fingers above the surface.
Second, the multi-touch table cannot identify which hand (left or right) and which finger
(i.e. the thumb, index, etc.) caused the touch. For two successive touches, it is not even
possible to determine if they were made by the same or two different fingers. In contrast
with optical tables as mentioned above, the capacitive screen only indicates a touch
position and not the form and orientation of the touching finger. Third, it is not possible to
determine whether the touch event is provoked by a finger or another body part, such as
the palm of the hand. Yet, these unexpected touch events would have to be considered as
false positives as we only wanted to track fingers and not the palms of the hand.
Another idea was to make use of the Kinect camera as proposed by Wilson
(Wilson, 2010b). First of all, Wilson demonstrated that it is possible to use the Kinect
camera to detect finger positions and even for gestural interaction. Second, fingers are
detected even if the surface is not flat (i.e. the relief of the tactile map does not disrupt
detection). Third, it is possible to obtain information about digits that are not in touch with
the surface. On the negative side, the precision of the Kinect as a depth sensor is inferior
to the precision of a touch surface (Wilson, 2010a).
V.1.2.3.a Hardware
Figure V.1: Kintouch hardware setup: multi-touch screen with raised-line overlay and Kinect
camera.
The Kinect camera combines a depth sensor that allows the visual reconstruction of 3D objects with an RGB (red, green, blue) color camera. It was originally developed as a controller for video games. Due to its low price and practicality, it has recently been used in many research projects. Its use for development purposes is now officially supported by Microsoft. At the time we started the project, however, the official drivers were not yet available.
V.1.2.3.b Software
The software for this prototype consisted of three applications: one for detecting
finger positions on the touch screen as in the experimental interactive map prototype
(III.2.5.4.b), a second for finger tracking in the Kinect image and a third for merging the
results. The first application was based on the previous interactive map prototype. As in
the modular software architecture of the interactive map prototype (III.2.5.2.a), data was
sent via the Ivy middleware (Buisson et al., 2002). Each touch event contained a
timestamp and x and y coordinates. The second application was based on the Kinect. Its
output contained the name of the finger (thumb, etc.) and corresponding hand (right or
left), x and y coordinates of the finger and a timestamp. Kinect and multi-touch data were then written to a log file. A third application called “fusion” (used for offline processing)
read this data and merged it. It then created another output file that contained the name
of the fingers and corresponding hand, x and y coordinates of the fingers, whether the
fingers were in touch, and a timestamp.
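To make the data flow more concrete, the sketch below describes the kind of records exchanged between the three applications as Python data classes; the field names follow the description above, but the exact message and log formats of the thesis software are not reproduced here.

from dataclasses import dataclass

@dataclass
class TouchEvent:
    """Event produced by the multi-touch screen application."""
    contact_id: int      # contact identifier provided by the touch screen
    timestamp_ms: int
    x: float
    y: float

@dataclass
class FingerDetection:
    """Detection produced by the Kinect-based tracking application."""
    timestamp_ms: int
    hand: str            # "left" or "right"
    finger: str          # "thumb", "index", "middle", "ring" or "pinky"
    x: float
    y: float
    touching: bool       # whether the finger was judged to be in contact with the surface

@dataclass
class FusedSample:
    """Record written by the offline fusion application."""
    timestamp_ms: int
    hand: str
    finger: str
    x: float
    y: float
    touching: bool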
Finger Tracking
Image recognition is challenging in several respects, such as lighting conditions or noise in the image, and it is not easy to achieve a stable identification of objects. To reduce the amount of information to process, finger tracking was only done on an excerpt of the image (see Figure V.2). In order to further facilitate the image processing, we made the following four assumptions. First, while exploring a map, users’ hands are always
turned with the palms downwards. Second, the surface is flat (except for the relief on the
map). Third, only hands and arms and no other body parts are moving in the camera
image. Fourth, the map stays in position.
Figure V.2: Limiting the working zone in which image recognition is done (French: “zone de
travail”) to the actual map. Reprinted from (Ménard, 2011).
As the Kinect possesses two cameras (depth and RGB), it was easy to implement
and compare two different algorithms within the same setup. Both algorithms used
OpenCV47 functions and OpenNI middleware48.
Depth Image
A first algorithm used the depth image (Figure V.3 a). The calibration phase
consisted of two steps. First, a depth mask was created to segregate objects (i.e. hands
and digits) from the background (i.e. the tactile map). Second, the user held their hand
horizontally with the fingers spread. Maximum angles between fingers were measured
by the application. They were then used as additional constraints on finger identification.
After calibration OpenCV functions for noise reduction were applied. The image
was converted into a binary image. Contours were detected as lists of connected points.
In the next step it was then necessary to identify contours that corresponded to hands, as
false positives from noise in the image could occur. For this a minimum size threshold for
the contour was applied. Then, each contour was reduced to the minimum number of
points needed to form the outline of the hand because sometimes it was composed of
redundant points. The resulting segments corresponded to outlines of the fingers, and vertices of the contour represented fingertips. By using the depth information from the image, it was possible to determine which fingers actually touched the surface.
47 http://opencv.willowgarage.com [last accessed July 7th 2013]
48 http://www.openni.org [last accessed July 7th 2013]
To identify finger types, the recognized fingers were ordered based on the angles between them. The biggest angle is the one between the thumb and the index finger. This identification was easiest during the calibration phase, when all fingers were spread and visible in
the image. Finger positions in the previous image were taken into account to stabilize
detection. Thus it was possible to identify fingers in the calibration image and
continuously track them. Given the hypothesis that the hands were always turned with the
palm downwards it was easy to identify the right and left hand once the thumb was
identified. As a final result of this algorithm, each detected finger was part of a hand
(right or left) and had a name (thumb, index, etc.) and information whether it touched the
surface or not. A particular challenge for this approach arose when users partially closed the hand so that some fingers disappeared from sight. However, as long as some of the
fingers were visible, the algorithm kept track of fingers quite successfully.
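The following rough OpenCV sketch illustrates the processing chain described above—noise reduction, contour extraction, polygon approximation and angle-based ordering of the fingertips; the thresholds and helper structure are assumptions and do not reproduce the original implementation.

import math
import cv2
import numpy as np

MIN_HAND_AREA = 3000   # px^2; hypothetical threshold for rejecting noise contours
FOREGROUND_MM = 5      # hypothetical height above the map used to segment hands from the background
TOUCH_MM = 15          # hypothetical fingertip height below which a finger counts as touching

def fingertip_candidates(depth_frame, background_depth):
    """Extract candidate fingertip points from one Kinect depth frame.
    Both arguments are 2-D arrays of distances to the sensor in millimetres."""
    # Segregate hands and arms from the background using the calibration depth mask.
    diff = background_depth.astype(np.int32) - depth_frame.astype(np.int32)
    foreground = diff > FOREGROUND_MM
    binary = cv2.medianBlur(foreground.astype(np.uint8) * 255, 5)   # simple noise reduction

    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hands = [c for c in contours if cv2.contourArea(c) > MIN_HAND_AREA]

    hands_fingertips = []
    for contour in hands:
        # Reduce the contour to a coarse polygon; its vertices approximate the fingertips.
        outline = cv2.approxPolyDP(contour, 8, True).reshape(-1, 2)
        tips = [(int(x), int(y), bool(diff[y, x] < TOUCH_MM)) for x, y in outline]
        hands_fingertips.append(tips)
    return hands_fingertips

def name_fingers(tips, palm_center):
    """Order the fingertips by their angle around the palm centre; the largest angular
    gap between neighbours is assumed to separate the thumb from the index finger."""
    cx, cy = palm_center
    angles = sorted((math.atan2(y - cy, x - cx), (x, y)) for x, y, _ in tips)
    gaps = [(angles[(i + 1) % len(angles)][0] - angles[i][0]) % (2 * math.pi)
            for i in range(len(angles))]
    start = (gaps.index(max(gaps)) + 1) % len(angles)
    names = ["thumb", "index", "middle", "ring", "pinky"]
    ordered = [angles[(start + i) % len(angles)][1] for i in range(len(angles))]
    return dict(zip(names, ordered))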
Figure V.3: Tracking of the user’s fingers. (a) Result based on depth image of the Kinect
camera (above) and real hand position (below). (b) Result based on RGB image of the Kinect
camera (above) and real hand position with color markers attached to finger tips (below).
Reprinted from (Brock, Lebaz, et al., 2012).
adjusting color markers to the fingers took about 5 minutes. However, due to the
additional color information this algorithm proved more stable for finger detection.
Fusion Algorithm
The aim of the fusion was to combine and correlate touch events obtained from the
multi-touch surface with the finger detections from the tracking algorithm. We had the choice of implementing the fusion online, i.e. in real time while running the applications, or offline, i.e. after running the applications. As an advantage of online fusion, it would have been possible to use the fusion results at runtime to correct the tracking of the Kinect or the multi-touch table. On the downside, we suspected the fusion to be time-consuming and feared that online fusion would slow down the application. The offline fusion also appeared easier to implement. Furthermore, offline fusion provided the possibility of keeping the log files for later analysis. The disadvantage of offline fusion was the memory needed for saving the log files. Based on this analysis, we decided to implement an offline fusion. However, it would be possible to change the fusion mode.
Figure V.4: Finger tracks before (a) and after (b) fusion. (a) Black: tracks from the multi-touch table; colored tracks are from the Kinect (red thumb, green index, blue middle finger, yellow ring finger, pink pinky). (b) Colored tracks after fusion (red thumb, green index, blue middle finger, yellow ring finger, pink pinky); tracks above the surface are marked by an “O”; tracks in contact with the surface by an “X”. Reprinted from (Ménard, 2011).
The Kinect and the multi-touch table did not possess the same coordinate system. The Kinect had a resolution of 640x480 pixels, whereas the multi-touch table had a resolution of 1680x1050 pixels. After limiting the image recognition zone (see Figure V.2), the touch surface
corresponded to approximately 330x221 pixels in the camera image. This resulted in a
precision of approximately 1.4 mm/pixel in the camera image, which is sufficient when
compared to the size of the fingertip. The precision of touch events was 0.28 mm. The
formulas for the coordinate transformation are described in (Ménard, 2011). Comparing
the resulting coordinates with the actual finger position revealed a calculation error of
about 20 pixels. Although this precision seemed quite low, we validated during the
preliminary tests that it was sufficient to match positions of the same finger from the multi-
touch screen and the Kinect.
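The exact transformation formulas are given in (Ménard, 2011); the sketch below only illustrates the kind of scale-and-offset mapping involved, using the resolutions quoted above and a purely hypothetical position of the working zone within the camera image.

# Resolutions quoted in the text.
SCREEN_RES = (1680, 1050)       # multi-touch screen, px
MAP_SIZE_CAM = (330, 221)       # approximate size of the map in Kinect camera pixels

# Hypothetical upper-left corner of the working zone within the 640x480 camera image.
MAP_ORIGIN_CAM = (150, 120)

def camera_to_screen(x_cam, y_cam):
    """Map a fingertip position from camera coordinates to screen coordinates
    with a simple scale-and-offset transformation."""
    sx = SCREEN_RES[0] / MAP_SIZE_CAM[0]   # about 5.1 screen px per camera px
    sy = SCREEN_RES[1] / MAP_SIZE_CAM[1]
    return (x_cam - MAP_ORIGIN_CAM[0]) * sx, (y_cam - MAP_ORIGIN_CAM[1]) * sy

# Example: a fingertip in the middle of the working zone maps to the middle of the screen.
print(camera_to_screen(150 + 165, 120 + 110.5))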
Temporal accuracy is important for the fusion. Touch events were produced at 100 Hz; finger detections and fusion output at 10 Hz. Both sources produced a lot of data (the multi-touch data contained several samples per millisecond), and not necessarily at the same timestamps. Therefore we limited the fusion to samples of 10 ms; in pretests this proved to be sufficient.
The fusion application read log files that contained both multi-touch and Kinect
data. One of the two variants of the finger tracking could be selected. Multi-touch events
contained a contact identifier, x and y coordinates and a timestamp in milliseconds.
Finger detections from the Kinect contained the name of the finger, the type of hand (right or left), x and y coordinates, whether the finger was touching the surface, and the
timestamp in milliseconds. For each touch event, the algorithm searched for the finger
detection with the closest position. Finger detections that had not been matched to touch
events were considered as fingers above the surface. Touch events that did not
correspond to finger detections were considered as false positives and removed (Figure
V.4). The fusion application provided visualization with the possibility to adapt the time
frame.
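As an illustration of this matching step, the sketch below pairs each touch event with the spatially closest finger detection recorded within 10 ms; the types and field names are hypothetical and merely illustrate the logic described above, not the original implementation.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the offline fusion step (hypothetical types, not the original code):
// each touch event is paired with the closest finger detection recorded within 10 ms;
// unmatched touch events are discarded as false positives (e.g., palms resting on the surface).
final class Fusion {

    record TouchEvent(long timestampMs, double x, double y) {}
    record FingerDetection(long timestampMs, double x, double y, String finger, String hand) {}
    record FusedSample(TouchEvent touch, FingerDetection finger) {}

    static List<FusedSample> fuse(List<TouchEvent> touches, List<FingerDetection> detections) {
        List<FusedSample> result = new ArrayList<>();
        for (TouchEvent t : touches) {
            FingerDetection best = null;
            double bestDist = Double.MAX_VALUE;
            for (FingerDetection d : detections) {
                if (Math.abs(d.timestampMs() - t.timestampMs()) > 10) continue; // 10 ms window
                double dist = Math.hypot(d.x() - t.x(), d.y() - t.y());
                if (dist < bestDist) { bestDist = dist; best = d; }
            }
            if (best != null) result.add(new FusedSample(t, best)); // otherwise: false positive
        }
        return result;
    }
}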
User Interface
Figure V.5: Visualization of tracked fingers. Identified fingers are depicted in violet and
fingers touching the surface in pink. Reprinted from (Ménard, 2011).
The interface for the application was developed with the GTK library49. Its purpose
was to provide the researcher with an application for calibrating, choosing between the
49 http://www.gtk.org/ [last accessed July 8th 2013]
two algorithms and observing the tracking algorithm during execution. Visualization
showed the tracked fingers and which fingers touched the surface (Figure V.5). More
details on the user interface are described in (Ménard, 2011).
The test subjects were three blindfolded (two female, one male) and three legally blind (one female, two male) participants. Participants were chosen with different levels of expertise in tactile map exploration, based on the assumption that this factor impacts exploration strategies. Two of the blind participants had significant expertise in map reading, whereas the third blind participant and the blindfolded participants had little expertise. We reused the simple tactile map from the experimental study, which contained six streets, six buildings, six points of interest and a river (see IV.2.1.1). For each test, participants were asked to explore the map as they normally would or, if they had no previous experience, in the way that spontaneously felt natural.
Figure V.6: Tracking of the fingers with the Kintouch prototype. (a) Detection worked even when some fingers were occluded (names of fingers are indicated in French). (b) When fingers were close to each other, the depth image algorithm perceived the hand as one blob and could not separate the fingers. Reprinted from (Ménard, 2011).
The depth camera based algorithm proved efficient with most of the subjects:
fingers were successfully detected and identified. Obviously, occluded fingers were not
detected. Yet, the occlusion of some fingers did not hinder the correct identification of the remaining fingers (Figure V.6 a). Reappearing fingers were identified within one video frame. However, the algorithm failed with one blind subject with good expertise in map reading. As part of his exploration strategy, he frequently closed his fingers so that they were no longer sufficiently separated (Figure V.6 b). Hence we concluded that the algorithm generally works, but does not support every user’s exploration behavior. Consequently, it would be problematic to use this algorithm for studying haptic exploration strategies.
V.1.2.4.c Fusion
Figure V.7: A blindfolded participant testing the Kintouch prototype. Reprinted from
(Ménard, 2011).
Based on these preliminary results, we selected the RGB image-based algorithm for the evaluation of the complete fusion process. We evaluated the fusion algorithm with data from the map exploration of one blindfolded male participant with no prior tactile map reading experience (see Figure V.7).
The user explored the map for approximately 2 minutes. The data was stored in a log file, which the fusion application then ran through the fusion algorithm. After fusion, 98% of the detected fingers were identified, i.e., the name of the finger and the hand were attributed. For each finger it was possible to determine whether it touched the surface or not. 95% of the detections were identified as touches, which means that the subject was using all of his fingers for map exploration most of the time. Both hands were used almost equally. The thumbs were used least, with only 3.4% of touches. The left ring finger was the finger with the most contact with the map (19%). This was surprising, as we did not expect it to be the finger typically used for map exploration. Of course, this depends on personal exploration behavior, and the user had no prior tactile map reading experience; his haptic exploration strategies may therefore not have been the best adapted ones.
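Statistics of this kind can be derived directly from the fused samples. The short sketch below (based on the hypothetical types of the fusion sketch above, not on the original analysis code) computes the share of touch contacts per finger and hand.

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative aggregation over fused samples: proportion of touch contacts per finger and hand.
// Assumes the hypothetical FusedSample type from the fusion sketch above.
final class ContactStatistics {
    static Map<String, Double> contactShares(List<Fusion.FusedSample> samples) {
        double total = samples.size();
        return samples.stream()
                .collect(Collectors.groupingBy(
                        s -> s.finger().hand() + " " + s.finger().finger(), // e.g. "left ring"
                        Collectors.collectingAndThen(Collectors.counting(), c -> c / total)));
    }
}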
Figure V.8: Visualization of the data in the log file before fusion. Traces marked in red resulted from the palms of the hands that rested on the surface during map exploration. Reprinted from (Ménard, 2011).
As can be seen in Figure V.8, the palms of the hands rested on the multi-touch screen during map exploration. The fusion successfully removed the false positives produced by the palms. The result can be seen in Figure V.9.
Figure V.9: Visualization of the data in the log file after fusion. Traces from the palms of the hands have been removed. Reprinted from (Ménard, 2011).
The visualization tool in the fusion application provided the possibility to select the time frame for displaying the finger traces. We suggest that it would additionally be interesting to select one or several fingers in order to study their movements separately. Furthermore, we propose to match the finger traces to the map content in order to identify the relation between exploration strategies and map features. As an example, we visualized the traces of the right index finger with a simple visualization in Microsoft Excel (Figure V.10). The temporal distribution is represented through different colors: the finger first touched the red positions, then the green positions, and then the blue ones. From the red traces we can, for instance, conclude that the user followed the outline of the buildings. His finger also rested for some time at the upper point of interest (blue traces). Furthermore, he worked his way from the bottom of the image to the top. Of course, a more detailed analysis would be necessary to obtain precise results and to study, in a next step, how the employed strategies impact the resulting spatial cognition.
Figure V.10: Spatial and temporal visualization (see colored traces) of movements made by
the right index during map exploration matched to the map content. Colors indicate the
successive positions of the index finger (1 red, 2 green, and 3 blue). Reprinted from (Brock,
Lebaz, et al., 2012).
First, we observed problems related to the Kinect. We originally chose the Kinect because the depth image seemed promising for easily distinguishing fingers and hands from the background. Yet, as reported above, it was not possible to identify fingers when the user explored the map with closed hands. Therefore, the depth camera cannot be used for studying haptic exploration strategies. At the same time, it makes little sense to keep the Kinect only for its RGB camera, as color cameras with a higher resolution are available. It might therefore be preferable to opt for such a camera, which would also increase the precision of the fusion results, which is quite low in the current application.
Second, the solution of using colored markers on the fingertips is not optimal. Having a marker glued to the fingernail during map exploration might hinder natural exploration movements. As the markers were small, we assume that they did not impact the exploration behavior. Still, it would be desirable to develop an algorithm that can detect bare fingers without markers, although this is technically challenging.
Third, we identified the need for a more powerful visualization tool. As argued before, we would need a tool that allows not only selecting the time frame to be displayed, but also selecting one or multiple fingers. Furthermore, it should be possible to match the finger traces to the actual map content.
Once the prototype is improved, we would also need a tool that can actually analyze the traces resulting from the fusion. So far we only see where the fingers passed, but we still have to analyze the image manually to understand what it means. An improved functionality would be for the computer to automatically identify exploration patterns. First, however, it would be necessary to decide which patterns should be encoded. For instance, patterns could be based on the proposition of Wijntjes et al. (2008b) to differentiate between the use of a single hand, bimanual use with one hand moving while the other hand rests on the drawing, and the simultaneous use of both hands. A different possibility would be to encode patterns such as contour following, enclosure of shapes, pinching, surface sweeping, static contact and symmetrical movements, as proposed by Vinter et al. (2012). The choice of the procedures to be encoded would depend on the hypothesis and protocol. Only after implementing this step would the prototype be really useful for studying haptic exploration strategies.
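As a purely illustrative sketch (hypothetical types and threshold, not an implemented analysis), each time window of the fused traces could be classified into one of the three categories proposed by Wijntjes et al. (2008b) by comparing how much each hand moved during that window:

// Illustrative classification of an exploration time window into the three categories of
// Wijntjes et al. (2008b), based on how far each hand moved during the window.
// The types and the movement threshold are hypothetical placeholders.
enum ExplorationPattern { SINGLE_HAND, BIMANUAL_ONE_HAND_RESTING, BOTH_HANDS_MOVING }

final class PatternClassifier {
    private static final double MOVEMENT_THRESHOLD_MM = 5.0; // below this, a hand is "resting"

    static ExplorationPattern classify(double leftHandPathMm, double rightHandPathMm,
                                       boolean leftHandOnMap, boolean rightHandOnMap) {
        boolean leftMoving = leftHandOnMap && leftHandPathMm > MOVEMENT_THRESHOLD_MM;
        boolean rightMoving = rightHandOnMap && rightHandPathMm > MOVEMENT_THRESHOLD_MM;
        if (leftMoving && rightMoving) return ExplorationPattern.BOTH_HANDS_MOVING;
        if ((leftMoving && rightHandOnMap) || (rightMoving && leftHandOnMap))
            return ExplorationPattern.BIMANUAL_ONE_HAND_RESTING;
        return ExplorationPattern.SINGLE_HAND;
    }
}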
As an alternative, this prototype might also serve as the basis for developing a tool
for recognizing gestural interaction above the surface.
Subjects in such a study should be visually impaired, because it has been shown that haptic exploration strategies differ between visually impaired and sighted participants. If we want to understand visually impaired people’s tactile map reading strategies, we therefore need to test with them.
There would be several consequences from the possible outcomes of such a study. First, the study could have an impact on the teaching of map reading strategies. If the study revealed that high performers use specific exploration patterns, these patterns should be taught to visually impaired students. Likewise, if certain patterns lead to poor performance, it should be ensured that visually impaired people do not employ these patterns. Second, identifying successful haptic exploration strategies could influence the development of accessible interaction techniques. If, for example, bimanual exploration leads to better cognitive mapping, then non-visual interaction for maps should support bimanual exploration. If using a fixed reference point improves distance knowledge, then interaction techniques for learning distances should be based on this principle. Indeed, the development of accessible interaction today is not grounded in insights from cognitive science. In our opinion, it is important to take this knowledge into account when developing interaction techniques in order to develop systems that are even more accessible and usable.
In addition, we have presented the design space of interactive maps for visually impaired people (see II.4). Many non-visual interaction techniques can be used to provide information. For instance, we have shown that the use of speech output can help overcome problems related to braille. Yet some of the above-mentioned limitations remain to be solved. For example, interactive maps are often limited to announcing the names of landmarks and streets but do not provide further information, such as opening hours. Other information, such as distances, often remains inaccessible.
Gestural interaction can be designed for visually impaired users (see II.4.3.2.a). Yet, so far, few interactive map projects for visually impaired people have made use of gestures more complex than tapping. The aim of this part of the project was to study how basic gestural interaction could be used to enrich the interactive map prototype with extended functionality. This work was carried out during the internship of Alexis Paoleschi under the supervision of Anke Brock (Paoleschi, 2012).
Although there was no significant difference in reading speed or error rate, users clearly
preferred the mode in which they could manually toggle the amount of detail.
The brainstorming session brought together one blind expert and four sighted researchers. We discussed how to make use of gestural interaction within the same interactive map concept, i.e., a touchscreen with a raised-line overlay. Among the ideas that we retained was the possibility to select different levels of information. For instance, with the same interactive map, it could be possible to switch the audio output between basic points of interest, opening hours, or public transportation information. We proposed the combined use of buttons and basic gestural interaction to access these different types of information. Additionally, we came up with the idea of integrating a dynamic braille display as an alternative output modality. We also decided to explore how distance information could be provided.
Figure V.11: Exploration of the gestural prototype. The map drawing contains four buttons
for accessing different information modes (right side of the drawing). The image also shows
the braille display placed behind the screen. Reprinted from (Paoleschi, 2012).
Therefore, a second software architecture was based on the use of a gestural API. Following a preliminary analysis of multi-touch APIs (see subsection III.2.5.2.b), we chose the Multi-touch for Java (MT4J) API.
Figure V.12: Detailed view of the raised-line overlay with the four buttons on the right side.
Reprinted from (Paoleschi, 2012).
The map drawing was largely based on the map drawing from the experimental
study (IV.2.1.1). However, in order to provide the possibility to access different modes
and levels of information, we decided to add “buttons” on the map. These buttons were
actually rectangles printed in relief. The buttons were ordered from level one to four by
an increasing number of triangles. Figure V.12 shows the map including four buttons for
switching between different levels of information.
Figure V.13: Different gestures used in the system: (a) tap gesture, (b) lasso gesture. Reprinted from (Laufs, Ruff, & Weisbecker, 2010) with permission.
MT4J makes it possible to design graphically rich user interfaces. This is, of course, of little importance for our application, which targets visually impaired people. Nonetheless, the experimenter needs a visual view of the map for preparing the experiment. In MT4J the graphical user interface is created as a scene graph, based on a hierarchical structure of components. Such a component tree is created by attaching components (“children”) to other components (“parents”). A component can have at most one parent component, while the number of child components is not limited. To initialize the scene graph it is necessary to choose a name and a background image, and to load default parameters such as the resolution and the full screen mode. These parameters are loaded via a configuration file at startup of the application.
Application-specific parameters can be defined in another configuration file. In our application, this configuration file defined the timing constraints for double taps, the activation of the different output modalities (audio and braille) and the dimensions of the map. Next, different parsers read the map information in order to create the interactive map scenery.
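As a rough illustration of such a component tree (using hypothetical class names rather than the actual MT4J API), the map scenery could be assembled as follows:

// Illustrative sketch of a scene-graph component tree (hypothetical classes, not the actual
// MT4J API): interactive map elements are attached as children of a scene component,
// which itself is the root of the tree.
import java.util.ArrayList;
import java.util.List;

class Component {
    private final String id;
    private Component parent;                                   // at most one parent
    private final List<Component> children = new ArrayList<>(); // unlimited children

    Component(String id) { this.id = id; }

    void addChild(Component child) {
        child.parent = this;
        children.add(child);
    }
}

class MapScene extends Component {
    MapScene(String name, String backgroundImage) { super(name); /* load background, defaults */ }
}

// Building the tree: one component per interactive map element.
class SceneBuilder {
    static Component buildExample() {
        MapScene scene = new MapScene("LilleMap", "map_background.png");
        Component streetMarker = new Component("street_marker_1");
        Component pointOfInterest = new Component("cinema");
        scene.addChild(streetMarker);
        scene.addChild(pointOfInterest);
        return scene;
    }
}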
Figure V.14: View of the interactive map created as an MT4J scene graph. Names of streets are depicted in blue, names of points of interest in red. All names are in French. Reprinted from (Paoleschi, 2012).
As in the previous prototypes, we designed the map with Inkscape in SVG format. As stated before, this format provides a textual representation that can be analyzed by a file parser. In our application, a file parser was in charge of converting the SVG map into components of the Java scene graph. We used the SAX API50 (Simple API for XML) to implement this parser. This API permits defining which markup elements within the SVG should be parsed. The SVG file allowed extracting information on the position and dimension of map elements. Each interactive map element was then represented as a component in the scene graph, possessing an identifier and coordinates. One geographic element could be represented by several components. For instance, streets possessed interactive markers before and after each crossing (see Figure V.14), and a component was created for each interactive marker. Figure V.14 shows the resulting view of the scene graph.
50 http://www.saxproject.org/ [last accessed June 7th 2013]
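As an illustration of this parsing step (a minimal sketch, not the original parser; the choice of elements and attributes to read is an assumption about how the SVG might be structured), a SAX handler can collect the identifier and coordinates of the relevant SVG elements:

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Minimal SAX sketch: extracts id and position of <rect> elements from the SVG map.
// Which elements and attributes to read is an assumption for illustration only.
public class SvgMapHandler extends DefaultHandler {

    @Override
    public void startElement(String uri, String localName, String qName, Attributes attrs) {
        if ("rect".equals(qName)) {                      // e.g. a point of interest or a button
            String id = attrs.getValue("id");
            double x = Double.parseDouble(attrs.getValue("x"));
            double y = Double.parseDouble(attrs.getValue("y"));
            System.out.printf("map element %s at (%.1f, %.1f)%n", id, x, y);
            // here the corresponding scene-graph component would be created and attached
        }
    }

    public static void main(String[] args) throws Exception {
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(new File("map.svg"), new SvgMapHandler());
    }
}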
The configuration file was written in XML and read with a JDOM51 parser. JDOM loads the entire XML file into a data structure which can then be manipulated. In our prototype, the information was then associated with the components in the scene graph.
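For illustration, reading such a file with JDOM could look like the following sketch; it assumes JDOM 2 and invented element names, not the prototype’s actual file format.

import java.io.File;
import org.jdom2.Document;
import org.jdom2.Element;
import org.jdom2.input.SAXBuilder;

// Sketch of reading an XML configuration file with JDOM 2.
// The element names (doubleTapDelayMs, audio, braille) are invented for illustration.
public class ConfigReader {
    public static void main(String[] args) throws Exception {
        Document doc = new SAXBuilder().build(new File("config.xml"));
        Element root = doc.getRootElement();

        int doubleTapDelayMs = Integer.parseInt(root.getChildText("doubleTapDelayMs"));
        boolean audioEnabled = Boolean.parseBoolean(root.getChildText("audio"));
        boolean brailleEnabled = Boolean.parseBoolean(root.getChildText("braille"));

        System.out.println("double tap delay: " + doubleTapDelayMs + " ms, audio: "
                + audioEnabled + ", braille: " + brailleEnabled);
    }
}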
The pre-study revealed that the audio output was comprehensible and the braille display functional. We observed that the user needed some time to familiarize himself with the new gestures, but was then able to use the different gestures that had been proposed for accessing the different levels of information. The lasso gesture seemed challenging, as it required first identifying a map element and then circling around it, which is easier to do with the visual than with the tactile sense. There were no problems with accessing the different information levels by means of the buttons. The tap-and-hold gesture for obtaining the distance also worked well.
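As a sketch of how recognized gestures could be dispatched to map functions (hypothetical event types and handler names; in the prototype MT4J’s own gesture processors deliver the events, and their exact API is not reproduced here):

// Illustrative dispatch of recognized gestures to map functions.
// The GestureEvent type, its kinds, and the mapping to functions are placeholders.
enum GestureKind { TAP, DOUBLE_TAP, TAP_AND_HOLD, LASSO }

record GestureEvent(GestureKind kind, double x, double y) {}

class MapGestureHandler {
    void handle(GestureEvent e) {
        switch (e.kind()) {
            case TAP -> announceElementAt(e.x(), e.y());        // speak name of touched element
            case DOUBLE_TAP -> announceDetails(e.x(), e.y());   // e.g. opening hours
            case TAP_AND_HOLD -> announceDistanceFrom(e.x(), e.y());
            case LASSO -> announceElementsInside(e.x(), e.y()); // in practice the event would carry the enclosing path
        }
    }
    private void announceElementAt(double x, double y) { /* text-to-speech output */ }
    private void announceDetails(double x, double y) { /* ... */ }
    private void announceDistanceFrom(double x, double y) { /* ... */ }
    private void announceElementsInside(double x, double y) { /* ... */ }
}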
51 http://www.jdom.org/ [last accessed June 7th 2013]
V.2.1.5 Conclusion
In this section we implemented basic gestural interaction techniques that can be used in interactive maps for visually impaired people. Although a complete study would be required to design advanced gestural interaction, we verified that it was possible for a blind user to access fundamental information (i.e., distance) and to navigate different modes and levels of information via gestural interaction. Furthermore, we observed that gestures should be chosen carefully: gestures like the lasso that are easy for sighted people may be less obvious for visually impaired people. This finding is in line with previous studies on gestural interaction for visually impaired people (Kane, Wobbrock, et al., 2011).
This work only presented a proof of concept, and a more thorough design process would be necessary to refine it. We believe that carefully designed gestural interaction would be important to include in a commercial prototype. For this purpose, it would be important to choose the arrangement of functionality across the different modes based on ergonomic criteria and users’ needs, and to verify that the chosen gestures are usable without sight. In the future, it would be interesting to go beyond the basic gestural interaction provided by the API and to design specific gestural interaction. For this purpose, it would be desirable to include visually impaired people throughout the process, from the creation of ideas to the evaluation. As a concrete example, in a subsequent project we focused on the design of specific interaction techniques for route learning on interactive map prototypes.
Several interaction techniques were suggested. At the end of the session we selected the five interaction techniques that seemed the most promising. First, we proposed guiding the user by verbal descriptions. This technique resembles the instructions given by a navigation system, e.g., “turn left at the next crossing”. We called this technique “Guided Directions”, after a similar technique proposed by Kane, Morris, et al. (2011).
clock face method for orienting themselves in a real environment. We will refer to this
technique as “Clock Face Method”.
Third, we proposed using a musical interface similar to the One Octave Scale (Yairi et al., 2008). We discussed the idea of using a well-known musical piece, such as the beginning of Beethoven’s “Für Elise”, instead of playing the notes of an octave. However, choosing a musical piece seemed more difficult, as the musical knowledge of users depends, for instance, on cultural background and education. Also, the musical piece would have needed to be very short so that it could be played entirely for each route segment. We therefore decided to stick to the idea of the octave. We will refer to this technique as “One Octave Scale”.
The previous techniques were all based on audio output. However, our analysis of non-visual interaction techniques had revealed that tactile feedback is also an interesting means of non-visual interaction. Thus we wanted to explore the use of tactile interaction. Our idea was to use vibrating wristbands that had previously been developed and employed in our research group (Kammoun, Jouffrais, Guerreiro, Nicolau, & Jorge, 2012). The idea was to use vibration patterns of different lengths and rhythms to inform the user whether to turn right or left. A similar concept has been employed for guiding pedestrians with a tactile compass (Pielot, Poppinga, Heuten, & Boll, 2011). We called this technique “Vibrating Wristbands”.
Another aspect that was raised and needed further investigation was how to guide users in case they missed the correct itinerary.
Map Drawing
As in the previous projects, this prototype included a raised-line drawing. In
contrast with previous map drawings, we only wanted to present street information and
no buildings. Moreover, we did not want to provide any knowledge about names of
geographical elements. We decided to draw the streets as single lines because single
lines are easier to follow with a finger than double lines (Tatham, 1991).
We based the drawing on the road network of the city center of Lille, a city in the north of France. This road network was interesting as it was quite complex and the angles between streets were not necessarily right angles (see Figure V.15). In addition, given the distance between Lille and Toulouse, we assumed that participants did not have prior knowledge of the road network. Concretely, we prepared four different itineraries, which were chosen with the goal of having a similar difficulty. The blue route (Figure V.15 a) consisted of six segments with two right and three left turns. The yellow route (Figure V.15 b) consisted of six segments with three right and two left turns. The red route (Figure V.15 c) consisted of six segments with two right and three left turns. The green route (Figure V.15 d) also consisted of six segments with two right and three left turns.
Figure V.15: Map showing the road network of Lille as used for the Route Learning Study.
Four different itineraries (a, b, c, d) were prepared.
52 http://audacity.sourceforge.net/ [last accessed September 21st 2013]
For the “One Octave Scale” interface we needed simulated musical output. We used a virtual MIDI piano for this purpose. However, simulating the output proved tricky. Indeed, the idea was that the notes of the octave were emitted in proportion to the distance that the finger had “traveled” on the map. Therefore, the speed of the output needed to adapt to the speed of the finger. In order to help the experimenter, we marked the notes to be played on the map next to the road. The experimenter also needed to train before running the actual experiment. The One Octave Scale had the disadvantage that the direction for turning was not announced. Therefore, at each crossing the user had to test all possible directions. Wrong turns were announced with a beep in a different octave.
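In a future implementation, beyond the Wizard of Oz simulation, this behavior could be produced with the standard Java MIDI API. The following sketch, with assumed note values and segment length, plays the note of the octave that corresponds to the fraction of the current route segment already covered by the finger.

import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

// Sketch: map the distance traveled along a route segment to one of the eight notes of a
// C major octave and play it with the built-in Java synthesizer.
// Note numbers and the segment length are illustrative assumptions.
public class OneOctaveScaleSketch {
    private static final int[] OCTAVE = {60, 62, 64, 65, 67, 69, 71, 72}; // C4..C5 (MIDI numbers)

    public static void main(String[] args) throws Exception {
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        MidiChannel channel = synth.getChannels()[0];

        double segmentLengthMm = 80.0;      // assumed length of the current route segment
        double traveledMm = 50.0;           // distance the finger has covered so far
        int index = (int) Math.min(OCTAVE.length - 1,
                Math.floor(traveledMm / segmentLengthMm * OCTAVE.length));

        channel.noteOn(OCTAVE[index], 80);  // play the corresponding note
        Thread.sleep(300);
        channel.noteOff(OCTAVE[index]);
        synth.close();
    }
}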
The “Edge Projection” technique was altered from the original publication (Kane, Morris, et al., 2011). As we did not want to provide the names of map elements, there was no need for verbal feedback; we opted for earcons in the form of simple beeps. The user had to search for the start point of the itinerary by gliding first one, then the other finger along the edge menus. A beep was emitted when a finger matched the projected position of the point. When both edge positions were found, the user had to bring both fingers together; the system confirmed when the user had found the point and then activated the next point.
Figure V.16: Photograph of the vibrating wristbands. (a) The vibration motor. (b) The haptic
bracelets with Arduino board. Reprinted with permission from (Kammoun, 2013).
Finally, for the “Vibrating Wristbands” technique, we used two wristbands with vibration motors (see Figure V.16). These wristbands had been handmade in a prior project in our research group (Kammoun, 2013). Each band contained one VPM2 vibration motor from Solarbotics53. The tactile interface was programmable through an Arduino
53 https://solarbotics.com/product/vpm2/ [last accessed September 21st 2013]
Bluetooth board, and the vibration patterns (left, right, both) were defined and uploaded once. The bands made it possible to control the frequency, duration and interval between vibration stimuli for each wristband. We made use of a previously developed smartphone application that could trigger the vibration signals via Bluetooth. Wrong turns were indicated by both wristbands vibrating at once.
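As a sketch of how such a small tacton vocabulary could be encoded on the controlling side (illustrative durations and names, not the patterns actually uploaded to the Arduino board):

// Illustrative encoding of the three tactons used for route guidance.
// Durations and pause values are placeholders, not the actual uploaded patterns.
enum Side { LEFT, RIGHT, BOTH }

enum Tacton {
    TURN_LEFT(Side.LEFT, new int[]{300}),            // one pulse on the left wristband
    TURN_RIGHT(Side.RIGHT, new int[]{300}),          // one pulse on the right wristband
    WRONG_TURN(Side.BOTH, new int[]{150, 100, 150}); // pulse-pause-pulse on both wristbands

    final Side side;
    final int[] pulsePatternMs; // alternating vibration/pause durations in milliseconds

    Tacton(Side side, int[] pulsePatternMs) {
        this.side = side;
        this.pulsePatternMs = pulsePatternMs;
    }
}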
Pretests
We conducted pretests so that the experimenter could train for the Wizard of Oz sessions. Several observations emerged from these pretests that required modifications of the experimental protocol.
First, we observed that it was necessary to follow each itinerary twice. Our protocol did not foresee a familiarization phase for getting used to the interaction technique, and after exploring an itinerary once, participants were not capable of remembering the route. We therefore decided that the same route would be followed twice: once for getting used to the technique and a second time for learning the route.
Furthermore, we observed that the “Edge Projection” technique did not work for learning the itineraries. In contrast to the other interaction techniques in our study, “Edge Projection” did not continuously guide the user’s fingers along a route but taught them connection points that needed to be mentally connected. As “Edge Projection” is a bimanual technique, users had to remove their hands from the last position, and it was then hard for them to remember the point that they had learnt before. In the study by Kane, Morris, et al. (2011), edge projection had proved successful for finding landmarks on a map; however, it appeared less well adapted for learning itineraries. Consequently, we decided not to investigate “Edge Projection” any further in this study.
Figure V.17: Perspectives of the user (red) and the traveler as represented by the finger
(blue). (a) Orientations are aligned. (b) Orientations are misaligned.
Participants
We recruited six sighted university students as participants (see Table V.1). Age varied between 21 and 23 years with a mean of 21.83 (SD = 0.98). All but one participant were male; the female participant was also the only left-handed one. Participants were blindfolded during the experiment.
Table V.1: Participants in the Wizard of Oz study. Means and standard deviations are
reported in the two rightmost columns. SBSOD: Santa Barbara Sense of Direction Scale.
Procedure
In order to compare the usability of the four remaining interaction techniques (Guided Directions, Clock Face, One Octave Scale and Vibrating Wristbands), we organized Wizard of Oz sessions.
Users evaluated the system in individual sessions, which took place in the ULYSS laboratory at the IRIT research institute in Toulouse, France. Video was recorded with the participants’ consent.
On arrival in the laboratory, participants were informed about the aim of the study, i.e., testing different applications for guiding the user along an itinerary on an interactive map. In order to motivate them, the experimenter introduced a scenario in which they had to prepare a holiday trip to an unknown city (similar to the scenario in the experimental study of the interactive map, see IV.2.3.3.a). As the blindfolded sighted participants were not used to tactile map reading, they were then allowed to familiarize themselves with the raised-line map. They did not get any audio information on map content or itineraries, but if necessary the experimenter helped them follow the raised lines. Once the participants felt comfortable with the map representation, the experimenter introduced the task, which consisted of following the guiding instructions and memorizing the itinerary (see Figure V.18). Every user tested the four different conditions. For each condition the experimenter first described the technique. Users were informed in advance that they would be asked to reproduce the itineraries. The users could then try the technique twice. Afterwards, they had to answer a SUS questionnaire, and we also collected qualitative feedback. The same procedure was then repeated for the three other conditions.
Figure V.18: A participant while following the route guidance (with the right index finger).
Observed Variables
The independent variable in our study was the type of interaction technique, which was designed as a within-participant factor. As the four itineraries possessed an equal number of segments and turns, we did not expect an impact of the itinerary on the results. The different interaction techniques were crossed with the four different itineraries, and the order of presentation was randomized to prevent effects of learning or fatigue.
direction to turn. The user had to explore each direction at the crossroad. An error was counted when he/she continued in a direction even though the “wrong turn” sound had been emitted. Satisfaction was measured with the SUS questionnaire. We used the questionnaire from the experimental study (see IV.2.3.4.b) with minor adaptations (for instance, replacing “map” with “technique”). The questionnaire is presented in the appendix (VII.9.1).
Results
We hypothesized that the interaction techniques would differ in exploration time, number of wrong turns and satisfaction, without specifically favoring any of the techniques. Therefore, we investigated the four techniques with regard to these different aspects. An alpha level of .05 was used for statistical significance in every test.
Exploration time was measured for the first and the second exploration. The observed time values for the first exploration were not normally distributed (Shapiro-Wilk W = 0.88, p = .03). The comparison across the type of interaction technique in a Friedman test was not significant (χ²(6) = 0.93, p = .82).
Figure V.19: Boxplot (25 – 75% interval, minimum and maximum value) of the second exploration time for the four different interaction techniques. The blue dots represent the median.
The time values for the second exploration were normally distributed (Shapiro-Wilk W = 0.95, p = .29). Accordingly, they were compared across the type of interaction technique in an analysis of variance (ANOVA). The effect approached significance (F(3,15) = 2.63, p = .09). Figure V.19 reveals a tendency for the “Vibrating Wristbands” being the quickest and the “Guided Directions” being the slowest technique.
The distribution of the time values was not normal (Shapiro-Wilk W = 0.92, p = .005). The result of the Friedman test was not significant (χ²(12) = 4.22, p = .23).
Errors (Effectiveness)
As for the time values, errors were measured for the first and the second exploration. Errors for the first exploration (Shapiro-Wilk W = 0.70, p < .001), for the second exploration (W = 0.78, p < .001), and for both explorations taken together (W = 0.73, p < .001) were not normally distributed. Therefore, all error values were analyzed in Friedman tests across the type of interaction technique.
Figure V.20: Boxplot (25 – 75% interval and maximum value) of the errors for both
explorations taken together for the four different interaction techniques. The blue dots
represent the median.
For the first exploration the Friedman test was not significant (χ²(6) = 4.92, p = .23). The Friedman test for the second exploration revealed a significant result (χ²(6) = 10.35, p = .016). Similarly, for the error values of both explorations taken together there was a significant result (χ²(12) = 13.97, p = .03). Ranked from the fewest to the most errors, the techniques were One Octave Scale, Vibrating Wristbands, Clock Face and Guided Directions (see Figure V.20). Pairwise Wilcoxon rank sum tests with Bonferroni correction (alpha level = .0125) revealed that only the difference between Guided Directions and One Octave Scale was significant (N = 11, Z = 2.93, p = .003).
Satisfaction
Satisfaction was measured with the SUS questionnaire. The SUS results were not normally distributed (Shapiro-Wilk W = 0.91, p = .04). Therefore, the values were compared across the type of interaction technique in a Friedman test. The result approached significance (χ²(6) = 6.41, p = .09). Figure V.21 shows a tendency for the One Octave Scale receiving the best satisfaction values and the Clock Face receiving the lowest.
Figure V.21: Boxplot (25 – 75% interval, minimum and maximum value) of the results from
the SUS questionnaire for the four different interaction techniques. The blue dots represent
the median.
We also asked participants which interaction technique was their favorite and which they liked least. The One Octave Scale was ranked as the favorite technique five out of six times. One participant liked the Guided Directions most, and one of the participants who had stated a preference for the One Octave Scale said that the Guided Directions were better for memorization. Three participants stated that they liked the Clock Face least, and two participants stated that they disliked the Guided Directions. One participant did not have a technique that he disliked most.
Discussion
The presented study was part of an internship and thus limited in time. As expected, given the low number of participants, the study revealed few statistically significant results. Yet, we were able to detect some tendencies. Of course, these tendencies have to be considered with caution, but we will take them as a basis for a more detailed user study that is currently in preparation. More precisely, the outcome allows reflecting on each of the different interaction techniques.
The only prototype within the interactive map corpus that used musical output (see II.4.4.1.c) was the One Octave Scale interface (Yairi et al., 2008). Our study suggests that there may be no good reason why this interaction technique has not been investigated further. In our Wizard of Oz simulation, the One Octave Scale interface was the one with the smallest number of errors, and it was also the one that most participants preferred.
All users stated that the technique allowed a good estimation of the total distance
of the segment. One user specifically stated that he could estimate the remaining distance
and slow down before arriving at a crossing. However, several users stated that the
system was missing an indication for the next direction. Propositions for indicating
directions included using stereo sound, playing different musical pieces depending on
the next direction or playing one specific note just before arriving at the crossing.
Vibrating Wristbands
Our interest in using vibrational feedback had been piqued by the analysis of non-visual interaction techniques in existing interactive map prototypes (see II.4.4.2). In the present study, the use of tactons (Brewster & Brown, 2004) proved efficient. Indeed, for the second exploration there was a tendency for the Vibrating Wristbands to be the quickest interaction technique. The data suggest that there was a learning effect between the first and the second exploration. As stated before, understanding tactons is not innate and has to be learnt (see II.4.4.2). Yet, our results suggest that with a small number of patterns that are easy to distinguish, tactons can become a powerful means of interaction.
Three users stated that they would prefer to have two wristbands, which contradicted the findings from our pretests. Another participant suggested a different coding for the tactons (one long: turn right; two short: turn left; three short: wrong turn). In addition, one user suggested adding a signal for “continue straight”. Several users stated that they needed to concentrate with this technique. One user underlined that the technique was difficult, but that because he needed to concentrate he made fewer errors.
Clock Face
The Clock Face method had the worst results in the SUS questionnaire. It may be hypothesized that this is because we tested with sighted people, who do not normally orient themselves with the help of the clock face. We therefore suppose that we might have obtained better results with visually impaired people.
Unsurprisingly, one participant stated that he had problems linking hours and directions. Another participant stated that this technique did not foster good memorization. On the other hand, one user stated that this technique was the most intuitive (although he preferred the One Octave Scale interface). A different user stated that the system was handy at crossings with more than four streets, because it allowed indicating precise directions.
Guided Directions
The Guided Directions was the technique with which users made the most errors. However, this was not reflected in the satisfaction ratings, which were better than for the Clock Face method. We suggest that users liked this technique because it relied on familiar guiding instructions, as given by navigation systems or by another person. We suggest that the poor results concerning the error rate result from a change in perspective: the misalignment of the map reader’s perspective and the traveler’s perspective explained above (Figure V.17) also occurs for this interaction technique.
As expected, three users stated that the instructions were natural and easy to follow. On the other hand, one user stated that the instructions were not sufficiently precise at intersections of several streets. There was no qualitative feedback that explained the higher error rate.
Implementation
The results of the Wizard of Oz study confirmed that the development of the four different interaction techniques is indeed interesting. We now hypothesize that the One Octave Scale or the Vibrating Wristbands technique might be the most usable.
As in the previous prototypes, the maps were designed in SVG format with Inkscape. Thus, the segments of the different routes could be tagged with specific labels that were then parsed in the code. A configuration file provided the possibility to select the designated itinerary prior to compiling and executing the code; only this itinerary was then interactive during the exploration of the map. In addition to the map from the Wizard of Oz study, we introduced four new itineraries so that a familiarization phase could be added to the protocol.
Evaluation
We plan to conduct an extensive user study to evaluate the different interaction
techniques. Several interesting perspectives emerged from the Wizard of Oz study.
First, we plan to investigate whether tactons could be the most efficient interaction
techniques. We hypothesize that including more participants in the study might reveal
significant effects.
Furthermore, we believe that the results for the Clock Face interface would have been different if tested with visually impaired people. It would therefore be interesting to include both sighted and visually impaired participants in order to evaluate whether their preferences differ. Few studies so far have investigated differences between sighted and visually impaired users concerning non-visual interaction.
54 http://www.cloudgarden.com/ [last accessed September 21st 2013]
study whether using a tangible object as a representation of the traveler on the map improves spatial cognition. In contrast with the finger, a tangible object is external to the observer’s body, and the object can also be chosen so that it gives a clear indication of direction (for instance, an arrow). Finally, the objective of route learning is to successfully navigate in a real environment, and for this purpose egocentric knowledge is needed. It would therefore be desirable to study whether the presented interaction techniques successfully allow the transfer from an allocentric to an egocentric perspective. In fact, it can be hypothesized that the Guided Directions technique is best adapted for this task, as the user memorizes the direction of turning at each crossing. Evaluating navigation skills in a real environment after learning routes with the help of the interactive map may therefore be envisioned.
V.3 Conclusion
In this chapter we investigated Research Question 4 (How can non-visual
interaction enhance tactile map exploration?). We presented preliminary investigations
with the purpose of opening up avenues for future work. More precisely two sub-
questions were studied.
First, we proposed the Kintouch prototype with the goal of better understanding
how visually impaired people read tactile maps. The prototype was composed of a multi-
touch screen and a Kinect camera. Fusion of data from both sources allowed tracking
finger movements during raised-line map reading. Preliminary evaluations suggest that
this technology could be helpful for studies on haptic exploration strategies, even if
further development would be needed. We suggest that a better understanding of haptic
exploration strategies would be helpful to design interaction techniques that are truly
accessible and usable.
suspect a problem of transfer between the different perspectives of the map reader and
the traveler. The four interaction techniques have been implemented. We propose
possible investigations for a user study that we plan to do in the near future.
Chapter VI - Discussion
Chapter I defined the context and the scope of the research. We situated the research in the field of accessible geographic maps for visually impaired people, and more precisely in the field of interactive maps. We also presented four research questions that we addressed in the following chapters, as well as the methodology of the thesis.
Chapter III treats two questions: the design of an interactive map and the adaptation of the design process to include visually impaired participants. First, the map was composed of a multi-touch screen, a raised-line map overlay and speech output. For each map component we presented different versions that were developed through an iterative process. Second, we based our work on a participatory design process in four steps (analysis, design, prototyping and evaluation). As the methods of participatory design mostly rely on the visual sense, we needed to make the process more inclusive for visually impaired people. We proposed several recommendations in this regard (VII.7).
What is the design space of interactive maps for visually impaired people?
And what is the most suitable design choice for interactive maps for
visually impaired people?
In chapter II, we opened up the design space. We included 43 articles published over the past 26 years in the corpus of interactive maps. These articles were analyzed with regard to devices, modalities and interaction techniques (a supplementary analysis regarding the origins of the projects, timeline and map content is provided in appendix VII.5). Various devices have been used in existing prototypes. We classified the devices into four categories according to common principles of input sensing and representation of information: haptic devices, tactile actuator devices, touch-sensitive devices and other devices. With regard to input interaction, we presented speech recognition and touch input (including accessible gestural interaction). On the output side, different audio and touch interaction techniques were presented. Audio output is not limited to speech, even if speech is very often employed in assistive devices for visually impaired people; auditory icons, earcons, music and spearcons provide alternatives. We suggested the use of ambient sound, earcons or music to provide information complementary to speech. In some prototypes, audio output is provided in 3D, which appears to be an interesting possibility. Our analysis revealed that most existing projects combined different modalities. There is also evidence for better performance if more than one output modality is provided. Touch output can be cutaneous, kinesthetic or haptic. We discussed the possibilities provided by vibro-tactile output, raised-pin displays and laterotactile displays. We underlined the importance of a fixed haptic reference frame for the creation of a cognitive map. We argued that raised-line map overlays are well adapted for presenting spatial information to visually impaired people: first, they rely on previously acquired map reading skills; second, they provide a fixed reference frame.
Participatory design methods are often based on visual modalities. In this project we have demonstrated that it is possible to make the process itself accessible in order to include visually impaired people. Concretely, we applied a design process in four steps: analysis, generation of ideas, prototyping and evaluation. Participants in our project were involved from the start to the end of the project. From this experience we can give several recommendations (see VII.7).
In our project, participants were involved all along the process. Yet, the prototypes were created with the users as “consumers” and not as “designers”. It would be interesting to involve users even more closely and let them design their own prototypes. This, however, demands that users become experts in the technology that is used.
In this thesis we presented a study with 24 blind users. The objective of the study was to compare the usability of an interactive map and a paper map, both designed for visually impaired people. We expected higher usability for the interactive map: more precisely, we expected better spatial learning (effectiveness), shorter learning time (efficiency) and higher user satisfaction. This hypothesis was partially confirmed: we did not observe any differences with regard to spatial learning, but learning time was significantly shorter for the interactive map and more users preferred the interactive map over the paper map. Furthermore, the interactive map was accessible to a blind person with low braille reading skills who could not read the raised-line map. Indeed, satisfaction with the interactive map proved to be independent of age-related factors, whereas braille reading experience and the proportion of lifetime with visual impairment were correlated with satisfaction for reading the paper map.
More precisely, we also gained insight into spatial learning. Spatial knowledge was measured at short term (directly after exploration) and at long term (two weeks after exploration). Our study revealed that the map type influenced spatial learning neither at short nor at long term. The absence of a significant effect in effectiveness between the two maps is probably related to the small number of elements that were presented on the maps. We suggest that maps with a greater complexity might truly benefit from interactivity. Significant differences in effectiveness emerged for the three types of spatial knowledge (landmark, route and survey). At short term, landmark knowledge was significantly superior to both route and survey knowledge. In contrast, at long term, survey scores were significantly higher than landmark and route scores. Furthermore, we observed an impact of personal characteristics on spatial knowledge. Spatial scores for both maps were strongly correlated with self-evaluated expertise in reading tactile images. Interestingly, the learning of landmarks was improved if the interactive map was presented before the paper map. We suggested that first exploring an interactive map might remove apprehension, increase map learning skills, and thus help with reading any kind of map at a later moment.
To our knowledge, this is the first study that systematically compared an interactive map with a raised-line map with braille. The result of the study is of course limited to the specific type of interactive map (multi-touch screen, raised-line overlay and speech output) and cannot be generalized to other interactive map types. Nevertheless, the results are encouraging, as they show that interactive maps can be designed so as to support visually impaired users’ spatial learning.
Interactive maps are accessible and pertinent tools for presenting spatial
knowledge to visually impaired people,
they are more usable than raised-line paper maps
and they can be further enhanced through advanced non-visual interaction.
This statement has been addressed and verified through this research. It is
answered through the four research questions, as discussed in the previous section. In
summary, interactive maps have proved to be accessible tools for presenting spatial
information to visually impaired people. In a direct comparison with raised-line maps we
have observed an improved efficiency and satisfaction, whereas we have not found any
VI.1.3 Contributions
This thesis makes the following principal and secondary contributions to research
in the field of HCI.
Design propositions for interaction techniques for route learning with the
help of an interactive map prototype and preliminary evaluation results.
At the ACM CHI conference 2013, we proposed a Special Interest Group on Non-Visual Interaction55 with the objective of bringing together projects aimed at visually impaired people and projects aimed at sighted people (Brock, Kammoun, et al., 2013). We believe
55 The community of the SIG Non-Visual Interaction can be joined online on Facebook (http://bit.ly/sig-nvi) or Google Groups (http://bit.ly/GG-SIG-NVI).
that it would be beneficial for all researchers in the field to work together more closely, as the challenges for non-visual interaction (NVI) seem to overlap regardless of users’ abilities. We believe that one of the current research challenges is finding out whether the problems encountered by visually impaired people are similar to those of sighted users in “extraordinary” situations, and how these problems can be addressed to achieve high usability and accessibility for all users.
VI.2.2.1 Touch
Touch interaction has been a prevalent technique in non-visual interactive maps.
However, some challenges exist when making touch-sensitive devices accessible.
Furthermore, we believe that taking into account visually impaired users’ haptic
exploration behavior would improve the usability of gestural interaction.
unconsciously touching the surface while holding the device. The intended touch
movement was a single finger moving over the surface in order to read the book.
Because this movement was regular, they were able to track it and eliminate the unintended input. McGookin et al. (2008) suggested not using short, impact-related gestures, such as single taps, because they are likely to occur accidentally. Likewise, Yatani et al. (2012) proposed implementing a double tap rather than a single tap, because it is less likely to occur by chance. In their study on accessible gestural interaction, Kane, Morris, et al. (2011) suggested that placing interactive zones at the edges or corners of a device, or in other areas that are easily distinguishable, would reduce the likelihood of triggering the interaction accidentally.
More recently, we have begun working on a new study (Simonnet, Jonin, Brock, & Jouffrais, 2013). The aim of this study is to better understand which interaction techniques best support spatial learning. Kane, Morris, et al. (2011) investigated this aspect on a large touch table, whereas we use a tablet PC. For this purpose, we have implemented different interaction techniques with the aim of supporting the learning of spatial information (see Figure VI.1). A first interaction technique is based on edge projection as proposed by Kane, Morris, et al. (2011). In a second interaction technique, “Single Side Menu”, an alphabetic menu of list items is projected onto the left border of the screen. While one finger skims through the list, the other finger is guided towards the corresponding destination by verbal directions (right, left, down, up). The third technique, “Spatial Regions”, is inspired by the Neighborhood Browsing technique of Kane, Morris, et al. (2011); however, the map is separated into a regular grid instead of adapting the size of the regions to the map content. User studies of these interaction techniques are currently ongoing. Our objective is to obtain useful insights into the design of interaction techniques for spatial learning.
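As a simple illustration of the “Spatial Regions” idea (a sketch with assumed grid and screen dimensions, not the code of the ongoing study), a touch position can be mapped to the grid cell whose content is then announced:

// Sketch of the "Spatial Regions" principle: the map is divided into a regular grid
// and a touch position is mapped to the cell it falls into. Grid size and screen
// dimensions are illustrative assumptions.
public class SpatialRegions {
    private static final int COLS = 4, ROWS = 3;
    private static final double SCREEN_W = 1280, SCREEN_H = 800; // tablet resolution (assumed)

    /** Returns the grid cell (column, row) containing the touch position. */
    public static int[] cellAt(double touchX, double touchY) {
        int col = Math.min(COLS - 1, (int) (touchX / (SCREEN_W / COLS)));
        int row = Math.min(ROWS - 1, (int) (touchY / (SCREEN_H / ROWS)));
        return new int[] { col, row };
    }

    public static void main(String[] args) {
        int[] cell = cellAt(700, 350);
        // In the prototype, the map elements contained in this cell would be announced by speech.
        System.out.println("touch falls into region " + cell[0] + "," + cell[1]);
    }
}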
Figure VI.1: Propositions of interaction techniques for spatial learning on a tablet. (a) Edge
Projection. (b) Single Side Menu. (c) Spatial Regions. Reprinted from (Simonnet et al., 2013).
While studies have shown that touch is more appropriate for determining positions and forms, speech recognition is promising for certain tasks. For visually impaired people, entering textual information by speech is convenient. For example, the user could enter a certain landmark (“cinema”), position a finger on the map, and let the map guide the finger to this position (right, left, up, down instructions). Speech input could also be used for changing modes instead of button-based interaction (see V.2.1).
So far, there has not been much research on tangible interaction with visually impaired people. Yet, first studies on tangible interaction in non-visual maps have been promising (Milne et al., 2011; Pielot et al., 2007). To this end, we have recently started a research project with students from the Master 2 IHM in Toulouse (Kévin Bergua, Jérémy Bourdiol, Charly Carrère and Julie Ducasse). The aim of this project is to design and develop a multimodal prototype that allows visually impaired and sighted people to work collaboratively on a geographic map. The prototype will be based on the use of a multi-touch table (Immersion ILight table) and tangible interaction.
56 http://wiki.openstreetmap.org/wiki/HaptoRender [last accessed August 13th 2013]
line maps. Indeed, the absence of affordable and robust tactile devices is responsible for problems related to displaying dynamic graphical information (Lévesque, 2005).
However, we argue that this disadvantage is going to be resolved in the future due to the emergence of touch-sensitive surfaces with cutaneous feedback. Laterotactile displays and raised-pin displays, as presented in the classification of interactive maps (II.4.4.2), represent a step in this direction. Recently, further technology has emerged. In this subsection we describe some of this recent technology, without aiming to be exhaustive.
Some devices produce electrostatic friction between a touch surface and the
user’s fingertip. Indeed, it has been shown that friction is a significant factor in touch
perception. Friction feels more natural than vibrotactile feedback and provides
continuous feedback (Lévesque et al., 2011). A comprehensive review on friction has
recently been published by Adams et al. (2013). The principle of electrostatic friction was
used in TeslaTouch (Bau, Poupyrev, Israr, & Harrison, 2010). The friction was produced
not by mechanical actuation but by electrovibration: because the finger and the surface
form a capacitor, the stimulation is only perceived while the finger is moving. A periodic
electrostatic stimulation applied to the surface then deforms the skin of the sliding finger. The
technology is scalable to devices of any size, shape and configuration. In addition, it can
be combined with touch-sensitive surfaces, so that the device provides both input and
output. In TeslaTouch (Bau et al., 2010) electrovibration was created on a transparent
surface, which allowed it to be used with a variety of devices. Concretely, it has been
combined with optical multi-touch technology. In a study with ten participants, it has
been shown that textures with characteristics such as smoothness, fineness, gentleness or
stickiness could successfully be differentiated by varying frequency and voltage.
TeslaTouch has been successfully used in applications for visually impaired people (C.
Xu, Israr, Poupyrev, Bau, & Harrison, 2011). It was possible to display dots and lines. Yet,
it proved difficult for visually impaired subjects to recognize braille letters with this
technology. A similar technology has been commercially developed by Senseg57. Based
on their previous work, in the “REVEL” project Bau & Poupyrev (2012) reversed the
principle and made the human being the carrier of the electric signal. In this technology,
“reverse electrovibration”, a weak electrical signal was injected into the user’s body,
thus producing an electrical field around the finger. By doing so, the technology became
independent of the hardware, and any object could be augmented with tactile feedback
57 http://senseg.com/ [last accessed September 26th 2013]
(provided that the object is coated with an insulator-covered electrode). Various textures
could be produced by varying the amplitude, shape and frequency of the signal.
Figure VI.2: Photograph of the Stimtac prototype as presented in (Casiez et al., 2011).
Reprinted with permission.
grid of electromagnets. This technique allowed forces to be felt without having to move the
finger. Users could detect the signal up to a height of 35 mm.
The technologies presented above are only a selection of current devices and give
only a glimpse of what will be possible in the near future. Obviously, dynamic
tactile feedback is promising for non-visual interfaces and would allow new
interaction techniques and functionality. We hope that, as technology improves, it will
soon be possible to represent map features on such displays.
Furthermore, the user study presented in this thesis has some limitations,
which might be addressed in the future. We made certain choices regarding the
map design, and the positive results obtained are limited to this specific design
(multi-touch, raised-line overlay, speech output). It would be interesting to compare the
map to a map with different output modalities, for instance non-verbal sound.
Most importantly, the study did not reveal any differences regarding effectiveness
(spatial knowledge). We suppose that this might be due to the limited complexity of the
maps. Indeed, the readability and thus the effectiveness of a tactile map is impaired if the
map contains a great number of elements and legends. In contrast, it is possible to
present a richer and more complex content with an interactive map (Hinton, 1993).
Therefore, it would be interesting to repeat the study with a more complex map.
58 http://www.tactustechnology.com/index.html [last accessed September 26th 2013]
want assistive technology, they also want this technology to be fun to use. To this end,
Merabet et al. (2012) developed computer-based video games with the aim of teaching
navigation skills to blind people. User experience is still rarely addressed in the field of
assistive technology. It might be interesting to center future research around this area.
If maps could be created automatically, the visually impaired person would then need
to know how to load the digital map that belongs to the corresponding map overlay.
Some approaches have been discussed in the literature. In the Talking TMAP
project (Miele et al., 2006), a bar with vertical stripes served to identify each
raised-line map. By pressing on the stripes, the sheet was identified and the
corresponding dataset loaded. Fitzpatrick and McMullen (2008) investigated the use of
different technologies for this purpose. They observed that barcode readers and RFID
chips both required additional hardware. As an alternative they proposed a TIN
(tactile pin), a code of tactile dots on the raised-line sheet composing a three-digit
number. The user would load a digital map on the computer, and the computer would
then provide the three-digit number that could be found on the corresponding raised-line
map. The issue of pairing digital content with the corresponding raised-line map needs
to be solved before interactive maps can be widely adopted.
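As an illustration of how such an identification code could be used in software, the following minimal sketch (hypothetical; the registry, file names and function are our own and not taken from the cited systems) pairs a three-digit tactile identification number with a digital map dataset:

# Illustrative sketch: pairing a raised-line sheet with its digital map through
# a three-digit tactile identification number (TIN). Registry and file names are
# hypothetical.
MAP_REGISTRY = {
    "042": "campus_center_map.xml",
    "137": "city_center_map.xml",
}

def load_map_for_sheet(tin: str) -> str:
    """Return the digital map file registered for the code embossed on the sheet."""
    try:
        return MAP_REGISTRY[tin]
    except KeyError:
        raise ValueError("No digital map registered for sheet code " + tin)

# Example: the user reads the dot code "1-3-7" on the overlay and enters it.
print(load_map_for_sheet("137"))   # -> city_center_map.xml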
Given the current low prices of tablets and touch screens, it is not surprising that
schools and associations for visually impaired people are beginning to adopt this technology for
teaching. As an example, the Tact2Voice59 project has been developed with the aim of
providing visually impaired students with access to graphical data. This prototype was
based on an iPad and audio output. It was intended for both blind and low-vision students,
as it could be used with or without a raised-line overlay. Documents were provided by mail
or download, and the teacher could adapt the existing material to the student's needs. It
would then work similarly to the prototype described in this thesis. Despite these
technological possibilities, we have the impression that they are not yet systematically
employed in the classroom, and we believe that it would be beneficial to take
advantage of this technology quickly. This is also supported by the findings of our user study,
which suggest that exploring an interactive map before a raised-line map might remove
apprehension, increase map learning skills, and thus help with reading any kind of map at a later
moment.
VII Appendix
VII.1 List of Publications
VII.1.1 Journals
Brock, A., Truillet, P., Oriola, B., Picard, D., Jouffrais, C., Interactivity Improves Usability
of Geographic Maps for Visually Impaired People (submitted to HCI, under revision)
Brock, A., Oriola, B., Truillet, P., Jouffrais, C., Picard, D. Map design for visually impaired
people: past, present, and future research, in: Darras, B. & Valente, D. (Eds.) Handicap
et Communication, MEI 36, Paris: l'Harmattan, December 2013, pp. 117-129.
Brock, A., Touch the map! Designing Interactive Maps for Visually Impaired People,
ACM Accessibility and Computing, Issue 105, January 2013, pp. 9-14.
Dray, S., Peters, A., Brock, A., Peer, A., Gitau, S., Jennings, P., Kumar, J., Murray, D.
Leveraging the Progress of Women in the HCI Field to Address the Diversity Chasm.
ACM SIGCHI Conference on Human Factors in Computing Systems - CHI, Paris,
France, April 27th to May 2nd 2013, Panel.
Dray, S., Peer, A., Brock, A., Peters, A., Bardzell, S., Burnett, M., Churchill, E., Poole, E.
Exploring the Representation of Women Perspectives in Technologies. ACM SIGCHI
Conference on Human Factors in Computing Systems - CHI, Paris, France, April 27th to
May 2nd 2013, Panel.
Brock, A., Lebaz, S., Oriola, B., Picard, D., Jouffrais, C., Truillet, P. Kin’ touch:
Understanding How Visually Impaired People Explore Tactile Maps, ACM SIGCHI
Conference on Human Factors in Computing Systems - CHI, Austin, Texas, USA, 2012,
Work-In-Progress
Brock, A., Truillet, P., Oriola, B., Jouffrais, C. Usage of Multimodal Maps for Blind People:
Why and How, Proc. of ACM International Conference on Interactive Tabletops and
Surfaces, Saarbruecken, Germany, 2010, Poster
VII.1.4 Workshops
Simonnet, M., Jonin, H., Brock, A., Jouffrais, C. Conception et évaluation de techniques
d’exploration interactives non visuelles pour les cartes géographiques, SAGEO -
atelier Cartographie et Cognition, Brest, France, September 23rd to 26th 2013
VII.2.1 Cataracts
Definition: Clouding that develops in the crystalline lens of the eye or in its lens
capsule. It ranges from slight to complete opacity.
Symptoms:
Blurry vision
Early stage: the power of the lens may be increased which results in near-
sightedness (myopia)
Gradual yellowing of the lens may reduce the perception of blue colors.
Loss of contrast sensitivity: contours, shadows and color vision are less bright.
Potentially complete vision loss if untreated.
Almost always one eye is affected earlier than the other.
Symptoms:
60 Retrieved from Wikipedia http://www.wikipedia.org/ [last accessed January 10th 2014]
Propagation: Up to 80% of all patients who have had diabetes for 10 years or
more are affected. Risk increases with duration of diabetes.
VII.2.3 Glaucoma
Definition: Damage to the optic nerve in a characteristic pattern, normally
associated with increased fluid pressure in the eye and loss of retinal ganglion cells.
"Ocular hypertension": increased pressure within the eye without
any associated optic nerve damage. “Normal tension” or “low tension” glaucoma: optic
nerve damage and associated visual field loss, but normal or low intraocular pressure.
There are many different subtypes of glaucoma.
Symptoms:
Causes: ocular hypertension, ethnicity (higher risk for East Asian descendants),
three times more risk for women than men, various rare congenital or genetic eye
malformations, family history, age (the loss of vision often occurs gradually over a long
period of time, and symptoms only occur when the disease is quite advanced, 1/10 of
people over 80 are affected).
VII.2.4 Iritis
Definition: Inflammation of the iris. “Acute iritis”: heals within a few weeks and
improves quickly when treated. “Chronic iritis”: exists for months or years before
recovery, does not respond to treatment as well as acute iritis, higher risk of serious
visual impairment.
Symptoms:
Ocular pain
Pain in affected eye when light shines in unaffected eye
Blurred or cloudy vision
Reddened eye, especially adjacent to the iris
White blood cells seen as tiny white dots and protein resulting in a grey or
near-white haze
Adhesion of iris to lens or cornea
Motion sickness
Symptoms:
Propagation: most common genetic disease of the optic nerves aside from
glaucoma
Symptoms:
Loss of the central vision, so that reading and recognizing faces can
become difficult, although enough peripheral vision may allow other
activities of daily life.
Loss of contrast sensitivity: contours, shadows, and color vision are less
bright.
Rarely total loss of vision.
Symptoms:
Causes: multiple sclerosis, infection (e.g. syphilis, Lyme disease, herpes zoster),
autoimmune disorders (e.g. lupus), inflammatory bowel disease, drugs, diabetes, gender
(higher risk for females), typically affects young adults ranging from 18–45 years of age.
Symptoms:
Dense shadow that starts in the peripheral vision and slowly progresses
towards the central vision
Impression that a curtain was drawn over the field of vision
Straight lines suddenly appear curved
Central visual loss
Potentially total loss of vision
Causes: break in the retina that allows fluid to pass; inflammation, injury or
vascular abnormalities that result in fluid accumulating underneath the retina without the
presence of a hole, tear, or break; tumor on the layers of tissue beneath the retina; injury,
inflammation or neovascularization that pulls the sensory retina from the retinal pigment
epithelium; trauma; more common in people with severe myopia; more frequent after
surgery for cataracts; proliferative diabetic retinopathy; 15% chance of it developing in
the second eye.
Propagation: Around 5 new cases in 100,000 persons per year. More frequent in
middle-aged or elderly populations, with rates of around 20 in 100,000 per year. The
lifetime risk in normal individuals is about 1 in 300.
Symptoms:
VII.2.10 Retinoblastoma
Definition: Rapidly developing cancer in the cells of the retina, the light-detecting
tissue of the eye.
Symptoms:
Symptoms:
Greater risk for strabismus, glaucoma, cataracts and myopia later in life
Causes: In preterm infants, the retina is often not fully vascularized; ROP occurs
when the development of the retinal vasculature is arrested and then proceeds
abnormally. Very low birth weight is an additional risk factor. Oxygen toxicity and
relative hypoxia can contribute to the development of ROP. Supplemental oxygen
exposure is a risk factor.
VII.3.1 Device
Users and systems both perform actions on physical devices, either to acquire or
to deliver information (Nigay & Coutaz, 1997). Devices should be clearly distinguished
from modalities, as information presentation is not dependent on how modalities map
onto devices (Bernsen, 1995). For instance similar functionality can be achieved with a
mouse or other pointing devices.
Coutaz, 1997). Interaction languages are languages employed by the user or the system
to exchange information. Buxton (2007) proposed a contrasting view. He claimed that
interaction techniques could be the same regardless of the device used (for
instance pointing with a mouse or pointing with a touchscreen). Flexibility and robustness
are important characteristics of interaction techniques (Nigay & Coutaz, 1997). Interaction
flexibility denotes the multiplicity of ways with which the user and the system exchange
information. Interaction robustness relates to the successful achievement of goals.
VII.3.4 Modality
The definition of modality in human-computer interaction is not strictly equivalent to the
definition of human sensory modality (Bernsen, 1995). Bernsen (2008) defined modalities
as ways of representing information in some physical medium. A physical medium is light
for vision, sound waves for audition and mechanical contact for touch. Consequently, a
modality is defined by its physical medium and its particular "way" of representation.
Text, graphics and gestures are all perceived by vision and transported by the physical
medium of light. However, the way of presenting the information differs, and thus so does the
modality. Bernsen (2008) mentioned different aspects that influence the choice of
modalities. First, modalities differ in expressiveness, i.e. modalities are adapted for
transporting different types of information. As an example it may be easier to understand
spatial relations from a map than from verbal descriptions. Second, the purpose is
important. For instance, a verbal description might be sufficient to convey general
information about an environment. However, someone who wants to travel to a place
might want to look at a map beforehand. Third, the abilities and skills of the user, for
instance regarding perception and cognition, also influence the modalities to be used.
For instance, information must be presented in a non-visual modality to visually impaired
people, but also to sighted people in specific situations. For designers it is easier to
develop accessible applications if more modalities are available (Bernsen, 2008).
Bernsen (1995) proposed a taxonomy of input and output modalities for task-
oriented human-machine interaction. This taxonomy was based on the media of graphics
(including text), acoustics and kinesthetics. So far, only few technical systems
make use of olfaction and the sense of taste (for an example see Nakamura &
Miyashita, 2012). While in the long term the taxonomy will probably need to be extended to the
media of smell and taste, for the concrete application of interactive maps it makes sense
to investigate only vision, audition and touch. Bernsen (1995) further distinguished
linguistic and non-linguistic, analogue or non-analogue, arbitrary or non-arbitrary, static
or dynamic modalities. Linguistic modalities are for instance speech and text, whereas
graphics are non-linguistic. Analogue modalities possess a similarity between the
representation and what is being represented (Bernsen, 2008). For instance, the drawing
of an object is analogue as it resembles the original object (obviously this does not hold for
some modern art forms). The names of an object vary across languages and are
non-analogue. Arbitrary modalities get their meaning assigned when they are introduced
(Bernsen, 2008). For instance the use of a certain texture for representing water in a
tactile map is arbitrary as there is no fixed convention. The use of the braille alphabet is
non-arbitrary as each letter of the alphabet is clearly defined. Arbitrary modalities
introduce a cognitive load for learning which increases with the number of arbitrary
items to be learnt (Bernsen, 2008). Finally, static does not mean that the
representations are physically fixed, but that the user can inspect them for as long as
desired. Typically this concerns the difference between graphic and acoustic modalities
as we have discussed before (see II.1.3). Static graphic modalities, for instance written
text, allow the simultaneous representation of large amounts of information for as much
time as necessary. In contrast, dynamic output modalities such as speech are sequential
and fleeting and do not offer the freedom of perceptual inspection for as much time as
desired. If the auditory output is repeated for as long as the user needs for inspection, it
becomes static.
VII.3.5 Mode
The term "mode" is linguistically similar to "modality", but they denote
different things. A mode is a system state at a given time (Bellik, 1995). In this state only a
subset of all existing interactions can be performed (Foley et al., 1996).
VII.3.6 Multimedia
There is a distinction between multimodal and multimedia systems. Coutaz and
Caelen (1991) defined multimedia systems as computer systems which are able to
acquire, store and deliver multimedia information. In comparison with multimodal
systems, they do not interpret the information they handle. In contrast, a computer system
is said to be multimodal if it is able to support human-computer interaction through modalities such
as gesture or written or spoken language with the competence of a human interlocutor
(Coutaz & Caelen, 1991). It must be able to acquire and render multimodal expressions in
real time. It must be able to choose the appropriate output modalities and produce
meaningful output expressions, as well as to understand multimodal input expressions.
VII.3.7 Multimodality
Multimodal interaction was introduced by Bolt (1980) with the “put-that-
there” system. This prototype made use of combined gestural and speech input in order
to draw, move and modify geometric forms on a visual display. According to Bernsen
(2008) a unimodal interactive system is a system which uses the same single modality for
input and output (for instance audio input and output). In contrast, a multimodal
interactive system uses at least two different modalities for input and/or output. There are
various possibilities for combining input and output modalities and therefore the number
of possible multimodal systems is larger than the number of possible unimodal systems
(Bernsen, 2008).
Nigay & Coutaz (1997) defined the CARE properties (complementarity,
assignment, redundancy and equivalence) to describe the relationship between devices
and interaction languages, as well as the relationship between interaction languages and
tasks. Interaction languages are equivalent, if tasks can be expressed using either one of
the languages. They are assigned to a task if no equivalent interaction language exists.
They are complementary if a task can be partitioned so that for each partition there exists
one interaction language assigned to it. Finally, interaction languages are redundant if
they are equivalent and can be used simultaneously to express a task. The same is valid
for the relationship between devices and interaction languages. For instance assignment
connects a device to a particular language. The CARE properties can be permanent, i.e.
valid for all system states, or transient. They can be total, i.e. include all tasks, or be
partial. These different properties have different effects on flexibility and robustness of a
system (Nigay & Coutaz, 1997). Assignment is restrictive, whereas equivalence increases
flexibility as the user can choose between different interaction languages or
devices. Similarly, redundancy enhances flexibility and robustness. As stated by Bernsen
(2008), it can be particularly useful to represent important information with redundant
interaction languages or devices. Complementarity, on the other hand, increases
the risk of cognitive overload.
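As a simple illustration of these definitions, the following sketch (a hypothetical example of our own, not taken from Nigay & Coutaz) records which interaction languages can express each task and derives the corresponding relation:

# Illustrative sketch: deriving a CARE-style relation from the set of interaction
# languages that can express a task. Tasks and languages below are invented examples.
TASK_LANGUAGES = {
    "select a map element": {"touch pointing", "speech command"},  # several languages
    "enter a landmark name": {"speech command"},                   # a single language
}

def care_relation(task: str) -> str:
    """Return "assignment" if only one language can express the task, else "equivalence".
    Equivalent languages used simultaneously for the same task would be redundant."""
    languages = TASK_LANGUAGES[task]
    return "assignment" if len(languages) == 1 else "equivalence"

print(care_relation("select a map element"))   # -> equivalence
print(care_relation("enter a landmark name"))  # -> assignment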
VII.4.1 Production
Many methods exist for producing tactile maps and images. Edman (1992)
presents an exhaustive overview of possible production techniques. Raised-line drawing
boards allow visually impaired people to draw their own map. For instance, the board
can be made of a rubbery material. Drawing with a ball-pen over a plastic sheet placed
on top of the drawing board then leaves a tactually perceivable trace in the plastic sheet.
Picture types such as tactile experience pictures, buildup displays (either paper-on-
paper or including additional material), paper and tape maps and charts, or displays with
movable parts require costly manual preparation. Nyloprint is a technique that uses
photography to engrave information on nylon or metal plates. It is based on the principle
that gelatin which is applied on the plate changes its structure when exposed to light. Silk
screening uses a metal or plastic stencil. Color is applied to the parts of the image that
are cut out in the stencil. Variants of this technique exist, such as foam ink, a special ink
that expands when heated. Most recently, 3D printing has been used for the production of
tactile books for blind children61. To our knowledge, 3D printing has not yet been used
for producing tactile maps, but it might offer new possibilities. Furthermore, a braille
embosser can be used for the creation of tactile maps. A braille embosser punches holes
the size of a braille dot into a sheet of paper. The advantage is the clear readability of
braille labels produced with this method. On the other hand, the resolution is limited as the
image is composed of structured holes in the paper. Besides, a sighted mobility trainer
or assistant is not easily able to read the image (Wang et al., 2009). Furthermore, this
method requires the acquisition of a braille embosser, which comes at a high cost.
Vacuum-forming and swell paper, the most common techniques, are explained in the
main text (II.3.3.1).
61 http://www.tactilepicturebooks.org/books.html [last accessed August 27th 2013]
Picard (2012) obtained a patent for the design and production of a visuo-tactile atlas. The
atlas combines visual and tactile views, so that it is accessible for blind people, visually
impaired people with residual vision and sighted people, such as mobility trainers. The
tactile images are created with respect to the perceptual constraints of haptic
exploration. In a second step, the visual view is then created based on the tactile view,
thus placing the importance on the tactile presentation. In a recent study, Lobben and
Lawrence (2012) proposed a standardization of map symbols for tactile maps produced
on microcapsule paper. They defined a set of tactile symbols that have proved
discriminable in user studies. In this set the most important map elements (streets and
intersections) were attributed the most discriminable symbols. This symbol set, released
by the Braille Authority of North America, has been made freely accessible on the internet
and presented in workshops. As a downside, it has to be noted that the set has only been
tested in North American cities, and it is not clear whether the same symbols can be used
in European cities, whose street layouts tend to be less rectangular.
VII.4.2.2.a Lines
According to Bris (1999), lines represent either linear concrete objects (like
roads) or the contours of two-dimensional objects. To differentiate objects, lines should have
varying characteristics (such as pattern or width). Bris suggests that very specific
patterns can be used to identify distinct map elements. However, using more than three to five
different patterns increases the risk of overloading the map reader with too much
information.
In practice, Picard and Bris both proposed 8 mm as the minimum length for lines
to be perceived as such, whereas Tatham (1991) proposed 13 mm. According to Bris the
minimum line width (or thickness) is 0.4 mm; Tatham defined it as 1 mm and the
maximum as 7 mm. Bris also proposed 0.4 mm as the minimum line height; however, this
depends on the production method and cannot always be controlled. In order to facilitate
differentiation between lines, Tatham suggested that line widths should vary by at
least about 25%. Bris even suggested a factor of 2 between different line widths.
For fragmented lines, another relevant variable exists: the spacing of the
elements. According to Bris the spacing can be between 0.5 mm minimum and 4 mm
maximum so that the elements are perceived as one line. Similar values are proposed by
Tatham. It is also possible to vary the length of the dash, which should be at least as long
as the spacing. According to Tatham single lines are easier to perceive than double lines.
Picard suggested spacing double lines at a distance of 2 mm. To avoid confusion with
double lines, Tatham proposed that adjacent but separate lines should be spaced at least
6 mm apart. Tatham also suggested that solid lines are easier to perceive than recessed lines
(which are anyway only possible with certain production methods). Finally, it has to be
noted that, due to the specificities of kinesthetic and proprioceptive perception, changes
of direction should be greater than 60°.
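To make these recommendations easier to apply, the following minimal sketch (our own illustration; the function and its defaults are hypothetical, and only the threshold values come from the guidelines reported above) checks a candidate tactile line style against the values given by Bris and Tatham:

# Illustrative sketch: checking a tactile line style against the guideline values
# reported above (Bris, 1999; Tatham, 1991). All dimensions are in millimetres.
def check_line_style(length, width, dash_length=None, dash_spacing=None):
    """Return a list of warnings for a (possibly dashed) tactile line."""
    warnings = []
    if length < 8:                        # Bris/Picard: minimum 8 mm (Tatham: 13 mm)
        warnings.append("line shorter than 8 mm may not be perceived as a line")
    if not 0.4 <= width <= 7:             # Bris: minimum 0.4 mm; Tatham: maximum 7 mm
        warnings.append("line width should lie between 0.4 mm and 7 mm")
    if dash_spacing is not None:
        if not 0.5 <= dash_spacing <= 4:  # Bris: spacing between 0.5 mm and 4 mm
            warnings.append("dash spacing should lie between 0.5 mm and 4 mm")
        if dash_length is not None and dash_length < dash_spacing:
            warnings.append("dashes should be at least as long as the spacing")
    return warnings

# Example: a 10 mm long dashed line, 1 mm wide, 2 mm dashes with 3 mm gaps
print(check_line_style(length=10, width=1, dash_length=2, dash_spacing=3))
# -> ['dashes should be at least as long as the spacing']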
VII.4.2.2.b Textures
Textures can be useful to represent a particular area (Lederman & Kinch, 1979).
A texture is the combination of two components (Bris, 1999): first, the form (outline)
of the element that is filled with the texture; second, the pattern of the texture.
Several studies have investigated the choice of textures that are easily
distinguishable (Lederman & Kinch, 1979). Possible textures include dots, lines (vertical,
horizontal, diagonal or crossing lines), curves, zigzags, chessboard patterns and
combinations of these in different widths and sizes. Criteria for differentiating textures are
continuity or interruption, regularity, the density of patterns and the size of elements that
form the pattern. Bris underlined that the spacing of the pattern elements is important. If
spacing is too wide, the feeling of texture disappears. If spacing is too dense, the texture
may be perceived as one closed surface. Lederman and Kinch
criticized that the choice of patterns used in tactile diagrams often seems to be made
randomly. It also has to be noted that sighted map designers tend to use representations
that follow visual conventions, for instance wavy forms for representing water, even
though these conventions are not necessarily known to visually impaired map readers.
VII.4.2.2.c Symbols
Single landmarks can be represented through specific symbols (Lederman &
Kinch, 1979). The number of forms that are distinguishable using the sense of touch is
limited. Often points, rectangles, triangles and radiant symbols are used (Tatham, 1991).
The number of possible elements can be increased by using open and solid symbols.
Depending on the technique used for the map production (see II.3.3.1.b) it may be
possible to add height as a third dimension to improve contrast. Edman showed a list of
different symbols that have been used in maps. Lobben and Lawrence (2012) proposed a
set of symbols including circles, triangles, rectangles and radiant symbols that are either
open, solid or filled with lines or crosses. According to Tatham, solid symbols should be
between 3 and 10 mm in size. Open and radiant symbols should be between 4 and 10 mm
in size.
VII.5.1 Terminology
Due to the current lack of an overview of existing map projects, there is no
standardized terminology. Various names have been chosen in different
publications, and there is rarely an explanation or definition. Several authors used the
term "audio-tactile maps" (Jacobson, 1998a; Miele et al., 2006; Paladugu et al., 2010;
Parente & Bishop, 2003; Wang et al., 2009). In general, but not in all cases, this term
refers to maps that are based on touch input. More specifically, these maps are mostly
based on raised-line maps augmented with audio output. Zeng and Weber (2011) named
this map type “augmented paper-based tactile maps”. Another term that has been used
is “virtual tactile maps”. Schneider and Strothotte (1999) defined virtual tactile maps as
digital maps that emit speech and sound when they are explored by hand movements.
Their prototype consisted of a tactile grid that could be manually explored by the user
and that was augmented with audio output. In contrast, Zeng and Weber (2011) used this
name when referring to maps based on haptic devices, such as force feedback
mice, combined with auditory output. Thus, there seems to be no common consensus on the use of this
term. Some authors referred to prototypes based on haptic devices and audio output as
“haptic soundscapes” (Golledge et al., 2005; Lawrence et al., 2009; Rice et al., 2005).
Lohmann and Habel (2012) proposed the name VAVETaM (verbally assisting virtual-
environment tactile maps) for a similar prototype based on a haptic device and audio
output. Finally, Zeng and Weber (2011) proposed “virtual acoustic maps” for maps with
audio output alone and “braille tactile maps” for maps based on the use of raised-pin
displays. However, the latter term is surprising, as raised-pin displays are not necessarily
intended for braille text and are usually accompanied by audio output.
Figure VII.1: Cities in which projects of interactive maps for visually impaired people have
been developed. The figure has been produced with Gephi GeoLayout (Bastian et al., 2009)
and OpenStreetMap.
Figure VII.1 shows a map of the cities in which projects of interactive maps for
visually impaired people have been developed. Note that in some cases there are
several universities in one city. For instance, UMBC (University of Maryland Baltimore
County), the University of Maryland and Towson University are all situated in or close to
Baltimore, Maryland. In some cases map projects are collaborations between several
universities. In that case all cities have been reported. The map indicates that various
universities have been involved in the development of accessible maps, mainly in Europe
and North America.
VII.5.3 Timeline
We also analyzed the projects regarding the year of publication. Figure VII.2
shows that most projects have emerged in the past ten years.
Figure VII.2: Timeline of publications on interactive maps for visually impaired people.
There might be different reasons for this recent increase. First, this might simply
be related to technological progress. In the first publication on interactive maps for
visually impaired people, Parkes (1988) proposed a map based on a touchscreen.
However, interest in touchscreens has risen only recently (Schöning et al., 2008). A
first peak could be observed after Jeff Han’s presentation of a Frustrated Total Internal
Reflection Table in 2005 (Han, 2005), followed by the arrival of the iPhone, Microsoft
Surface and Android in 2007. The first iPad was introduced to the market in 2010. Indeed,
Schneider and Strothotte (1999) stated that at the time of the development of their
prototype, there were no tactile displays or touch tablets available which met all of their
requirements. With the number of devices increasing, prices have decreased. A similar
development has been observed in the field of haptics (Roberts & Paneels, 2007). The
first Phantom device was presented in 1993 and the first haptic library (GHOST) in 1997.
Roberts & Paneels consequently observed an exponential growth of scientific
publications in the field of haptics since the ‘90s. Another reason might be that due to
different laws and regulations (Americans with Disabilities Act of 1990, Pub. L. No. 101-336,
104 Stat. 328, 1990, LOI n° 2005-102 du 11 février 2005 pour l’égalité des droits et des
Figure VII.3: Number of publications on interactive maps for visually impaired people
classified by map content and scale. Categories are presented on the left (Outdoor, Indoor
and Others). The x-axis represents the number of publications. Bars in dark grey present the
total for each category.
Within Outdoor Maps we noticed different scales. The largest maps represent
several countries—maps depicting several states in the US have been counted in this
category (Kane, Frey, et al., 2013; Kane, Morris, et al., 2011; Krueger & Gilden, 1997; Petit
et al., 2008; Seisenbacher et al., 2005; Tixier et al., 2013). These maps typically show the
outline of borders, major cities and sometimes the borders of continents and oceans. Maps
of regions with several cities show similar content but sometimes contain information
about road networks and water lines (De Felice et al., 2007; Jansson et al., 2006; Parente &
Bishop, 2003; Rice et al., 2005; Simonnet et al., 2012; Tornil & Baptiste-Jessel, 2004). City
maps depict streets and buildings, sometimes including green areas, rivers and specific
information for visually impaired travelers (Brock, Truillet, et al., 2012; Campin et al.,
2003; Hamid & Edwards, 2013; Heuten et al., 2007; Iglesias et al., 2004; Kaklanis et al.,
2011, 2013; Lohmann & Habel, 2012; Miele et al., 2006; Milne et al., 2011; Pielot et al.,
2007; Poppinga et al., 2011; Schmitz & Ertl, 2010; Senette et al., 2013; Tixier et al., 2013;
Wang et al., 2009; Yatani et al., 2012). Specific forms of city maps depict a university
campus (Lawrence et al., 2009; Rice et al., 2005; Zeng & Weber, 2010) or a zoo (Jacobson,
1998a). One project aimed at maritime maps for blind sailors (Simonnet et al., 2009).
Finally, one project is operable at different scales (Bahram, 2013).
Indoor Maps mostly depict floor plans of buildings (Schmitz & Ertl, 2012; Su et al.,
2010; Yairi et al., 2008). An exception is the seat plan of a concert hall (Lévesque et al.,
2012).
The category “other” contains choropleth maps, i.e. specific thematic maps which
do not serve orientation and mobility but convey societal or political
information related to a geographic area. Concretely, these are census maps (Rice et
al., 2005; Zhao et al., 2008) or maps presenting weather information (Carroll et al., 2013;
Weir et al., 2012). Finally, several publications did not specify the kind of content that is
depicted on the map (Daunys & Lauruska, 2009; Parkes, 1988; Schneider & Strothotte,
1999; Shimada et al., 2010).
As can be seen in Figure VII.3, outdoor maps are more common than other map
types. Within outdoor maps, the most common are city maps. This makes sense as these
maps directly respond to the need of visually impaired people to improve mobility and
orientation. In comparison, maps that depict regions or countries are rather used for
teaching geography. Orientation inside buildings is certainly also important for visually
impaired people. We propose two reasons why these types of maps are less common.
First, even if the problems of constructing mental maps are comparable for indoor and
outdoor maps, navigation itself is more problematic - and dangerous - in outdoor areas.
Therefore, visually impaired people might potentially experience more fear related to
outdoor mobility. Second, there is more information available on outdoor areas and thus
the design of the physical map can be based on existing data. Google Maps is currently
One of the earliest projects we heard about was ABAplans63. ABAplans was
originally a research project of the Engineering School of Geneva (Ecole d'ingénieurs de
Genève). Its aim was to allow users to navigate the map of a city, find a place, prepare
trips and learn about public transportation. Two approaches have been developed: first,
a map editor intended for locomotion instructors; second, an interactive device based on
a raised-line map used as an overlay on a mono-touch screen. Users could explore the relief map
and receive audio output for certain map elements. ABAplans is currently in the phase of
commercialization.
The company ViewPlus offers the IVEO system64. It consists of a monotouch screen
adapted for use with raised-line maps. It comes with software for drawing raised-line
maps and images. Optionally, it is possible to purchase software that allows
creating raised-line images from PDF or scanned documents using optical character
recognition technology. The user must possess the equipment to print the relief maps.
For this purpose ViewPlus also offers braille embossers.
Earth+65 was a project developed by NASA with the aim of creating accessible
map representations. In contrast to other projects that provide speech output, Earth+
provided musical sound output depending on the colors in the image. The user interacted
with the system by moving a mouse cursor in the image. The tool remained in beta
version and has not been further developed since 2009.
Ariadne GPS66 is a commercial map application for iPad and iPhone that is under
ongoing development. It has been developed for these devices as many blind people
already use them because of their good screen reader capabilities. It resembles in its
62 http://maps.google.com/help/maps/indoormaps/ [last accessed August 21st 2013]
63 http://abaplans.eig.ch/index.html [last accessed June 10th 2013]
64 http://www.viewplus.com/products/software/hands-on-learning/ [last accessed September 11th 2013]
65 http://prime.jsc.nasa.gov/earthplus/ [last accessed August 22nd 2013]
66 http://www.ariadnegps.eu/ [last accessed August 22nd 2013]
concept the TouchOverMap (Poppinga et al., 2011). The user can move a finger on the
touchscreen and receives audio and vibration feedback through the
VoiceOver67 screen reader. Besides map exploration, Ariadne GPS also takes
mobile use into account, for instance by alerting users when they approach favorite
points.
Figure VII.4: The iDact prototype provides accessible information about metro stations in
Paris. Reprinted with permission.
iDact68 is another application for the iPad (see Figure VII.4). Its objective is to provide
accessible maps for train and metro stations. Information is provided by speech output.
The iDact project is currently under development.
67 http://www.apple.com/accessibility/osx/voiceover/ [last accessed August 22nd 2013]
68 http://www.idact.eu/ [last accessed August 22nd 2013]
idFinger: identifier of the touch input (with the Stantum touchscreen each
new touch input automatically creates a new id, with the 3M screen IDs are
reused)
x and y: coordinates in pixels, relative to the upper left corner of the screen.
precision: defines the precision of the touch. As the hardware does not give
any precision information, the value 1 has been set as default
t: timestamp in milliseconds
The following message indicates the state for each touch input:
Finally, the following message indicates the alive state of the device
idFinger: identifier of the touch input (with the Stantum touchscreen each
new touch input automatically creates a new id)
t: timestamp in milliseconds
TTS Say=SSS
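As an illustration of how such messages could be assembled in software, the following minimal sketch (hypothetical; it does not reproduce the exact message syntax of the prototype) packages the touch fields listed above and builds a speech request of the "TTS Say=..." form:

# Illustrative sketch: a container for the touch fields listed above and a helper
# that builds a speech request in the "TTS Say=SSS" form.
from dataclasses import dataclass
import time

@dataclass
class TouchEvent:
    idFinger: int       # identifier of the touch input
    x: int              # pixel coordinates relative to the upper left corner
    y: int
    precision: int = 1  # default value, as the hardware gives no precision information
    t: int = 0          # timestamp in milliseconds

def tts_message(text: str) -> str:
    """Build a speech request of the "TTS Say=SSS" form shown above."""
    return "TTS Say=" + text

event = TouchEvent(idFinger=3, x=420, y=250, t=int(time.time() * 1000))
print(event)
print(tts_message("rue des Jasmins"))  # e.g. announce the street under the finger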
Table VII.1: Wording of street names in map 1 (a) and map 2 (b). Each map contains two
terms for each of the categories flowers, precious stones and birds. Each term is a two-syllable word
(= bi). All words are low-frequency terms (= LF).
a.
Code French English Category Frequency Syllables Classification
rg gentiane gentian flower 0,14 2 LF bi
rs saphir sapphire precious stone 0,34 2 LF bi
rl linotte linnet bird 0,69 2 LF bi
rj jasmin jasmine flower 1,57 2 LF bi
rr rubis ruby precious stone 2,22 2 LF bi
rp pinson chaffinch bird 2,68 2 LF bi
b.
Code French English Category Frequency Syllables Classification
rg glycine wisteria flower 0,21 2 LF bi
rt topaze topaz precious stone 0,36 2 LF bi
rp puffin shearwater bird 0,54 2 LF bi
rl lavande lavender flower 1,53 2 LF bi
rd diamant diamond precious stone 7,97 2 LF bi
rb bécasse woodcock bird 1,4 2 LF bi
Table VII.2: Description of the wording of POI in map 1 (a) and map 2 (b). Each map
contains three words with low frequency (= LF) and three words with high frequency (=HF).
For each of the two frequencies, one word has one syllable (= mono), one has two syllables (=
bi) and one has three syllables (= tri).
Keep it simple
Proposed dimensions for lines, textures and symbols can be found in the
appendix (VII.4.2.2)
When printing on swell paper, avoid too much black ink on the image
because it makes printing more difficult
Multi-touch Device
The multi-touch device must comply with several criteria:
Software Architecture
A modular software architecture enables easy prototyping and replacement of
modules
Provide at least touch input and speech output, possibly combined with
further modalities and interaction techniques
Evaluation
Always plan for pretests
Participants
Include specific criteria such as degree of visual impairment, the
proportion of lifetime with blindness or the age at onset of blindness,
autonomy in everyday life, braille reading skills or use of assistive
technology.
Low-vision and blind people do not have the same requirements. It may be
useful to focus on a specific sub-group.
Logistics
Describe the outline of the room.
Analysis Phase
Take time to understand users' needs, especially when working with
impaired users for the first time
Generating Ideas
Brainstorming can be made accessible
o Visually impaired users can take notes with the BrailleNote device.
Wizard of Oz simulation is a useful method for stimulating ideas for new and
innovative concepts.
Prototyping
We suggest an iterative procedure with software-based low-fidelity
prototypes.
Evaluation
Guidelines, heuristics and standards are helpful, but prototypes should be
evaluated with real users.
VII.8.1 Participants
Figure VII.5 Detailed characteristics of the visually impaired participants in our study.
Age: _______
1 2 3 4 5
2. I have a poor memory for where I have left things.
1 2 3 4 5
1 2 3 4 5
1 2 3 4 5
Agree completely          Do not agree at all
5. I tend to think of my environment in terms of cardinal directions (North, South, East,
West) or clock positions.
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
10. I do not remember routes very well when I am accompanied.
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
14. I usually remember a new route after having traveled it only once.
1 2 3 4 5
Agree completely          Do not agree at all
1 2 3 4 5
Agree completely          Do not agree at all
I think I would need the help of an experienced person to be able to use this type of map
Concerning the paper map
Concerning the interactive map
I found that the various functions were well designed and integrated in this type of map
Concerning the paper map
Concerning the interactive map
I found this type of map inconvenient to use
Concerning the paper map
Concerning the interactive map
I needed a long time to adapt before feeling comfortable with this type of map
Concerning the paper map
Concerning the interactive map
VII.8.5.1 Map 1
1) L-POI: "Not counting the hotel, what were the names of the 6 points of interest on the map?"
Parc; Cinéma; Halles; Métro; Eglise; Obélisque (underline the correct answers)
1 2 3 4 5
Not at all          Completely
Jasmins; Rubis; Linottes; Pinsons; Saphirs; Gentianes (underline the correct answers)
1 2 3 4 5
Not at all          Completely
3) R-RDE-1: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Hôtel to the Métro. Route 2: from the Hôtel to the Parc."
4) R-RDE-2: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Obélisque to the Cinéma. Route 2: from the Obélisque to the Parc."
5) R-RDE-3: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Eglise to the Halles. Route 2: from the Eglise to the Hôtel."
6) R-RDE-4: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Métro to the Obélisque. Route 2: from the Métro to the Parc."
1 2 3 4 5
Not at all          Completely
7) R-R-1: "Is this route correct? To go (by the shortest path) from the Cinéma to the Métro,
I successively take rue des Jasmins and rue des Pinsons." Correct
8) R-R-2: "Is this route correct? To go (by the shortest path) from the Parc to the Obélisque,
I successively take rue des Saphirs and rue des Jasmins." Incorrect
9) R-R-3: "Is this route correct? To go (by the shortest path) from the Eglise to the Hôtel,
I successively take rue des Gentianes, rue des Jasmins and rue des Rubis." Incorrect
10) R-R-4: "Is this route correct? To go (by the shortest path) from the Halles to the Parc,
I successively take rue des Pinsons, rue des Jasmins and rue des Saphirs." Correct
1 2 3 4 5
Not at all          Completely
11) R-W-1: "Which streets do I successively have to take to go from the Hôtel to the Parc by the shortest path?"
12) R-W-2: "Which streets do I successively have to take to go from the Métro to the Cinéma by the shortest path?"
13) R-W-3: "Which streets do I successively have to take to go from the Parc to the Halles by the shortest path?"
14) R-W-4: "Which streets do I successively have to take to go from the Obélisque to the Métro by the shortest path?"
1 2 3 4 5
Not at all          Completely
15) S-Dir-1: "Imagine you are at the Hôtel: in which direction is the Cinéma relative to the Hôtel?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
16) S-Dir-2: "Imagine you are at the Hôtel: in which direction is the Parc relative to the Hôtel?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
17) S-Dir-3: "Imagine you are at the Eglise: in which direction is the Obélisque relative to the Eglise?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
18) S-Dir-4: "Imagine you are at the Eglise: in which direction are the Halles relative to the Eglise?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
1 2 3 4 5
Not at all          Completely
If needed, describe the grid (draw a vertical line and a horizontal line from the hotel)
19) S-Loc-1: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map are the Halles?"
20) S-Loc-2: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map is the Cinéma?"
21) S-Loc-3: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map is the Obélisque?"
22) S-Loc-4: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map is the Parc?"
1 2 3 4 5
Not at all          Completely
23) S-Dist-1: "As the crow flies (that is, without taking the streets, but as if you could go
through the air in a straight line like a bird), which of these two routes is the longer?
Hôtel-Obélisque or Hôtel-Métro?"
24) S-Dist-2: "As the crow flies (that is, without taking the streets, but as if you could go
through the air in a straight line like a bird), which of these two routes is the longer?
Hôtel-Eglise or Hôtel-Halles?"
25) S-Dist-3: "As the crow flies (that is, without taking the streets, but as if you could go
through the air in a straight line like a bird), which of these two routes is the longer?
Cinéma-Parc or Cinéma-Halles?"
26) S-Dist-4: "As the crow flies (that is, without taking the streets, but as if you could go
through the air in a straight line like a bird), which of these two routes is the longer?
Halles-Parc or Halles-Obélisque?"
1 2 3 4 5
Not at all          Completely
VII.8.5.2 Map 2
1) L-POI: "Not counting the hotel, what were the names of the 6 points of interest on the map?"
Musée; Restaurant; Spa; Gare; Jardin; Hippodrome (underline the correct answers)
1 2 3 4 5
Not at all          Completely
1 2 3 4 5
Not at all          Completely
3) R-RDE-1: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Hôtel to the Gare. Route 2: from the Hôtel to the Spa."
4) R-RDE-2: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Gare to the Spa. Route 2: from the Gare to the Jardin."
5) R-RDE-3: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Musée to the Jardin. Route 2: from the Musée to the Hippodrome."
6) R-RDE-4: "By road, and taking the shortest path, which of these two routes is the longer?
Route 1: from the Restaurant to the Spa. Route 2: from the Restaurant to the Hôtel."
1 2 3 4 5
Not at all          Completely
7) R-R-1: "Is this route correct? To go (by the shortest path) from the Jardin to the Hôtel,
I successively take rue des Diamants and rue des Topazes." Incorrect
8) R-R-2: "Is this route correct? To go (by the shortest path) from the Gare to the Musée,
I successively take rue des Lavandes and rue des Glycines." Correct
9) R-R-3: "Is this route correct? To go (by the shortest path) from the Restaurant to the Hippodrome,
I successively take rue des Glycines, rue des Diamants and rue des Bécasses." Incorrect
10) R-R-4: "Is this route correct? To go (by the shortest path) from the Spa to the Musée,
I successively take rue des Puffins, rue des Lavandes and rue des Glycines." Correct
1 2 3 4 5
Not at all          Completely
11) R-W-1: "Which streets do I successively have to take to go from the Hippodrome to the Gare by the shortest path?"
12) R-W-2: "Which streets do I successively have to take to go from the Gare to the Restaurant by the shortest path?"
13) R-W-3: "Which streets do I successively have to take to go from the Spa to the Hôtel by the shortest path?"
14) R-W-4: "Which streets do I successively have to take to go from the Jardin to the Musée by the shortest path?"
1 2 3 4 5
Not at all          Completely
15) S-Dir-1: "Imagine you are at the Hôtel: in which direction is the Musée relative to the Hôtel?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
16) S-Dir-2: "Imagine you are at the Hôtel: in which direction is the Gare relative to the Hôtel?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
17) S-Dir-3: "Imagine you are at the Jardin: in which direction is the Hippodrome relative to the Jardin?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
18) S-Dir-4: "Imagine you are at the Jardin: in which direction is the Restaurant relative to the Jardin?
To answer, use the clock system to indicate the direction from your position (at 12 o'clock, at 6 o'clock, at 3 o'clock, at 9 o'clock, at 10 o'clock, etc.)"
1 2 3 4 5
Not at all          Completely
If needed, describe the grid (draw a vertical line and a horizontal line from the hotel)
19) S-Loc-1: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map is the Musée?"
20) S-Loc-2: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map is the Spa?"
21) S-Loc-3: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map is the Restaurant?"
22) S-Loc-4: "Imagine that the map is cut into 4 equal parts around the Hôtel as central point
(north-east, north-west, south-east and south-west parts). In which part of the map is the Gare?"
1 2 3 4 5
Not at all          Completely
23) S-Dist-1 : « A vol d’oiseau (c’est-à-dire sans emprunter les rues, mais en faisant comme
si vous pouviez aller en ligne droite par les airs comme un oiseau), lequel de ces deux
trajets est le plus long ? Hôtel-Jardin ou Hôtel-Musée ? »
24) S-Dist-2 : « A vol d’oiseau (c’est-à-dire sans emprunter les rues, mais en faisant comme
si vous pouviez aller en ligne droite par les airs comme un oiseau), lequel de ces deux
trajets est le plus long ? Hôtel-Spa ou Hôtel-Restaurant ? »
25) S-Dist-3 : « A vol d’oiseau (c’est-à-dire sans emprunter les rues, mais en faisant comme
si vous pouviez aller en ligne droite par les airs comme un oiseau), lequel de ces deux
trajets est le plus long ? Hippodrome-Jardin ou Hippodrome-Gare ? »
26) S-Dist-4 : « A vol d’oiseau (c’est-à-dire sans emprunter les rues, mais en faisant comme
si vous pouviez aller en ligne droite par les airs comme un oiseau), lequel de ces deux
trajets est le plus long ? Restaurant-Musée ou Restaurant-Hippodrome ? »
1 2 3 4 5
Pas du tout Tout à fait
1 2 3 4 5
Pas du tout Tout à fait
1 2 3 4 5
Pas du tout Tout à fait
1 2 3 4 5
Pas du tout Tout à fait
1 2 3 4 5
Pas du tout Tout à fait
5) J’ai trouvé que les différentes fonctionnalités étaient bien conçues dans cette technique.
1 2 3 4 5
Pas du tout Tout à fait
1 2 3 4 5
Pas du tout Tout à fait
1 2 3 4 5
Pas du tout Tout à fait
8) J’ai trouvé que cette technique de guidage était peu commode lors de son utilisation.
1 2 3 4 5
Pas du tout Tout à fait
1 2 3 4 5
Pas du tout Tout à fait
10) J’ai eu besoin d’un long temps d’adaptation pour me sentir à l’aise avec ce type de
guidage.
1 2 3 4 5
Pas du tout Tout à fait
References
Adams, M. J., Johnson, S. A., Lefèvre, P., Lévesque, V., Hayward, V., André, T., &
Thonnard, J.-L. (2013). Finger pad friction and its role in grip and touch. Journal of the
Royal Society Interface, 10, 1742–5662.
Americans with Disabilities Act of 1990, Pub. L. No. 101-336, 104 Stat. 328 (1990).
ANSI INCITS 354-2001 Common Industry Format Usability Test Reports. (2001) (pp. 1–39).
Washington, DC, USA.
Asakawa, C. (2005). What’s the web like if you can't see it? In W4A ’05: Proceedings of the
2005 International Cross-Disciplinary Workshop on Web Accessibility (pp. 1–8). Chiba,
Japan: ACM.
Asakawa, C., Takagi, H., Ino, S., & Ifukube, T. (2003). Maximum listening speeds for the
blind. In Proceedings of the 2003 International Conference on Auditory Display (pp.
276–279). Boston, MA, USA: ICAD.
Baddeley, A. D., Thomson, N., & Buchanan, M. (1975). Word length and the structure of
short-term memory. Journal of Verbal Learning and Verbal Behavior, 14, 575–589.
Bahram, S. (2013). Multimodal eyes-free exploration of maps: TIKISI for maps. ACM
SIGACCESS Accessibility and Computing, 106, 3–11.
Bangor, A., Kortum, P. T., & Miller, J. T. (2008). An Empirical Evaluation of the System
Usability Scale. International Journal of Human-Computer Interaction, 24, 574–594.
Bastian, M., Heymann, S., & Jacomy, M. (2009). Gephi: An Open Source Software for
Exploring and Manipulating Networks. In International AAAI Conference on Weblogs
and Social Media.
Bau, O., & Poupyrev, I. (2012). REVEL: Tactile Feedback Technology for Augmented
Reality. ACM Transactions on Graphics, 31, 89.
Bau, O., Poupyrev, I., Israr, A., & Harrison, C. (2010). TeslaTouch: electrovibration for
touch surfaces. In Proceedings of the 23rd annual ACM symposium on User interface
software and technology - UIST’10 (p. 283). New York, New York, USA: ACM Press.
Bigham, J. P., Jayant, C., Hanjie, J., Little, G., Miller, A., Miller, R. C., … Yeh, T. (2010).
VizWiz: Nearly real-time answers to visual questions. In Proceedings of the 23rd
annual ACM symposium on User interface software and technology - UIST ’10 (pp. 333–
342). New York, New York, USA: ACM Press.
Blades, M., Lippa, Y., Golledge, R. G., Jacobson, R. D., & Kitchin, R. M. (2002). The Effect
of Spatial Tasks on Visually Impaired Peoples’ Wayfinding Abilities. Journal of Visual
Impairment & Blindness, 96, 407–419.
Blattner, M., Sumikawa, D., & Greenberg, R. (1989). Earcons and Icons: Their Structure
and Common Design Principles. Human-Computer Interaction, 4, 11–44.
Bolt, R. A. (1980). “Put-that-there”: Voice and gesture at the graphics interface. In ACM
SIGGRAPH Computer Graphics (Vol. 14, pp. 262–270). ACM.
Borodin, Y., Bigham, J. P., Dausch, G., & Ramakrishnan, I. V. (2010). More than meets the
eye: a survey of screen-reader browsing strategies. In Proceedings of the 2010
International Cross Disciplinary Conference on Web Accessibility - W4A ’10 (pp. 1–
10). New York, New York, USA: ACM Press.
Bortolaso, C. (2012, June 19). Méthode de conception assistée par les modèles pour les
systèmes interactifs mixtes. Université Toulouse 3 Paul Sabatier.
Bosco, A., Longoni, A. M., & Vecchi, T. (2004). Gender effects in spatial orientation:
cognitive profiles and mental strategies. Applied Cognitive Psychology, 18, 519–532.
Boy, G. A. (1997). The group elicitation method for participatory design and usability
testing. interactions, 4, 27–33.
Brewster, S., & Brown, L. M. (2004). Tactons: structured tactile messages for non-visual
information display. In AUIC’04 Proceedings of the fifth conference on Australasian
user interface (pp. 15–23). Australian Computer Society, Inc.
Bris, M. (1999). Rapport «Tactimages & Training» (pp. 1–34). Paris, France.
Brock, A. (2010b). Comparaison de différentes dalles multitouches pour projet NAVIG (pp.
1–13). Toulouse, France.
Brock, A., Kammoun, S., Nicolau, H., Guerreiro, T., Kane, S. K., & Jouffrais, C. (2013). SIG:
NVI (Non-Visual Interaction). In CHI’13 Extended Abstracts on Human Factors in
Computing Systems (pp. 2513–2516). New York, New York, USA: ACM Press.
Brock, A., Lebaz, S., Oriola, B., Picard, D., Jouffrais, C., & Truillet, P. (2012). Kin’ touch:
Understanding How Visually Impaired People Explore Tactile Maps. In Conference
on Human Factors in Computing Systems - CHI (pp. 2471–2476). Austin, Texas: ACM.
Brock, A., Molas, J., & Vinot, J.-L. (2010). Naviplan: un logiciel de préparation d’itinéraires
pour personnes déficientes visuelles. University Toulouse 3 Paul Sabatier and ENAC.
Brock, A., Oriola, B., Truillet, P., Jouffrais, C., & Picard, D. (2013). Map design for visually
impaired people: past, present, and future research. MEI, 36, 117–129.
Brock, A., Truillet, P., Oriola, B., & Jouffrais, C. (2010). Usage of multimodal maps for blind
people: why and how. In ITS’10: International Conference on Interactive Tabletops and
Surfaces (pp. 247–248). Saarbruecken, Germany: ACM Press.
Brock, A., Truillet, P., Oriola, B., Picard, D., & Jouffrais, C. (2012). Design and User
Satisfaction of Interactive Maps for Visually Impaired People. In K. Miesenberger, A.
Karshmer, P. Penaz, & W. Zagler (Eds.), ICCHP 2012. LNCS, vol. 7383. (pp. 544–551).
Linz, Austria: Springer.
Brock, A., Vinot, J.-L., Oriola, B., Kammoun, S., Truillet, P., & Jouffrais, C. (2010). Méthodes
et outils de conception participative avec des utilisateurs non-voyants. In IHM’10 -
Proceedings of the 22nd French-speaking conference on Human-computer interaction
(pp. 65–72). Luxembourg: ACM Press.
Brooke, J. (1996). SUS: A “quick and dirty” usability scale. In P. W. Jordan, B. Thomas, B.
A. Weerdmeester, & I. L. McClelland (Eds.), Usability evaluation in industry (pp. 189–
194). London, UK: Taylor & Francis.
Buisson, M., Bustico, A., Chatty, S., Colin, F.-R., Jestin, Y., Maury, S. S., … Truillet, P.
(2002). Ivy: un bus logiciel au service du développement de prototypes de systèmes
interactifs. In IHM’02 - Proceedings of the 14th French-speaking conference on Human-
computer interaction (Vol. 32, pp. 223–226). Poitiers, France: ACM.
Burch, D., & Pawluk, D. (2011). Using multiple contacts with texture-enhanced graphics.
In 2011 IEEE World Haptics Conference (WHC) (pp. 287–292). Istanbul, Turkey: IEEE.
Buxton, W. (1986). There’s more to interaction than meets the eye: some issues in manual
input. In D. A. Norman & S. W. Draper (Eds.), User centered system design: New
perspectives on human-computer interaction (pp. 319–337). Hillsdale, New Jersey:
Lawrence Erlbaum Associates.
Buxton, W. (2007). Multi-Touch Systems that I Have Known and Loved (pp. 1–10).
C2RP. (2005). Déficience Visuelle - Etudes et Résultats (pp. 1–14). Lille, France.
Caddeo, P., Fornara, F., Nenci, A. M., & Piroddi, A. (2006). Wayfinding tasks in visually
impaired people: the role of tactile maps. Cognitive Processing, 7, 168–169.
Campin, B., McCurdy, W., Brunet, L., & Siekierska, E. (2003). SVG Maps for People with
Visual Impairment. In SVG OPEN Conference. Vancouver, Canada.
Carroll, D., Chakraborty, S., & Lazar, J. (2013). Designing Accessible Visualizations: The
Case of Designing a Weather Map for Blind Users. In Universal Access in Human-
Computer Interaction. Design Methods, Tools, and Interaction Techniques for
eInclusion (pp. 436–445). Springer Berlin Heidelberg.
Casiez, G., Roussel, N., Vanbelleghem, R., & Giraud, F. (2011). Surfpad: riding towards
targets on a squeeze film effect. In Proceedings of CHI’11 Conference on Human
Factors in Computing Systems (pp. 2491–2500). Vancouver, British Columbia,
Canada: ACM Press.
Cattaneo, Z., & Vecchi, T. (2011). Blind vision: the neuroscience of visual impairment
(p. 269). MIT Press.
Cavender, A., Trewin, S., & Hanson, V. (2008). General writing guidelines for technology
and people with disabilities. ACM SIGACCESS Accessibility and Computing,
SIGACCESS, 17–22.
Chen, X., & Tremaine, M. (2005). Multimodal user input patterns in a non-visual context.
In Proceedings of the 7th international ACM SIGACCESS conference on Computers and
accessibility - Assets ’05 (pp. 206–207). New York, New York, USA: ACM Press.
Choi, S., & Kuchenbecker, K. J. (2013). Vibrotactile Display: Perception, Technology, and
Applications. Proceedings of the IEEE, 101, 2093–2104.
Coluccia, E., Iosue, G., & Brandimonte, M. A. (2007). The relationship between map
drawing and spatial orientation abilities: A study of gender differences. Journal of
Environmental Psychology, 27, 135–144.
Cornoldi, C., Fastame, M.-C., & Vecchi, T. (2003). Congenital blindness and spatial
mental imagery. In Y. Hatwell, A. Streri, & E. Gentaz (Eds.), Touching for Knowing:
cognitive psychology of haptic manual perception (pp. 173–187). Amsterdam /
Philadelphia: John Benjamins Publishing Company.
Côté-Giroux, P., Trudeau, N., Valiquette, C., Sutton, A., Chan, E., & Hébert, C. (2011).
Intelligibilité et appréciation de neuf synthèses vocales françaises. Canadian Journal
of speech-language pathology and audiology, 35, 300–311.
Coutaz, J., & Caelen, J. (1991). A Taxonomy for Multimedia and Multimodal User
Interfaces. In Proceedings of the ERCIM Workshop on User Interfaces and Multimedia.
Lisbon, Portugal.
Crossan, A., & Brewster, S. (2008). Multimodal Trajectory Playback for Teaching Shape
Information and Trajectories to Visually Impaired Computer Users. ACM Transactions
on Accessible Computing, 1, 1–34.
Dang, C. T., Straub, M., & André, E. (2009). Hand distinction for multi-touch tabletop
interaction. In Proceedings of the ACM International Conference on Interactive
Tabletops and Surfaces (pp. 101–108). Banff, Canada: ACM.
Daunys, G., & Lauruska, V. (2009). Sonification System of Maps for Blind – Alternative
View. In C. Stephanidis (Ed.), Universal Access in Human-Computer Interaction.
Intelligent and Ubiquitous Interaction Environments (Vol. 5615, pp. 503–508). Berlin,
Heidelberg: Springer Berlin Heidelberg.
De Felice, F., Renna, F., Attolico, G., & Distante, A. (2007). A haptic/acoustic application
to allow blind the access to spatial information. In Second Joint EuroHaptics
Conference and Symposium on Haptic Interfaces for Virtual Environment and
Teleoperator Systems (WHC’07) (pp. 310–315). IEEE.
Delogu, F., Palmiero, M., Federici, S., Plaisant, C., Zhao, H., & Belardinelli, O. (2010).
Non-visual exploration of geographic maps: Does sonification help? Disability &
Rehabilitation: Assistive Technology, 5, 164–174.
Dingler, T., Lindsay, J., & Walker, B. N. (2008). Learnability of Sound Cues for
Environmental Features: Auditory Icons, Earcons, Spearcons, and Speech. In
Proceedings of the 14th International Conference on Auditory Display (pp. 1–6).
Downs, R. M., & Stea, D. (1973). Cognitive maps and spatial behavior: Process and
products. In R. M. Downs & D. Stea (Eds.), Image and Environment (pp. 8–26). Aldine
Publishing Company.
Dragicevic, P. (2004, March 9). Un modèle d’interaction en entrée pour des systèmes
interactifs multi-dispositifs hautement configurables. Université de Nantes.
Dutoit, T., Pagel, V., Pierret, N., & Bataille, F. (1996). The MBROLA project: towards a set
of high quality speech synthesizers free of use for non commercial purposes. In
ICSLP’96 - Proceedings of the Fourth International Conference on Spoken Language
(pp. 1393 – 1396). IEEE.
Ebert, A., Weber, C., Cernea, D., & Petsch, S. (2013). TangibleRings: nestable circular
tangibles. In CHI EA’13 Extended Abstracts on Human Factors in Computing Systems
(pp. 1617–1622). Paris, France: ACM Press.
Edman, P. (1992). Tactile graphics (p. 550). New York, USA: AFB press.
El Saddik, A., Orozco, M., Eid, M., & Cha, J. (2011). Haptics Technologies - Bringing Touch
to Multimedia (p. 218). Berlin Heidelberg: Springer.
El-Glaly, Y., Quek, F., Smith-Jackson, T., & Dhillon, G. (2012). Audible rendering of text
documents controlled by multi-touch interaction. In Proceedings of the 14th ACM
international conference on Multimodal interaction - ICMI’12 (pp. 401–408). New York,
New York, USA: ACM Press.
Espinosa, M. A., Ungar, S., Ochaita, E., Blades, M., & Spencer, C. (1998). Comparing
methods for introducing blind and visually impaired people to unfamiliar urban
environments. Journal of Environmental Psychology, 18, 277–287.
Feng, J., & Sears, A. (2009). Speech Input to Support Universal Access. In C. Stephanidis
(Ed.), The Universal Access Handbook (CRC Press.). Boca Raton, Florida, USA: Taylor
& Francis.
Fletcher, J. F. (1981b). Spatial Representation in Blind Children. 2: Effects of Task
Variation. Journal of Visual Impairment & Blindness, 75, 1–3.
Foley, J. D., Van Dam, A., Feiner, S. K., & Hughes, J. F. (1996). Computer Graphics:
Principles and Practice (p. 1175). Addison-Wesley Professional.
Gapenne, O., Rovira, K., Ali Ammar, A., & Lenay, C. (2003). Tactos: a special computer
interface for the reading and writing of 2D forms in blind people. In C. Stephanidis
(Ed.), Universal Access in HCI, Inclusive Design in the Information Society (pp. 1270–
1274). Heraklion, Crete, Greece: CRC Press.
Gaunet, F., & Briffault, X. (2005). Exploring the Functional Specifications of a Localized
Wayfinding Verbal Aid for Blind Pedestrians: Simple and Structured Urban Areas.
Human-Computer Interaction, 20, 267–314.
Gaver, W. (1989). The SonicFinder: An Interface That Uses Auditory Icons. Human-
Computer Interaction, 4, 67–94.
Gentaz, É., Baud-Bovy, G., & Luyat, M. (2008). The haptic perception of spatial
orientations. Experimental brain research, 187, 331–348.
Giudice, N. A., Klatzky, R. L., Bennett, C. R., & Loomis, J. M. (2013). Combining Locations
from Working Memory and Long-Term Memory into a Common Spatial Image.
Spatial Cognition & Computation, 13, 103–128.
Giudice, N. A., Palani, H. P., Brenner, E., & Kramer, K. M. (2012). Learning non-visual
graphical information using a touch-based vibro-audio interface. In Proceedings of
the 14th international ACM SIGACCESS conference on Computers and accessibility -
ASSETS ’12 (pp. 103–110). New York, New York, USA: ACM Press.
Golledge, R. G., Rice, M., & Jacobson, R. D. (2005). A commentary on the use of touch for
accessing on-screen spatial representations: The process of experiencing haptic
maps and graphics. The Professional Geographer, 57, 339–349.
Hamid, N. N. A., & Edwards, A. D. N. (2013). Facilitating route learning using interactive
audio-tactile maps for blind and visually impaired people. In CHI EA ’13 Extended
Abstracts on Human Factors in Computing Systems (pp. 37–42). New York, New York,
USA: ACM Press.
Han, J. Y. (2005). Low-cost multi-touch sensing through frustrated total internal reflection.
In Proceedings of the 18th annual ACM symposium on User interface software and
technology - UIST ’05 (p. 115). New York, New York, USA: ACM Press.
Harada, S., Sato, D., Adams, D. W., Kurniawan, S., Takagi, H., & Asakawa, C. (2013).
Accessible photo album: enhancing the photo sharing experience for people with
visual impairment. In Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems - CHI ’13 (pp. 2127–2136). New York, New York, USA: ACM
Press.
Hart, S. G., & Staveland, L. E. (1988). Development of NASA-TLX (Task Load Index):
Results of Empirical and Theoretical Research. In P. A. Hancock & N. Meshkati (Eds.),
Human Mental Workload (Vol. 52, pp. 139–183). Elsevier.
Hassenzahl, M., Burmester, M., & Koller, F. (2003). AttrakDiff: Ein Fragebogen zur
Messung wahrgenommener hedonischer und pragmatischer Qualität. In G. Szwillus
& J. Ziegler (Eds.), Mensch & Computer (Vol. 57, pp. 187–196). Wiesbaden, Germany:
Vieweg+Teubner Verlag.
Hatwell, Y. (2003). Introduction: Touch and Cognition. In Y. Hatwell, A. Streri, & É. Gentaz
(Eds.), Touching for Knowing: Cognitive Psychology of Haptic Manual Perception (pp.
1–14). Amsterdam / Philadelphia: John Benjamins Publishing Company.
Hatwell, Y., & Martinez-Sarrochi, F. (2003). The tactile reading of maps and drawings, and
the access of blind people to works of art. In Y. Hatwell, A. Streri, & É. Gentaz (Eds.),
Touching for Knowing: Cognitive Psychology of Haptic Manual Perception (pp. 255–
273). Amsterdam / Philadelphia: John Benjamins Publishing Company.
Hegarty, M., Richardson, A. E., Montello, D. R., Lovelace, K., & Subbiah, I. (2002).
Development of a self-report measure of environmental spatial ability. Intelligence,
30, 425–447.
Held, R., Ostrovsky, Y., DeGelder, B., Gandhi, T., Ganesh, S., Mathur, U., & Sinha, P.
(2011). The newly sighted fail to match seen with felt. Nature Neuroscience, 14, 551–
553.
Heller, M. A. (1989). Picture and pattern perception in the sighted and the blind: the
advantage of the late blind. Perception, 18, 379–389.
Henry, S. L. (2007). Just Ask: Integrating Accessibility Throughout Design (pp. 1–177).
Breinigsville, PA, USA: Lulu.com.
Heuten, W., Henze, N., & Boll, S. (2007). Interactive exploration of city maps with auditory
torches. In CHI EA ’07 Extended Abstracts on Human Factors in Computing Systems
(pp. 1959–1964). ACM, New York, NY, USA.
Heuten, W., Henze, N., Boll, S., & Pielot, M. (2008). Tactile wayfinder: A non-visual
support system for wayfinding. In Proceedings of the 5th Nordic conference on
Human-computer interaction: building bridges (pp. 172–181). New York, New York,
USA: ACM Press.
Heuten, W., Wichmann, D., & Boll, S. (2006). Interactive 3D sonification for the exploration
of city maps. In Proceedings of the 4th Nordic conference on Human-computer
interaction: changing roles (pp. 155–164). New York, New York, USA: ACM.
Hinton, R. A. L. (1993). Tactile and audio-tactile images as vehicles for learning. In Non-
visual Human-computer Interactions: Prospects for the Visually Handicapped:
Proceedings of the INSERM-SETAA Conference (Vol. 228, pp. 169–179). Paris: John
Libbey Eurotext.
Iglesias, R., Casado, S., Gutierrez, T., Barbero, J. I., Avizzano, C. A., Marcheschi, S., &
Bergamasco, M. (2004). Computer graphics access for blind people through a haptic
and audio virtual environment. In Haptic, Audio and Visual Environments and Their
Applications, 2004. HAVE 2004 (pp. 13–18). IEEE Press.
Ishikawa, T., Fujiwara, H., Imai, O., & Okabe, A. (2008). Wayfinding with a GPS-
based mobile navigation system: A comparison with maps and direct experience.
Journal of Environmental Psychology, 28, 74–82.
ISO. (2002). ISO 16982 : 2002, Méthodes d’utilisabilité pour la conception centrée sur
l'opérateur humain. Technical specification International Organisation for
Standardisation. Switzerland: ISO - International Organization for Standardization.
Jansen, Y., Karrer, T., & Borchers, J. (2010). MudPad: Localized Tactile Feedback on Touch
Surfaces. In Adjunct proceedings of the 23nd annual ACM symposium on User interface
software and technology - UIST ’10 (pp. 385–386). New York, New York, USA: ACM
Press.
Jansson, G., Juhasz, I., & Cammilton, A. (2006). Reading virtual maps with a haptic mouse:
Effects of some modifications of the tactile and audio-tactile information. British
Journal of Visual Impairment, 24, 60–66.
Jetter, H.-C., Leifert, S., Gerken, J., Schubert, S., & Reiterer, H. (2012). Does (multi-)touch
aid users’ spatial memory and navigation in “panning” and in “zooming & panning”
UIs? In Proceedings of the International Working Conference on Advanced Visual
Interfaces - AVI ’12 (pp. 83–90). New York, New York, USA: ACM Press.
Kaklanis, N., Votis, K., Moschonas, P., & Tzovaras, D. (2011). HapticRiaMaps: Towards
Interactive exploration of web world maps for the visually impaired. In Proceedings
of the International Cross-Disciplinary Conference on Web Accessibility - W4A ’11 (p.
20). New York, New York, USA: ACM Press.
Kaklanis, N., Votis, K., & Tzovaras, D. (2013). Open Touch/Sound Maps: A system to
convey street data through haptic and auditory feedback. Computers & Geosciences,
57, 59–67.
Kaltenbrunner, M., Bovermann, T., Bencina, R., & Costanza, E. (2005). TUIO - A Protocol
for Table-Top Tangible User Interfaces. In Proc. of the The 6th Int’l Workshop on
Gesture in Human-Computer Interaction and Simulation. Vannes, France.
Kammer, D., Keck, M., Freitag, G., & Wacker, M. (2010). Taxonomy and Overview of
Multi-touch Frameworks: Architecture, Scope and Features. In Engineering Patterns
for Multi-Touch Interfaces 2010, Workshop of the ACM SIGCHI Symposium on
Engineering Interactive Computing Systems. Berlin, Germany: ACM.
Kammoun, S., Dramas, F., Oriola, B., & Jouffrais, C. (2010). Route Selection Algorithm for
Blind Pedestrian. In Proceedings of ICCAS’10 - International Conference on Control
Automation and Systems (pp. 2223–2228). KINTEX, Gyeonggi-do, Korea: IEEE.
Kammoun, S., Jouffrais, C., Guerreiro, T., Nicolau, H., & Jorge, J. (2012). Guiding Blind
People with Haptic Feedback. In Frontiers in Accessibility for Pervasive Computing
(Pervasive 2012). Newcastle, UK.
Kammoun, S., Macé, M. J.-M., Oriola, B., & Jouffrais, C. (2012). Designing a Virtual
Environment Framework for Improving Guidance for the Visually Impaired. In P.
Langdon, J. Clarkson, P. Robinson, J. Lazar, & A. Heylighen (Eds.), Designing
Inclusive Systems (pp. 217–226). London: Springer London.
Kane, S. K., Bigham, J. P., & Wobbrock, J. O. (2008). Slide Rule: making mobile touch
screens accessible to blind people using multi-touch interaction techniques. In
Proceedings of the 10th international ACM SIGACCESS conference on Computers and
accessibility - Assets ’08 (pp. 73–80). New York, New York, USA: ACM Press.
Kane, S. K., Frey, B., & Wobbrock, J. O. (2013). Access lens: a gesture-based screen
reader for real-world documents. In Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems - CHI ’13 (p. 347). New York, New York, USA: ACM
Press.
Kane, S. K., Morris, M. R., Perkins, A. Z., Wigdor, D., Ladner, R. E., & Wobbrock, J. O.
(2011). Access Overlays: Improving Non-Visual Access to Large Touch Screens for
Blind Users. In Proceedings of the 24th annual ACM symposium on User interface
software and technology - UIST ’11 (pp. 273–282). New York, New York, USA: ACM
Press.
Kane, S. K., Morris, M. R., & Wobbrock, J. O. (2013). Touchplates: Low-Cost Tactile
Overlays for Visually Impaired Touch Screen Users. In ASSETS’13 - SIGACCESS
International Conference on Computers and Accessibility. Bellevue, Washington, USA:
ACM.
Kane, S. K., Wobbrock, J. O., & Ladner, R. E. (2011). Usable gestures for blind people. In
Proceedings of the 2011 annual conference on Human factors in computing systems -
CHI ’11 (pp. 413–422). Vancouver, BC, Canada: ACM Press.
Katz, B. F. G., Dramas, F., Parseihian, G., Gutierrez, O., Kammoun, S., Brilhault, A., …
Jouffrais, C. (2012). NAVIG: Guidance system for the visually impaired using virtual
augmented reality. Technology and Disability, 24, 163–178.
Kim, S.-J., Smith-Jackson, T., Carroll, K., Suh, M., & Mi, N. (2009). Implications of
Participatory Design for a Wearable Near and Far Environment Awareness System
(NaFEAS) for Users with Severe Visual Impairments. In C. Stephanidis (Ed.), UAHCI
2009, LNCS Vol 5614 (Vol. 5614, pp. 86–95). Berlin, Heidelberg: Springer Berlin
Heidelberg.
Kirakowski, J., & Corbett, M. (1993). SUMI: the Software Usability Measurement Inventory.
British Journal of Educational Technology, 24, 210–212.
Kitchin, R. M., & Jacobson, R. D. (1997). Techniques to collect and analyze the cognitive
map knowledge of persons with visual impairment or blindness: Issues of validity.
Journal of Visual Impairment Blindness, 91, 360–376.
Klatzky, R. L., Loomis, J. M., Lederman, S. J., Wake, H., & Fujita, N. (1993).
Haptic identification of objects and their depictions. Attention, Perception, &
Psychophysics, 54, 170–178.
Klemmer, S. R., Sinha, A. K., Chen, J., Landay, J. A., Aboobaker, N., & Wang, A. (2000).
Suede: a Wizard of Oz prototyping tool for speech user interfaces. In UIST ’00
Proceedings of the 13th annual ACM symposium on User interface software and
technology (pp. 1–10). San Diego, California, United States: ACM.
Klippel, A., Hirtle, S., & Davies, C. (2010). You-Are-Here Maps: Creating Spatial
Awareness through Map-like Representations. Spatial Cognition & Computation, 10,
83–93.
Kostopoulos, K., Moustakas, K., Tzovaras, D., Nikolakis, G., Thillou, C., & Gosselin, B.
(2007). Haptic access to conventional 2D maps for the visually impaired. Journal on
Multimodal User Interfaces, 1, 13–19.
Krueger, M. W., & Gilden, D. (1997). KnowWhere™: an Audio/Spatial Interface for Blind
People. In Proceedings of the Fourth International Conference on Auditory Display
(ICAD ’97). Palo Alto, CA.
Kuhn, S., & Muller, M. J. (1993). Participatory design. Commun. ACM, 36, 24–28.
Kuipers, B. J., & Levitt, T. S. (1988). Navigation and Mapping in Large Scale Space. AI
Magazine, 9, 25.
Kupers, R., Chebat, D. R., Madsen, K. H., Paulson, O. B., & Ptito, M. (2010). Neural
correlates of virtual route recognition in congenital blindness. Proceedings of the
National Academy of Sciences of the United States of America, 107, 12716–12721.
Lahav, O., & Mioduser, D. (2008). Construction of cognitive maps of unknown spaces
using a multi-sensory virtual environment for people who are blind. Computers in
Human Behavior, 24, 1139–1155.
Largillier, G., Joguet, P., Recoquillon, C., Auriel, P., Balley, A., Meador, J., … Chastan, G.
(2010). Specifying and Characterizing Tactile Performances for Multi-Touch Panels:
Toward a User-Centric Metrology (pp. 1–5). Bordeaux, France.
Laufs, U., Ruff, C., & Weisbecker, A. (2010). MT4j - an open source platform for multi-
touch software development. VIMation Journal, 58–64.
Laufs, U., Ruff, C., & Zibuschka, J. (2010). MT4j - A Cross-platform Multi-touch
Development Framework. In ACM EICS 2010, Workshop: Engineering patterns for
multi-touch interfaces (pp. 52–57). Berlin, Germany: ACM.
Lawrence, M. M., Martinelli, N., & Nehmer, R. (2009). A Haptic Soundscape Map of the
University of Oregon. Journal of Maps, 5, 19–29.
Lazar, J., Chakraborty, S., Carroll, D., Weir, R., Sizemore, B., & Henderson, H. (2013).
Development and Evaluation of Two Prototypes for Providing Weather Map Data to
Blind Users Through Sonification. Journal of Usability Studies, 8, 93–110.
Lebaz, S., Jouffrais, C., & Picard, D. (2012). Haptic identification of raised-line drawings:
high visuospatial imagers outperform low visuospatial imagers. Psychological
research, 76, 667–675.
Lebaz, S., Picard, D., & Jouffrais, C. (2010). Haptic recognition of non-figurative tactile
pictures in the blind: does life-time proportion without visual experience matter? In
A. M. L. Kappers, J. B. F. van Erp, W. M. Bergmann Tiest, & F. C. T. van der Helm
(Eds.), Eurohaptics, LNCS, Vol 6192/2010 (pp. 412–417). Berlin Heidelberg New York:
Springer-Verlag.
Lederman, S. J., & Kinch, D. (1979). Texture in tactual maps and graphics for the visually
handicapped. Journal of Visual Impairment & Blindness, 73, 217–227.
Lederman, S. J., & Klatzky, R. L. (2009). Haptic perception: a tutorial. Attention, perception
& psychophysics, 71, 1439–59.
Ledo, D., Nacenta, M. A., Marquardt, N., Boring, S., & Greenberg, S. (2012). The
HapticTouch toolkit. In Proceedings of the Sixth International Conference on Tangible,
Embedded and Embodied Interaction - TEI ’12 (pp. 115–122). New York, New York,
USA: ACM Press.
Lévesque, V., & Hayward, V. (2008). Tactile Graphics Rendering Using Three
Laterotactile Drawing Primitives. In 2008 Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems (pp. 429–436). IEEE.
Lévesque, V., Oram, L., MacLean, K., Cockburn, A., Marchuk, N. D., Johnson, D., …
Peshkin, M. A. (2011). Enhancing physicality in touch interaction with programmable
friction. In Proceedings of the 2011 annual conference on Human factors in computing
systems - CHI ’11 (pp. 2481–2490). New York, New York, USA: ACM Press.
Lévesque, V., Petit, G., Dufresne, A., & Hayward, V. (2012). Adaptive level of detail in
dynamic, refreshable tactile graphics. IEEE Haptics Symposium (HAPTICS), 1–5.
Linn, M. C., & Petersen, A. C. (1985). Emergence and Characterization of Sex Differences
in Spatial Ability: A Meta-Analysis. Child Development, 56, 1479–1498.
Lloyd, R. (2000). Understanding and learning maps. In R. Kitchin & S. Freundschuh (Eds.),
Cognitive Mapping: Past Present and Future (Routledge., pp. 84–107). New York:
Taylor & Francis.
Lobben, A., & Lawrence, M. (2012). The Use of Environmental Features on Tactile Maps
by Navigators Who Are Blind. The Professional Geographer, 64, 95–108.
Lohmann, K. (2013, July 1). Verbal Assistance with Virtual Tactile Maps: a Multi-Modal
Interface for the Non-Visual Acquisition of Spatial Knowledge. Staats- und
Universitätsbibliothek Hamburg, Hamburg, Germany.
Lohmann, K., & Habel, C. (2012). Extended Verbal Assistance Facilitates Knowledge
Acquisition of Virtual Tactile Maps. In C. Stachniss, K. Schill, & D. Uttal (Eds.), Spatial
Cognition VIII, LNCS 7463 (Vol. 7463, pp. 299–318). Berlin, Heidelberg: Springer
Berlin Heidelberg.
LOI n° 2005-102 du 11 février 2005 pour l’égalité des droits et des chances, la
participation et la citoyenneté des personnes handicapées, JORF n°36 du 12 février
2005 (2005).
Loomis, J. M., Golledge, R. G., & Klatzky, R. L. (1998). Navigation System for the Blind:
Auditory Display Modes and Guidance. PRESENCE, 7, 193–203.
Loomis, J. M., Golledge, R. G., Klatzky, R. L., & Marston, J. R. (2007). Assisting wayfinding
in visually impaired travelers. In G. Allen (Ed.), Applied spatial Cognition From
Research to Cognitive Technology (pp. 179–202). Lawrence Erlbaum Associates, Inc.
Loomis, J. M., Klatzky, R. L., & Giudice, N. A. (2013). Representing 3D Space in Working
Memory: Spatial Images from Vision, Hearing, Touch, and Language. In S. Lacey & R.
Lawson (Eds.), Multisensory Imagery (pp. 131–155). New York, NY: Springer New
York.
Lynch, K. (1960). The Image of The City. Cambridge, UK: MIT Press.
Macé, M. J.-M., Dramas, F., & Jouffrais, C. (2012). Reaching to sound accuracy in the peri-
personal space of blind and sighted humans. In Klaus Miesenberger, A. Karshmer, P.
Penaz, & W. Zagler (Eds.), Computers Helping People with Special Needs: 13th
International Conference, ICCHP 2012 (pp. 636–643). Linz: Springer-Verlag.
Mackay, W. E., & Fayard, A. L. (1999). Video brainstorming and prototyping: techniques
for participatory design. In CHI ’99 Extended Abstracts on Human Factors in
Computing Systems (pp. 118–119). Pittsburgh, Pennsylvania, USA: ACM.
Magliano, J. P., Cohen, R., Allen, G. L., & Rodrigue, J. R. (1995). The impact of a
wayfinder’s goal on learning a new environment: different types of spatial
knowledge as goals. Journal of Environmental Psychology, 65–75.
Marquardt, N., Nacenta, M. A., Young, J. E., Carpendale, S., Greenberg, S., & Sharlin, E.
(2009). The Haptic Tabletop Puck: tactile feedback for interactive tabletops. In
Proceedings of the ACM International Conference on Interactive Tabletops and
Surfaces - ITS ’09 (pp. 85–92). New York, New York, USA: ACM Press.
Mazella, A., Picard, D., & Albaret, J.-M. (2013). The haptic tests battery: Purpose, content,
and preliminary results. In 13th European Congress of Psychology. Stockholm,
Sweden.
McGookin, D., Robertson, E., & Brewster, S. (2010). Clutching at Straws: Using Tangible
Interaction to Provide Non-Visual Access to Graphs. In Proceedings of the 28th
international conference on Human factors in computing systems - CHI ’10 (pp. 1715–
1724). New York, New York, USA: ACM Press.
Ménard, C. (2011). Détection et reconnaissance des doigts pour applications sur surfaces
multitouch. Université Toulouse 3 Paul Sabatier.
Merabet, L. B., Connors, E. C., Halko, M. A., & Sánchez, J. (2012). Teaching the blind to
find their way by playing video games. PloS one, 7, e44958.
Miao, M., Köhlmann, W., Schiewe, M., & Weber, G. (2009). Tactile Paper Prototyping with
Blind Subjects. In M. E. Altinsoy, U. Jekosch, & S. Brewster (Eds.), Haptic and Audio
Interaction Design (Lecture No., Vol. 5763/2009, pp. 81–90). Berlin / Heidelberg:
Springer.
Miao, M., & Weber, G. (2012). A quantitative approach for cognitive maps of blind
people. In C. Graf, N. Giudice, & F. Schmid (Eds.), SKALID (pp. 37–42). Kloster
Seeon, Germany.
Miele, J. A., Landau, S., & Gilden, D. (2006). Talking TMAP: Automated generation of
audio-tactile maps using Smith-Kettlewell’s TMAP software. British Journal of Visual
Impairment, 24, 93–100.
Milne, A. P., Antle, A. N., & Riecke, B. E. (2011). Tangible and body-based interaction
with auditory maps. In CHI EA ’11 Extended Abstracts on Human Factors in Computing
Systems (p. 2329). New York, New York, USA: ACM Press.
Minatani, K., Watanabe, T., Yamaguchi, T., Watanabe, K., Akiyama, J., Miyagi, M., &
Oouchi, S. (2010). Tactile Map Automated Creation System to Enhance the Mobility
of Blind Persons – Its Design Concept and Evaluation through Experiment. In K.
Miesenberger, J. Klaus, W. Zagler, & A. Karshmer (Eds.), ICCHP 2010, Part II. LNCS,
vol. 6180 (Vol. 6180/2010, pp. 534–540). Springer Berlin Heidelberg.
Moeller, J., Kerne, A., & Damaraju, S. (2011). ZeroTouch: a zero-thickness optical multi-
touch force field. In CHI EA ’11 Extended Abstracts on Human Factors in Computing
Systems (pp. 1165–1170). New York, New York, USA: ACM Press.
Montello, D. R. (2010). You Are Where? The Function and Frustration of You-Are-Here
(YAH) Maps. Spatial Cognition & Computation, 10, 94–104.
Morris, M. R., Wobbrock, J. O., & Wilson, A. D. (2010). Understanding users’ preferences
for surface gestures. In GI ’10 Proceedings of Graphics Interface 2010 (pp. 261–268).
Toronto, Ont., Canada: ACM Press.
Munkres, A., Gardner, C., Mundra, D., Mccraw, J., & Chang, T. (2009). Using a Tablet to
Understand the Tactile Map Interaction Techniques of the Visually-impaired. Analysis.
Münzer, S., Zimmer, H. D., Schwalm, M., Baus, J., & Aslan, I. (2006). Computer-assisted
navigation and the acquisition of route and survey knowledge. Journal of
Environmental Psychology, 26, 300–308.
Nacenta, M. A., Kamber, Y., Qiang, Y., & Kristensson, P. O. (2013). Memorability of pre-
designed and user-defined gesture sets. In CHI ’13 Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems (pp. 1099–1108). Paris, France:
ACM Press.
Nakamura, H., & Miyashita, H. (2012). Development and evaluation of interactive system
for synchronizing electric taste and visual content. In CHI ’12 Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems (pp. 517–520). Austin,
Texas: ACM Press.
Nasir, T., & Roberts, J. C. (2007). Sonification of Spatial Data. In The 13th International
Conference on Auditory Display (ICAD 2007) (pp. 112–119). Montréal, Canada: ICAD.
National Federation of the Blind. (2009). The Braille Literacy Crisis in America: Facing the
Truth, Reversing the Trend, Empowering the Blind.
Neis, P., Zielstra, D., & Zipf, A. (2011). The Street Network Evolution of Crowdsourced
Maps: OpenStreetMap in Germany 2007–2011. Future Internet, 4, 1–21.
New, B., Pallier, C., Ferrand, L., & Matos, R. (2001). Une base de données lexicales du
français contemporain sur internet : LEXIQUE. L’année psychologique, 101, 447–462.
Newell, A. F., & Gregor, P. (2000). User sensitive inclusive design; in search of a new
paradigm. In CUU ’00 Proceedings on the 2000 conference on Universal Usability (pp.
39–44). Arlington, Virginia, United States: ACM.
Nigay, L., & Coutaz, J. (1997). Multifeature Systems: The CARE Properties and Their
Impact on Software Design. In Intelligence and Multimodality in Multimedia Interfaces:
Research and Applications.
Oliveira, J., Guerreiro, T., Nicolau, H., Jorge, J., & Gonçalves, D. (2011). Blind people and
mobile touch-based text-entry: acknowledging the need for different flavors. In The
proceedings of the 13th international ACM SIGACCESS conference on Computers and
accessibility - ASSETS ’11 (pp. 179–186). Dundee, Scotland: ACM Press.
Paladugu, D. A., Wang, Z., & Li, B. (2010). On presenting audio-tactile maps to visually
impaired users for getting directions. In CHI EA ’10 (pp. 3955–3960). Atlanta,
Georgia, United States: ACM Press.
Parente, P., & Bishop, G. (2003). BATS : The Blind Audio Tactile Mapping System. In
Proceedings of ACM South Eastern Conference. Savannah, GA, USA: ACM Press.
Parkes, D. (1988). “NOMAD”: An audio-tactile tool for the acquisition, use and
management of spatially distributed information by partially sighted and blind
persons. In A. Tatham & A. Dodds (Eds.), Proceedings of Second International
Conference on Maps and Graphics for Visually Disabled People (pp. 24–29).
Nottingham, United Kingdom.
Passini, R., & Proulx, G. (1988). Wayfinding without vision: An experiment with
congenitally, totally blind people. Environment And Behavior, 20, 227–252.
Pavlovic, V. I., Sharma, R., & Huang, T. S. (1997). Visual interpretation of hand gestures for
human-computer interaction: a review. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 19, 677–695.
Petit, G., Dufresne, A., Levesque, V., Hayward, V., & Trudeau, N. (2008). Refreshable
tactile graphics applied to schoolbook illustrations for students with visual
impairment. In Assets ’08 Proceedings of the 10th international ACM SIGACCESS
conference on Computers and accessibility (pp. 89–96). New York, New York, USA:
ACM Press.
Petrie, H., & Bevan, N. (2009). The evaluation of accessibility, usability and user
experience. In C. Stephanidis (Ed.), The Universal Access Handbook (pp. 299–315).
Boca Raton, Florida, USA: CRC Press.
Picard, D., Albaret, J., & Mazella, A. (2013). Haptic Identification of Raised-line Drawings
By Children, Adolescents and Young Adults: an Age-related Skill. Haptics-e, 5, 24–
28.
Picard, D., Dacremont, C., Valentin, D., & Giboreau, A. (2003). Perceptual dimensions of
tactile textures. Acta Psychologica, 114, 165–184.
Picard, D., & Lebaz, S. (2012). Identifying Raised-Line Drawings by Touch: A Hard but Not
Impossible Task. Journal of Visual Impairment & Blindness, 106, 427–431.
Picard, D., Lebaz, S., Jouffrais, C., & Monnier, C. (2010). Haptic recognition of two-
dimensional raised-line patterns by early-blind, late-blind, and blindfolded sighted
adults. Perception, 39, 224–235.
Picard, D., & Monnier, C. (2009). Short-term memory for spatial configurations in the
tactile modality: A comparison with vision. Memory, 17, 789–801.
Picard, D., & Pry, R. (2009). Does Knowledge of Spatial Configuration in Adults with Visual
Impairments Improve with Tactile Exposure to Small-Scale Model of their Urban
Environment? Journal of Visual Impairment and Blindness, 103, 199–209.
Pielot, M., & Boll, S. (2010). Tactile Wayfinder: Comparison of Tactile Waypoint
Navigation with Commercial Pedestrian Navigation Systems. In P. Floréen, A.
Krüger, & M. Spasojevic (Eds.), Proceedings of the 8th International Conference,
Pervasive 2010, LNCS Vol 6030 (Vol. 6030, pp. 76–93). Helsinki, Finland: Springer
Berlin Heidelberg.
Pielot, M., Henze, N., Heuten, W., & Boll, S. (2007). Tangible User Interface for the
Exploration of Auditory City Map. In I. Oakley & S. Brewster (Eds.), Haptic and Audio
Interaction Design, LNCS 4813 (LNCS., Vol. 4813, pp. 86–97). Berlin, Heidelberg:
Springer Berlin Heidelberg.
Pielot, M., Poppinga, B., Heuten, W., & Boll, S. (2011). A tactile compass for eyes-free
pedestrian navigation. In P. Campos, N. Graham, J. Jorge, N. Nunes, P. Palanque, &
M. Winckler (Eds.), Human-Computer Interaction - Interact 2011, LNCS 6947 (pp. 640–
656). Lisbon, Portugal: Springer Berlin Heidelberg.
Pietrzak, T., Crossan, A., Brewster, S. A., Martin, B., & Pecci, I. (2009). Creating usable pin
array tactons for non-visual information. IEEE Transactions on Haptics, 2, 61–72.
Pietrzak, T., Martin, B., & Pecci, I. (2005). Affichage d’informations par des impulsions
haptiques. In 17ème Conférence Francophone sur l’Interaction Homme-Machine - IHM
2005 (pp. 223–226). New York, New York, USA: ACM Press.
Pölzer, S., Schnelle-Walka, D., Pöll, D., Heumader, P., & Miesenberger, K. (2013). Making
Brainstorming Meetings Accessible for Blind Users. In AAATE Conference.
Vilamoura, Portugal: IOS Press.
Poppinga, B., Magnusson, C., Pielot, M., & Rassmus-Gröhn, K. (2011). TouchOver map:
Audio-Tactile Exploration of Interactive Maps. In Proceedings of the 13th International
Conference on Human Computer Interaction with Mobile Devices and Services -
MobileHCI ’11 (pp. 545–550). New York, New York, USA: ACM Press.
Postma, A., Zuidhoek, S., Noordzij, M. L., & Kappers, A. M. L. (2007). Differences between
early-blind, late-blind, and blindfolded-sighted people in haptic spatial-
configuration learning and resulting memory traces. Perception, 36, 1253–1265.
Ramanahally, P., Gilbert, S., Niedzielski, T., Velazquez, D., & Anagnost, C. (2009). Sparsh
UI: A Multi-Touch Framework for Collaboration and Modular Gesture Recognition. In
Proceedings of the World Conference on Innovative VR (Vol. 2009, pp. 137–142).
Chalon-sur-Saone, France: American Society of Mechanical Engineers.
Ramloll, R., Yu, W., Brewster, S., Riedel, B., Burton, M., & Dimigen, G. (2000).
Constructing sonified haptic line graphs for the blind student: first steps. In
Proceedings of the fourth international ACM conference on Assistive technologies -
Assets ’00 (pp. 17–25). New York, New York, USA: ACM Press.
Rettig, M. (1994). Prototyping for tiny fingers. Communications of the ACM, 37, 21–27.
Rice, M. T., Jacobson, R. D., Caldwell, D. R., McDermott, S. D., Paez, F. I., Aburizaiza, A.
O., … Qin, H. (2013). Crowdsourcing techniques for augmenting traditional
accessibility maps with transitory obstacle information. Cartography and Geographic
Information Science, 1–10.
Rice, M. T., Jacobson, R. D., Golledge, R. G., & Jones, D. (2005). Design considerations for
haptic and auditory map interfaces. Cartography and Geographic Information
Science, 32, 381–391.
Roberts, J. C., & Paneels, S. (2007). Where are we with Haptic Visualization? In Second
Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual
Environment and Teleoperator Systems (WHC’07) (pp. 316–323). IEEE.
Roentgen, U. R., Gelderblom, G. J., Soede, M., & de Witte, L. P. (2008). Inventory of
Electronic Mobility Aids for Persons With Visual Impairments: A Literature Review.
Journal of Visual Impairment and Blindness, 102.
Roger, M., Bonnardel, N., & Le Bigot, L. (2009). Improving navigation messages for
mobile urban guides: Effects of the guide’s interlocutor model, spatial abilities and
use of landmarks on route description. International Journal of Industrial Ergonomics,
39, 509–515.
Roger, M., Bonnardel, N., & Le Bigot, L. (2011). Landmarks’ use in speech map navigation
tasks. Journal of Environmental Psychology, 31, 192–199.
Sánchez, J., Saenz, M., & Garrido, J. M. (2010). Usability of a Multimodal Video Game to
Improve Navigation Skills for Blind Children. ACM Transactions on Accessible
Computing, 3, 1–29.
Sang, J., Mei, T., Xu, Y.-Q., Zhao, C., Xu, C., & Li, S. (2013). Interaction Design for Mobile
Visual Search. IEEE Transactions on Multimedia, PP, 1–1.
Sato, M., Poupyrev, I., & Harrison, C. (2012). Touché: Enhancing Touch Interaction on
Humans, Screens, Liquids, and Everyday Objects. In CHI ’12 Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems (pp. 483–492). ACM.
Schmidt, M., & Weber, G. (2009). Multitouch Haptic Interaction. In C. Stephanidis (Ed.),
Universal Access in Human-Computer Interaction. Intelligent and Ubiquitous Interaction
Environments Lecture Notes in Computer Science Volume 5615 (pp. 574–582). San
Diego, CA, USA: Springer Berlin / Heidelberg.
Schmitz, B., & Ertl, T. (2010). Making Digital Maps accessible using vibrations. In K.
Miesenberger, J. Klaus, W. Zagler, & A. Karshmer (Eds.), ICCHP 2010, Part I. LNCS,
Vol 6179 (pp. 100–107). Heidelberg: Springer.
Schmitz, B., & Ertl, T. (2012). Interactively Displaying Maps on a Tactile Graphics Display.
In SKALID 2012–Spatial Knowledge Acquisition with Limited Information Displays
(2012) (pp. 13–18).
Schneider, J., & Strothotte, T. (1999). Virtual Tactile Maps. In H.-J. Bullinger & J. Ziegler
(Eds.), Proceedings of HCI International (pp. 531–535). Munich, Germany: L. Erlbaum
Associates Inc.
Schöning, J. (2010a). Touch the future: The recent rise of multi-touch interaction. (A. Butz,
B. Fisher, M. Christie, A. Krüger, P. Olivier, & R. Therón, Eds.) PerAda Magazine:
European Commission’s Future and Emerging Technologies Proactive Initiative on
Pervasive Adaptation, 5531, 1–2.
Schöning, J. (2010b, June 2). Advanced user interfaces for spatial information. Saarland
University.
Schöning, J., Brandl, P., Daiber, F., Echtler, F., Hilliges, O., Hook, J., … Von Zadow, U.
(2008). Multi-Touch Surfaces: A Technical Guide (Technical Report TUMI0833 ).
Munich, Germany.
Sefelin, R., Tscheligi, M., & Giller, V. (2003). Paper prototyping - what is it good for?: a
comparison of paper- and computer-based low-fidelity prototyping. In CHI ’03
Extended Abstracts on Human Factors in Computing Systems (pp. 778–779). Ft.
Lauderdale, Florida, USA: ACM.
Seisenbacher, G., Mayer, P., Panek, P., & Zagler, W. L. (2005). 3D-Finger – System for
Auditory Support of Haptic Exploration in the Education of Blind and Visually
Impaired Students – Idea and Feasibility Study. In 8th European conference for the
advancement of assistive technology in europe - AAATE (pp. 73–77). Lille, France: IOS
Press.
Senette, C., Buzzi, M. C., Buzzi, M., Leporini, B., & Martusciello, L. (2013). Enriching
Graphic Maps to Enable Multimodal Interaction by Blind People. In C. Stephanidis &
M. Antona (Eds.), UAHCI 2013 - Universal Access in Human-Computer Interaction.
Design Methods, Tools, and Interaction Techniques for eInclusion (Vol. 8009, pp. 576–
583). Berlin, Heidelberg: Springer Berlin Heidelberg.
Serpa, A., Oriola, B., & Hermet, A. (2005). VoCal: Un agenda vocal sur PDA pour non-
voyants. In Proceedings of the 17th conference on 17ème Conférence Francophone sur
l’Interaction Homme-Machine - IHM 2005 (pp. 337–338). New York, New York, USA:
ACM Press.
Serrano, M., & Nigay, L. (2009). OpenWizard: une approche pour la création et
l’évaluation rapide de prototypes multimodaux. In IHM ’09 Proceedings of the 21st
International Conference on Association Francophone d’Interaction Homme-Machine
(pp. 101–109). Grenoble, France: ACM.
Shimada, S., Murase, H., Yamamoto, S., Uchida, Y., Shimojo, M., & Shimizu, Y. (2010).
Development of Directly Manipulable Tactile Graphic System with Audio Support
Function. In K. Miesenberger, J. Klaus, W. Zagler, & A. Karshmer (Eds.), ICCHP 2010,
Part II. LNCS, vol. 6180 (pp. 451–458). Vienna, Austria: Springer.
Siegel, A. W., & White, S. (1975). The Development of Spatial Representations of Large-
Scale Environments. Advances in Child Development and Behavior, 10, 9–55.
Simonnet, M., Bothorel, C., Maximiano, L. F., & Thepaut, A. (2012). GeoTablet, une
application cartographique pour les personnes déficientes visuelles. In Handicap
2012 (Vol. 7, pp. 8–13).
Simonnet, M., Jacobson, D., Vieilledent, S., & Tisseau, J. (2009). SeaTouch: a haptic and
auditory maritime environment for non visual cognitive mapping of blind sailors. In
K. S. Hornsby (Ed.), COSIT 2009, LNCS 5756 (pp. 212–226). Aber Wrac’h, France:
Springer-Verlag.
Simonnet, M., Jonin, H., Brock, A., & Jouffrais, C. (2013). Conception et évaluation de
techniques d’exploration interactives non visuelles pour les cartes géographiques.
In SAGEO - atelier Cartographie et Cognition. Brest, France.
Simonnet, M., & Vieilledent, S. (2012). Accuracy and Coordination of Spatial Frames of
Reference during the Exploration of Virtual Maps: Interest for Orientation and
Mobility of Blind People? Advances in Human-Computer Interaction, 19.
Skelton, R. W., Bukach, C. M., Laurance, H. E., Thomas, K. G., & Jacobs, J. W. (2000).
Humans with traumatic brain injuries show place-learning deficits in computer-
generated virtual space. Journal of clinical and experimental neuropsychology, 22,
157–75.
Snyder, C. (2003). Paper prototyping: the fast and easy way to design and refine user
interfaces. San Francisco, California, United States: Morgan Kaufmann.
Stent, A., Syrdal, A., & Mishra, T. (2011). On the intelligibility of fast synthesized speech
for individuals with early-onset blindness. In The proceedings of the 13th
international ACM SIGACCESS conference on Computers and accessibility -
ASSETS ’11 (pp. 211–218). Dundee, Scotland: ACM Press.
Stephanidis, C. (2009). Universal Access and Design for All in the Evolving Information
Society. In C. Stephanidis (Ed.), The Universal Access Handbook (CRC Press.). Boca
Raton, Florida, USA: Taylor & Francis.
Strothotte, T., Fritz, S., Michel, R., Raab, A., Petrie, H., Johnson, V., … Schalt, A. (1996).
Development of dialogue systems for a mobility aid for blind people: initial design
and usability testing. In ASSETS ’96 (pp. 139–144). Vancouver, British Columbia,
Canada: ACM.
Su, J., Rosenzweig, A., Goel, A., de Lara, E., & Truong, K. N. (2010). Timbremap:
Enabling the Visually-Impaired to Use Maps on Touch-Enabled Devices. In
Proceedings of the 12th international conference on Human computer interaction with
mobile devices and services - MobileHCI ’10 (pp. 17–26). New York, New York, USA:
ACM Press.
Symmons, M., & Richardson, B. (2000). Raised line drawings are spontaneously explored
with a single finger. Perception, 29, 621–626.
Taking Touch Screen Interfaces Into A New Dimension. (2012). Tactus Technology.
Tan, D. S., Pausch, R., Stefanucci, J. K., & Proffitt, D. R. (2002). Kinesthetic cues aid spatial
memory. In CHI 02 extended abstracts on Human factors in computer systems CHI 02
(pp. 806–807). Minneapolis, Minnesota, USA: ACM Press.
Tatham, A. F. (1991). The design of tactile maps: theoretical and practical considerations.
In M. Rybaczak & K. Blakemore (Eds.), Proceedings of international cartographic
association: mapping the nations (pp. 157–166). London, UK: ICA.
Thinus-Blanc, C., & Gaunet, F. (1997). Representation of space in blind persons: vision as
a spatial sense? Psychological Bulletin, 121, 20–42.
Tixier, M., Lenay, C., Le Bihan, G., Gapenne, O., & Aubert, D. (2013). Designing
interactive content with blind users for a perceptual supplementation system. In
Proceedings of the 7th International Conference on Tangible, Embedded and
Embodied Interaction - TEI ’13 (p. 229). New York, New York, USA: ACM Press.
Tobin, M. J., Greaney, J., & Hill, E. (2003). Braille: issues on structure, teaching and
assessment. In Y. Hatwell, A. Streri, & É. Gentaz (Eds.), Touching for knowing:
cognitive psychology of haptic manual perception (pp. 237–254). Amsterdam-
Philadelphia: John Benjamins Publishing Company.
Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55, 189–208.
Tornil, B., & Baptiste-Jessel, N. (2004). Use of Force Feedback Pointing Devices for Blind
Users. In C. Stary & C. Stephanidis (Eds.), 8th ERCIM Workshop on User Interfaces for
All, User-Centered Interaction Paradigms for Universal Access in the Information
Society, LNCS Vol 3196 (Vol. 3196, pp. 479–485). Vienna, Austria: Springer Berlin
Heidelberg.
Ullmer, B., & Ishii, H. (2000). Emerging frameworks for tangible user interfaces. IBM
Systems Journal, 39, 915–931.
Ungar, S., & Blades, M. (1997). Teaching visually impaired children to make distance
judgement from a tactile map. Journal of Visual Impairment & Blindness, 91.
Ungar, S., Blades, M., Spencer, C., & Morsley, K. (1994). Can the visually impaired
children use tactile maps to estimate directions? Journal of Visual Impairment &
Blindness, 88, 221–233.
Vinter, A., Fernandes, V., Orlandi, O., & Morgan, P. (2012). Exploratory procedures of
tactile images in visually impaired and blindfolded sighted children: How they relate
to their consequent performance in drawing. Research in developmental disabilities,
33, 1819–1831.
Walker, B. N., Nance, A., & Lindsay, J. (2006). Spearcons: Speech-based earcons improve
navigation performance in auditory menus. In Proceedings of the International
Conference on Auditory Display (pp. 63–68). London, UK.
Wang, Z., Li, B., Hedgpeth, T., & Haven, T. (2009). Instant Tactile-Audio Map: Enabling
Access to Digital Maps for People with Visual Impairment. In Assets ’09 Proceedings
of the 11th international ACM SIGACCESS conference on Computers and accessibility
(pp. 43–50). Pittsburgh, Pennsylvania, USA: ACM Press.
Wang, Z., Li, N., & Li, B. (2012). Fast and independent access to map directions for people
who are blind. Interacting with Computers, 24, 91–106.
Wechsung, I., & Naumann, A. B. (2008). Evaluation Methods for Multimodal Systems: A
Comparison of Standardized Usability Questionnaires. In E. André, L. Dybkjær, W.
Minker, H. Neumann, R. Pieraccini, & M. Weber (Eds.), Perception in Multimodal
Dialogue Systems, LNCS Vol. 5078 (Vol. 5078, pp. 276–284). Berlin, Heidelberg:
Springer Berlin Heidelberg.
Weir, R., Sizemore, B., Henderson, H., Chakraborty, S., & Lazar, J. (2012). Development
and Evaluation of Sonified Weather Maps for Blind Users. In S. Keates, P. J. Clarkson,
P. Langdon, & P. Robinson (Eds.), Proceedings of CWUAAT (pp. 75–84). Cambridge,
UK: Springer.
Weiss, M., Wacharamanotham, C., Voelker, S., & Borchers, J. (2011). FingerFlux. In
Proceedings of the 24th annual ACM symposium on User interface software and
technology - UIST ’11 (pp. 615–620). New York, New York, USA: ACM Press.
WHO. (2012). Visual Impairment and blindness Fact Sheet N° 282. World Health
Organization.
Wijntjes, M. W. A., van Lienen, T., Verstijnen, I. M., & Kappers, A. M. L. (2008a). Look
what I have felt: unidentified haptic line drawings are identified after sketching. Acta
psychologica, 128, 255–263.
Wijntjes, M. W. A., van Lienen, T., Verstijnen, I. M., & Kappers, A. M. L. (2008b). The
influence of picture size on recognition and exploratory behaviour in raised-line
drawings. Perception, 37, 602–614.
Wolf, K., Dicke, C., & Grasset, R. (2011). Touching the void: gestures for auditory
interfaces. In Proceedings of the fifth international conference on Tangible, embedded,
and embodied interaction - TEI ’11 (pp. 305–308). New York, New York, USA: ACM
Press.
Wright, T., Harris, B., & Sticken, E. (2010). A Best-Evidence Synthesis of Research on
Orientation and Mobility Involving Tactile Maps and Models. Journal of Visual
Impairment & Blindness, 104, 95–106.
Xu, C., Israr, A., Poupyrev, I., Bau, O., & Harrison, C. (2011). Tactile display for the
visually impaired using TeslaTouch. In CHI EA ’11 Extended Abstracts on Human
Factors in Computing Systems (pp. 317–322). New York, NY, USA: ACM Press.
Xu, S., & Bailey, K. (2013). Design Touch Feedback for Blind Users. In C. Stephanidis
(Ed.), HCI International 2013 - Posters’ Extended Abstracts Communications in
Computer and Information Science Volume 373 (Vol. 373, pp. 281–285). Berlin,
Heidelberg: Springer Berlin Heidelberg.
Yairi, I. E., Takano, M., Shino, M., & Kamata, M. (2008). Expression of paths and buildings
for universal designed interactive map with due consideration for visually impaired
people. In 2008 IEEE International Conference on Systems, Man and Cybernetics (pp.
524–529). IEEE.
Yatani, K., Banovic, N., & Truong, K. (2012). SpaceSense: representing geographical
information to visually impaired people using spatial tactile feedback. In
Proceedings of the 2012 ACM annual conference on Human Factors in Computing
Systems - CHI ’12 (pp. 415–424). New York, New York, USA: ACM Press.
Yi, B., Cao, X., Fjeld, M., & Zhao, S. (2012). Exploring user motivations for eyes-free
interaction on mobile devices. In Proceedings of the 2012 ACM annual conference on
Human Factors in Computing Systems CHI 12 (pp. 2789–2792). ACM Press.
Zeng, L., & Weber, G. (2010). Audio-Haptic Browser for a Geographical Information
System. In K. Miesenberger, J. Klaus, W. Zagler, & A. Karshmer (Eds.), ICCHP 2010.
LNCS, vol. 6180 (Vol. 6180/2010, pp. 466–473). Heidelberg: Springer.
Zeng, L., & Weber, G. (2011). Accessible Maps for the Visually Impaired. In Proc. of IFIP
INTERACT 2011 Workshop on ADDW (pp. 54–60). Lisbon, Portugal.
Zeng, L., & Weber, G. (2012). ATMap: Annotated Tactile Maps for the Visually Impaired.
In A. Esposito, A. M. Esposito, A. Vinciarelli, R. Hoffmann, & V. C. Müller (Eds.),
COST 2102 International Training School, Cognitive Behavioural Systems, LNCS Volume
7403, 2012 (pp. 290–298). Berlin, Heidelberg: Springer Berlin Heidelberg.
Zhao, H., Plaisant, C., Shneiderman, B., & Lazar, J. (2008). Data Sonification for Users with
Visual Impairment. ACM Transactions on Computer-Human Interaction, 15, 1–28.
Zimmermann, G., & Vanderheiden, G. (2007). Accessible design and testing in the
application development process: considerations for an integrated approach.
Universal Access in the Information Society, 7, 117–128.
Extended Table of Contents
ACKNOWLEDGMENTS ......................................................................................................................... 5
TABLE OF CONTENTS......................................................................................................................... 10
I INTRODUCTION .............................................................................................................................. 14
III DESIGNING INTERACTIVE MAPS WITH AND FOR VISUALLY IMPAIRED PEOPLE ........................... 108
III.1.1 Usability and Accessibility .................................................................................................. 108
III.1.2 Participatory Design and Accessibility ................................................................................ 109
III.2 ADAPTATIONS AND RESULTS OF PARTICIPATORY DESIGN WITH VISUALLY IMPAIRED USERS ............................ 113
III.2.1 General ............................................................................................................................... 113
III.2.2 Participants ........................................................................................................................ 114
III.2.3 Analyzing the Context of Use.............................................................................................. 118
III.2.4 Generating Ideas ................................................................................................................ 133
III.2.5 Prototyping ......................................................................................................................... 137
III.2.6 Evaluating Usability and Accessibility ................................................................................ 152
III.3 CONCLUSION ................................................................................................................................. 157
IV USABILITY OF INTERACTIVE MAPS AND THEIR IMPACT ON SPATIAL COGNITION ........................ 160
V.1 OBSERVING VISUALLY IMPAIRED USERS’ HAPTIC EXPLORATION STRATEGIES ................................................. 200
V.1.1 Strategies for Exploring Tactile Images without Vision ....................................................... 200
V.1.2 Kintouch: Observing Haptic Exploration Strategies ............................................................ 202
V.1.3 Conclusion and Perspectives ............................................................................................... 214
V.2 ENHANCING MAPS THROUGH NON-VISUAL INTERACTION ........................................................................ 216
V.2.1 Providing Functionality through Basic Gestural Interaction ................................ 216
V.2.2 Non-visual Interaction for Route Learning .......................................................................... 224
V.3 CONCLUSION .................................................................................................................................. 239
VI.1.2 Thesis Statement ................................................................................................................ 247
VI.1.3 Contributions ...................................................................................................................... 248
VI.2 FUTURE WORK ............................................................................................................................... 249
VI.2.1 Non-Visual Interaction for Visually Impaired and Sighted People ..................................... 249
VI.2.2 Interaction Design for Non-Visual Map Exploration .......................................................... 250
VI.2.3 Overcoming Limitations of Current Interactive Maps ........................................................ 253
VI.2.4 Usability of Interactive Maps ............................................................................................. 257
VI.2.5 Employment of Interactive Maps in “Real Life” ................................................................. 258
VII.5.1 Terminology ...................................................................................................................... 280
VII.5.2 Origin of the Projects......................................................................................................... 281
VII.5.3 Timeline ............................................................................................................................. 282
VII.5.4 Map Content and Scale ..................................................................................................... 283
VII.5.5 Non-Research Projects ...................................................................................................... 285
VII.6 INTERACTIVE MAP PROTOTYPE ......................................................................................................... 287
VII.6.1 Ivy Communication Protocol ............................................................................................. 287
VII.6.2 Lexical Map Content.......................................................................................................... 289
VII.6.3 Recommendations for Developing Interactive Maps ........................................................ 291
VII.7 PARTICIPATORY DESIGN WITH VISUALLY IMPAIRED PEOPLE .................................................................... 293
VII.8 EXPERIMENTAL STUDY (CHAPTER IV)................................................................................................. 295
VII.8.1 Participants ....................................................................................................................... 295
VII.8.2 User Study Questionnaire .................................................................................................. 296
VII.8.3 Santa Barbara Sense of Direction Scale ............................................................................ 303
VII.8.4 SUS questionnaire translated into French ......................................................................... 306
VII.8.5 Spatial tests ....................................................................................................................... 308
VII.9 ADVANCED NON-VISUAL INTERACTION (CHAPTER V) ........................................................................... 319
VII.9.1 SUS Questionnaire for Wizard of Oz .................................................................................. 319
VII.10 DANCE YOUR PHD ....................................................................................................................... 320
REFERENCES.................................................................................................................................... 323
Figures
Figure I.1: Methodology of this thesis. The development of interactive map
prototypes has been based on a participatory design cycle adapted for visually impaired
users. Several micro and macro iterations have been conducted. ...................................17
Figure II.1: Challenges for a visually impaired traveler in daily life. Displays of
grocery stores and road works are blocking the way. Reprinted from (Brock, 2010a). ....24
Figure II.2: Diagram of methods for evaluating route knowledge of visually
impaired people as proposed by Kitchin and Jacobson (1997). .......................................41
Figure II.3: Diagram of methods for evaluating survey knowledge of visually
impaired people as proposed by Kitchin and Jacobson (1997). .......................................42
Figure II.4: Small-scale model for visually impaired people as presented by
(Picard & Pry, 2009). Reprinted with permission. ............................................................46
Figure II.5: A visually impaired person reading a tactile map. ..............................50
Figure II.6: Different production methods for tactile maps. (a) Vacuum-formed map
of Europe. (b) Swell-paper map of a bus terminal ...........................................................55
Figure II.7: Production of swell-paper maps. A swell-paper map is passed through
the heater. .......................................................................................................................56
Figure II.8: Reading a braille legend that accompanies a tactile map. ..................59
Figure II.9: Overview of modalities employed in the interactive map prototypes for
visually impaired people.................................................................................................63
Figure II.10: Number of publications on interactive maps for visually impaired
people classified by devices. Categories are presented on the left grouped by the main
categories Haptic, Tactile actuators, Touch and Other. The x-axis represents the number
of publications. Bars in dark grey present the total for each category. ............................65
Figure II.11: Photograph of the GRAB interface as presented by (Iglesias et al.,
2004). Reprinted with permission. ...................................................................................66
Figure II.12: A blind person reading text on a dynamic Braille display. ................68
Figure II.13: Tactile actuator devices. (a) BrailleDis 9000 tablet as used by (Zeng &
Weber, 2010). (b) VTPlayer mouse with two 4x4 arrays of pins. Reprinted with
permission. .....................................................................................................................69
Figure II.14: Tactile actuator devices. (a) Schema of the Tactos device as proposed
by (Tixier et al., 2013). (b) The STReSS2 laterotactile device as used by (Petit et al., 2008).
Reprinted with permission. .............................................................................................70
Figure II.15: Access Lens application as proposed by (Kane, Frey, et al., 2013). The
map labels are located and recognized by image recognition and then translated into
speech output. The menu on the right displays an index of on-screen items. Reprinted
with permission. ..............................................................................................................71
Figure II.16: Tangible interaction as proposed by (Pielot et al., 2007). (a) User
interacting with the tangible object, a toy duck. (b) The setup: camera filming from
above. Reprinted with permission. ..................................................................................75
Figure II.17: Access Overlays as presented by (Kane, Morris, et al., 2011). The
image shows the edge projection technique. The user is exploring the menu along the x- and
y-axes. Reprinted with permission. ..................................................81
Figure II.18: Classification of audio output modalities as proposed by Truillet
(1999), translated and extended with spearcons, with permission. .....................................82
Figure II.19: Placing a raised-line map overlay on a touchscreen. ........................91
Figure II.20: Set of Tactons for indicating cardinal directions as proposed by
(Pietrzak et al., 2009). (a) Static Tactons. (b) Dynamic Tactons. Concatenating the patterns
in the columns leads to a waveform Tacton. ........................................................94
Figure II.21: Haptic Tabletop Puck. (a) User exploring a surface with the finger
placed on the rod. (b) View of the inside of the puck with different actuators, taken from
below. Reprinted with permission. ..................................................................................96
Figure III.1: Participatory design cycle as applied to our project. The cycle was
iterative and composed of four phases (analysis, design, prototyping and evaluation).
Translated and reprinted from (Brock, Molas, & Vinot, 2010). ....................................... 113
Figure III.2: Different types of maps for visually impaired people. (a) Colored map
for people with residual vision, produced by vacuum forming. (b) Braille map for blind
people, produced on swell paper. (c) Legend accompanying a braille map. ................ 120
Figure III.3: Geotact tablet by E.O. Guidage. This device offered a 16×16 matrix of
rectangular interactive zones, thus providing 256 touchable zones. Reprinted from
(Brock, Molas, et al., 2010). ........................................................................................... 123
Figure III.4: Photograph of the Apple iPad (first generation)............................... 125
Figure III.5: General setup of a FTIR system by Tim Roth, reprinted from (Schöning
et al., 2008) with permission. ......................................................................................... 127
Figure III.6: Inside of the ILight Table: infrared rays are projected onto the touch
surface by rows of diodes located below it. An infrared camera located below the
surface transmits the video stream to a computer, which determines the touch position with
image processing algorithms. ........................................................ 128
Figure III.7: Users working collaboratively on a large touch table. Reprinted with
permission from (Bortolaso, 2012). ................................................................................ 130
Figure III.8: Photograph of different multi-touch devices. (a) The Stantum SMK-15.4
Multi-Touch Display. (b) The 3M™ Multi-touch Display M2256PW. ............................... 132
Figure III.9: Brainstorming session with visually impaired users. A user is taking
notes with a BrailleNote device. Reprinted from (Brock, 2010a). ................................... 134
Figure III.10: Experimenter’s instructions for the simulated speech output (in
French). ........................................................................................................................ 136
Figure III.11: Wizard of Oz Session. A visually impaired user is exploring a tactile
map while the speech output is simulated. .................................................................... 137
Figure III.12: Map drawing used in the Wizard of Oz sessions. The different
"interactive" geographic elements are highlighted. Points of interest are represented by
circle symbols, bus stops by triangles, parks and rivers by different textures. Reprinted
from (Brock, 2010a). ...................................................................................................... 139
Figure III.13: Map drawing used for the second interactive map prototype.
Buildings were represented as black elements, whereas open spaces were represented
in white. ........................................................................................................................ 140
Figure III.14: Message-based software architecture. Four different modules
exchange messages using the Ivy software bus. ............................................ 142
Figure III.15: Double-tap state machine as implemented in the prototype. ......... 147
Figure III.16: Visually impaired user exploring the interactive map prototype ... 150
Figure III.17: Photograph of a user exploring the interactive map. The raised-line
map overlay is attached on top of the touch screen. The user is wearing mittens to prevent
unintended touch input from the palms. ........................................................................ 151
Figure III.18: Methods for evaluating route-based knowledge with visually
impaired people integrated into the classification by Kitchin and Jacobson (1997). Red
filling: methods included in our test battery. Blue outline: method that we added to the
classification.................................................................................................................. 153
Figure III.19: Methods for evaluating survey knowledge with visually impaired
people integrated into the classification by Kitchin and Jacobson (1997). Red filling:
methods included in our test battery. Blue outline: method that we added to the
classification.................................................................................................................. 155
Figure IV.1: Four different variants of the map existed in total. Two different map
contents are depicted in (a, b) and (c, d). They are based on the same geographic
elements, which were rotated and translated. Both map contents exist with braille (b, d)
and in interactive format (a, c). Circles are points of interest (either interactive or
accompanied by a braille abbreviation). The marks composed of three dots are
interactive elements for accessing street names. ................................................. 164
Figure IV.2: Description of the visually impaired participants in our study. Means
and SDs have been omitted from this table. *When blindness was progressive, two values
are reported (the second value indicates the age at which blindness actually impaired the
subjects' lives). ................................................................................ 166
Figure IV.3: Experimental design of the study. The experiment was composed of
a short-term and a long-term study. In the following, the color code orange will be used
for the short-term study and blue for the long-term study. ...................................... 170
Figure IV.4: Structure of the spatial questions in the study. Questions were
separated into three categories: landmark, route and survey. Within each category
questions were counterbalanced. L = landmark, L-POI = landmark-points of interest, L-S
= landmark - street names, R = route, R-DE = route distance estimation, R-R = route
recognition, R-W = route wayfinding, S = survey, S-Dir = survey direction estimation, S-
Loc = survey location estimation, S-Dist = survey distance estimation. .......................... 173
Figure IV.5: Learning Time (mean values measured in minutes) for the paper map
(left) as compared to the interactive map (right). The Learning Time for the interactive
map was significantly lower than for the paper map (lower is better). In other words,
efficiency of the interactive map was significantly higher. Error bars show 95%
confidence intervals in this and all following figures. * p < .05. In the remainder of this
chapter we will use the color code red for the paper map and green for the interactive
map. .............................................................................................................................. 176
Figure IV.6: Mean spatial scores for responses to landmark, route and survey
questions (paper and interactive map summed up). Mean scores for the landmark tasks
were significantly higher than those for the route and survey tasks. There was no
significant difference between R and S questions. *** p< .001 ...................................... 177
Figure IV.7: Effect of order of presentation on landmark scores. The mean scores
for L questions were significantly higher when the interactive map was presented before
the paper map. ** p < .01 .............................................................................................. 178
Figure IV.8: Mean value (violet bar) and chance level (yellow box) for each spatial
cognition test. If no yellow box is reported, the chance level is equivalent to 0. (a) L
Scores have maximum values of 6, (b) R and S scores have maximum values of 4. ........ 179
Figure IV.9: For each map type the number of participants who preferred this map
type over the other is reported. One user had no preference. ....................................... 180
Figure IV.10: Mean spatial scores (depicted as graphs, right axis) and mean
confidence (depicted as bar chart, left axis) for responses to landmark, route and survey
questions (paper and interactive map summed up). Mean scores for the landmark tasks
were significantly higher than those for the route and survey tasks. There was no
significant difference between R and S questions. Confidence was significantly higher for
L than for R or S questions. Besides, the figure reveals a similar distribution of mean
confidence and mean spatial scores. *** p< .001 .......................................................... 182
Figure IV.11: Significant correlations for dependent variables, age-related factors
and personal characteristics. The size of the lines between nodes increases with the
strength of the correlation (r value). Nodes have been manually positioned for better
readability. Abbreviations: exp = expertise, freq = frequency, IM = interactive map, PM
= paper map, SBSOD = Santa Barbara Sense of Direction Scale. * p < .05, ** p < .01, *** p
< .001. The diagram was created with the Gephi software (Bastian, Heymann, & Jacomy,
2009). ............................................................................................................................ 183
Figure IV.12: Mean scores for landmark, route and survey responses at short-term
(immediate) and long-term (delayed). A significant effect of time is observed: all scores
were lower two weeks after exposure. The difference was most pronounced for landmark
scores. The right axis depicts the percentage of the scores. LT: long-term, ST: short-term,
*** p < .001. .................................................................................................................. 186
Figure IV.13: Mean confidence (left y-axis) for landmark, route and survey
knowledge summed up for the paper and the interactive map both at short- and at long-
term. A significant effect of time is observed: confidence is lower two weeks after map
exploration. The difference is most pronounced for landmark questions. Besides, the figure
reveals a strong correlation between confidence and spatial scores (orange) at short-
term but not at long-term (blue). The right axis applies to spatial scores. Abbreviations:
ST = short-term, LT = long-term. ** p < .01, *** p < .001. .............................................. 187
Figure V.1: Kintouch hardware setup: multi-touch screen with raised-line overlay
and Kinect camera......................................................................................................... 204
Figure V.2: Limiting the working zone in which image recognition is done (French:
“zone de travail”) to the actual map. Reprinted from (Ménard, 2011). ........................... 206
Figure V.3: Tracking of the user’s fingers. (a) Result based on depth image of the
Kinect camera (above) and real hand position (below). (b) Result based on RGB image of
the Kinect camera (above) and real hand position with color markers attached to finger
tips (below). Reprinted from (Brock, Lebaz, et al., 2012). .............................................. 207
Figure V.4: Finger tracks before (a) and after (b) fusion. (a) Black: tracks from the
multi-touch table; colored tracks are from the Kinect (red thumb, green index, blue
middle finger, yellow ring finger, pink pinky). (b) Colored tracks after fusion (red thumb,
green index, blue middle finger, yellow ring finger, pink pinky); tracks above the surface
are marked by an “O”; tracks in contact with the surface by an “X”. Reprinted from
(Ménard, 2011). ............................................................................................................. 208
Figure V.5: Visualization of tracked fingers. Identified fingers are depicted in
violet and fingers touching the surface in pink. Reprinted from (Ménard, 2011). ........... 209
Figure V.6: Tracking of the fingers with the Kintouch prototype. (a) Detection was
working even when some fingers were occluded (names of fingers are indicated in
French). (b) When fingers were close to each other, the depth image algorithm
perceived the hand as one blob and separation of fingers was not possible. Reprinted
from (Ménard, 2011)...................................................................................................... 210
Figure V.7: A blindfolded participant testing the Kintouch prototype. Reprinted
from (Ménard, 2011)...................................................................................................... 211
Figure V.8: Visualization of the data in the log file before fusion. Traces marked in
red resulted from the palms of the hands that rested on the surface during map
exploration. Reprinted from (Ménard, 2011). ................................................................ 212
Figure V.9: Visualization of the data in the log file after fusion. Traces from the
palms of the hands have been removed. Reprinted from (Ménard, 2011). ...................... 213
Figure V.10: Spatial and temporal visualization (see colored traces) of movements
made by the right index finger during map exploration, matched to the map content. Colors
indicate the successive positions of the index finger (1 red, 2 green, and 3 blue).
Reprinted from (Brock, Lebaz, et al., 2012). ................................................................... 213
Figure V.11: Exploration of the gestural prototype. The map drawing contains four
buttons for accessing different information modes (right side of the drawing). The image
also shows the braille display placed behind the screen. Reprinted from (Paoleschi,
2012). ............................................................................................................................ 219
Figure V.12: Detailed view of the raised-line overlay with the four buttons on the
right side. Reprinted from (Paoleschi, 2012). ................................................................. 220
Figure V.13: Different gestures used in the system: (a) Tap gesture, (b) Lasso
gesture. Reprinted from (Laufs, Ruff, & Weisbecker, 2010) with permission. .................. 221
Figure V.14: View of the interactive map created as MT4J scene graph. Names of
streets are depicted in blue, names of points of interest in red. All names are in French.
Reprinted from (Paoleschi, 2012). ................................................................................. 222
Figure V.15: Map showing the road network of Lille as used for the Route Learning
Study. Four different itineraries (a, b, c, d) were prepared. .......................................... 228
Figure V.16: Photograph of the vibrating wristbands. (a) The vibration motor. (b)
The haptic bracelets with Arduino board. Reprinted with permission from (Kammoun,
2013). ............................................................................................................................ 229
Figure V.17: Perspectives of the user (red) and the traveler as represented by the
finger (blue). (a) Orientations are aligned. (b) Orientations are misaligned. ................. 230
Figure V.18: A participant following the route guidance (with the right index
finger). .......................................................................................................................... 232
Figure V.19: Boxplot (25–75% interval, minimum and maximum value) of the
second exploration time for both explorations taken together for the four different
interaction techniques. The blue dots represent the median. ........................................ 233
Figure V.20: Boxplot (25–75% interval and maximum value) of the errors for both
explorations taken together for the four different interaction techniques. The blue dots
represent the median. ................................................................................................... 234
Figure V.21: Boxplot (25–75% interval, minimum and maximum value) of the
results from the SUS questionnaire for the four different interaction techniques. The blue
dots represent the median. ........................................................................................... 235
Figure VI.1: Propositions of interaction techniques for spatial learning on a tablet.
(a) Edge Projection. (b) Single Side Menu. (c) Spatial Regions. Reprinted from (Simonnet
et al., 2013).................................................................................................................... 252
Figure VI.2: Photograph of the Stimtac prototype as presented in (Casiez et al.,
2011). Reprinted with permission. ................................................................................. 256
Figure VII.1: Cities in which interactive map projects for visually impaired
people have been developed. The figure has been produced with Gephi GeoLayout
(Bastian et al., 2009) and OpenStreetMap. ..................................................................... 281
Figure VII.2: Timeline of publications on interactive maps for visually impaired
people........................................................................................................................... 282
Figure VII.3: Number of publications on interactive maps for visually impaired
people classified by map content and scale. Categories are presented on the left
(Outdoor, Indoor and Others). The x-axis represents the number of publications. Bars in
dark grey present the total for each category................................................................ 283
Figure VII.4: The iDact prototype provides accessible information about metro
stations in Paris. Reprinted with permission. ................................................................. 286
Figure VII.5: Detailed characteristics of the visually impaired participants in our
study. ............................................................................................................................ 295
Figure VII.6: Preparation of the choreography ................................................... 320
Figure VII.7: The dance performance .................................................. 320
Tables
Table II.1: Classification of Visual Impairment based on visual acuity as defined by
(WHO, 2010). ..................................................................................................................22
Table III.1: Summary of the Design Choices ....................................................... 123
Table III.2: Evaluation of selected multi-touch devices according to different
criteria. ......................................................................................................................... 132
Table III.3: Comparison of different multi-touch APIs based on the criteria for our
map application. ........................................................................................................... 145
Table V.1: Participants in the Wizard of Oz study. Means and standard deviations
are reported in the two rightmost columns. SBSOD: Santa Barbara Sense of Direction
Scale. ............................................................................................................................ 231
Table VII.1: Wording of street names in map 1 (a) and map 2 (b). Each map
contains two terms for each of the categories flowers, precious stones, and birds. Each term is a
two-syllable word (= bi). All words are low-frequency terms (= LF). ............................. 289
Table VII.2: Description of the wording of POI in map 1 (a) and map 2 (b). Each
map contains three words with low frequency (= LF) and three words with high frequency
(= HF). For each of the two frequencies, one word has one syllable (= mono), one has two
syllables (= bi) and one has three syllables (= tri)......................................................... 290
Anke M. BROCK
Thesis Supervisors:
PhD Defense November 27th, 2013 at IRIT, University Toulouse 3 Paul Sabatier, France
Abstract:
Keywords:
Université Paul Sabatier, 118 route de Narbonne, 31 062 Toulouse Cedex 4, France
Anke M. BROCK
Thesis supervisors:
Abstract:
Keywords:
Discipline: Computer Science
Université Paul Sabatier, 118 route de Narbonne, 31 062 Toulouse Cedex 4, France