AR[t]
Magazine about Augmented Reality, art and technology
April 2012

Re-introducing Mosquitos
Maarten Lamers

How Did We Do It
Wim van Eck
Colophon

ISSN
2213-2481

Contact
The Augmented Reality Lab (AR Lab)
Royal Academy of Art, The Hague (Koninklijke Academie van Beeldende Kunsten)
Prinsessegracht 4
2514 AN The Hague
The Netherlands
+31 (0)70 3154795
www.arlab.nl
info@arlab.nl

Editorial team
Yolande Kolstee, Hanna Schraffenberger, Esmé Vahrmeijer (graphic design) and Jouke Verlinden.

Contributors
Wim van Eck, Jeroen van Erp, Pieter Jonker, Maarten Lamers, Stephan Lukosch, Ferenc Molnár (photography) and Robert Prevel.

Cover
George, an augmented reality headset designed by Niels Mulder during his Post Graduate Course Industrial Design (KABK), 2008.

Table of contents
Welcome to AR[t]
Artist in Residence Portrait: Marina de Haas (Hanna Schraffenberger)
Re-introducing Mosquitos (Maarten Lamers)
Die Walküre (Wim van Eck, AR Lab student project)
How Did We Do It (Wim van Eck)
Jouke Verlinden
Welcome...
to the first issue of AR[t], the magazine about Augmented Reality, art and technology!

Starting with this issue, AR[t] is an aspiring magazine series for the emerging AR community inside and outside the Netherlands. The magazine is run by a small and dedicated team of researchers, artists and lecturers of the AR Lab (based at the Royal Academy of Art, The Hague), Delft University of Technology (TU Delft), Leiden University and SMEs. In AR[t], we share our interest in Augmented Reality (AR), discuss its applications in the arts and provide insight into the underlying technology.

At the AR Lab, we aim to understand, develop, refine and improve the amalgamation of the physical world with the virtual. We do this through a project-based approach and with the help of research funding from RAAK-Pro. In the magazine series, we invite writers from the industry, interview artists working with Augmented Reality and discuss the latest technological developments.

It is our belief that AR and its associated technologies are important to the field of new media: media artists experiment with the intersection of the physical and the virtual and probe the limits of our sensory perception in order to create new experiences. Managers of cultural heritage are seeking new possibilities for worldwide access to their collections. Designers, developers, architects and urban planners are looking for new ways to better communicate their designs to clients. Designers of games and theme parks want to create immersive experiences that integrate both the physical and the virtual world. Marketing specialists are working with new interactive forms of communication. For all of them, AR can serve as a powerful tool to realize their visions.

Media artists and designers who want to acquire an interesting position within the domain of new media have to gain knowledge about and experience with AR. This magazine series is intended to provide both theoretical knowledge and a guide towards first practical experiences with AR. Our special focus lies on the diversity of contributions. Consequently, everybody who wants to know more about AR should be able to find something of interest in this magazine, be they art and design students, students from technical backgrounds, engineers, developers, inventors, philosophers or readers who just happened to hear about AR and got curious.

We hope you enjoy the first issue and invite you to check out the website www.arlab.nl to learn more about Augmented Reality in the arts and the work of the AR Lab.
A Short Overview of AR

We define Augmented Reality as integrating 3-D virtual objects or scenes into a 3-D environment in real time (cf. Azuma, 1997).

Augmented Reality is a relatively recent computer-based technology that differs from the earlier known concept of Virtual Reality. Virtual Reality is a computer-based reality in which the actual, outer world plays no direct part, whereas Augmented Reality is characterized by a combination of the real and the virtual. Augmented Reality is part of the broader concept of Mixed Reality: environments that consist of both the real and the virtual. To make these differences and relations clearer, industrial engineer Paul Milgram and Fumio Kishino introduced the Mixed Reality Continuum diagram in 1994, in which the real world is placed at one end and the virtual world at the other:

The Mixed Reality (MR) continuum: Real Environment | Augmented Reality (AR) | Augmented Virtuality (AV) | Virtual Environment

… pioneering work on X-ray computed tomography (CT). Another couple of Nobel Prize winners are Paul C. Lauterbur and Peter Mansfield, who won the prize in 2003 for their discoveries concerning magnetic resonance imaging (MRI). Although their original goals were different, in the field of Augmented Reality one might use the 3D virtual models that are produced by such systems. However, they have to be processed prior to use in AR, because they might be too heavy. A 3D laser scanner is a device that analyses a real-world object or environment to collect data on its shape and its appearance (i.e. colour). The collected data can then be used to construct digital, three-dimensional models. These scanners are sometimes called 3D digitizers. The difference is that the medical scanners above look inside the object to create a 3D model, while laser scanners create a virtual image from the reflection of the outside of an object.

… add information to a book, by looking at the book and the screen at the same time.

At the ISMAR symposium on Arts, Media and Humanities, 40 articles were offered discussing the connection of hard physics and soft art. There are several ways in which art and Augmented Reality technology can be connected: we can, for example, make art with Augmented Reality technology, create Augmented Reality artworks, or use Augmented Reality technology to show and explain existing art (such as a monument like the Greek Parthenon or paintings from the caves of Lascaux). Most of the contributions to the conference concerned Augmented Reality as a tool to present, explain or augment existing art. However, some visual artists use AR as a medium to create art. The role of the artist in working with the emerging technology of Augmented Reality has been discussed by Helen Papagiannis in her ISMAR paper "The Role of the Artist in Evolving AR as a New Medium" (2011). In her paper, Helen Papagiannis reviews how the use of technology as a creative medium has been discussed in recent years. She points out that in 1988 John Pearson wrote about how the computer offers artists new means for expressing their ideas (p. 73, cited in Papagiannis, 2011, p. 61). According to Pearson, technology has always been "the handmaiden of the visual arts": as is obvious, a technical means is always necessary for the visual communication of ideas, of expression, or the development of works of art; tools and materials are required (p. 73). However, he points out that new technologies were not developed by the artistic community for artistic purposes, but by science and industry to serve the pragmatic or utilitarian needs of society (p. 73, cited in Papagiannis, 2011, p. 61). As Helen Papagiannis concludes, it is then up to the artist to act as a pioneer, pushing forward a new aesthetic that exploits the unique materials of the novel technology (2011, p. 61). Like Helen, we believe this also holds for the emerging field of AR technologies, and we hope artists will set out to create exciting new Augmented Reality art and thereby contribute to the interplay between art and technology. An interview with Helen Papagiannis can be found on page 12 of this magazine. A portrait of the artist Marina de Haas, who did a residency at the AR Lab, can be found on page 60.

Artist: KAROLINA SOBECKA | http://www.gravitytrap.com

REFERENCES
Milgram, P. and Kishino, F., "A Taxonomy of Mixed Reality Visual Displays", IEICE Transactions on Information Systems, vol. E77-D, no. 12, 1994, pp. 1321-1329.
Azuma, R. T., "A Survey of Augmented Reality", Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), pp. 355-385.
Papagiannis, H., "The Role of the Artist in Evolving AR as a New Medium", 2011 IEEE International Symposium on Mixed and Augmented Reality - Arts, Media, and Humanities (ISMAR-AMH), Basel, Switzerland, pp. 61-65.
Pearson, J., "The Computer: Liberator or Jailer of the Creative Spirit", Leonardo, Supplemental Issue: Electronic Art, 1 (1988), pp. 73-80.
Helen Papagiannis is a designer, artist, and PhD researcher specializing in Augmented Reality (AR), based in Toronto, Canada. Helen has been working with AR since 2005, exploring the creative possibilities for AR with a focus on content development and storytelling. She is a Senior Research Associate at the Augmented Reality Lab at York University, in the Department of Film, Faculty of Fine Arts. Helen has presented her interactive artwork and research at global juried conferences and events including TEDx (Technology, Entertainment, Design), ISMAR (International Symposium on Mixed and Augmented Reality) and ISEA (International Symposium on Electronic Art). Prior to her Augmented life, Helen was a member of the internationally renowned Bruce Mau Design studio, where she was project lead on "Massive Change: The Future of Global Design." Read more about Helen's work on her blog, www.augmentedstories.com, and follow her on Twitter: @ARstories.

So far, AR technologies are still new to many people, and AR works often create a magical experience. Do you think AR will lose its magic once people get used to the technology and have developed an understanding of how AR works? How have you worked with this magical element in your work "The Amazing Cinemagician"?

I wholeheartedly agree that AR can create a magical experience. In my TEDx 2010 talk, "How Does Wonderment Guide the Creative Process" (http://youtu.be/ScLgtkVTHDc), I discuss how AR enables a sense of wonder, allowing us to see our environments anew. I often feel like a magician when presenting demos of my AR work live; astonishment fills the eyes of the beholder, questioning, "How did you do that?" So what happens when the magic trick is revealed, as you ask, when the illusion loses its novelty and becomes habitual? In Virtual Art: From Illusion to Immersion (2004), new media art historian Oliver Grau discusses how audiences are first overwhelmed by new and unaccustomed visual experiences, but later, once habituation chips away at the illusion, the new medium no longer possesses the power to captivate (p. 152). Grau writes that at this stage the medium becomes stale and the audience is hardened to its attempts at illusion; however, he notes that it is at this stage that the observers are receptive to content and media competence (p. 152). When the initial wonder and novelty of the technology wear off, will it be then that AR is explored as a possible media format for various content and receives a wider public reception as a mass medium? Or is there an element of wonder that needs to exist in the technology for it to be effective and flourish?

I believe AR is currently entering the stage of content development and storytelling. However, I don't feel AR has lost its power to captivate or become stale; as artists, designers, researchers and storytellers, we can continue to maintain wonderment in AR and allow it to guide and inspire story and content. Let's not forget the enchantment and magic of the medium. I often reference the work of French filmmaker and magician Georges Méliès (1861-1938) as a great inspiration, and recently named him the "Patron Saint of AR" in an article for The Creators Project (http://www.thecreatorsproject.com/blog/celebrating-georges-mlis-patron-saintof-augmented-reality) on what would have been Méliès' 150th birthday. Méliès was first a stage magician before being introduced to cinema at a preview of the Lumière brothers' invention, where he is said to have exclaimed, "That's for me, what a great trick." Méliès became famous for the trick film, which employed a stop-motion and substitution technique. Méliès applied the newfound medium of cinema to extend magic into novel, seemingly impossible visualities on the screen. I consider AR, too, to be very much about creating impossible visualities. We can think of AR as a real-time stop-substitution, which layers content dynamically atop the physical environment and creates virtual actualities with shape-shifting objects, magically appearing and disappearing as Méliès first did in cinema. In tribute to Méliès, my Mixed Reality exhibit The Amazing Cinemagician integrates Radio Frequency Identification (RFID) technology with the FogScreen, a translucent projection screen consisting of a thin curtain of dry fog. The Amazing Cinemagician speaks to technology as magic, linking the emerging technology of the FogScreen with the pre-cinematic magic lantern and phantasmagoria spectacles of the Victorian era. The installation is based on a card trick, using physical playing cards as an interface to interact with the FogScreen. RFID tags are hidden within each physical playing card. Part of the magic and illusion of this project was to disguise the RFID tag as a normal object, out of the viewer's sight. Each of these tags corresponds to a short film clip by Méliès, which is projected onto the FogScreen once a selected card is placed atop the RFID tag reader. The RFID card reader is hidden within an antique wooden podium (adding to the aura of the magic performance and historical time period). The following instructions were provided to the participant: "Pick a card. Place it here. Prepare to be amazed and entertained." Once the participant placed a selected card atop the designated area on the podium (atop the concealed RFID reader), an image of the corresponding card was revealed on the FogScreen, followed by one of Méliès' films. The decision was made to provide visual feedback of the participant's selected card to add to the magic of the experience and to generate a sense of wonder, similar to the witnessing and questioning of a magic trick, with participants asking, "How did you know that was my card? How did you do that?" This curiosity inspired further exploration of each of the cards (and, in turn, Méliès' films) to determine if each of the participants' cards could be properly identified.

You are an artist and researcher. Your scientific work as well as your artistic work explores how AR can be used as a creative medium. What's the difference between your work as an artist/designer and your work as a researcher?

Excellent question! I believe that artists and designers are researchers. They propose novel paths for innovation, introducing detours into the usual processes. In my most recent TEDx 2011 talk in Dubai, "Augmented Reality and the Power of Imagination" (http://youtu.be/7QrB4cYxjmk), I discuss how, as a designer/artist/PhD researcher, I am both a practitioner and a researcher, a maker and a believer. As a practitioner, I do, create, design; as a researcher I dream, aspire, hope. I am a make-believer working with a technology that is about make-believe, about imagining possibilities atop actualities. Now, more than ever, we need more creative adventurers and make-believers to help AR continue to evolve and become a wondrous new medium, unlike anything we've ever seen before! I spoke to the importance and power of imagination and make-believe, and how they pertain to AR at this critical juncture in the medium's evolution. When we make-believe and when we imagine, we are in two places simultaneously; make-believe is about projecting or layering our imagination on top of a current situation or circumstance. In many ways, this is what AR is too: layering imagined worlds on top of our existing reality.

You've had quite a success with your AR pop-up book "Who's Afraid of Bugs?" In your blog you talk about your inspiration for the story behind the book: it was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. Can you tell us more?

"Who's Afraid of Bugs?" was the world's first Augmented Reality (AR) pop-up book designed for the iPad 2 and iPhone 4. The book combines hand-crafted paper engineering and AR on mobile devices to create a tactile and hands-on storybook that explores the fear of bugs through narrative and play. Integrating image tracking in the design, as opposed to the black-and-white glyphs commonly seen in AR, the book can hence be enjoyed alone as a regular pop-up book, or supplemented with Augmented digital content when viewed through a mobile device equipped with a camera. The book is a playful exploration of fears, using AR in a meaningful and fun way. Rhyming text takes the reader through the storybook, where various creepy crawlies (spider, ant, and butterfly) are waiting to be discovered, appearing virtually as 3D models you can interact with. A tarantula attacks when you touch it, an ant hyperlinks to educational content with images and diagrams, and a butterfly appears flapping its wings atop a flower in a meadow. Hands are integrated throughout the book design, whether it's placing one's hand down to have the tarantula virtually crawl over you, the hand holding the magnifying lens that sees the ant, or the hands that pop up holding the flower upon which the butterfly appears. It's a method to involve the reader in the narrative, but it also comments on the unique tactility AR presents, bridging the digital with the physical. Further, the story for the AR pop-up book was inspired by AR psychotherapy studies for the treatment of phobias such as arachnophobia. AR provides a safe, controlled environment to conduct exposure therapy within a patient's physical surroundings, creating a more believable scenario with heightened presence (defined as the sense of really being in an imagined or perceived place or scenario), and provides greater immediacy than Virtual Reality (VR). A video of the book may be watched at http://vimeo.com/25608606.

Picture: Helen Papagiannis

In your work, technology serves as an inspiration. For example, rather than starting with a story which is then adapted to a certain technology, you start out with AR technology, investigate its strengths and weaknesses, and so the story evolves. However, this does not limit you to using only the strengths of a medium. On the contrary, weaknesses such as accidents and glitches have, for example, influenced your work Hallucinatory AR. Can you tell us a bit more about this work?

Hallucinatory Augmented Reality (AR), 2007, was an experiment which investigated the possibility of images which were not glyphs/AR trackables generating AR imagery. The project evolved out of accidents: incidents in earlier experiments in which the AR software mistook non-marker imagery for AR glyphs and attempted to generate AR imagery. This confusion by the software resulted in unexpected and random flickering AR imagery. I decided to explore the creative and artistic possibilities of this effect further and conduct experiments with non-traditional marker-based tracking. The process entailed a study of what types of non-marker images might generate such hallucinations, and a search for imagery that would evoke or call upon multiple AR images/videos from a single non-marker image. Upon multiple image searches, one image emerged which proved to be quite extraordinary. A cathedral stained-glass window was able to evoke four different AR videos: the only instance, from among many other images, in which multiple AR imagery appeared. Upon close examination of the image, focusing in and out with a web camera, a face began to emerge in the black-and-white pattern. A fantastical image of a man was encountered. Interestingly, it was when the image was blurred into this face using the web camera that the AR hallucinatory imagery worked best, rapidly multiplying and appearing more prominently. Although numerous attempts were made with similar images, no other such instances occurred; this image appeared to be unique. The challenge now rested in the choice of what types of imagery to curate into this hallucinatory viewing: what imagery would be best suited to this phantasmagoric and dream-like form? My criteria for the images/videos were like form and shape, in an attempt to create a collage-like set of visuals. As the sequence or duration of the imagery in Hallucinatory AR could not be predetermined, the goal was to identify imagery that possessed similarities, through which the possibility for visual synchronicities existed. Themes of intrusions and chance encounters are at play in Hallucinatory AR, inspired in part by the Surrealist artist Max Ernst. In "What is the Mechanism of Collage?" (1936), Ernst writes: "One rainy day in 1919, finding myself in a village on the Rhine, I was struck by the obsession which held under my gaze the pages of an illustrated catalogue showing objects designed for anthropologic, microscopic, psychologic, mineralogic, and paleontologic demonstration. There I found brought together elements of figuration so remote that the sheer absurdity of that collection provoked a sudden intensification of the visionary faculties in me and brought forth an illusive succession of contradictory images, double, triple, and multiple images, piling up on each other with the persistence and rapidity which are particular to love memories and visions of half-sleep" (p. 427). Of particular interest to my work in exploring and experimenting with Hallucinatory AR was Ernst's description of an illusive succession of contradictory images that were brought forth (as though independent of the artist), rapidly multiplying and piling up in a state of half-sleep. Similarities can be drawn to the process of the seemingly disparate AR images jarringly coming in and out of view, layered atop one another. One wonders if these visual accidents are what the future of AR might hold: unwelcome glitches in software systems, as Bruce Sterling described on Beyond the Beyond in 2009; or perhaps we might come to delight in the visual poetry of these Augmented hallucinations that are "As beautiful as the chance encounter of a sewing machine and an umbrella on an operating table." [1]

To a computer scientist, these glitches, as applied in Hallucinatory AR, could potentially be viewed or interpreted as a disaster: an example of the technology failing. To the artist, however, there is poetry in these glitches, with new possibilities of expression and new visual forms emerging. On the topic of glitches and accidents, I'd like to return to Méliès. Méliès became famous for the stop trick, or double-exposure special effect, a technique which evolved from an accident: Méliès' camera jammed while filming the streets of Paris; upon playing back the film, he observed an omnibus transforming into a hearse. Rather than discounting this as a technical failure, or glitch, he utilized it as a technique in his films. Hallucinatory AR also evolved from an accident, which was embraced and applied in an attempt to evolve a potentially new visual mode in the medium of AR. Méliès introduced new formal styles, conventions and techniques that were specific to the medium of film; novel styles and new conventions will also emerge from AR artists and creative adventurers who fully embrace the medium.

"As beautiful as the chance encounter of a sewing machine and an umbrella on an operating table."
Comte de Lautréamont

[1] Comte de Lautréamont's often-quoted allegory, famous for inspiring both Max Ernst and André Breton; qtd. in Williams, Robert, Art Theory: An Historical Introduction, Malden, MA: Blackwell Publishing, 2004, p. 197.

Picture: Pippin Lee
Tracking Technology

A current obstacle for major applications, which will soon be resolved, is the tracking technology. The problem with AR is embedding the virtual objects in the real world. You can compare this with colour printing: the colours, e.g. cyan, magenta, yellow and black, have to be printed properly aligned to each other. What you often see on prints that have not yet been cut are so-called fiducial markers on the edge of the printing plates, which serve as a reference for the alignment of the colours. These are also necessary in AR. Often you see that markers are used onto which a 3D virtual object is projected. Moving and rotating the marker lets you move and rotate the virtual object. Such a marker is comparable to the fiducial marker in colour printing. With the help of computer vision technology, the camera of the headset can identify the marker and, based on its size, shape and position, conclude the relative position of the camera. If you move your head relative to the marker (with the virtual object), the computer knows how the image on the display must be transformed so that the virtual object remains stationary. And conversely, if your head is stationary and you rotate the marker, it knows how the virtual object should rotate so that it remains on top of the marker. AR smartphone applications such as Layar use the built-in GPS and compass for the tracking. This has an accuracy of meters and measures angles with an error of 5-10 degrees. Camera-based tracking, however, is accurate to the centimetre and to a few degrees of angle.

Nowadays, using markers for the tracking is already out of date, and we use so-called natural feature tracking, also called keypoint tracking. Here, the computer searches for conspicuous (salient) keypoints in the left and right camera images. If, for example, you twist your head, this shift is determined on the basis of those keypoints at more than 30 frames per second. This way, a 3D map of these keypoints can be built, and the computer knows the relationship (distance and angle) between the keypoints and the stereo camera. This method is more robust than marker-based tracking because you have many keypoints spread widely through the scene, not just the four corners of a marker close together in the scene. If someone walks in front of the camera and blocks some of the keypoints, there will still be enough keypoints left and the tracking is not lost. Moreover, you do not have to stick markers all over the world.

A remaining obstacle is computing power and energy consumption. Companies such as Microsoft, Google, Sony and Zeiss will soon enter the consumer market with AR technology.
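For readers who want to experiment, the keypoint idea can be tried in a few lines with an open-source library such as OpenCV. The sketch below is not the lab's tracker: the frame file names, the intrinsics matrix and the choice of the ORB detector are all assumptions. The pipeline, however, is the one described above: detect salient points, match them between frames, and estimate the camera motion while discarding mismatches.

import cv2
import numpy as np

# Two consecutive grayscale camera frames (hypothetical file names).
frame1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
frame2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

# Placeholder camera intrinsics (focal length and principal point, in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

orb = cv2.ORB_create(nfeatures=1000)            # detector of salient keypoints
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Camera motion between the frames; RANSAC discards outlier matches, which
# is why a person blocking part of the view does not break the tracking.
E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("rotation:\n", R, "\ntranslation direction:\n", t)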
The TU Delft collaborates with the Royal Academy of Art (KABK) in The Hague within the AR Lab (Royal Academy, TU Delft, Leiden University, various SMEs) on the realization of applications.

The TU Delft has done research on AR since 1999. Since 2006, the university works with the art academy in The Hague. The idea is that AR is a new technology with its own merits, and artists are very good at finding out what is possible with a new technology. Here are some pictures of realized projects.
Fig 1. Current technology replaces markers with natural feature tracking, also called keypoint tracking: instead of the four corners of a marker, the computer itself determines which points in the left and right images can be used as anchor points for calculating the 3D pose of the camera in 3D space. From top: 1) all points in the left and right images can be used to slowly build a complete 3D map; such a map can, for example, be used to relive a past experience, as you can walk through the now-virtual space again; 2) the 3D keypoint space and the trace of the camera position within it; 3) keypoints (the colour indicates their suitability); 4) virtual objects (eyes) placed on an existing surface.
Fig 2. Virtual furniture exhibition at the Salone del Mobile in Milan (2008); students of the Royal Academy of Art, The Hague show their furniture by means of AR headsets. This saves transportation costs.

Fig 4. Exhibition in Museum Boijmans Van Beuningen (2008-2009). From left: 1) Sgraffito in 3D; 2) the 3D-printed version may be picked up by the spectator; 3) animated shards: the table covered in ancient pottery can be seen via the headset; 4) scanning antique pottery with the CT scanner delivers a 3D digital image.

Fig 3. Virtual sculpture exhibition at the Kröller-Müller Museum (2009). From left: 1) visitors on an adventure with laptops on walkers; 2) inside with an optical see-through headset; 3) large pivotable screen on a field of grass; 4) virtual image.

Fig 5. The TUD, partially in collaboration with the Royal Academy (home of the oldest industrial design course in the Netherlands), has designed a number of headsets; this design of headsets is an ongoing activity. From left: 1) first optical see-through headset with Sony headset and self-made inertia tracker (2000); 2) on a construction helmet (2006); 3) SmartCam and tracker taped onto a Cybermind Visette headset (2007); 4) headset design with engines by Niels Mulder, a student at the Royal Academy of Art, The Hague (2007), based on Cybermind technology; 5) low-cost prototype based on the Carl Zeiss Cinemizer headset; 6) future AR visor?; 7) future AR lens?
There are many applications that can be realized using AR; they will find their way in the coming decades:

1. Head-up displays have already been used for many years by the air force for fighter pilots; this can be extended to other vehicles and civil applications.
2. The billboards during the broadcast of a football game are essentially also AR; more can be done by also involving the game itself and allowing interaction of the user, such as off-side line projection.
3. In the professional sphere, you can, for example, visualize where pipes under the street lie or should lie. Ditto for designing ships, houses, planes, trucks and cars. What's outlined in a CAD drawing could be drawn in the real world, allowing you to see in 3D if and where there is a mismatch.
4. You can easily find books you are looking for in the library.
5. You can find out where restaurants are in a city...
6. You can pimp theatre / musical / opera / pop concerts with (immersive) AR decor.
7. You can arrange virtual furniture or curtains from the IKEA catalog and see how they look in your home.
8. Maintenance of complex devices will become easier; e.g., you can virtually see where the paper in the copier is jammed.
9. If you enter a restaurant or the hardware store, a virtual avatar can show you the place to find that special bolt or table.
Picture showing the Serra room in Museum Boijmans Van Beuningen during the exhibition Sgraffito in 3D.
Re-introducing Mosquitos
Maarten Lamers
Around 2004, my younger brother Valentijn introduced me to the fascinating world of augmented reality. He was a mobile phone salesman at the time, and Siemens had just launched their first smartphone, the bulky Siemens SX1. This phone was quite marvelous, we thought: it ran the Symbian operating system, had a built-in camera, and came with three games.
One of these games was Mozzies, a.k.a. Virtual Mosquito Hunt, which apparently won some 2003 Best Mobile Game Award, and my brother was eager to show it to me in the store where he worked at that time. I was immediately hooked. Mozzies lets you kill virtual mosquitos that fly around superimposed over the live camera feed. By physically moving the phone, you could chase after the mosquitos when they attempted to fly off the phone's display. Those are all the ingredients for Augmented Reality, in my personal opinion: something that interacts with my perception and manipulation of the world around me, at that location, at that time. And Mozzies did exactly that.

Now, almost eight years later, not much has changed. Whenever people around me speak of AR (because they got tired of saying Augmented Reality), they still refer to bulky equipment (even bulkier than the Siemens SX1!) that projects stuff over a live camera feed and lets you interact with whatever that stuff is. In Mozzies it was pesky little mosquitos; nowadays it is anything from restaurant information to crime scene data. But nothing really changed, right? Right! Technology became more advanced, so we no longer need to hold the phone in our hand, but get to wear it strapped to our skull in the form of goggles. But the idea is unchanged: you look at fake stuff in the real world and physically move around to deal with it. You still don't get the tactile sensation of swatting a mosquito or collecting virtually heavy information. You still don't even hear the mosquito flying around you... It's time to focus on those matters also, in my opinion. Let's take up the challenge and make AR more than visual, exploring interaction models for the other senses. Let's enjoy the full experience of seeing, hearing, and particularly swatting mosquitos, but without the itchy bites.
When I enter Lieven van Velthoven's room, the people from the Efteling have just left. They are interested in his Virtual Growth installation. And they are not the only ones interested in Lieven's work. In the last year, he has won the Jury Award for Best New Media Production 2011 of the international Cinekid Youth Media Festival as well as the Dutch Game Award 2011 for Best Student Game. The winning mixed reality game Room Racers has been shown at the Discovery Festival, Mediamatic, the STRP festival and the ZKM in Karlsruhe. His Virtual Growth installation has embellished the streets of Amsterdam at night. Now, he is going to show Room Racers to me, in his living room, where it all started.

The room is packed with stuff, and on first sight it seems rather chaotic, with a lot of random things lying on the floor. There are a few plants, which probably don't get enough light, because Lieven likes the dark (that's when his projections look best). It is only when he turns on the beamer that I realize that his room is actually not chaotic at all. The shoe, magnifying glass, video games, tape and stapler which cover the floor are all part of the game.
"You create your own race game tracks by placing real stuff on the floor,"

Lieven tells me. He hands me a controller, and soon we are racing the little projected cars around the chocolate spread, marbles, a remote control and a flashlight. Trying not to crash the car into a belt, I tell him what I remember about when I first met him a few years ago at a Media Technology course at Leiden University. Back then, he was programming a virtual bird which would fly from one room to another, preferring the room in which it was quiet. Loud and sudden sounds would scare the bird away into another room. The course for which he developed it was called "sound space interaction", and his installation was solely based on sound. I ask him whether the virtual bird was his first contact with Augmented Reality. Lieven laughs.

"My first encounter with AR was during our first Media Technology course: a visit to the Ars Electronica festival in 2007, where I saw Pablo Valbuena's Augmented Sculpture. It was amazing. I was asking myself, can I do something like this, but interactive instead?"

Armed with a bachelor's in technical computer science from TU Delft and the newfound possibility to bring in his own curiosity and ideas at the Media Technology Master programme at Leiden University, he set out to build his own interactive projection-based works.
Room Racers
Up to four players race their virtual cars around real objects which are lying on the floor. Players can drop in or out of the game at any time. Anything you can find can be placed on the floor to change the route. Room Racers makes use of projection-based mixed reality: the structure of the floor is analysed in real time using a modified camera and self-written software. Virtual cars are projected onto the real environment and interact with the detected objects that are lying on the floor. The game has won the Jury Award for Best New Media Production 2011 of the international Cinekid Youth Media Festival, and the Dutch Game Award 2011 for Best Student Game. Room Racers has been shown at several international media festivals. You can play Room Racers at the 'Car Culture' exposition at the Lentos Kunstmuseum in Linz, Austria, until 4 July 2012.
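Lieven's detection and physics code is his own and has not been published. As a rough illustration of the general approach the description suggests (camera image, then obstacle mask, then collision test), here is a minimal Python/OpenCV sketch; the file name, the threshold value and the assumption that objects are darker than the floor are all mine.

import cv2
import numpy as np

# One frame of the floor as seen by the (modified) camera; hypothetical file.
floor = cv2.imread("floor_frame.png", cv2.IMREAD_GRAYSCALE)

# Assumption: objects are darker than the floor, so an inverted threshold
# yields an obstacle mask. Dilating it adds a small collision margin.
_, obstacles = cv2.threshold(floor, 80, 255, cv2.THRESH_BINARY_INV)
obstacles = cv2.dilate(obstacles, np.ones((5, 5), np.uint8))

def blocked(x, y):
    # True if a car pixel at (x, y) would overlap a detected object.
    return obstacles[int(y), int(x)] > 0

# Toy update step for one projected car.
x, y, vx, vy = 100.0, 100.0, 2.0, 0.5
nx, ny = x + vx, y + vy
if blocked(nx, ny):
    vx, vy = -0.5 * vx, -0.5 * vy   # crude bounce with energy loss
else:
    x, y = nx, ny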
Picture: Lieven van Velthoven, Room Racers at ZKM | Center for Arts and Media in Karlsruhe, Germany on June 19th, 2011
"The first time I experimented with the combination of the real and the virtual myself was in a piece called Shadow Creatures, which I made with Lisa Dalhuijsen during our first semester in 2007."

More interactive projections followed in the next semester, and in 2008 the idea for Room Racers was born. A first prototype was built in a week: a projected car bumping into real-world things. After that followed months and months of optimizations. Everything is done by Lieven himself, mostly at night in front of the computer.

"My projects are never really finished, they are always work in progress. But if something works fine in my room, it's time to take it out into the world."

After having friends over and playing with the cars until six o'clock in the morning, Lieven knew it was time to steer the cars out of his room and show them to the outside world.

"I wanted to present Room Racers, but I didn't know anyone, and no one knew me. There was no network I was part of."

Uninhibited by this, Lieven took the initiative and asked the Discovery Festival if they were interested in his work. Luckily, they were, and they showed two of his interactive games at the Discovery Festival 2010. After the festival, requests started coming in and the cars kept rolling. When I ask him about this continuing success, he is divided:

"It's fun, but it takes a lot of time. I have not been able to program as much as I used to."

His success does surprise him, and he especially did not expect the attention it gets in an art context.

"I knew it was fun. That became clear when I had friends over and we played with it all night. But I did not expect the awards. And I did not expect it to be relevant in the art scene. I do not think it's art, it's just a game. I don't consider myself an artist. I am a developer and I like to do interactive projections. Room Racers is my least arty project; nevertheless, it got a lot of response in the art context."

A piece which he actually considers more of an artwork is Virtual Growth: a mobile installation which projects autonomously growing structures onto any environment you place it in, be it buildings, people or nature.

"For me, AR has to take place in the real world. I don't like screens. I want to get away from them. I have always been interested in other ways of interacting with computers, without mice, without screens. There is a lot of screen-based AR, but for me AR is really about projecting into the real world. Put it in the real world, identify real-world objects, do it in real time, that's my philosophy. It ain't fun if it ain't real-time. One day, I want to go through a city with a van and do projections on buildings, trees, people and whatever else I pass."

For now, he is bound to a bike, but that does not stop him. Virtual Growth works fast and stable, even on a bike. That has been witnessed in Amsterdam, where the audiovisual bicycle project Volle Band put beamers on bikes and invited Lieven to augment the city with his mobile installation. People who experienced Virtual Growth on its journeys around Amsterdam, at festivals and parties, are enthusiastic about his (smashing!) entertainment-art. As the virtual structure grows, the audience members not only start to interact with the piece but also with each other.

"They put themselves in front of the projector, have it project onto themselves and pass the projection on to other people by touching them. I don't explain anything. I believe in simple ideas, not complicated concepts. The piece has to speak for itself. If people try it, immediately get it, enjoy it and tell other people about it, it works!"

Virtual Growth works; that becomes clear from the many happy smiling faces the projection grows upon. And that is also what counts for Lieven.

"At first it was hard; I didn't get paid for doing these projects. But when people see them and are enthusiastic, that makes me happy. If I see people enjoying my work and playing with it, that's what really counts."

I wonder where he gets the energy to work that much alongside being a student. He tells me that what drives him is that he enjoys it. He likes to spend his evenings with the programming language C#. But the fact that he enjoys working on his ideas does not only keep him motivated; it has also caused him to postpone a few courses at university. While talking, he smokes his cigarette and takes the ashtray from the floor. With the road no longer blocked by it, the cars take a different route now. Lieven might take a different route soon as well. I ask him if he will still be working from his living room, realizing his own ideas, once he has graduated.

"It's actually funny. It all started to fill my portfolio in order to get a cool job. I wanted to have some things to show besides a diploma. That's why I started realizing my ideas. It got out of control and soon I was realizing one idea after the other. And maybe I'll just continue doing it. But also, there are quite some companies and jobs I'd enjoy working for. First I have to graduate anyway."

If I have learned anything about Lieven and his work, it is that his graduation project will be placed in the real world and work in real time. More than that, it will be fun. It ain't Lieven, if it ain't fun.

Name: Lieven van Velthoven
Born: 1984
Background: Computer Science, TU Delft
Study: Media Technology MSc, Leiden University
Selected AR works: Room Racers, Virtual Growth
Watch: http://www.youtube.com/user/lievenvv
Always wanted to create your own augmented reality projects but never knew how? Don't worry, AR[t] is going to help you! There are, however, many hurdles to take when realizing an augmented reality project. Ideally, you should be a skillful 3d animator to create your own virtual objects, and a great programmer to make the project technically work. Provided you don't just want to make a fancy tech demo, you also need to come up with a great concept!
My name is Wim van Eck and I work at the AR Lab, based at the Royal Academy of Art. One of my tasks is to help art students realize their Augmented Reality projects. These students have great concepts, but often lack experience in 3d animation and programming. Logically, I should tell them to follow animation and programming courses, but since the average deadline for their projects is counted in weeks instead of months or years, there is seldom time for that... In the coming issues of AR[t] I will explain how the AR Lab helps students realize their projects and how we try to overcome technical boundaries, showing actual projects we worked on as examples. Since this is the first issue of our magazine, I will give a short overview of recommendable programs for Augmented Reality development.

We will start with 3d animation programs, which we need to create our 3d models. There are many 3d animation packages; the more well-known ones include 3ds Max, Maya, Cinema 4d, Softimage, Lightwave, Modo and the open-source Blender (www.blender.org). These are all great programs; however, at the AR Lab we mostly use Cinema 4d (image 1), since it is very user-friendly and because of that easier to learn. It is a shame that the free Blender still has a steep learning curve, since it is otherwise an excellent program. You can download a demo of Cinema 4d at http://www.maxon.net/downloads/demo-version.html, and these are some good tutorial sites to get you started:
http://www.cineversity.com
http://www.c4dcafe.com
http://greyscalegorilla.com

In case you don't want to create your own 3d models, you can also download them from various websites. Turbosquid (http://www.turbosquid.com), for example, offers good quality but often at a high price, while free sites such as Artist-3d (http://artist-3d.com) have a more varied quality. When a 3d model is not constructed properly, it might give problems when you import or visualize it. In coming issues of AR[t] we will talk more about optimizing 3d models for Augmented Reality usage.

To actually add these 3d models to the real world, you need Augmented Reality software. Again there are many options, with new software being added continuously. Probably the easiest software to use is BuildAR (http://www.buildar.co.nz), which is available for Windows and OSX. It is easy to import 3d models, video and sound, and there is a demo available. There are excellent tutorials on their site to get you started. In case you want to develop for iOS or Android, the free Junaio (http://www.junaio.com) is a good option. Their online GLUE application is easy to use, though their preferred .m2d format for 3d models is not the most common. In my opinion, the most powerful Augmented Reality software right now is Vuforia (https://developer.qualcomm.com/develop/mobile-technologies/Augmented-reality) in combination with the excellent game engine Unity (www.unity3d.com). This combination offers high-quality visuals with easy-to-script …
Image 1
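A recurring problem mentioned above is downloaded models that are not constructed properly. Before dragging such a model into an AR tool, it can help to inspect it programmatically. The sketch below uses the open-source Python library trimesh; the file name and the polygon budget are arbitrary assumptions, not AR Lab guidelines.

import trimesh

# Hypothetical downloaded model; trimesh.load can return a Scene for
# multi-part files, so force a single mesh for this quick check.
mesh = trimesh.load("downloaded_model.obj", force="mesh")

print("vertices:  ", len(mesh.vertices))
print("faces:     ", len(mesh.faces))
print("watertight:", mesh.is_watertight)   # holes often cause import/render glitches

# Rough sanity threshold for real-time (mobile) AR use.
if len(mesh.faces) > 50_000:
    print("warning: this model is probably too heavy for real-time AR")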
A webcam was placed on top of the screen, and a laptop running ARToolkit (http://www.hitl.washington.edu/artoolkit) was mounted on the back of the screen. A large marker was placed near the sculpture as a reference point for ARToolkit. Now it was time to create the 3d models of the extra couple and the environment. The students working on this part of the project didn't have much experience with 3d animation, and there wasn't much time to teach them, so manually modeling the sculptures would be a difficult task. Options such as 3d scanning the sculpture were soon suggested, but it still takes quite some skill to actually prepare a 3d scan for Augmented Reality usage. We will talk more about that in a coming issue of this magazine.
But when we look carefully at our setup (image 5), we can draw some interesting conclusions. Our screen is immobile, so we will always see our added 3d model from the same angle. Since we will never be able to see the back of the 3d model, there is no need to actually model this part. This is common practice when making 3d models; you can compare it with set construction for Hollywood movies, where they also only build what the camera will see. This already saves us quite some work. We can also see that the screen is positioned quite far away from the sculpture, and when an object is viewed from a distance it optically loses its depth. When you are one meter away from an object and take one step aside, you will see the side of the object; but if the same object is a hundred meters away, you will hardly see a change in perspective when changing your position (see image 6). From that distance, people will hardly see the difference between an actual 3d model and a plain 2d image. This means we could actually use photographs or drawings instead of a complex 3d model, making the whole process easier again. We decided to follow this route.

Image 5
Image 6
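The distance argument is easy to verify with a back-of-the-envelope calculation: the change in viewing angle caused by a one-metre sidestep shrinks roughly with 1/distance. A tiny Python illustration (the distances are the ones from the text, nothing more):

import math

def parallax_deg(distance_m, step_m=1.0):
    # Change in viewing angle when stepping step_m sideways at a distance.
    return math.degrees(math.atan2(step_m, distance_m))

print(parallax_deg(1.0))    # ~45 degrees: you clearly see the object's side
print(parallax_deg(100.0))  # ~0.57 degrees: perspective barely changes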
The AR Lab collaborated on this project with students from different departments of the KABK: Ferenc Molnár, Mit Koevoets, Jing Foon Yu, Marcel Kerkmans and Alrik Stelling. The AR Lab team consisted of Yolande Kolstee, Wim van Eck, Melissa Coleman and Pawel Pokutycki, supported by Martin Sjardijn and Joachim Rotteveel.
Image 12
To be able to place the photograph of the sculpture in our 3d scene, we have to assign it to a placeholder: a single polygon. Image 7 shows how this could look. This actually looks quite awful: we see the statue, but also all the white around it from the photograph. To solve this, we need to make use of something called an alpha channel, an option you can find in every 3d animation package (image 8 shows where it is located in the material editor of Cinema 4d). An alpha channel is a grayscale image which declares which parts of an image are visible: white is opaque, black is transparent. Detailed tutorials about alpha channels are easily found on the internet. As you can see, this looks much better (image 9). We followed the same procedure for the second statue and for the grass (image 10), using many separate polygons to create enough randomness in the grass. As long as you see these models from the right angle, they look quite realistic (image 11). In this case, the 2.5d approach probably gives even better results than a normal 3d model, and it is much easier to create. Another advantage is that the 2.5d approach is very easy to compute, since it uses few polygons, so you don't need a very powerful computer to run it, or you can have many models on screen at the same time. Image 12 shows the final setup.

For the igloo sculpture by Mario Merz we used a similar approach. A graphic design student imagined what could be living inside the igloo and started drawing a variety of plants and creatures. Using the same 2.5d approach as described before, we placed these drawings around the igloo, and an animation was shown of a plant growing out of the igloo (image 12).

Image 7
Image 8
Image 9
Image 10
Image 11
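For readers who want to prepare such a billboard texture themselves: instead of painting a separate grayscale alpha image, the transparency can also be baked straight into the photograph's own alpha channel. A minimal Python/PIL sketch, assuming a statue photographed against a near-white background (the file names and the threshold are hypothetical):

from PIL import Image

img = Image.open("statue_photo.png").convert("RGBA")   # hypothetical photo

# Make near-white background pixels fully transparent; the 240 threshold
# is an assumption and depends on how evenly the photo was lit.
pixels = [(r, g, b, 0) if min(r, g, b) > 240 else (r, g, b, a)
          for (r, g, b, a) in img.getdata()]
img.putdata(pixels)
img.save("statue_billboard.png")   # ready to map onto a single polygon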
We can conclude that it is good practice to analyze your scene before you start making your 3d models. You don't always need to model all the detail, and using photographs or drawings can be a very good alternative. The next issue of AR[t] will feature a new "How did we do it"; in case you have any questions, you can contact me at w.vaneck@kabk.nl.
1. Introduction

By Jouke Verlinden

From the early head-up display in the movie Robocop to the present, Augmented Reality (AR) has evolved into a manageable ICT environment that must be considered by product designers of the 21st century. Instead of focusing on a variety of applications and software solutions, this article discusses the essential hardware of Augmented Reality (AR): display techniques and tracking techniques. We argue that these two fields differentiate AR from regular human-user interfaces, and that tuning them is essential in realizing an AR experience. As often, there is a vast body of knowledge behind each of the principles discussed below; hence, a large variety of literature references is given. Furthermore, the first author of this article found it important to include his own preferences and experiences throughout this discussion. We hope that this material strikes a chord and makes you consider employing AR in your designs. After all, why should digital information always be confined to a dull, rectangular screen?
2. Display Technologies
To categorise AR display technologies, two important characteristics should be identified: the image generation principle and the physical layout. Generic AR technology surveys (Azuma, 1997; Azuma et al., 2001) describe a large variety of display technologies; the underlying image generation principles can be categorised as follows:

1. Video-mixing: a camera is mounted somewhere on the product; computer graphics are combined with the captured video frames in real time. The result is displayed on an opaque surface, for example an immersive Head-Mounted Display (HMD).
2. See-through: augmentation by this principle typically employs half-silvered mirrors to superimpose computer graphics onto the user's view, as found in the head-up displays of modern fighter jets.
3. Projector-based: one or more projectors cast digital imagery directly onto the physical environment.

As Raskar and Bimber (2004, p. 72) argued, an important consideration in deploying an Augmented system is the physical layout of the image generation. For each image generation principle mentioned above, the imaging display can be arranged between user and physical object in three distinct ways:

a) head-attached, which presents digital images directly in front of the viewer's eyes, establishing a personal information display;
b) hand-held, carried by the user and not covering the whole field of view;
c) spatial, which is fixed to the environment.

The resulting imaging and arrangement combinations are summarised in Table 1. When the image generation and layout principles are combined, the following collection of display technologies is identified: HMDs, hand-held devices, embedded screens, see-through boards and spatial projection-based AR. These are briefly discussed in the following sections.

… subjective assessment of HMDs was lower than that of hand-held or spatial imaging devices. However, new developments (specifically high-resolution OLED displays) show promising new devices, specifically for the professional market (Carl Zeiss) and entertainment (Sony); see Figure 1.

Figure 1. Recent head-mounted displays (above: KABK, The Hague; below: Carl Zeiss).
… through wireless networks (Schmalstieg and Wagner, 2008). The resulting device acts as a hand-held window onto a mixed reality. An example of such a solution is shown in Figure 2: a combination of an Ultra Mobile Personal Computer (UMPC), a Global Positioning System (GPS) antenna, a camera with an Inertial Measurement Unit (IMU) and joystick handles.

Figure 2. The VespR device for underground infrastructure visualisation (Schall et al., 2008). Labelled components: GPS antenna, camera + IMU, joystick handles, UMPC.

Such systems are found in every modern smartphone, and apps such as Layar (www.layar.com) and Junaio (www.junaio.com) offer such functions to the user for free, allowing different (often social-media-based) layers of content. The advantage of using a video-mixing approach is that lag times in processing are less influential than with see-through or projector-based systems: the live video feed is delayed as well and thus establishes a consistent combined image. This hand-held solution works well for occasional, mobile use; long-term use can cause strain in the arms. The challenges in employing this principle are the limited screen coverage and resolution (typically a 4-inch diagonal at 320 × 240 pixels). Furthermore, memory, processing power and graphics processing are limited to rendering relatively simple 3D scenes, although these capabilities are rapidly improving with the upcoming dual-core and quad-core mobile CPUs.

… the same resolution as that of PDAs and mobile phones, which is QVGA: 320 × 240 pixels. Such devices are connected to a workstation by a specialised cable, which can be omitted if autonomous components are used, such as a smartphone. Regular embedded screens can only be used on planar surfaces; their size is limited, and their weight impedes larger use. With the advent of novel, flexible e-Paper and Organic Light-Emitting Diode (OLED) technologies, it might become possible to cover a part of a physical model …

Figure 3. Impression of the Luminex material.
… a compelling display system for exhibits and trade fairs. However, see-through boards obstruct user interaction with the physical object, and multiple viewers cannot share the same device, although a limited solution is offered by the Virtual Showcase, which establishes a faceted and curved mirroring surface (Bimber, 2002).
… projector. There are initiatives to employ LED lasers for direct holographic projection, which also decreases power consumption compared to traditional video projectors and ensures that the projection is always in focus without requiring optics (Eisenberg, 2004). Both fixed and hand-held spatial projection-based systems have been demonstrated. At present, hand-held projectors measure 10 × 5 × 2 cm and weigh 150 g, including the processing unit and battery; however, the light output is little (15-45 lumens). The advantage of spatial projection-based technologies is that they support the perception of all visual and tactile/haptic depth cues without the need for shutter glasses or HMDs. Furthermore, the display can be shared by multiple co-located users, and it requires less expensive equipment, which is often already available at design studios. Challenges to projector-based AR approaches include optics and occlusion. First, only a limited field of view and focus depth can be achieved. To reduce these problems, multiple video projectors can be used. An alternative solution is to employ a portable projector, as proposed in the iLamps and the I/O Pad concepts (Raskar et al., 2003; Verlinden et al., 2008). Other issues include occlusion and shadows, which are cast on the surface by the user or other parts of the system. Projection on non-convex geometries depends on the granularity and orientation of the projector. The perceived quality is sensitive to projection errors (also known as registration errors), especially projection overshoot (Verlinden et al., 2003b). A solution for this problem is either to include an offset (dilatation) of the physical model or to introduce pixel masking in the rendering pipeline. As projectors are now being embedded in consumer cameras and smartphones, we expect this type of augmentation in the years to come.

Figure 6. Projection-based display principle (adapted from Raskar and Low, 2001); on the right, the dynamic shader lamps demonstration (Bandyopadhyay et al., 2001).
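Pixel masking, mentioned above as a remedy for projection overshoot, can be prototyped in a few lines: render the physical model's silhouette from the projector's viewpoint, shrink it slightly, and black out everything outside it. The sketch below uses Python/OpenCV; the file names and the 7-pixel margin are assumptions, not values from the cited systems.

import cv2
import numpy as np

# Hypothetical inputs: the content to project, and a binary silhouette of
# the physical model as seen from the projector's point of view.
content = cv2.imread("rendered_content.png")
mask = cv2.imread("model_silhouette.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Erode the silhouette a few pixels so that small registration errors can
# no longer make projected pixels overshoot the model's edge.
safe = cv2.erode(mask, np.ones((7, 7), np.uint8))
content[safe == 0] = 0          # black pixels emit (almost) no light
cv2.imwrite("projector_output.png", content)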
3. Input Technologies
In order to merge the digital and physical, position and orientation tracking of the physical components is required. Here, we will discuss two different types of input technologies: tracking and event sensing. Furthermore, we will briefly discuss other input modalities.
Logitech 3D Tracker, Microscribe and Minolta VI900). All these should be considered for object tracking in Augmented prototyping scenarios. There are significant differences in the tracker/ marker size, action radius and accuracy. As the physical model might consist of a number of parts or a global shape and some additional components (e.g., buttons), the number of items to be tracked is also of importance. For simple tracking scenarios, either magnetic or passive optical technologies are often used. In some experiments we found out that a projector could not be equipped with a standard Flock of Birds 3D magnetic tracker due to interference. Other tracking techniques should be used for this paradigm. For example, the ARToolkit employs complex patterns and a regular webcamera to determine the position, orientation and identification of the marker. This is done by measuring the size, 2D position and perspective distortion of a known rectangular marker, cf. Figure 7 (Kato and Billinghurst, 1999). Passive markers enable a relatively untethered system, as no wiring is necessary. The optical markers are obtrusive when markers are visible to the user while handling the object. Although computationally intensive, marker-less optical
proposed in the iLamps and the I/O Pad concepts (Raskar et al., 2003) (Verlinden et al., 2008). Other issues include occlusion and shadows, which are cast on the surface by the user or other parts of the system. Projection on nonconvex geometries depends on the granularity and orientation of the projector. The perceived quality is sensitive to projection errors (also known as registration errors), especially projection overshoot (Verlinden et al., 2003b). A solution for this problem is either to include an offset (dilatation) of the physical model or introduce pixel masking in the rendering pipeline. As projectors are now being embedded in consumer cameras and smartphones, we are expecting this type of augmentation in the years to come.
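The marker principle can be made concrete with a few lines of code. The sketch below uses OpenCV's solvePnP on the four corners of a known rectangular marker, in the spirit of the ARToolkit approach; the marker size, detected corner pixels and camera intrinsics are placeholder assumptions:

```python
# Marker-pose sketch: recover position/orientation of the camera relative
# to a known square marker from its four projected corners.
import numpy as np
import cv2

MARKER_SIZE = 0.08  # marker edge length in metres (assumed)

# 3D corner positions of the known rectangular marker, in its own frame.
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

# 2D corners as detected in the camera image (placeholder values; a real
# system obtains these from marker detection on the webcam feed).
image_points = np.array(
    [[310, 220], [420, 215], [432, 330], [305, 335]], dtype=np.float32)

# Pinhole camera intrinsics (fx, fy, cx, cy) from a prior calibration.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
# rvec/tvec give the marker's orientation and position relative to the
# camera: the perspective distortion and size encode depth and angle.
print(ok, rvec.ravel(), tvec.ravel())
```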
Tracking type      | Size of tracker (mm)        | Action radius (accuracy) | DOF | Issues
Magnetic           | 16 x 16 x 16                | 1.5 m (1 mm)             | 6   | ferro-magnetic interference
Optical, passive   | 80 x 80 x 0.01              | 3 m (1 mm)               | 6   | line of sight
Optical, active    | 10 x 10 x 5                 | 3 m (0.5 mm)             | 3   | line of sight, wired connections
Ultrasound         | 20 x 20 x 10                | 1 m (3 mm)               | 6   | line of sight
Mechanical linkage | defined by working envelope | 0.7 m (0.1 mm)           | 5   | limited degrees of freedom, inertia
Laser scanning     | none                        | 2 m (0.2 mm)             | 6   | line of sight, frequency, object recognition
The employment of laser-based tracking systems is demonstrated by the Illuminating Clay system by Piper et al. (2002): a slab of Plasticine acts as an interactive surface; the user influences a 3D simulation by sculpting the clay, while the simulation results are projected onto the surface. A laser-based Minolta Vivid 3D scanner is employed to continuously scan the clay surface. In the article, this principle was applied to geodesic analysis, yet it can be adapted to design applications, e.g., the sculpting of car bodies. This method has a number of challenges when used as a real-time tracking means, including the recognition of objects and their posture. However, with the emergence of depth cameras for gaming such as the Kinect (Microsoft), similar systems are now being devised with a very small technological threshold.

Figure 8. Illuminating Clay system with a projector/laser scanner (Piper et al., 2002).

In particular cases, a global measuring system is combined with a different local tracking principle to increase the level of detail, for example, to track the position and arrangement of buttons on the object's surface. Such local positioning systems might have less demanding technical requirements; for example, the sampling frequency can be decreased to only once a minute. One local tracking system is based on magnetic resonance, as used in digital drawing tablets. The Sensetable demonstrates this by equipping an altered commercial digital drawing tablet with custom-made wireless interaction devices (Patten et al., 2001). The Senseboard (Jacob et al., 2002) has similar functions and uses an intricate grid of RFID receivers to determine the (2D) location of an RFID tag on a board. In practice, these systems rely on a rigid tracking table, but it is possible to extend this to a flexible sensing grid. A different technology was proposed by Hudson (2004): using LED pixels as both light emitters and light sensors. By operating one pixel as a sensor whilst its neighbours are illuminated, it is possible to detect light reflected from a fingertip close to the surface. This principle could be applied to embedded displays, as mentioned in Section 2.3.
Hand tracking
Instead of attaching sensors to the physical environment, fingertip and hand tracking technologies can also be used to generate user events. Embedded skins represent a type of interactive surface technology that allows the accurate measurement of touch on the object's surface (Paradiso et al., 2000). For example, the SmartSkin by Rekimoto (2002) consists of a flexible grid of antennae; the proximity or touch of human fingers changes the capacitance locally in the grid, establishing a multi-finger tracking cloth that can be wrapped around an object. Such a solution could be combined with embedded displays, as discussed in Section 2.3. Direct electric contact can also be used to track user interaction; the PaperButtons concept (Pedersen et al., 2000) embeds electronics in the objects and equips the finger with a two-wire plug that supplies power and allows bidirectional communication with the embedded components when they are touched. Magic Touch (Pederson, 2001) uses a similar wireless system; the user wears an RFID reader on his or her finger and can interact by touching the components, which have hidden RFID tags. This method has been adapted to Augmented Reality for design by Kanai et al. (2007). Optical tracking can be used for fingertip and hand tracking as well. A simple example is the light widgets system (Fails and Olsen, 2002), which traces skin colour and determines finger/hand position from 2D blobs. The OpenNI library enables hand and body tracking with depth range cameras such as the Kinect (OpenNI.org). A more elaborate example is the virtual drawing tablet by Ukita and Kidode (2004): fingertips are recognised on a rectangular sheet by a head-mounted infrared camera. Traditional VR gloves can also be used for this type of tracking (Schäfer et al., 1997).
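A rough sketch of the skin-colour blob idea used by light widgets, under assumed HSV threshold values (a real system would calibrate per user and lighting; this is my own illustration, not the light widgets code):

```python
# Skin-colour blob tracking sketch: threshold skin-like pixels in HSV and
# take the largest blob's centroid as the hand position.
import cv2
import numpy as np

def hand_position(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Crude, assumed skin-tone range.
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    blob = max(contours, key=cv2.contourArea)   # largest skin-coloured blob
    m = cv2.moments(blob)
    if m["m00"] == 0:
        return None
    # Centroid of the blob = estimated 2D hand/finger position.
    return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))
```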
Physical sensors
The employment of traditional sensors, labelled physical widgets (phidgets), has been studied extensively in the Computer-Human Interface (CHI) community. Greenberg and Fitchett (2001) introduced simple electronics hardware and a software library to interface PCs with sensors (and actuators) that can be used to discern user interaction. The sensors include switches, sliders, rotation knobs and sensors to measure force, touch and light. More elaborate components like a mini joystick, infrared (IR) motion sensor, air pressure sensor and temperature sensor are commercially available. A similar initiative is iStuff (Ballagas et al., 2003), which also hosts a number of wireless connections to sensors. Some systems embed switches with short-range wireless connections, for example, the Switcheroo and Calder systems (Avrahami and Hudson, 2002; Lee et al., 2004) (cf. Figure 9). This allows a greater freedom in relocating the interactive components while prototyping. The Switcheroo system uses custom-made RFID tags; a receiver antenna has to be located nearby (within a 10-cm distance), so the movement envelope is rather small, while the physical model is wired to a workstation. The Calder toolkit (Lee et al., 2004) uses a capacitive coupling technique that has a smaller range (6 cm with small antennae), but is able to receive and transmit for long periods on a small 12-mm coin cell. Other active wireless technologies would draw more power, leading to a system that would only last a few hours. Although the costs for this system have not been specified, only standard electronics components are required to build such a receiver.
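As an illustration of this kind of event sensing, the sketch below polls sensor events from a microcontroller over a serial link using pyserial; the port name and message format are assumptions, and this generic code stands in for the phidgets and iStuff toolkits rather than reproducing their APIs:

```python
# Generic event-sensing sketch: read switch/slider values sent by a
# microcontroller over a serial port.
import serial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=1) as port:  # assumed port
    while True:
        line = port.readline().decode(errors="ignore").strip()
        if not line:
            continue
        # Assumed firmware format: "<sensor_id>:<value>", e.g. "slider1:512"
        sensor_id, _, value = line.partition(":")
        if value:
            print(f"user event from {sensor_id}: {value}")
```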
Head-mounted displays in particular have limitations and constraints in terms of field of view and resolution, and lend themselves to a kind of isolation. For all display technologies, the current challenges include an untethered interface, the enhancement of graphics capabilities, visual coverage of the display and improvement of resolution. LED-based laser projection and OLEDs are expected to play an important role in the next generation of IAP devices, because these technologies can be employed in see-through or projection-based displays.

To interactively merge the digital and physical parts of Augmented prototypes, position and orientation tracking of the physical components is needed, as well as additional user input means. For global position tracking, a variety of principles exist. Optical tracking and scanning suffer from issues concerning line of sight and occlusion. Magnetic, mechanical-linkage and ultrasound-based position trackers are obtrusive, and only a limited number of trackers can be used concurrently. The resulting palette of solutions is summarized in Table 3 as a morphological chart; in devising a solution for your own AR system, you can use it as a checklist or as inspiration for display and input choices.
Figure 9. Mockup equipped with wireless switches that can be relocated to explore usability (Lee et al., 2004).
Table 3. Morphological chart of display and input solutions; its rows cover, among others, see-through displays, 3D laser scanning, event sensing, wired connections and 3D tracking.
Further reading
For those interested in research in this area, the following publications offer a range of detailed solutions:

International Symposium on Mixed and Augmented Reality (ISMAR): ACM-sponsored annual convention on AR, covering both specific applications and emerging technologies; accessible through http://dl.acm.org

Augmented Reality Times: a daily update on demos and trends in commercial and academic AR systems: http://artimes.rouli.net

Procams workshop: annual workshop on projector-camera systems, coinciding with the IEEE conference on Image Recognition and Robot Vision. The resulting proceedings are freely accessible at http://www.procams.org

Raskar, R. and Bimber, O. (2004) Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302; a personal copy can be downloaded for free at http://140.78.90.140/medien/ar/SpatialAR/download.php

BuildAR: a simple webcam-based application that uses markers, available at http://www.buildar.co.nz/buildar-free-version
Calibrating the projector transformation

The mapping from the physical world to the projector image can be described by the internal (I) and external (E) parameters of the projector. A point P in 3D space is then transformed to:

p = [I E] P    (1)

where p is a point in the projector's coordinate system. If we decompose the rotation and translation components in this matrix transformation, we obtain:

p = [R t] P    (2)

in which R is a 3x3 matrix corresponding to the rotational components of the transformation and t the 3x1 translation vector. We then split R into row vectors R1, R2 and R3, and t into components t1, t2 and t3. Applying the perspective division results in the following two formulae:

ui = (R1 · Pi + t1) / (R3 · Pi + t3)    (3)
vi = (R2 · Pi + t2) / (R3 · Pi + t3)    (4)

in which the 2D point pi is split into (ui, vi). Given n measured point-point correspondences (pi, Pi), i = 1..n, we obtain 2n equations:

R1 · Pi + t1 - ui (R3 · Pi + t3) = 0    (5)
R2 · Pi + t2 - vi (R3 · Pi + t3) = 0    (6)

We can rewrite these 2n equations as a matrix multiplication with a vector of 12 unknown variables, comprising the original transformation components R and t of formula 2. Due to measurement errors, an exact solution usually does not exist; we wish to estimate the transformation with a minimal estimation deviation. In the algorithm presented in (Bimber and Raskar, 2004), the minimax theorem is used to extract this estimate, based on determining the singular values. In a straightforward manner, the internal and external transformations I and E of formula 1 can then be extracted from the resulting transformation.
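For readers who want to experiment, here is a small numerical sketch of the estimation step: it stacks formulae (5) and (6) into a homogeneous linear system and solves it with a singular value decomposition. The data is synthetic, and the code illustrates the general approach rather than Bimber and Raskar's implementation:

```python
# DLT-style estimation sketch: stack the 2n linear equations into A x = 0,
# with x the 12 unknowns of [R t], and take the singular vector of A with
# the smallest singular value as the least-squares solution.
import numpy as np

def estimate_projection(P_world, p_image):
    """P_world: (n,3) 3D points; p_image: (n,2) projected 2D points."""
    rows = []
    for (X, Y, Z), (u, v) in zip(P_world, p_image):
        Ph = [X, Y, Z, 1.0]                     # homogeneous 3D point
        # formula (5): R1.Pi + t1 - ui*(R3.Pi + t3) = 0
        rows.append(Ph + [0, 0, 0, 0] + [-u * c for c in Ph])
        # formula (6): R2.Pi + t2 - vi*(R3.Pi + t3) = 0
        rows.append([0, 0, 0, 0] + Ph + [-v * c for c in Ph])
    A = np.array(rows)                          # shape (2n, 12)
    _, _, Vt = np.linalg.svd(A)
    M = Vt[-1].reshape(3, 4)                    # [R t], up to scale and sign
    M = M / np.linalg.norm(M[2, :3])            # normalise so |R3| = 1
    if M[2, 3] < 0:                             # points lie in front: t3 > 0
        M = -M
    return M

# Synthetic test: project known points with a known [R t], then recover it.
M_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])
P = np.random.rand(8, 3)
hom = (M_true @ np.hstack([P, np.ones((8, 1))]).T).T
uv = hom[:, :2] / hom[:, 2:]
print(estimate_projection(P, uv))               # approximately M_true
```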
References
Avrahami, D. and Hudson, S.E. (2002) Forming interactivity: a tool for rapid prototyping of physical interactive products, Proceedings of DIS '02, pp.141-146.

Azuma, R. (1997) A survey of augmented reality, Presence: Teleoperators and Virtual Environments, Vol. 6, No. 4, pp.355-385.

Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S. and MacIntyre, B. (2001) Recent advances in augmented reality, IEEE Computer Graphics and Applications, Vol. 21, No. 6, pp.34-47.

Ballagas, R., Ringel, M., Stone, M. and Borchers, J. (2003) iStuff: a physical user interface toolkit for ubiquitous computing environments, Proceedings of CHI 2003, pp.537-544.

Bandyopadhyay, D., Raskar, R. and Fuchs, H. (2001) Dynamic shader lamps: painting on movable objects, International Symposium on Augmented Reality (ISMAR), pp.207-216.

Bimber, O. (2002) Interactive rendering for projection-based augmented reality displays, PhD dissertation, Darmstadt University of Technology.

Bimber, O., Stork, A. and Branco, P. (2001) Projection-based augmented engineering, Proceedings of International Conference on Human-Computer Interaction (HCI 2001), Vol. 1, pp.787-791.

Bochenek, G.M., Ragusa, J.M. and Malone, L.C. (2001) Integrating virtual 3-D display systems into product design reviews: some insights from empirical testing, Int. J. Technology Management, Vol. 21, Nos. 3-4, pp.340-352.

Bordegoni, M. and Covarrubias, M. (2007) Augmented visualization system for a haptic interface, HCI International 2007 Poster.

Eisenberg, A. (2004) For your viewing pleasure, a projector in your pocket, New York Times, 4 November.

Fails, J.A. and Olsen, D.R. (2002) LightWidgets: interacting in everyday spaces, Proceedings of IUI '02, pp.63-69.

Faugeras, O. (1993) Three-Dimensional Computer Vision: a Geometric Viewpoint, MIT Press.

Greenberg, S. and Fitchett, C. (2001) Phidgets: easy development of physical interfaces through physical widgets, Proceedings of UIST '01, pp.209-218.

Hoeben, A. (2010) Using a projected Trompe l'Oeil to highlight a church interior from the outside, EVA 2010.

Hudson, S. (2004) Using light emitting diode arrays as touch-sensitive input and output devices, Proceedings of the ACM Symposium on User Interface Software and Technology, pp.287-290.

Jacob, R.J., Ishii, H., Pangaro, G. and Patten, J. (2002) A tangible interface for organizing information using a grid, Proceedings of CHI '02, pp.339-346.
Kanai, S., Horiuchi, S., Shiroma, Y., Yokoyama, A. and Kikuta, Y. (2007) An integrated environment for testing and assessing the usability of information appliances using digital and physical mock-ups, Lecture Notes in Computer Science, Vol. 4563, pp.478-487.

Kato, H. and Billinghurst, M. (1999) Marker tracking and HMD calibration for a video-based augmented reality conferencing system, Proceedings of International Workshop on Augmented Reality (IWAR '99), pp.85-94.

Klinker, G., Dutoit, A.H., Bauer, M., Bayer, J., Novak, V. and Matzke, D. (2002) Fata Morgana: a presentation system for product design, Proceedings of ISMAR '02, pp.76-85.

Oviatt, S.L., Cohen, P.R., Wu, L., Vergo, J., Duncan, L., Suhm, B., Bers, J., et al. (2000) Designing the user interface for multimodal speech and gesture applications: state-of-the-art systems and research directions, Human Computer Interaction, Vol. 15, No. 4, pp.263-322.

Paradiso, J.A., Hsiao, K., Strickon, J., Lifton, J. and Adler, A. (2000) Sensor systems for interactive surfaces, IBM Systems Journal, Vol. 39, Nos. 3-4, pp.892-914.

Patten, J., Ishii, H., Hines, J. and Pangaro, G. (2001) Sensetable: a wireless object tracking platform for tangible user interfaces, Proceedings of CHI '01, pp.253-260.

Pedersen, E.R., Sokoler, T. and Nelson, L. (2000) PaperButtons: expanding a tangible user interface, Proceedings of DIS '00, pp.216-223.

Pederson, T. (2001) Magic touch: a simple object location tracking system enabling the development of physical-virtual artefacts in office environments, Personal and Ubiquitous Computing, January, Vol. 5, No. 1, pp.54-57.

Piper, B., Ratti, C. and Ishii, H. (2002) Illuminating Clay: a 3-D tangible interface for landscape analysis, Proceedings of CHI '02, pp.355-362.

Prince, S.J., Xu, K. and Cheok, A.D. (2002) Augmented reality camera tracking with homographies, IEEE Computer Graphics and Applications, November, Vol. 22, No. 6, pp.39-45.

Raskar, R. and Bimber, O. (2004) Spatial Augmented Reality, A.K. Peters, ISBN: 1568812302.

Raskar, R. and Low, K-L. (2001) Interacting with spatially augmented reality, ACM International Conference on Virtual Reality, Computer Graphics and Visualization in Africa (AFRIGRAPH), pp.101-108.

Raskar, R., van Baar, J., Beardsley, P., Willwacher, T., Rao, S. and Forlines, C. (2003) iLamps: geometrically aware and self-configuring projectors, SIGGRAPH, pp.809-818.

Raskar, R., Welch, G., Low, K-L. and Bandyopadhyay, D. (2001) Shader lamps: animating real objects with image based illumination, Proceedings of Eurographics Workshop on Rendering, pp.89-102.

Rekimoto, J. (2002) SmartSkin: an infrastructure for freehand manipulation on interactive surfaces, Proceedings of CHI '02, pp.113-120.

Saakes, D.P., Chui, K., Hutchison, T., Buczyk, B.M., Koizumi, N., Inami, M. and Raskar, R. (2010) Slow Display, SIGGRAPH 2010 Emerging Technologies: Proceedings of the 37th Annual Conference on Computer Graphics and Interactive Techniques, July 2010.

Schäfer, K., Brauer, V. and Bruns, W. (1997) A new approach to human-computer interaction: synchronous modelling in real and virtual spaces, Proceedings of DIS '97, pp.335-344.

Schall, G., Mendez, E., Kruijff, E., Veas, E., Sebastian, J., Reitinger, B. and Schmalstieg, D. (2008) Handheld augmented reality for underground infrastructure visualization, Journal of Personal and Ubiquitous Computing, Springer, DOI 10.1007/s00779-008-0204-5.

Schmalstieg, D. and Wagner, D. (2008) Mobile phones as a platform for augmented reality, Proceedings of the IEEE VR 2008 Workshop on Software Engineering and Architectures for Realtime Interactive Systems, pp.43-44.

Sutherland, I.E. (1968) A head-mounted three-dimensional display, Proceedings of AFIPS, Part I, Vol. 33, pp.757-764.

Ukita, N. and Kidode, M. (2004) Wearable virtual tablet: fingertip drawing on a portable plane-object using an active-infrared camera, Proceedings of IUI 2004, pp.169-175.

Verlinden, J. and Horváth, I. (2008) Enabling interactive augmented prototyping by portable hardware and a plug-in-based software architecture, Journal of Mechanical Engineering, Slovenia, Vol. 54, No. 6, pp.458-470.

Verlinden, J.C., de Smit, A., Horváth, I., Epema, E. and de Jong, M. (2003) Time compression characteristics of the augmented prototyping pipeline, Proceedings of Euro-uRapid '03, p.A/1.

Welch, G. and Foxlin, E. (2002) Motion tracking: no silver bullet, but a respectable arsenal, IEEE Computer Graphics and Applications, Vol. 22, No. 6, pp.24-38.
"Hi Marina. Nice to meet you! I have heard a lot about you."
I usually avoid this kind of phrase. Judging from my experience, telling people that you have heard a lot about them makes them feel uncomfortable. But this time I say it. After all, it's no secret that Marina and the AR Lab in The Hague share a history which dates back much longer than her current residency at the AR Lab. At the lab, she is known as one of the first students who overcame the initial resistance of the fine arts programme and started working with AR. With the support of the lab, she realized the AR artworks Out of the blue and Drops of white in the course of her studies. In 2008 she graduated with an AR installation that showed her 3D animated portfolio. Then, having worked with AR for three years, she decided to take a break from technology and returned to photography, drawing and painting. Now, after yet another three years, she is back in the mixed reality world. Convinced by her concepts for future works, the AR Lab has invited her as an Artist in Residence. That is what I have heard about her, and it made me want to meet her for an artist portrait. Knowing quite
Marina de Haas
By Hanna Schraffenberger
a lot about her past, I am interested in what she is currently working on in the context of her residency. When she starts talking, it becomes clear that she has never really stopped thinking about AR. There's a handwritten notebook full of concepts and sketches for future works. Right now, she is working on animations of two animals. Once she is done animating, she'll use AR technology to place the animals, an insect and a dove, in the hands of the audience.

"They will hold a little funeral monument in the shape of a tile in their hands. Using AR technology, the audience will then see a dying dove or a dying crane fly with a missing foot."

Marina tells me her current piece is about impermanence and mortality, but also about the fact that death can be the beginning of something new. Likewise, the piece is not only about death but also intended as an introduction and beginning for a forthcoming work. The AR Lab makes this beginning possible through financial support, but also provides technical assistance and serves as a place for mutual inspiration and exchange. Despite her long break from the digital arts, the young artist feels confident about working with AR again:

"It's a bit like biking: once you've learned it, you never unlearn it. It's the same with me and AR; of course I had to practice a bit, but I still have the feel for it. I think working with AR is just a part of me."

After having paused for three years, Marina is positively surprised about how AR technology has evolved in the meantime:

"AR is out there, it's alive, it's growing and, finally, it can be markerless. I don't like the use of markers. They are not part of my art and people see them when they don't wear AR glasses. I am also glad that so many people know AR from their mobile phones or at least have heard about it before. Essentially, I don't want the audience to wonder about the technology, I want them to look at the pictures and animations I create. The more people are used to the technology, the more they will focus on the content. I am really happy and excited about how AR has evolved in the last years!"

I ask how working with brush and paint differs from working with AR, but there seems to be surprisingly little difference:

"The main difference is that with AR I am working with a pen-tablet, a computer and a screen. I control the software, but if I work with a brush I have the same kind of control over it. In the past, I used to think that there was a difference, but now I think of the computer as just another medium to work with. There is no real difference between working with a brush and working with a computer. My love for technology is similar to my love for paint."

Marina discovered her love for technology at a young age:

"When I was a child I found a book with code and so I programmed some games. That was fun, I just understood it. It's the same with creating AR works now. My way of thinking perfectly matches with how AR works. It feels completely natural to me."

Nevertheless, working with technology also has its downside:

"The most annoying thing about working with AR is that you are always facing technical limitations and there is so much that can go wrong. No matter how well you do it, there is always the risk that something won't work. I hope for technology to get more stable in the future."

When realizing her artistic augmentations, Marina sticks to an established workflow:

"I usually start out with my own photographs and a certain space I want to augment. Preferably I measure the dimensions of the space, and then I work with that
room in my head. I have it in my inner vision and I think in pictures. There is a photo register in my head which I can access. It's a bit like parking a car. I can park a car in a very small space extremely well. I can feel the car around me and I can feel the space I want to put it in. It's the same with the art I create. Once I have a clear idea of the artwork I want to create, I use Cinema 4D software to make 3D models. Then I use BuildAR to place my 3D models in the real space. If everything goes well, things happen that you could not have imagined."
A result of this process is, for example, the AR installation Out of the blue, which was shown at the TodaysArt festival in The Hague in 2007:

"The idea behind Out of the blue came from a photograph I took in an elevator. I took the picture so that the lights in the elevator looked like white ellipses on a black background. I took this basic elliptical shape as a basis for working in a very big space. I was very curious if I could use such a simple shape and still convince the audience that it really existed in the space. And it worked: people tried to touch it with their hands and were very surprised when that wasn't possible."

The fact that people believe in the existence of her virtual objects is also important for Marina's personal understanding of AR:

"For me, Augmented Reality means using digital images to create something which is not real. However, by giving meaning to it, it becomes real and people realize that it might as well exist."

I wonder whether there is a specific place or space she'd like to augment in the future, and Marina has quite some places in mind. They have one thing in common: they are all well-known museums that show modern art.

"I would love to create works for the big museums such as the Tate Modern or MoMA. In the Netherlands, I'd love to augment spaces at the Stedelijk Museum in Amsterdam or the Boijmans museum in Rotterdam. That's my world. Going to a museum means a lot to me. Of course, one can place AR artworks everywhere, also in public spaces. But it is important to me that people who experience my work have actively chosen to go somewhere to see art. I don't want them to just see it by accident at a bus stop or in a park."

Rather than placing her virtual models in a specific physical space, her current work follows a different approach. This time, Marina will place the animated dying animals in the hands of the audience. The artist has some ideas about how to design this physical contact with the digital animals:

"In order for my piece to work, the viewer needs to feel like he is holding something in his hand. Ideally, he will feel the weight of the animal. The funeral monuments will therefore have a certain weight."

It is still open where and when we will be able to experience the piece:

"My residency lasts 10 weeks. But of course that's not enough time to finish. In the past, a piece was finished when the time to work on it was up. Now, a piece is finished when it feels complete. It's something I decide myself; I want to have control over it. I don't want any more restrictions. I avoid deadlines."

Coming from a fine arts background, Marina has a tip for art students who want to follow in her footsteps and are curious about working with AR:

"I know it can be difficult to combine technology with art, but it is worth the effort. Open yourself up to art in all its possibilities, including AR. AR is a chance to take a step in a direction of which you have no idea where you'll find yourself. You have to be open to it and look beyond the technology. AR is special; I couldn't live without it any more..."
A magical leverage
in search of the killer application
Jeroen van Erp graduated from the Faculty of Industrial Design at the Technical University of Delft in 1988. In 1992, he was one of the founders of Fabrique in Delft, which positioned itself as a multidisciplinary design bureau. He established the interactive media department in 1994, focusing primarily on developing websites for the world wide web - brand new at that time.
Cultural institutions are starting to understand how they can benefit from this technology. At the moment, there is a variety of applications available (mainly mobile applications for tablets or smartphones) that create added value for the user or consumer. This is great, because it allows not only the audience but also the industry to gain experience with this still-developing technology. But to make Augmented Reality a real success, the next step will be of vital importance.
www.fabrique.nl
Innovating or innovating?
Let's have a look at the different forms of innovating in figure 1. On the left we see innovations with a bottom-up approach, and on the right a top-down approach to innovating. A bottom-up approach means that we have a promising new technique, concept or idea, although the exact goal or matching business model isn't clear yet. In general, bottom-up developments are technological or art-based, and are therefore what I would call autonomous: the means are clear, but the exact goal has still to be defined. The usual strategy to take it further is to set up a start-up company in order to develop the technique and, hopefully, to create a market. This is not always that simple. Innovating from a top-down approach means that the innovation is steered on the basis of a more or less clearly defined goal. In contrast with bottom-up innovations, the goal is well-defined and the designer or developer has to choose the right means and design a solution that fits the goal. This can be a business goal, but also
a social goal. A business goal is often derived from a benefit for the user or the consumer, which is expected to generate an economic benefit for the company. A marketing specialist would state that there is already a market. This approach means that you have to innovate with an intended goal in mind. A business-goal-driven innovation can be a product innovation (physical products, services or a combination of the two) or a brand innovation (storytelling, positioning), but always with an intended economic or social benefit in mind. As there is an expected benefit, people are willing to invest. It is interesting to note the difference on the vertical axis between radical innovations and incremental changes (Robert Verganti, Design-Driven Innovation). Incremental changes are improvements of existing concepts or products. This happens a lot, for instance in the automotive industry. In general, a radical innovation changes the experience of the product in a fundamental way, and as a result often changes an entire business.
This is something Apple has achieved several times, but it has also been achieved by TomTom, and by Philips and Douwe Egberts with their Senseo coffee machine.
It became clear to me that the experience of AR wasn't suitable at all for this form of publishing. AR doesn't do well on a projection screen. It does well in the user's head, where time, place, reality and imagination can play an intriguing game with our senses. It is unlikely that the technique of Augmented Reality will lead to mass consumption in the sense of experiencing the same thing with a lot of people at the same time. No, by their nature, AR applications are intimate and intense, and this is one of their biggest assets.
Future
We have come a long way, and the things we can do with AR are becoming more amazing by the day. The big challenge is to make it applicable in relevant solutions. There's no discussion about the value of AR in specialist areas, such as the military industry. Institutions in the field of art and culture have discovered the endless possibilities, and now it is time to make the big leap towards solutions with social or economic value (the green area in figure 1). This will give the technique the chance to develop further and ultimately flourish. From that perspective, it wouldn't surprise me if the first really good, efficient and economically profitable application emerges for educational purposes. Let's not forget we are talking about a technology that is still in its infancy. When I look back at the websites we made 15 years ago, I realize the gigantic steps we have made, and I am aware of the fact that we could hardly imagine then what the impact of the internet would be on society today. Of course, it's hard to compare the concept of Augmented Reality with that of the internet, but it is a valid comparison, because it gives the same powerless feeling of not being able to predict its future. But it will probably be bigger than you can imagine.
Figure 1. Different forms of innovating: bottom-up versus top-down approaches (horizontal axis) and incremental versus radical innovation (vertical axis).
When using Augmented Reality (AR) for vision, virtual objects are added to the real world and displayed in some way to the user, be that via a monitor, projector, or head-mounted display (HMD). Often it is desirable, or even unavoidable, for the viewpoint of the user to move around the environment (this is particularly the case if the user is wearing an HMD). This presents a problem, regardless of the type of display used: how can the viewpoint be decoupled from the augmented virtual objects?

To recap, virtual objects are blended with the real world view in order to achieve an Augmented world view. From our initial viewpoint we can determine what the virtual object's position and orientation (pose) in 3D space, and its scale, should be. However, if the viewpoint changes, then how we view the virtual object should also change. For example, if I walk around to face the back of a virtual object, I expect to be able to see the rear of that object. The solution to this problem is to keep track of the user's viewpoint and, in the event that the viewpoint changes, to update the pose of any virtual content accordingly. There are a number of ways in which this can be achieved, by using, for example, positional sensors (such as inertia trackers), a global positioning system, or computer vision techniques. Typically, the best results come from systems that take the data from many tracking systems and blend them together.

At TU Delft, we have been researching and developing techniques to track position using computer vision. Often video cameras are used in AR systems; indeed, in the case where the AR system uses video see-through, the use of cameras is necessary. Using computer vision techniques, we can identify landmarks in the environment, and, using these landmarks, we can determine the pose of our camera with basic geometry. If the camera is not used directly as the viewpoint (as is the case in optical see-through systems), then we can still keep track of the viewpoint by attaching the camera to it. Say, for example, that we have an optical see-through HMD with an attached video camera. Then, if we calculate the pose of the camera, we can determine the pose of the viewpoint, provided that the camera's position relative to the viewpoint remains fixed.

The problem, then, has been reduced to identifying landmarks in the environment. Historically, this has been achieved by the use of fiducial markers, which act as points of reference in the image. Fiducial markers provide us with a means of determining the scale of the visible environment, provided that enough points of reference are visible, we know their relative positions, and these relative positions don't change. A typical marker often used in AR applications consists of a card with a black rectangle in the centre, a white border, and an additional mark to determine which edge of the card is considered the bottom. As we know that the corners of the black rectangle are all 90 degrees, and we know the distance between corners, we can identify the marker and determine the pose of the camera with regard to the points of reference (in this case the four corners of the card). A large number of simple desktop AR applications make use of individual markers to track camera pose or, conversely, to track the position of the markers relative to our viewpoint. Larger applications require multiple markers linked together, normally distinguishable by a unique pattern or barcode in the centre of each marker. Typically, the more points of reference that are visible in a scene, the better the results when determining the camera's pose. The key advantage of using markers for tracking the pose of the camera is that an environment can be carefully prepared in advance and, provided the environment does not change, should deliver the same AR experience each time.

Sometimes, however, it is not feasible to prepare an environment with markers. Often it is desirable to use an AR application in an unknown or unprepared environment. In these cases, an alternative to using markers is to identify the natural features found in the environment. The term natural features can be used to describe the parts of an image that stand out. Examples include edges, corners, areas of high contrast, etc. In order to be able to use natural features to track the camera position in an unknown environment, we need to first identify the natural features and then determine their relative positions in the environment. Whereas you could place 20 markers in an environment and still only have 80 identifiable corners, there are often hundreds of natural features in any one image. This makes using natural features a more robust solution than using markers, as there are far more landmarks we can use to navigate, not all of which need to be visible. One of the key advantages of using natural features over markers is that, as we already need to identify and keep track of those natural features seen from our initial viewpoint, we can use the same method to continually update a 3D map of features as we change our viewpoint. This allows our working environment to grow, which could not be achieved in a prepared environment. Although we are able to determine the relative distance between features, the question remains: how can we determine the absolute position of features in an environment without some known measurement? The short answer is that we cannot; either we need to estimate the distance or we can introduce a known measurement. In a future edition we will discuss the use of multiple video cameras and how, given the absolute distance between the cameras, we can determine the absolute position of our identified features.
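To give a feel for what natural features look like in practice, the following sketch detects and matches ORB features (corners and high-contrast patches) between two views using OpenCV. It is an illustrative example rather than the TU Delft tracking system, and the image file names are placeholders:

```python
# Natural-feature sketch: detect ORB features in two views and match them;
# each good match is one landmark seen from both viewpoints.
import cv2

img1 = cv2.imread("view1.png", cv2.IMREAD_GRAYSCALE)  # assumed input images
img2 = cv2.imread("view2.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-checking for robustness.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(kp1)} features in view 1, {len(matches)} matched across views")
```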
Figure 1. Mediated reality head mounted device in use during the experiment in the Dutch forensic field lab.
Mediated reality offers new possibilities for future crime scene investigation and for tackling current issues in crime scene investigation. In Augmented Reality, virtual data is spatially overlaid on top of physical reality; with this technology, the flexibility of virtual reality can be used while remaining grounded in physical reality (Azuma, 1997). Mediated reality refers to the ability to add to, subtract information from, or otherwise manipulate one's perception of reality through the use of a wearable computer or hand-held device (Mann and Barfield, 2003).

In order to reveal the current challenges for supporting spatial analysis in crime scene investigation, structured interviews with five international experts in the area of 3D crime scene reconstruction were conducted. The interviews showed a particular interest in current challenges in spatial reconstruction and in the interaction with the reconstruction data. The identified challenges are:

Complexity: situations differ significantly.

Time freeze: data capture is often conducted only once, after a scene has been contaminated.

Time needed for reconstruction: data capture, alignment, data clean-up, geometric modelling and analyses are manual steps.

Expertise: expertise is required to deploy dedicated software and to secure evidence at the crime scene.

The interview sessions ended with an open discussion on how mediated reality can support crime scene investigation in the future. Based on these open discussions, the following requirements for a mediated reality system that is to support crime scene investigation were identified:

Lightweight head-mounted display (HMD): the investigators who arrive first on the crime scene currently carry a digital camera. Weight and ease of use are important design criteria; experts would like a device close to a pair of glasses.

Contactless augmentation alignment (no markers on the crime scene): the first investigator who arrives on a crime scene has to keep the scene as untouched as possible. Technology that involves preparing the scene is therefore unacceptable.

Bare-hand gestures for user interface operation: the hands of the CSIs have to be free to physically interact with the crime scene when needed, e.g. to secure evidence, open doors, or climb. Additional hardware such as data gloves, or physically touching an interface such as a mobile device, is not acceptable.

Remote connection to and collaboration with experts: expert crime scene investigators are a scarce resource and are not often available at a location on request. Setting up a remote connection to guide a novice investigator through the crime scene and to collaboratively analyze it has the potential to improve the investigation quality.

To address the above requirements, a novel mediated reality system for collaborative spatial analysis on location has been designed, developed and evaluated together with experts in the field and the NFI. This system supports collaboration between crime scene investigators (CSIs) on location, who wear an HMD (see Figure 1), and expert colleagues at a distance. The mediated reality system builds a 3D map of the environment in real time, allows remote users to virtually join and interact with the wearer of the HMD in a shared Augmented space, and uses bare-hand gestures to operate the 3D multi-touch user interface. The resulting mediated reality system thus supports a lightweight head-mounted display (HMD), contactless augmentation alignment, and a remote connection to and collaboration with expert crime scene investigators.

The video see-through of a modified Carl Zeiss Cinemizer OLED (cf. Figure 2) fulfills the requirement for a lightweight HMD, as its total weight is ~180 grams. Two Microsoft HD-5000 webcams are stripped and mounted in front of the Cinemizer, providing a full stereoscopic 720p pipeline. Both cameras record at ~30 Hz in 720p; the images are projected into our engine, which renders 720p stereoscopic images to the Cinemizer.
Figure 2. Head mounted display, modified Cinemizer OLED (Carl Zeiss) with two Microsoft HD-5000 webcams.
As for all mediated reality systems, robust real-time pose estimation is one of the most crucial parts, as the 3D pose of the camera in the physical world is needed to render virtual objects correctly at the required positions. We use a heavily modified version of PTAM (Parallel Tracking and Mapping) (Klein and Murray, 2007), in which the single-camera setup is replaced by a stereo camera setup using 3D natural feature matching and estimation. Using this algorithm, a sparse metric map (cf. Figure 3) of the environment is created, which can be used for pose estimation in our Augmented Reality system.

In addition to the sparse metric map, a dense 3D map of the crime scene is created. The dense metric map provides a detailed copy of the crime scene, enabling detailed analysis; it is created from a continuous stream of disparity maps that are generated while the user moves around the scene. Each new disparity map is registered (combined) using the pose information from the pose estimation module to construct or extend the 3D map of the scene. The point clouds are used for occlusion and collision checks, and for snapping digital objects to physical locations.

By using an innovative hand tracking system, the mediated reality system can recognize bare-hand gestures for user interface operation. This hand tracking system utilizes the stereo cameras of the HMD; an adaptive algorithm has been designed to determine whether to rely on colour, disparity or both, depending on the lighting conditions. This is the core technology to fulfill the requirement of bare-hand interfacing. The user interface and the virtual scene are general-purpose parts of the mediated reality system; they can be used for CSI, but also for any other mediated reality application. The tool set, however, needs to be tailored to the application domain. The current mediated reality system supports the following tasks for CSIs: recording the scene, placing tags, loading 3D models, analyzing bullet trajectories and placing restricted-area ribbons. Figure 4 shows the corresponding menu attached to a user's hand.

The mediated reality system has been evaluated on a staged crime scene at the NFI's Lab with three observers, one expert and one layman with only limited background in CSI. Within the experiment, the layman, facilitated by the expert, conducted three spatial tasks, i.e. tagging a specific part of the scene with information tags, using barrier tape and poles to spatially secure the body in the crime scene, and analyzing a bullet trajectory with ricochet. The experiment was analyzed along seven dimensions (Burkhardt et al., 2007): fluidity of collaboration, sustaining mutual understanding, information exchanges for problem solving, argumentation and reaching consensus, task and time management, cooperative orientation, and individual task orientation. The results show that the mediated reality system supports remote spatial interaction with the physical scene as well as collaboration in shared Augmented space, while tackling current issues in crime scene investigation. The results also show that there is a need for more support to identify whose turn it is, who wants the next turn, etc. Additionally, the results show the need to represent the expert in the scene to increase the awareness and trust of working in a team and to counterbalance the feeling of being observed. Knowing the expert's focus and current activity could possibly help to overcome this issue. Whether traditional patterns for computer-mediated interaction (Schümmer and Lukosch, 2007) support awareness in mediated reality, or rather new forms of awareness need to be designed, will be the subject of future research. Further tasks for future research include the design and evaluation of alternative interaction possibilities, e.g. by using physical objects that are readily available in the environment, sensor fusion, and image feeds from spectral cameras or previously recorded laser scans to provide more situational awareness, as well as the privacy, security and validity of captured data. Finally, though the system is being tested and used for educational purposes within the CSI Lab of the Netherlands Forensic Institute (NFI), only the application and testing of the mediated reality system in real settings can show its added value for crime scene investigation.
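The dense-mapping idea can be sketched in a few lines: a hedged, simplified illustration (not the NFI system's code) that computes a disparity map from a stereo pair with OpenCV and back-projects it to a 3D point-cloud fragment. The file names, focal length and baseline are assumed values:

```python
# Dense-map sketch: disparity from a stereo pair, back-projected to 3D.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed stereo pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
# StereoBM returns fixed-point disparities scaled by 16.
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

f, baseline = 700.0, 0.06   # focal length (px) and camera distance (m)
cx, cy = left.shape[1] / 2, left.shape[0] / 2

# depth = f * baseline / disparity; back-project every valid pixel to 3D.
ys, xs = np.where(disparity > 0)
Z = f * baseline / disparity[ys, xs]
X = (xs - cx) * Z / f
Y = (ys - cy) * Z / f
cloud = np.column_stack([X, Y, Z])
print(f"dense map fragment: {cloud.shape[0]} points")
```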
REFERENCES
G. Klein, D. Murray, Parallel Tracking and Mapping for Small AR Workspaces, Proc. International Symposium on Mixed and Augmented Reality, 2007, 225-234
T. Schümmer, S. Lukosch, Patterns for Computer-Mediated Interaction, John Wiley & Sons, Ltd., 2007
On Friday, December 16th, 2011, the Symphony Orchestra of the Royal Conservatoire played Die Walküre (act 1) by Richard Wagner at the beautiful concert hall De Vereeniging in Nijmegen. The AR Lab was invited by the Royal Conservatoire to provide visuals during this live performance. Together with students from different departments of the Royal Academy of Art, we designed a screen consisting of 68 pieces of transparent cloth (400 x 20 cm), hanging in four layers above the orchestra. By projecting on this cloth we created visuals giving the illusion of depth. We chose 7 leitmotifs (a leitmotif is a recurring theme associated with a particular person, place, or idea) and created animations representing them using colour, shape and movement. These animations were played at key moments of the performance.
Contributors
Wim van Eck
Royal Academy of Art (KABK) w.vaneck@kabk.nl
Wim van Eck is the 3D animation specialist of the AR Lab. His main tasks are developing Augmented Reality projects, supporting and supervising students and creating 3D content. His interests include real-time 3D animation, game design and creative research.

Robert Prevel
Delft University of Technology r.g.prevel@tudelft.nl
Robert Prevel is working on a PhD focusing on localisation and mapping in Augmented Reality applications at the Delft Biorobotics Lab, Delft University of Technology, under the supervision of Prof.dr.ir. P.P. Jonker.

Maarten Lamers
Leiden University lamers@liacs.nl
Maarten Lamers is assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) and board member of the Media Technology MSc programme. His specializations include social robotics, bio-hybrid computer games, scientific creativity, and models for perceptualization.
Special thanks
We would like to thank Reba Wesdorp, Edwin van der Heide, Tama McGlinn, Ronald Poelman, Karolina Sobecka, Klaas A. Mulder, Joachim Rotteveel and last but not least the Stichting Innovatie Alliantie (SIA) and the RAAK (Regionale Aandacht en Actie voor Kenniscirculatie) initiative of the Dutch Ministry of Education, Culture and Science.
Hanna Schraffenberger
Leiden University hkschraf@liacs.nl
Hanna Schraffenberger works as a researcher and PhD student at the Leiden Institute of Advanced Computer Science (LIACS) and at the AR Lab in The Hague. Her research interests include interaction in interactive art and (non-visual) Augmented Reality.
Jeroen van Erp
Fabrique www.fabrique.nl
Jeroen van Erp co-founded Fabrique, a multidisciplinary design agency in which the different design disciplines (graphic, industrial, spatial and new media) are closely interwoven. As a designer he was recently involved in the flagship store of Giant Bicycles, the website for the Dutch National Ballet and the automatic passport control at Schiphol airport, among others.
Stephan Lukosch
Delft University of Technology s.g.lukosch@tudelft.nl
Stephan Lukosch is associate professor at Delft University of Technology. His current research focuses on collaborative design and engineering in traditional as well as emerging interaction spaces such as Augmented Reality. In this research, he combines recent results from intelligent and context-adaptive collaboration support, collaborative storytelling for knowledge elicitation and decision-making, and design patterns for computer-mediated interaction.

Pieter Jonker
Delft University of Technology P.P.Jonker@tudelft.nl
Pieter Jonker is Professor at Delft University of Technology, Faculty of Mechanical, Maritime and Materials Engineering (3mE). His main interests and fields of research are real-time embedded image processing, parallel image processing architectures, robot vision, robot learning and Augmented Reality.
Esmé Vahrmeijer
Royal Academy of Art (KABK) e.vahrmeijer@kabk.nl
Esmé Vahrmeijer is a graphic designer and the webmaster of the AR Lab. Besides her work at the AR Lab, she is a part-time student at the Royal Academy of Art (KABK) and runs her own graphic design studio, Ooxo. Her interests are in graphic design, typography, web design, photography and education.

Ferenc Molnár
Photographer info@baseground.nl
Ferenc Molnár is a multimedia artist based in The Hague since 1991. In 2006 he returned to the KABK to study photography, and that is where he started to experiment with AR. His focus is on the possibilities and the impact of this new technology as a communication platform in our visual culture.

Jouke Verlinden
Delft University of Technology j.c.verlinden@tudelft.nl
Jouke Verlinden is assistant professor at the section of computer aided design engineering at the Faculty of Industrial Design Engineering. With a background in virtual reality and interaction design, he leads the Augmented Matter in Context lab, which focuses on the blend between bits and atoms for design and creativity. He is co-founder and lead of the minor programme on advanced prototyping and editor of the International Journal of Interactive Design, Engineering and Manufacturing.

Yolande Kolstee
Royal Academy of Art (KABK) Y.Kolstee@kabk.nl
Yolande Kolstee has been head of the AR Lab since 2006. She holds the post of Lector (Dutch for researcher in professional universities) in the field of Innovative Visualisation Techniques in Higher Art Education for the Royal Academy of Art, The Hague.

Next Issue
The next issue of AR[t] will be out in October 2012.