
02
November 2012

Augmented Reality, art and technology

Marty
The new affordable AR headset
Yolande Kolstee & Pieter Jonker

Augmented Prototyping
Augmented Reality to support the design process
Jouke Verlinden

Radioscape
Edwin van der Heide

AR[t]
Magazine about Augmented Reality, art and technology

November 2012

Image courtesy STUDIO Edwin van der Heide

Colophon
ISSN number: 2213-2481

Contact
The Augmented Reality Lab (AR Lab)
Royal Academy of Art, The Hague (Koninklijke Academie van Beeldende Kunsten)
Prinsessegracht 4
2514 AN The Hague
The Netherlands
+31 (0)70 3154795
www.arlab.nl
info@arlab.nl

Editorial team
Yolande Kolstee, Hanna Schraffenberger, Esmé Vahrmeijer (graphic design) and Jouke Verlinden.

Contributors
AR Lab & Partners: Wim van Eck, Edwin van der Heide, Pieter Jonker, Maarten Lamers, Ferenc Molnár (photography), Maaike Roozenburg and Dirk Vis.
Guest contributors: Esther de Graaff, Vincent Hui, Barbara Nordhjem and Martin Sjardijn.

Cover
Marty, the new Augmented Reality headset by the AR Lab, worn by Mariana Kniveton.

Table of contents
Welcome to AR[t]
Penetrating deeper into the temporal lobe (Barbara Nordhjem)
Marty: the new affordable AR headset! (Yolande Kolstee and Pieter Jonker)
Radioscape in the context of augmented reality (Edwin van der Heide)
[AR]chitectural Education at Play (Vincent Hui)
Digital technologies and fine art: a complex relationship (Martin Sjardijn)
Who owns the space? (Yolande Kolstee)
Chasing virtual spooks, losing real weight (Hanna Schraffenberger)
Finding what I didn't seek (Esther de Graaff)
Augmented Reality: A Story (Dirk Vis)
Smart Replicas: bringing heritage back to life (Maaike Roozenburg)
Unspecialize! The more you know the less you see: a portrait of Daniel Disselkoen (Hanna Schraffenberger)
How it was made: a tangible replica with mobile AR (Jouke Verlinden, Maaike Roozenburg & Wolf Song)
Augmented Prototyping: Augmented Reality to support the design process (Jouke Verlinden)
Augmented Belief in Reality (Maarten H. Lamers)
The revival of Virtual Reality? (Wim van Eck)

Welcome...

to the second issue of AR[t], the magazine about Augmented Reality, art and technology!
Much has happened in the six months since AR[t] magazine's first issue. The AR Lab has developed a new Augmented Reality headset in close co-operation with the Bio-robotics Lab at Delft University of Technology. Our article "The Augmented Painting: Playful Interaction with Multi-Spectral Images" has been accepted at ISMAR 2012, the International Symposium on Mixed and Augmented Reality. The project described in this article shows, in an interactive way, discoveries made by material specialists and art historians in Vincent van Gogh's paintings. The AR Lab has grown: in September 2012 Dirk Vis joined our team for one day per week to promote AR and other new visualization techniques, such as 3D printing, in various disciplines at the Royal Academy of Art by organizing a Pop-Up Art Gallery. We have furthermore welcomed Mariana Kniveton as an intern at the Lab. She researches the use of innovative visualization techniques in museums and has supported the editors of this magazine in various tasks. We have also invited DPI Animation House, a company based in Scheveningen, to join the AR Lab. We are experimenting with artistic projection mapping, while Jouke Verlinden works in this field for industrial goals like rapid prototyping.
The hard work of the last few months has paid off: the AR Lab won the 3rd prize at the RAAK-AWARD 2012 for the high quality of the research work done in the framework of a RaakPro Research Programme. Furthermore, the AR Lab's Van Gogh iPad project was shortlisted for the Most Innovative Use of AR 2012 Award as part of the AR Summit held in June 2012 in London. For the second time in a row we have presented our research at the ISMAR conference, which took place from 5 till 8 November 2012 in Atlanta, and the AR Lab's presentation of the Van Gogh project was rewarded as Best Demo 2012.
While the magazine undoubtedly has grown, its core remains the same. In AR[t], we share our interest in Augmented Reality (AR), discuss its applications in the arts and provide insight into the underlying technology. If you haven't done so yet, we invite you to check out the first issue and our website www.arlab.nl to learn more about Augmented Reality in the arts and the work of the AR Lab.
The AR[t] magazine has developed as well. Six months ago we started off as an aspiring magazine series for the emerging AR community in and outside the Netherlands. We received enthusiastic feedback on the first issue of the magazine. More than that, we were approached with various contributions from interested parties. The editors have selected several of these contributions for inclusion in this second issue. As a consequence, this issue is even more diverse than its predecessor. Besides contributions from researchers, artists and lecturers of the AR Lab (based at the Royal Academy of Art, The Hague), Delft University of Technology (TU Delft) and Leiden University, this second issue also features texts by international researchers as well as local young writers. Contributions include technical articles, entertaining essays, in-depth discussions of AR art works as well as short columns.

Penetrating deeper into the temporal lobe


Barbara Nordhjem
I am always looking for art that messes with your mind and tricks your senses. The performance Terra Nova by CREW starts out as a perverted experiment with the audience as guinea pigs. In other words, I feel like I have come to the right place. CREW is an interdisciplinary group of artists and scientists based in Belgium. Their performances incorporate new technological environments into live performance. Terra Nova can be experienced by 55 participants at a time. Before entering the performance space, we are all given headphones and divided into five smaller groups. During the first part of the performance, I am placed in a steel chair and can observe how other people are guided through an alternate reality. A line of people enters, wearing brown jackets and heavy backpacks. Their footsteps are slow and insecure; the outside world is sealed off by headphones and video goggles. Each person is accompanied by an actor. The movements of the actors appear to be orchestrated by a central person who gives directions using gestures, as if they are carrying out a medical procedure or experimental protocol with care and precision. It looks like a scene taking place in a dystopian near-future society. "The fear of death is in the temporal lobe, we are going deeper," a monotone voice announces. The participants are then tied to wooden boards which are flipped backwards into a horizontal position. On the sideline, we can also watch a video projection: images of narrow hallways. Is there light at the end?

CREW
CREW is an interdisciplinary collective of artists and scientists mainly based in Belgium. The composition of the group changes depending on the project, but the performances are always at the intersection between art and science. CREW has been combining theatre and technology since 1998, with Eric Joris as artistic leader. Their first immersive performance, Crash, premiered in 2004. Since then, the group has been experimenting with hybrid performances and installations where story-telling, live elements, and human-machine interfaces come together. There is no traditional separation between the actors on a stage and the audience. Instead, the visitor is guided through different settings where active participation is required. A lot of CREW's activities revolve around issues that are also currently being investigated by neuroscientists and philosophers. Themes such as the human mind, the senses and our experience of reality are explored with "multimedia as a prosthesis". Apart from the performance group, there is also the CREW_lab, which focuses on research related to immersive media and the development of new technology.

Crew, Terra Nova at DEAF2012. Image by Jan Sprij 2012

Crew, Terra Nova at DEAF2012. Image by Jan Sprij 2012

At some point, I am instructed by a distant voice to proceed into another room. I line up behind other group members and we enter a room with a bare-chested man. He tells a story about delirium and fatigue during a rough expedition to the South Pole. The central theme is our perception of reality and how certain situations can create a distorted view of our surroundings. This way, the polar quest becomes a metaphor for exploring the brain and our senses. After a dramatic mind trip involving penguins, snowstorms, and the frontal lobe, I finally find myself in the role of immersant: a traveller in a virtual surrounding. I am eager to try out the video goggles and see if I will indeed feel like I am somewhere in a different time and place. Perhaps a few people from CREW noticed one person continuously turning her head around after being plugged in. The projection on the video goggles gives a partial view of the scene according to the direction I turn my head. I can explore the environment as I look and walk around (or rather, I wildly bounce my head in all directions to see if the equipment can keep up). This direct relationship between head movement and vision really gives the feeling of being placed in the video.

Levels of reality
CREW collaborates with institutes for media technology to develop customized tools for immersive environments. The group works with omni-directional video (OVD), which allows you to see surround video on a head-mounted display (HMD). The places you are exploring as an immersant are pre-recorded, while actors around you appear via real-time video filmed with a small head-mounted camera. By using an orientation tracker placed on the immersant's head, it is possible to move around in a multi-layered video environment. What appears to be one coherent view of the environment is created by combining 360-degree pre-recorded video, real-time video from the current environment, and metadata from the orientation tracker. Different recordings are merged into one unified scene where the immersant can move around and explore. Unlike Augmented Reality (AR) and Virtual Reality (VR), there are no computer-generated virtual elements, but different layers of video-recorded reality. VR takes place completely in a virtual 3D world, whereas AR allows you to see the real world with added virtual elements. What CREW presents is a form of mixed reality where video-recorded elements from a different time and place are combined with elements recorded in real time. In mixed reality there has been a strong focus on the visual aspect of the experience, especially in earlier VR scenarios where the mind travels in a completely computer-generated world detached from the physical body. CREW takes a turn towards embodied technology and physical presence by combining video, sound, motion tracking, and tactile stimulation given by the actors.
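The head-tracking principle is easy to sketch: the orientation tracker's yaw reading selects which slice of the pre-recorded panorama is shown on the display. The following is a minimal, hypothetical illustration of that idea (the function name and the simplifications are mine, not CREW's actual software, which also blends in real-time camera video):

```python
# Sketch: pick the slice of a 360-degree panorama frame that corresponds to
# the viewer's head yaw, the way an orientation tracker drives an HMD view.
# Hypothetical simplification for illustration only.

def viewport_columns(yaw_deg, pano_width, fov_deg=90):
    """Return the pixel-column range(s) of a panorama visible at a given yaw.

    yaw_deg: head orientation in degrees.
    pano_width: width of the equirectangular frame in pixels.
    fov_deg: horizontal field of view of the display.
    Columns wrap around the seam, so the result may be two ranges.
    """
    px_per_deg = pano_width / 360.0
    centre = (yaw_deg % 360) * px_per_deg
    half = fov_deg * px_per_deg / 2
    left = int(round(centre - half)) % pano_width
    right = int(round(centre + half)) % pano_width
    if left < right:
        return [(left, right)]
    return [(left, pano_width), (0, right)]  # view wraps around the seam
```

Turning the head simply shifts the window; when the window crosses the edge of the stored frame it wraps around, which is why looking "behind you" works seamlessly.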

Tapping into the senses

CREW takes a lot of experimental findings from cognitive neuroscience and puts them into practice in the performance. In the beginning of an immersion, you find yourself in a state of sensory deprivation. Visual references to the surrounding space are concealed during an initial period of complete darkness. To add to the loss of orientation, you are then strapped onto a standing bed which is slowly flipped backwards. This creates a very effective feeling of disorientation by moving the body around in space without any visual references. By shielding off visual input and relocating the body in space, the reference to reality becomes weaker. It is already known that the brain is more likely to fill in information when there is no sensory input. Early studies in the 1960s and 1970s showed that people start to hallucinate when they are put into tanks where they float in complete darkness. More recently, there has also been an experiment showing some hallucinations after just 15 minutes of sensory deprivation. It seems that you begin to perceive your own reality in the absence of external input. In the case of CREW, the initial period of complete disorientation seems to create a heightened sense of presence by preparing you to accept the new mediated reality. If you don't know where you are, you might as well be located in the video projection appearing in front of you.

Virtual body

Another tactic used to enhance presence, the feeling of "being there", is to create a link between what is shown on the HMD and what you feel in your body. During the immersion, you will see a virtual hand being touched via the video goggles and feel your own hand being touched the same way. The combination of visual and tactile stimulation can create a powerful illusion: you start experiencing the virtual hand as your own. In other words, your sense of bodily self is moved into the virtual world; you feel like it is really you on the screen. This effect works with a principle called the rubber hand illusion (RHI), which has been used in several scientific experiments. The participant's real hand is hidden and a rubber hand (or someone else's hand) is visible instead. Synchronous stroking and tapping of both the participant's hidden hand and the rubber hand creates the illusion that the rubber hand actually belongs to the participant. If stimulation is not synchronous, the illusion does not take place. You can try the illusion at home by putting a glove on the table and keeping your own hand hidden under the table. Let someone stroke both the glove and the real hand in the same way at the same time. The RHI is an example of visual capture, where the visual sense dominates over the other senses. You feel tactile stimulation on your body, but see it happening somewhere else. The brain has to integrate conflicting information, with the result that the fake hand feels like a part of your own body.

Negotiating reality?
CREW tells a story about perception, the brain, and what we experience as real. It is a story about the mind and a poetic interpretation of scientific studies. A lot of fundamental questions are posed about who we are and what we believe is real. Can we trust our senses at all, if our experience of where we are and what we perceive can be manipulated so easily? CREW states that neuro-philosopher Thomas Metzinger inspired the performance Terra Nova. The idea of our bodily self is something we take for granted and which we see as a fundamental part of being a person. Metzinger argues that how we experience ourselves is an ongoing process rather than something fixed and stable. The experience of being a person with a body and a mind is the result of neural processes and sensory information. When your brain begins to receive evidence that your body is located somewhere in a virtual world, that becomes the experienced reality. It may be a disturbing idea that our perception of reality and of having a self is not stable but can be manipulated. At the same time, it also means that we are able to see and feel much more than what we may think. After the performance, I walk outside with my unstable self and my deceivable senses. I look down at my hands a second time to check that they are really mine.

Further reading
- Botvinick, M., & Cohen, J. (1998). Rubber hands 'feel' touch that eyes see. Nature, 391, 756.
- Lenggenhager, B., Tadi, T., Metzinger, T., & Blanke, O. (2007). Video ergo sum: manipulating bodily self-consciousness. Science, 317, 1096-1099.
- Mason, O. J., & Brady, F. (2009). The psychotomimetic effects of short-term sensory deprivation. The Journal of Nervous and Mental Disease, 197, 783-785.
- Metzinger, T. (2007). Empirical perspectives from the self-model theory of subjectivity: a brief summary with examples. Progress in Brain Research, 168, 215-278.
- Wynants, N., Vanhoutte, K., & Bekaert, P. (2008). Being inside the image. Heightening the sense of presence in a video-captured environment through artistic means: the case of CREW. Proceedings of the 11th Annual International Workshop on Presence, 157-162.

Barbara Nordhjem
I graduated in 2011 with a master's degree in Cognitive Neuroscience from the University of Leiden. Now I am in the Visual Neuroscience Group at the University Medical Center in Groningen. I am generally interested in visual perception and how we are able to extract the most useful information from the environment in different situations. I have a rather mixed background: before I started my PhD research about different ways of seeing our surroundings and the neural mechanisms involved, I initiated a festival for live visuals and electronic music in Denmark and worked at the media art institute V2_ in Rotterdam.
Image by René Passet

Marty: The new affordable AR headset!


Yolande Kolstee and Pieter Jonker

The AR Lab has developed a new Augmented Reality headset in close co-operation with our partner, the Delft Biorobotics Lab of the Delft University of Technology. It is a successor to the well-known headset George, which became famous because of the work on co-operation through AR done by our partner, the Systems Engineering Department of the Faculty of Technology, Policy & Management at the Delft University of Technology, for the CSI The Hague project (www.csithehague.com).

Our new headset deserves an appropriate name: Marty refers to the scene in Back to the Future Part II in which VR headsets were used. In contrast to George, the new headset is a video see-through system. Niels Mulder designed both George and the new Marty.

Advantages of AR headsets over smartphone- or tablet-based AR

Using a smartphone or tablet PC to see AR information has a lot of advantages, because smartphones and tablets are so widely used. Many users experience an extra layer of information by scanning AR markers or by using GPS and compass information on their location. This can link them to cultural, historical or commercial information. However, there are some disadvantages too. The screen size is very limited, even when using an iPad or Android tablet. Moreover, smartphone-based AR requires that the user hold his phone at eye level, positioning the phone between his eyes and the real world he is augmenting. This puts the user in a tiring pose that cannot be sustained and interferes with other activities, such as walking around in a street finding historical or architectural information to insert into the AR environment. Another annoyance is the process of setting up an AR application via, for example, Layar. You have to start up Layar, activate the application, say "Restaurants around", and then wait for the phone to get the correct information. In most cases, you also have to lay the smartphone flat to prevent compass errors. In addition, distortion of the virtual layer in relation to the real world makes smartphones insufficient for certain applications. Headset AR displays, on the other hand, permit the user to see the real world in 3D, which enables the user to move around safely and to see and experience virtual reality at the same time. The experience of a virtual layer when using AR eye-wear is immersive, as we know from our earlier experiments with AR headsets at an exhibition in Milan, where we showed virtual furniture. By designing the headset frame properly, we can keep occlusions to a minimum and maximize the user's field of view. This allows the AR glasses to be used hands-free and lets the user do tasks requiring both hands, as many people have seen in the BMW commercial where a mechanical engineer repairs a motor.

Open source and low-cost

This project is open source. The AR Lab will make the 3D model and print associated with this low-cost AR headset available to the public. This allows anyone to make an AR headset at 720p for only € 1,500, paving the way for high-quality Augmented Reality for the general public. With this new headset and software, developed by the Delft University of Technology and the AR Lab, it will be possible to walk through any space and experience Augmented Reality as never before. The design files of Marty will be made available as soon as possible on SourceForge. The tracking, mapping and user interface software will be made available (also on SourceForge) after Oytun Akman defends his PhD thesis on December 3, 2012. The AR Lab will open a blog for help and comments on the design of Marty and the tracking, mapping and user interface software. The PhD thesis of Oytun Akman (see Figure 1) can be downloaded from: http://repository.tudelft.nl/view/ir/uuid:3adeccef-19db-4a06-ab26-8636ac03f5c0/

Optics
The Sony HMZ-T1 VR optics was used as a starting point for building Marty. The basic components of the Sony head-mounted display have been re-used. Two Logitech C905 cameras were added to the design and the glasses have been made more robust. In line with the design of the George headset, simplicity and robustness have been integrated with state-of-the-art design. Niels drew inspiration from the famous View-Master: a simple, low-cost setup which has been in use for decades and allows you to see real 3D images in a simple stereo setup.

Tracking
Oytun Akman and Robert Prevel, PhD researchers at the Bio-Robotics Lab at Delft University of Technology, developed the software for this headset. The software allows:
- head-pose tracking and on-line building of a coarse 3D point-cloud map at 30 fps;
- reconstruction of the 3D scene and on-line building of a fine-grain 3D point-cloud map at 2 fps;
- hand-pose tracking for human-machine interfacing to control the headset.
The software works by tracking 3D natural features on multiple scales, using only a stereo video feed from a pair of webcams. It is still very difficult to position virtual information in three-dimensional space for AR headsets, so we are very glad that major breakthroughs in software development by TU Delft have made 3D natural feature tracking possible. With natural feature tracking, no markers are needed; instead, salient points in the 3D space (on walls, books and paintings, for example) are tracked and used to provide invisible anchor points by which the virtual objects or scenes can be positioned. In contrast to markers, natural feature tracking is non-intrusive, as it uses key points from the world around you.
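The core of any such tracker is matching the same salient points between frames. As a toy illustration of the idea (this is not the TU Delft software; the names and thresholds here are invented), binary keypoint descriptors of the kind produced by detectors such as ORB or BRIEF can be compared by Hamming distance, keeping only mutual nearest neighbours to reject unstable matches:

```python
# Toy sketch of natural-feature matching: compare binary keypoint
# descriptors between two frames by Hamming distance and keep only
# mutual nearest-neighbour pairs. Illustrative only, not the AR Lab code.

def hamming(a, b):
    """Number of differing bits between two descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_features(desc_a, desc_b, max_dist=16):
    """Return index pairs (i, j) of mutual nearest-neighbour matches."""
    def nearest(d, pool):
        return min(range(len(pool)), key=lambda k: hamming(d, pool[k]))
    matches = []
    for i, d in enumerate(desc_a):
        j = nearest(d, desc_b)
        # mutual check: the best match for desc_b[j] must be i again,
        # and the distance must be below the acceptance threshold
        if nearest(desc_b[j], desc_a) == i and hamming(d, desc_b[j]) <= max_dist:
            matches.append((i, j))
    return matches
```

The matched points then serve as the "invisible anchor points" the article describes: from their motion between the two camera views and over time, the head pose and the 3D point cloud can be estimated.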

The CYBER-I BI-Ocular: the new professional AR headset

The AR Lab ordered the manufacturing of, and obtained, the very first optical see-through headset from our partner CybermindNL.com: the CYBER-I BI-Ocular. This is an SXGA optical see-through AR headset with two USB 2 uEye LE cameras. It is about a factor of 10 more expensive than Marty but, unlike Marty, suited for the professional market. We are negotiating with business partners to bring a professional version of Oytun Akman's software to market for this headset. Contact info@arlab.nl if you have a business proposal.

The CYBER-I BI-Ocular headset

Figure 1: Oytun Akman's PhD thesis



Radioscape
in the context of augmented reality
Edwin van der Heide
When I initially got the idea for Radioscape, I was thinking about creating a parallel reality that traverses the existing physical space and reveals itself by navigating through that physical space. It was later, when I encountered the term augmented reality, that I started reflecting on artworks in which the augmentation of reality plays a central role. During this process I became more and more convinced of the idea that reality gets augmented. It's important to mention that I consider all modalities (not only the visual) when thinking about augmentation, and I believe that it's a requirement for an augmented experience that the perceived result is more than the combination of the real and the virtual. The real and the virtual should not just co-exist but relate to, and influence, each other, leading to an experience that is more complex than the addition of the two. I like to see it as a perceived multiplication of the world around us with a virtually constructed reality.

In this text I'll first give a more technical explanation of Radioscape. I believe it's important to understand the parallels between the acoustic space and the electromagnetic space that form the foundations of the work. After that, I'll go into more detail on the nature of the augmentation that occurs in the work.

It was a call for artworks for an area in the countryside of Japan that brought me to the following question: we perceive sounds as individual sources that merge and interact with the acoustic space around us; would it be possible to achieve a similar experience with electromagnetic (radio) waves? Sound is not only spatial because it traverses and reflects in the three-dimensional space around us; it is also spatial because our listening results in the perception of sound as a spatial phenomenon. Furthermore, any spatial experience, such as the perception of sound, has a bodily component to it, because we relate the perception of (things in) space to the perception of our body in space.
We can act in, and move ourselves through, the acoustic space. Radio is often used to transmit individual signals from one point to one or more other points. The spatial nature of the electromagnetic waves and the multiplicity of sources are ignored in order to realize individual communication channels (examples of exceptions are certain radio amateur contests, fox hunting and GPS). To establish these channels, different forms of modulation (like AM, FM, GSM, etc.) are used. Individual transmitters broadcast on their own frequency, and with a receiver we tune in to a specific frequency while filtering away the others. I wondered what would happen if we would simply shift sound up in frequency, transmit it at one location, receive it at another location and shift it back down. This is a form of transmitting without using any form of modulation, just using a certain frequency range within the electromagnetic spectrum. What I especially wanted to know is what would happen if we would transmit two independent sounds by shifting them up individually, transmitting them at two different locations, receiving them at another location, shifting them back down and listening to the received signal. I was expecting that the signals would mix in the electromagnetic space just like acoustic sounds mix and co-exist in the acoustic space. It is an approach to radio where we think of it as a field of transmitted sources, with the receiver moving through this field, instead of a receiver that is tuning in to a single transmitter and establishing a (one-directional) non-spatial, point-to-point channel.

Image courtesy STUDIO Edwin van der Heide

The above setup of transmitter(s) and receiver can simply be compared to a setup of speaker(s) and a microphone, but addressing the electromagnetic space instead of the acoustic space. The transmitters can be seen as amplifiers with an antenna instead of a speaker connected to the output, and the receivers use an antenna instead of a microphone. The used frequency range is determined by the amount of frequency shift used by the transmitter and receiver.¹

The loudness of acoustic sound sources decays when we increase the distance to the source, because the radiated energy gets spread over a bigger surface in space and, besides that, the air itself absorbs a bit of the energy as well. Electromagnetic waves have that same behavior, but when using regular radio transmission techniques this effect is avoided or compensated for. The reason for this is that both FM and AM use a carrier signal that is modulated by the audio signal.² The proposed system of shifting the audio signal up (and down) in frequency is a form of transmission without using a carrier frequency. Any signal the receiver receives within the chosen frequency range becomes audible. It is the individual strength of the received signal(s) that directly corresponds to the loudness of the signal(s). The closer the receiver gets to the transmitter, the louder the signal becomes, and vice versa.

While sound travels with a speed of about 340 meters per second, radio waves travel with the speed of light: 300,000 kilometers per second. An acoustic sound reflecting in space is something that we perceive as both a timbral and a temporal phenomenon. We can hear an acoustic sound reflecting in space, resulting in reverb or echoes. It is temporal not just because it happens in time, but because we perceive it as something that happens in time. We speak, for example, about a reverb with a duration of 1.5 seconds or an echo that arrives half a second later. Radio waves can also reflect in space, but since they are that much faster than sound, we would need enormously big spaces in order to be able to perceive these reflections temporally. For example, it takes approximately 2.5 seconds for a radio signal to travel to the moon and come back; the moon is at an average distance of about 384,400 kilometers.

The wavelength of a wave depends on its frequency and traveling speed. Frequencies in the FM band range have a wavelength of about three meters. When these waves reflect between buildings in a street, this leads to standing wave patterns with cancellation points that occur, for example, every 1.5 meters. We can avoid these standing wave patterns by using a relatively long wavelength of 175 meters (= 1.7 MHz). At this wavelength buildings are not just reflectors; they start to become conductors and resonators for the transmitted signals. This means that the physical environment is excited by, and responds to, the transmitted radio waves.

While developing the work I learned about different antenna principles. A vertical antenna has an omni-directional sensitivity pattern and relates to the electric component of the electromagnetic field. A coil or loop antenna is only sensitive from the sides and relates to the magnetic component of the electromagnetic field. I realized that these two directivity patterns are equal to the patterns of the omni-directional microphone and the figure-of-eight microphone. There is a stereo recording technique called mid-side (M-S) that uses exactly these two microphones, and I started to wonder whether it would be possible to realize a stereo receiver with such an antenna setup. It would not be a receiver that receives a signal broadcast in stereo, but a receiver that creates a stereo image from the positions of the individual (mono) transmitters. Transmitters to the left of the antenna will be heard more on the left, and transmitters to the right side of the antenna will be heard more on the right. Rotating and moving the receiver changes the stereo image directly. Each transmitter is transmitting its own layer of the meta-composition.
The layers are slowly changing and eventually repeat after 4 to 10 minutes. The changes within a layer are the slowest changes that you can experience in this environment; you hear them when you are not walking and not moving the receiver, just listening to the changes within the received layers themselves. The next level of change is the interaction that occurs when you don't walk but just move the receiver. By doing so, you re-orient yourself in the field of received signals and find new perspectives on the environment. The last level of change is the result of simply walking and thereby getting closer to certain transmitters while moving away from others. Certain transmitted signals will decrease or disappear while other signals will fade in or become louder. While listening, you alternate your focus and the way of interacting.

Image courtesy STUDIO Edwin van der Heide

The Radioscape receiver is a handheld receiver. By moving the receiver you explore the space around you. Every movement of the receiver has a direct effect on the received signals. The scale and the speed of the changes match well with the space that we describe with the movements of our hands and arms. Therefore the space around you becomes an almost tangible space in which you explore and remember positions and transitions.
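The figures woven through this description (distance decay, the moon round trip, the 175-meter wavelength) can be checked with a few lines of arithmetic. A back-of-the-envelope sketch (constants and function names are mine, for illustration only):

```python
# Back-of-the-envelope checks: wavelength from frequency, the moon round
# trip, and the 1/r amplitude falloff that a carrier-free receiver hears
# as it approaches or leaves a transmitter. Illustrative only.

C_LIGHT = 300_000_000.0   # propagation speed of radio waves, m/s
C_SOUND = 340.0           # speed of sound in air, m/s

def wavelength(freq_hz):
    """lambda = c / f, in meters."""
    return C_LIGHT / freq_hz

def moon_round_trip(distance_m=384_400_000.0):
    """Seconds for a radio signal to reach the moon and return."""
    return 2 * distance_m / C_LIGHT

def relative_loudness(distance_m, ref_distance_m=1.0):
    """Field amplitude falls off as 1/r without automatic gain control."""
    return ref_distance_m / distance_m

print(round(wavelength(1_700_000)))   # ~176 m: the long wavelength used
print(round(moon_round_trip(), 1))    # ~2.6 s there and back
print(relative_loudness(10.0))        # ten times quieter at 10 m than at 1 m
```

The 1/r falloff is the whole interface of the piece: because no carrier and no automatic gain control are used, distance maps directly onto loudness, and walking becomes a way of mixing.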

transmitter that is otherwise inaudible because of its large distance or that one of the transmitted signals becomes dominant over the others. The resonating buildings are an interesting example of a situation where the real world interacts with the added layer. The radio signals are not just a parallel reality that leads to associated relations between the physical space and the electromagnetic space. They directly influence each other. Besides the intentionally transmitted signals, the receiver picks up other signals that are present in the same frequency range. Neon lights and computers that control street lights can emit strong fields around them that dont carry far but can take over in loudness on a local scale. Already present signals that are otherwise unperceivable become perceivable and part of the received environment. These moments add to experience and make it even more real. Radioscape is a work in which you can share your experiences. By navigating the city you generate your own sonic order, combinations and timing of the composition. When walking together with other people you share the same augmented space. The global experience will be more or less identical while local signals really differ from each other. Ive seen par-

2   The carrier signal is generated by an oscillator that corresponds to the transmitter frequency. When using frequency modulation (FM) the transmitted audio signal modulates the transmitter frequency (carrier) and therefore the content is independent from the received amplitude. When using amplitude modulation (AM) the amplitude of the

transmitter signal (carrier) is multiplied with the audio signal. The received signal decays with increasing distance between the transmitter and receiver, but this is compensated for by an automatic gain control system build into the receiver and resulting in a constant amplitude of the carrier. It does not influence the dynamics within the audio signal.

By navigating the city you generate your own sonic order, combinations and timing of the composition.
tions. Its intuitive to navigate in the sense that it reveals itself easily but is complex enough to keep on exploring. Radioscape takes place in the public space. My preferred locations are areas within a city with a lot of diversity, that are easy to walk and have streets that are close to each other so that the participants frequently have to choose where to go. It is the chosen city area that dictates the possibilities of where you can go. As mentioned earlier the buildings in the environment become conductors and resonators for the radio waves. There is often a clear effect when you move the receiver close to a building. It happens for example that you start hearing a

ticipants, talk about what they experience, imitate each other and notify each other of particularities they discover. They are in an augmented world that is based on the intersection of the physical world with a composed parallel world of transmitted signals. Furthermore, there is a true interaction between these two worlds.

Footnotes
1   Examples of frequency bands are the LW, MW, SW and FM band. Radioscape is however not using the corresponding modulation technique and therefore incompatible with consumer receivers.
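The distinction between the two modulation schemes described in the footnotes can be illustrated numerically. A minimal sketch (illustrative only, not the actual Radioscape transmitter code):

```python
import numpy as np

fs = 44_100                            # sample rate (Hz)
t = np.arange(fs) / fs                 # one second of time
carrier_f = 5_000                      # toy carrier frequency (Hz)
audio = np.sin(2 * np.pi * 440 * t)    # a 440 Hz "audio" tone

# AM: the carrier amplitude is multiplied with the audio signal,
# so the envelope of the transmitted wave carries the content.
am = (1 + 0.5 * audio) * np.sin(2 * np.pi * carrier_f * t)

# FM: the audio modulates the instantaneous carrier frequency;
# the envelope stays constant, so amplitude loss over distance
# does not affect the content.
deviation = 1_000  # maximum frequency swing (Hz)
phase = 2 * np.pi * np.cumsum(carrier_f + deviation * audio) / fs
fm = np.sin(phase)

# FM keeps a constant (unit) envelope; the AM envelope varies.
print(round(float(fm.max()), 2), round(float(am.max()), 2))
```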
Image courtesy STUDIO Edwin van der Heide


I've never been here

Finding what I didn't seek

Esther de Graaff
"I've never been here," says one of the participants, who has lived in Rotterdam for eleven years. At the beginning of the Serendipitor Walk, ten minutes earlier, he told us he knows every part of Rotterdam like the back of his hand. Within a few minutes, my initial scepticism about this navigation app has disappeared. Mark Shepard, a charming and pleasant American artist, architect and researcher, smiles. In 2010, he developed the Serendipitor app together with the V2_ Institute for the Unstable Media in Rotterdam. It is part of his longstanding project Sentient City Survival Kit, a collection of objects for surviving futuristic urban life. "Surviving" may sound rather severe, but the term indicates that Shepard hasn't just made a nice toy. His work is an investigation into the relationship between the ubiquitous computer and the city, in an attempt to provoke thought about our controlled society. By following various instructions, you are forced to relate to the city in a novel way. For example: follow a man with an umbrella, or buy a rose and give it to the first passerby. Each walk is determined by four types of instructions: social (meet people), geographic (go left), abstract (trace a square) or architectural (enter the tallest building). "You decide how long and complicated your walk will be and how many assignments you wish to do, depending on how much time you have to play," explains Shepard. It is not surprising that his instructions draw on artists like Vito Acconci and Yoko Ono of the Fluxus movement, an international avant-garde movement from the sixties which focused on art based on mundane things. During the DEAF festival in Rotterdam (May 17-20, 2012), Shepard was present every day to walk the Serendipitor Walk with a group; an amazing opportunity. On a beautiful Friday afternoon, we accompany the artist from the DEAF main site at the Coolsingel into the city.
Our destination has been entered, but the exact route depends on the instructions we are given. For our first assignment, we must walk down the Coolsingel to the Meent, find a quiet spot, and stay there for a while. OK. So we arrive at a very busy intersection, not the easiest place for quietness. We spot a wide staircase at the side of a building, which leads us to a higher, quieter place. Suddenly we are faced with a door, behind which we find an oasis of calm and beauty. We've come across a gallery unknown to me: the WTC Art Gallery. After walking around here for a while, we press "next" for our next assignment. We must go east and find a bar, offer someone a drink and ask him or her to draw a map of his or her childhood. At Café Brasserie Dudok, we meet a woman from Turkey who is waiting for a friend. While enjoying a drink she tells and draws us her life story in just ten minutes. She has established an impressive career as a doctor in many European cities. However short our encounter was, I am moved by the beautiful story. Next, we place her drawing over the map of Rotterdam, so that the lines literally determine our route. This dictates that we head west.

Current navigational products, such as TomTom, make it very easy to get from A to B. Previously, computer technology could only be found indoors. Nowadays, the Internet is everywhere in the city. We are surrounded by invisible digital information systems, which constitute a kind of virtual layer over our reality. This creates a capacity to gather and process information that is increasingly interwoven with the physical structure of our urban space. For instance, various applications for smartphones and tablets provide digital information about your immediate surroundings. Simply hold your phone in front of a street to see where you can find the nearest ATM, or point it at an old building to read its history. This is incredibly convenient, but the rise of computers also makes it easier for futuristic cities to keep an eye on us. You could say that the city is getting smarter. Shepard wants to make us think about the consequences this could have on our culture and politics, with regard to privacy for example. His app presents an alternative way to navigate through a city where everything is regulated, public and controlled. Shepard's point is to make you aware of your surroundings and the route you travel. However, the instructions themselves are fairly concrete and thus exert some form of control. Shepard responds with a smile: "Freedom is not possible without control."

After having walked for an hour and a half, we collectively decide to end the artist walk. The assignments kept guiding us away from our original route. We did not reach our destination, but that is not important. During this walk, I found places in Rotterdam that I would otherwise have walked right by. We found places unexpectedly while searching for something else. And I'll be damned: he inspired me and got me thinking. I often have no idea what route I traveled, simply because I look at the screen instead of my surroundings. The app stimulates you to really look around and to make contact with people.

The app is available for download and use in every city in the world. Want to skip a task? Anything is possible; simply press "next". Each assignment gives you the opportunity to find something you were never really looking for. Something we should do much more often. The app is a free download from the App Store: itunes.apple.com/us/app/serendipitor/id382597390?mt=8 or from www.serendipitor.net. Small detail: unfortunately not suitable for Android users. More information: www.andinc.org

June 2012, Esther de Graaff
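The app's mechanic, as described, draws each step from one of four typed instruction pools and always lets you skip with "next". A hypothetical re-creation (the instruction texts are paraphrased from this article; none of this is the actual Serendipitor code):

```python
import random

# Four instruction types, as described in the article. The texts here
# are paraphrased examples, not the app's real instruction set.
INSTRUCTIONS = {
    "social":        ["Offer someone a drink and ask them to draw a map of their childhood."],
    "geographic":    ["Go left at the next corner.", "Head east until you find a bar."],
    "abstract":      ["Trace a square through the streets."],
    "architectural": ["Enter the tallest building you can see."],
}

def next_instruction(rng=random):
    """Draw one instruction at random; the user may always skip ('next')."""
    kind = rng.choice(list(INSTRUCTIONS))
    return kind, rng.choice(INSTRUCTIONS[kind])

kind, text = next_instruction()
print(f"[{kind}] {text}")
```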

Esther de Graaff
www.estherdegraaff.nl Project manager, curator, writer, coordinator: call me any of these things; I do them all. My love for organizing events is central. It's all about the end result. I support artists, curators and institutions in various projects, both in content and in organization. Social themes are essential to my work. I am not interested in art for the sake of art; it's all about sharing experiences. Culture is an encounter. The projects, texts and exhibitions created by Esther de Graaff are characterized by creative solutions and efficiency. Inspiring people with beautiful images and deepening their view of the world is the whole point.


Smart Replicas: bringing heritage back to life


Maaike Roozenburg
Treasuries of objects
Our museological heritage comprises collections of objects, from paintings to pottery, which have been categorized, grown and preserved over the course of centuries: objects we value for their historical, cultural and social context; objects that play a key role in the way we view our shared history and cultural identity. Our history is examined, reconstructed and visualized through these objects, but how can we relate to all these stored items? What is their relevance at this time? And what is their significance in a rapidly digitizing society? Museums and cultural heritage sites are the institutions that manage this heritage; storehouses of objects, stories, images and historical knowledge. The conservative way these objects are made accessible (you must go to a museum or exhibition) means you can admire, but not touch, let alone use them. This strips the objects of their main purpose and function, their original use, and isolates them from our daily lives.

Image by Sam Rentmeester

Enriching objects
The knowledge and information which museums and cultural heritage sites hold, gather and develop about the items they own and manage is barely accessible to the public. It is fixed within the confines of the museum and is often reduced to a short text on a museum plaque or an item in a catalogue. In the project Smart Replicas, we investigate how 3D prototyping (3D scanning, printing and reproduction techniques that make digital objects physical and vice versa) and Augmented Reality (AR) can contribute to the accessibility of museum objects, while also providing the means to increase our knowledge of them. How can you reverse engineer a museum piece? Will this allow you to take the museum piece out of the museum and put it to use? Can you turn cultural heritage into new design? How can you link these objects to the historical information which is stored in a museum? How could visitors interact with these enriched objects? These are the questions examined in the project Smart Replicas.

Smart Replicas
Smart replica is the term we use for replicas of historical objects made usable again by means of 3D scanning and printing techniques. "Smart" refers to intelligent: these objects are not just copies, but replicas enriched by innovative technologies such as AR to convey information, so that they can serve their original purpose and provide information outside the museum at the same time. That way, heritage can


provide new designs: objects shaped and refined through the centuries can be used in the present. Smart Replicas is a project by Studio Maaike Roozenburg in cooperation with several partners: Delft University of Technology (Faculty of Industrial Design and Faculty of Civil Engineering); Delft Heritage; Museum Boijmans Van Beuningen (Department of Archaeology); the AR Lab; and students from the Graphic Design department of the Royal Academy of Art. A team of very different parties, each with their own background and expertise, contributes to this project: academics, students, curators and technical experts. Smart Replicas is a project in which the worlds of museological heritage, design, art history, 3D prototyping and AR come together.

Maaike Roozenburg
Maaike Roozenburg is a designer. She develops concepts, projects and products on the border of heritage, design and visual communication. After graduating from the Gerrit Rietveld Academy in Amsterdam she founded Studio Maaike Roozenburg, in which she combines a fascination for history with high-tech materials, techniques and traditional crafts. Research by design plays an important role in these projects. The studio aims to research and communicate the value of historical (museum) heritage for us now. This approach can result in a design collection of ceramic pieces, a work for the facade of a building, an exhibition for a museum, or a mobile application that unlocks a historical collection. Besides her work at the studio, Roozenburg teaches at the Post Graduate Course of Industrial Design at the Royal Academy of Art.
Image by Julia Blaukopf

'Reverse engineering' historic items


Seven teacups from the Boijmans Van Beuningen collection form the base material for exploring how reverse engineering and 3D prototyping can be used to produce usable replicas of museum pieces. Previous tests have shown that a (medical) CT scanner can scan very vulnerable and untouchable historical objects. This does not damage the objects and meets the museum's guidelines for handling them. The resulting data can be converted into a digital reconstruction of the object, which can be printed on a 3D printer. The aim is to combine 3D prototyping techniques with the historical porcelain techniques used to create the original cups. This includes experimenting with milling porcelain casting moulds straight from 3D models as well as printing onto porcelain directly. It is important that the replica is a complete designer product that invites everyday use and can lead a new life outside the museum's walls.

'Smart' objects

AR brings the physical and digital worlds of an object together, thus transforming it into an information medium. Or rather, AR can show the information, history, context and stories associated with the object. Each porcelain cup is a historical source with a wealth of historical information: stories about travelling, locations, production processes, use and rituals. AR allows us to visualise this information and actually link it to the object, connecting knowledge to the object itself, outside the context of a museum. This allows the knowledge, carried by the replica, to leave the museum and enter the wide world.

A group of students from the Graphic Design department at the Royal Academy of Art is working on structuring and shaping digitized historical information concerning the cups. They develop their own storyline or narrative and design it for a corresponding medium. Routes can be accessed with Google Maps, moving images can be displayed through animation, and text can be conveyed with the aid of visual or audio stimuli. These designs will be linked to the replicas using AR. The research also includes the markers that create that link. Here we examine how to combine the historical techniques and decorations of the original museum pieces with marker-based technology and object recognition.

What is the real significance of our heritage?

With the project Smart Replicas, we examine how 3D prototyping can be used to put heritage back to use and how AR can turn objects into information carriers outside the museum. The underlying questions we want to ask are: What is the meaning of all these stored historical objects? How can we relate to them? Are they relevant to us now? What happens when you copy them? How does this change our perception and appreciation of the real object? What is the relevance of these objects in our increasingly digital society? Augmented Reality and 3D prototyping offer opportunities to pose these questions (indirectly), and to identify and investigate them. The result is a trial installation in Museum Boijmans Van Beuningen, which will run in spring 2013, where the smart replica prototypes can be touched, used and tested by the public. The project can be followed at http://smartreplicas.blogspot.nl/

Images by Maaike Roozenburg


How it was made:


a tangible replica with mobile AR
Jouke Verlinden, Maaike Roozenburg & Wolf Song

3D Reconstruction
A substantial part of the reverse engineering is the digital reconstruction of the scanned point clouds into a valid 3D CAD model that is printable. In this case, much effort was spent on converting the collection of JPEG pictures of 2D slices into a valid 3D mesh. However, this proved infeasible given the time constraints, and we ended up extracting a vertical section view to generate a working revolve in Rhinoceros, a regular CAD package. An AR tag was embossed in the cup, to be used as an optical marker. A 3D print was made with a polyjet technique at maximum resolution (an Objet Eden, accuracy 16 micron).
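The working revolve mentioned above turns a 2D section profile into a solid of revolution. The same idea in a few lines (a generic sketch, not the actual Rhinoceros workflow):

```python
import math

def revolve(profile, segments=64):
    """Turn a 2D profile (list of (radius, height) points) into a
    surface-of-revolution mesh; returns (vertices, quad faces)."""
    verts, faces = [], []
    for r, z in profile:
        for s in range(segments):
            a = 2 * math.pi * s / segments
            verts.append((r * math.cos(a), r * math.sin(a), z))
    rings = len(profile)
    for i in range(rings - 1):
        for s in range(segments):
            a = i * segments + s
            b = i * segments + (s + 1) % segments
            faces.append((a, b, b + segments, a + segments))
    return verts, faces

# A crude cup-like profile: (radius, height) pairs from base to rim.
profile = [(0.0, 0.0), (3.0, 0.0), (3.5, 2.0), (4.0, 8.0)]
v, f = revolve(profile)
print(len(v), len(f))  # 256 192
```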

Acknowledgements
This project was done with students Anne Lien van der Linden, Kenneth van Kogelenberg, Mariet Sauerwein and Elizabeth Berghuijs. Furthermore, we would like to thank Wim Verwaal and Wim van Eck for their technical advice and guidance, as well as Alexandra Gaba-van Dongen of museum Boijmans Van Beuningen for trusting us with such a unique piece of cultural heritage.

Manufacturing and decoration


Based on the 3D prints, molds were made in plaster to pour the porcelain clay into. After drying the mass, the molds were carefully removed, and the cups were baked and painted, after which a final transparent glaze was applied.

As part of the Smart Replica theme initiated by Maaike Roozenburg, we made a proof of concept in the minor on Advanced Prototyping at the Faculty of Industrial Design, Delft University of Technology. The top figure presents an impression of the final result: an exact replica in porcelain of an 18th-century sugar cup and lid, whose decorations function as markers for a handheld AR overlay. On the cup, the augmentation is a 3D animation floating on top of the physical object, while the physical lid is covered exactly with a virtual golden decoration that matches the original lid. Most of the technologies that we used require only a basic skill level and some experience in 3D modeling (CAD or visualization). A close account of the development process can be found on the weblog http://porcelain2011.weblog.tudelft.nl. Below, the most essential steps are discussed.

Replicated cup and lid in porcelain, augmented by animated 3D graphics on a smartphone.

3D scanning
The original Loosdrecht porcelain objects are part of the collection of museum Boijmans Van Beuningen. They measure approximately 8 cm and, due to their fragility, a non-contact scanning method had to be selected. A medical computed tomography (CT) scanner was employed, in which a radiation source and the detectors rotate around the sample and measure the attenuation of the X-rays from different angles. Within a slice the pixel resolution is approximately 0.2 mm, which is still sufficient to inspect object thickness and geometry. Furthermore, the scans had a surprising side effect: the gold decorations (goudluster) caused distortions in the scanned geometry.
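The slice-stacking approach that was attempted can be outlined as follows: stack the 2D slices into a voxel volume and threshold the attenuation values to separate porcelain from air. A hypothetical sketch with random stand-in data, not the project's actual pipeline:

```python
import numpy as np

# Stand-in for the JPEG slice images; a real pipeline would load the
# CT slices from disk instead of generating random attenuation values.
slices = [np.random.rand(64, 64) for _ in range(32)]

volume = np.stack(slices)   # voxel volume, shape: (depth, height, width)
material = volume > 0.5     # boolean voxel mask of "porcelain"

# In-slice resolution as reported for the scan: ~0.2 mm per pixel.
voxel_mm = 0.2
print(volume.shape, int(material.sum()))
```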

Augmentation
For mobile AR we chose Junaio, a straightforward smartphone application that offers location-based augmentation as well as image recognition. Its so-called GLUE functionality allowed us to overlay 3D models and animations on top of predetermined images (patterns), with a web-based interface to adapt the scale, rotation, and position of the objects. For modeling we used 3D Studio Max, which can export the proprietary .md2 format with a special plugin. The augmentation of the lid was a gold version of the same object: simply a modified version of the reconstructed 3D file of the lid. The cup was extended with an animation of several portraits that seemed to float in the air.

Original Loosdrecht sugar cup and resulting mesh model after the CT scan.


Augmented Belief in Reality


Maarten H. Lamers

At the Ars Electronica Festival 2008 I encountered artist Julius von Bismarck, who refreshingly resembled an Alaskan bearded mountain-man more than the typical sleek new-media artist. Julius was there to present his work, the Image Fulgurator, a low-tech device that secretly adds elements to photographs taken nearby. Truly augmented reality, in my opinion.

Let me explain.

If you were to take photographs with your own camera of, let's say, Barack Obama, and the Image Fulgurator was nearby, then a peace dove or a tomato could appear in all your photos. But you wouldn't see the dove or tomato with your own eyes, only in the photos that you made. It works by projecting the visual element into the scene, for example onto Barack, exactly during the flash of your own camera. Your eyes won't register it, but your camera will.

"But how is this augmented reality?" you may be asking now. Well, the Image Fulgurator definitely augments something, namely your photos. And since we accept photos as reality, particularly the ones we make ourselves, it augments what we believe to be reality. Actually, that is what all see-through AR does; it adds elements to our mediated vision of the real world. And with most AR systems we willingly suspend our disbelief that, for example, a pink bunny really appears from the pages of the book in our hands. We know that the bunny isn't really in front of us, but we happily ignore this knowledge so we can enjoy it longer. In general, to appreciate most augmented reality, we must willingly ignore what we know is real.

And that is what sets the Image Fulgurator apart from most AR that I encounter: it does not rely on our willing suspension of disbelief! When tricked by the Image Fulgurator, we are truly confused about what is augmented and what is real. Just go online and watch the faces of the photographers that Julius tricked and filmed, while they stare puzzled at the LCDs of their fancy digital cameras. Their faces do not show understanding and acceptance of what happened. Quite the opposite! They are completely surprised by what appears to be reality. It is like the pink bunny just jumped from the book and kicked them in the nuts: ultimate augmented reality.

Image courtesy the artist and alexander levy, Berlin


Image courtesy of Sam Ghantous

[AR]chitectural Education at Play


Vincent Hui

As a child, video games seemed to be an escape from the rigors of school, yet learning continued in the gaming environment: whether it was keeping track of scores and experience points, checking my health meter, or simply reading in-game dialogue, my entertainment was fueling my education. The superposition of gaming data atop the fantastic onscreen activity ensured that I could better understand the world I was playing in. It comes as no surprise, then, that as an architecture professor I bring these same edutainment and gaming perspectives into my pedagogy through the development and deployment of an

Augmented Reality (AR) app, entitled Arch-App, at the nation's largest architecture program, at Ryerson University in Toronto, Canada. AR technologies have become widespread, extending into video gaming platforms such as the PlayStation Vita and Xbox Kinect, pioneered by their rapid integration in mobile computing devices and smartphones. In these gaming worlds, ubiquitous data is persistently on display, informing players about everything from mapping information to enemy data, thereby enhancing the gaming experience. Where current-generation gaming technologies have pushed the boundaries of rep-



resentation and simulation of the real world with incredible fidelity, AR access has empowered people to bring this persistent, ubiquitous data to the real world. With a quick glance through a smartphone, entirely new layers of information are made visible. The accelerating adoption of smartphones (approximately one third of the global cell phone market) highlights the inevitable ubiquity of an AR-enabled population. It is within this context of interface familiarity and hardware access that I have worked with my Ryerson University colleagues, Graham McCarthy and Steven Marsden, in assembling the infrastructure of an AR app that effectively provides a mobile database of building information to anyone with a smartphone.

One of the biggest challenges faced by educators in architectural science programs is providing meaningful and self-directed pedagogical tools that are accessible, current, and engaging. To many students, architectural engineering material is discussed in the confines of classrooms rather than contextualized in real-world projects. It is incumbent upon educators to seamlessly connect academia to real-world application, and augmented reality proved to be the most suitable technology to ameliorate this condition. In the case of Ryerson University, the vast majority of new students are commuters who take public transit from their suburban homes to what amounts to a relatively unfamiliar environment in the downtown core. Despite the incredible architectural renaissance Toronto has experienced in the past three decades, many incoming students arrive unfamiliar with the acclaimed work literally across the street from their classrooms.

Developed within four months, the early prototype of the Arch-App was a simple database of notable architectural projects around the downtown Toronto campus, highlighting basic project information such as the completion date, the architects, and a short text describing each project's architectural relevance, alongside supporting imagery. Originally developed in HTML5, the project was a website that catered to a student audience exploring the city with mobile access to a database of architectural landmarks. The prototypical infrastructure for the app, known as RULA Maps (Ryerson University Library & Archives Maps), became a tricorder of sorts: any user with a web-enabled phone could bring up individual layers of pedagogically valuable information. Like the venerable tricorder of science fiction, RULA Maps transformed a simple phone into a device able to uncover pinpoint data on nearby landmarks. It has since grown into an AR interface that allows users to visualize everything from historic imagery to structural drawings, investigate the history and theory behind a building's design, leverage global positioning to offer tours and guidance to particular sites, and even watch interviews with the architects behind the projects. Though a proprietary standalone application is currently underway, the Arch-App content from the robust database has been integrated into widely accessible AR viewers such as Layar and Junaio.

Image courtesy of Sam Ghantous

Vincent Hui (MRAIC, Assoc. AIA, LEED AP) received his Bachelor of Environmental Studies (Architecture), Master of Architecture, and Teaching Certification from the University of Waterloo, as well as a Master of Business Administration (specializations in Marketing and Strategy) from the Schulich School of Business at York University. After gaining international and domestic work experience with architecture firms around the world, he became a partner at Atelier Anaesthetic in 2003. He has been awarded several teaching citations since 2001 at the University of Waterloo, within both the Schools of Planning and Architecture. He currently teaches a variety of courses within the Department of Architectural Science at Ryerson University in Toronto, Canada, ranging from design studios to advanced architectural computing and digital fabrication. Vincent's works with physical computing and digital fabrication have been exhibited and published internationally. His recent work with architectural appropriation of ubiquitous computing, augmented reality, and data-scapes has culminated in the development of tools that allow users to access data on any landmark in the built environment.

The Arch-App in use in the summer, allowing a user to see a building in the winter.

Instead of listening to lectures about the built environment within the classroom, the Arch-App has become a tool to introduce students to their surrounding buildings and serves to contextualize the academic discussion via real-world application. For example, rather than assign readings and show slides of notable steel buildings in a Structures course, students can follow a preordained tour to visit relevant buildings in the city in person, using the Google Maps interface and global positioning system integrated in the Arch-App. Once at a key landmark, students can also leverage the online database through the app, not only to see historic imagery atop what currently exists in the real world, but also to better understand the building through detail and interior imagery as well as future design proposals by other architects. An indispensable feature of the Arch-App is that students can use it at will to discover information about buildings they find interesting as they explore the city. Beyond showcasing orthographic, rendered, and historic imagery, the Arch-App also gives users a glimpse into a building at different times of day and through the seasons. The app effectively became an augmented reality architecture professor in students' pockets, uncovering facets of the buildings inaccessible or not necessarily visible in person. As though immersed in a video game environment, students engage the built world with a sense of discovery and confidence in seeing what they have learned in class applied in the real world.

AR interface identifying nearby architectural landmarks

Image courtesy of Sam Ghantous

Taking advantage of the Web 2.0 paradigm of information sharing and user-generated content, the Arch-App also serves as a platform for students to add and update content as part of their academic responsibilities in various courses. Students are not only part of the end-user audience, but also content creators and editors who vigilantly ensure that app content is current, relevant, and robust. In one course, students are asked to create 3D structural models that can be pulled up in the app, while in another they are asked to photograph a construction site to maintain an archive of progress imagery once the building is complete. This process results in an archive that is both current and diverse, while also leveraging students' learning and sense of accomplishment in having their content uploaded to the app. It has proven quite successful: anecdotally, students have not only carried material from courses into the built world, but have also leveraged the Arch-App in integrating material from multiple courses. For example, using the AR component of the app, students are able to view structural models in tandem with existing conditions and historic imagery to understand why certain architectural decisions were made. This level of integration among courses, and connection to existing buildings in real time, is a unique opportunity made possible by the AR capabilities of the Arch-App.

The pedagogical model that the Arch-App has established has since partially become a victim of its own success. As students and even members of the general public continued to upload content freely, the limited capacity of the testing servers indicated a need for better models of maintaining content. The current model relies on providing students enrolled in specific classes with accounts through which they may upload content on their own. Unfortunately, as with any wiki-based system, a stronger mechanism for curating content is still needed; curation currently rests in the hands of a few professors and research assistants.

The Arch-App has served to address several educational issues in architectural engineering, including increasing accessibility, maintaining currency, and drawing in real-world application, while also netting secondary benefits such as encouraging integrative thinking across courses, connecting students to a community beyond the university, and spurring greater curricular engagement within the department. Its success since its deployment a year ago has attracted great interest from both the academic community and a variety of external stakeholders, including the general public. Future directions for greater integration with other courses within the architectural science program, and even with other departments such as Interior Design, are excellent opportunities to continue the positive academic trajectory of the Arch-App, while interest from professional architecture organizations has generated requests for collaborations to provide content from their membership, serving as a platform documenting great local projects. Another dimension currently in discussion is how to make the Arch-App more engaging by integrating social media and possibly adopting game mechanics within the AR interface. AR-gaming hybridization under the Arch-App may prove an excellent opportunity to maintain learning in an engaging, entertaining framework. AR-gaming interfaces fostering learning: apparently some things never change!


"There's only one mind, the one we all share."


John Cage

Image by Martin Sjardijn

3D print Weightless Sculpture 10x10x10 cm (3D printer KABK)

Digital technologies and fine art: a complex relationship

Martin Sjardijn

The launch of the first Sputnik in 1957, also called the start of the space age, had major implications for the ensuing Internet. The fact that mankind wanted to leave the earth seemed to run counter to the retrospective mentality of the artists, who turned to mythology rather than technology. With the advent of digital media, an integration is gradually becoming noticeable in society which, instead of opposing modernism, lifts the modern perspective to a higher plane through digital possibilities.

At the same time, we see the appearance of art that has transitioned through cyberspace as through a wormhole, and has thereby taken on a super-, supra-, digi- or megamodern form [1]. An interactive digital (visual) language is developing globally. Works of art are created in real time, sometimes through social media, sometimes through virtual and augmented reality, sometimes converting physical material to virtual forms and sometimes the reverse. With the advent of 3D printing, sculptures can be made with modern software that never could have been realized before. The results of digital media also have clear effects on image development, which can be seen, for instance, in architecture and literature [2]. Developments in the field of virtual and augmented reality are still young and provide challenging possibilities for aesthetic and visual experiments, especially from a traditional fine arts perspective. These new forms of expression deserve attention from artists with a free mindset, because they allow free investigations, backed by a rich tradition.

However, besides technological advancement, significant cultural developments have also taken place during the digital revolution of the past fifty years. At art schools this has led to a critical attitude towards the modern desire for innovation. Around 1978, the exhausted modern artist could no longer keep pace with new technologies, and was left behind in myth and irony, as Achille Bonito Oliva concluded in his book Trans-Avantgarde International [3], a review of art in the late 1970s and early 1980s. The consequences can still be seen in art academies, and especially in the liberal arts departments. The new tools that arrived with the advent of computers, such as the Internet and digitally controlled equipment, are available at art academies, but their application is still geared towards industrial design; the fine arts departments still focus mainly on traditional techniques. Only social media seem to attract the attention of fine arts students as a contemporary means of communication. In Not Just for the Boutique: Art schools and digital everyday culture, Florian Cramer reports on the use of digital equipment and software programs among 350 third-year art students in the Netherlands and observes: "Conversely, art schools are now mostly chosen by students who love manual craft such as drawing and hand making of tangible products. Often, for example in zine- and print making, this is motivated by cultural opposition to electronic media" [4]. In line with this, the exhibition Simply painting, shown at the Gemeentemuseum The Hague from 10 March till 17 June 2012, seemed to be a protest against the ever-present digital media and their profound cultural consequences. Not that there is anything wrong with handmade art; on the contrary, it is an essential cog in the fine arts machine, and the one closest to mankind. But computers have penetrated so deeply into our society, partly due to rapid miniaturization, that research and experimentation from the aesthetically oriented field of fine arts is of paramount importance. This research is currently being conducted by academics with technical backgrounds and at academies of art, mostly in the photography, film and applied art departments. During my research into zero gravity in sculpture at Delft University of Technology, I was much more impressed by their interactive technical visualizations than by those I found at academies of art. Opportunities at art academies abound in the form of knowledge, software and available equipment. In addition to traditional subjects, a wider knowledge base should be offered; digital literacy courses should be part of the liberal arts, to allow students to understand the artistic possibilities these techniques present. Cooperation with universities of technology yields highly interesting results, as seen in the AR Lab of the Royal Academy. The artist, born from romanticism, is now both an artist and an explorer, able to poeticize widely introduced and socially accepted technology in dialogue with technically oriented scientists.

Image by Martin Sjardijn

Gerwin de Haan tests Elements installation with virtual sculpture Blue Cyber, TU Delft

References
[1] Alan Kirby, Introduction to Digimodernism, http://www.alanfkirby.com/Introduction.pdf
[2] Anabela Sarmento, Issues of Human Computer Interaction, IRM Press, London, 2005.
[3] Achille Bonito Oliva, Trans-Avantgarde International: a review of art in the late 1970s and early 1980s, Giancarlo Politi Editore, 1982.
[4] Florian Cramer, Not Just for the Boutique: Art schools and digital everyday culture, 2012.

Martin Sjardijn
Martin Sjardijn was born in The Hague, The Netherlands. He studied Fine Arts at the Royal Academy of Art and, for some years, Cultural Sciences and Philosophy. After painting for many years, he started the Weightless Sculpture Project in 1985. In 1990 he began to work with virtual reality, using, for example, head-mounted displays and Datagloves with tactile feedback. Since 1998 he has been developing an art and educational project using interactive 3D technology in collaboration with the Groninger Museum in the Netherlands. His latest concept is called ArtSpaceLab, which contains a virtual exhibition, a database and a proposal for an art project inside the International Space Station. The virtual Coop Himmelb(l)au Pavilion gives access to a database containing a large collection of free software and (video) tutorials as used by Sjardijn. At the Royal Academy of Art he teaches virtual modeling for autonomous art. www.sjardijn.com
Drawing by Cora Beijersbergen van Henegouwen


Who owns the space?


A legal issue on the use of markerless Augmented Reality
Yolande Kolstee

Using the right AR app, we can see virtual commercial and cultural information and even virtual art placed all around us. But what if we dislike the content made visible by the app? Who owns the space that surrounds us? Can one object to any of this information placed in AR space? What if the information is false? To be aware of a virtual layer that is placed on top of physical space is one thing, but to be able to erase or correct that virtual layer is another.

When using visible markers, such as AR or QR codes, there is at least a chance to notice that virtual information might be available. A marker might even work as a warning. The use of visible markers (by an invisible entity) can even be found in one of the most terrible stories in the Bible, Exodus 12:7-13, where God commanded Moses to inform all the Israelites to mark their doorposts with lamb's blood, by which the Lord would pass over them, sparing all the Israelite first-borns: they will not suffer the destroyer to come into your houses and smite you.

Although AR markers are still in use, in recent years they have no longer been required for every AR application. Nowadays, AR often uses spatial information derived from the worldwide satellite-based global positioning system. Using GPS (and a compass) it is possible to post additional location-based information all around us, even inside our own house, without us even noticing. With smartphones and apps such as Layar, we are able to see location-based information around us. For example, we can see who tweets around us, and we can even get directions to reach the tweeter via an app like Maps.

AR artist Sander Veenhof's works serve as a good example of AR art that uses location-based information. Without the board of the MoMA in New York knowing, he placed virtual art in the museum and even added a virtual 7th floor. He did not only augment the MoMA, but also the Pentagon and the White House. Through this art invasion, he showed clearly that there are no physical borders anymore (see http://www.sndrv.nl/moma/ and http://www.youtube.com/watch?v=wyEy2DLu7Wk).

Other, less playful uses of AR stand in stark contrast to Veenhof's artistic use of AR. In a column, Lester Madden discusses information that might be useful to burglars (http://www.augmentedplanet.com/2010/01/the-case-against-augmented-reality/). There are more critical comments on the application of AR, as we can see in the blog of Brian Wassom, a commercial litigator, who discusses legal issues related to AR. He describes his first legal issue in http://www.wassom.com/doritos01.html:

"It's on. For real, this time. A newly filed legal complaint raises non-imaginary (although certainly still-untested) legal theories concerning an actual, commercial use of augmented reality. AR litigation is now a cold, hard reality. And the result of this initial salvo could have a huge impact on AR campaigns across the board. On October 19, 2011, four consumer advocacy groups (the Center for Digital Democracy, Consumer Action, Consumer Watchdog, and The Praxis Project, who I'll refer to collectively as CDD) filed a Complaint and Request for Investigation with the Federal Trade Commission (FTC) against PepsiCo and its subsidiary, Frito-Lay. The ultimate point of the Complaint is to argue that Frito-Lay's campaign deceives teens into eating too many unhealthy snacks, thus contributing to the childhood obesity problem."

In AR space, we can add information on political preferences, for example, or information on sexual habits. This is a really serious topic: we don't yet have a system to track ownership of the space around us, we don't have to give our personal approval, and many of us are unaware of the virtual information that is placed in the air around us. That is why we will pay close attention to this topic in the next issue of AR[t].

AR can be used to display information that might be useful to burglars.


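The location-based mechanism described above, content pinned to GPS coordinates and shown to anyone standing nearby, can be sketched in a few lines of Python. The annotation list, the `visible()` helper and the 50-metre radius below are hypothetical illustrations, not how Layar or any particular app actually stores its layers:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical annotations pinned to coordinates, the way a
# location-based AR layer might store them.
annotations = [
    {"text": "Virtual 7th floor", "lat": 40.7614, "lon": -73.9776},  # MoMA, New York
    {"text": "Virtual artwork",   "lat": 52.0786, "lon": 4.3123},    # The Hague
]

def visible(user_lat, user_lon, radius_m=50.0):
    """Return the annotations within radius_m of the user's position."""
    return [a for a in annotations
            if haversine_m(user_lat, user_lon, a["lat"], a["lon"]) <= radius_m]

print([a["text"] for a in visible(40.7615, -73.9777)])
```

Note what is absent from this data model: nothing asks the owner of the physical location for permission before an annotation appears there, which is precisely the ownership question the article raises.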

Chasing virtual spooks, losing real weight


Augmented running and a side trip into the history of audio augmented reality
Hanna Schraffenberger

A strange voice tells me to run. My heartbeat rises as I follow the instructions without giving them a second thought. The voice's manner of speaking reminds me of my parents' TomTom. The only difference: instead of telling me to take a turn, I am instructed to accelerate, slow down, to run or, if I am lucky, to walk. I am running with my new mobile app and virtual trainer. The app tracks my every move, knows when my heartbeat rises, and is supposed to help me gain speed and lose weight. Today, I run to clear my head after a mentally exhausting but physically unchallenging day. However, trying to catch my breath, my thoughts return to work. More precisely, I pore over my research topic: non-visual augmented reality. In Augmented Reality (AR), virtual content is added to our real environment. Most often, this

happens visually. By now, probably all of us have seen three-dimensional objects popping up on designated markers, virtual pink bunnies above augmented cereal boxes or walking directions superimposed on real streets. However, AR does not have to be visual. Sound, in particular, has already brought forth some fascinating AR applications and artworks, such as Edwin van der Heide's Radioscape [1] and Theo Watson's Audio Space [2].

Entering the latter, visitors can hear the sounds left by previous visitors, spatialized, as if they were actually still there. At the same time, they can leave their own audio messages at any point within a room. It is not just the fact that the physical space is augmented with the ghost-like presence of previous visitors that makes me term it AR. What convinces me is that visitors can relate their own sounds and messages to those left earlier by others, thereby establishing connections between the virtual and the real. I imagine walkers, cyclists and other runners leaving their sound-trails behind on the road, leaving it up to me to add my own sounds and follow their steps, which are spread across time and space.

My favorite mobile app, RjDj, can also be considered AR sound art. The app remixes the sounds of the surroundings and provides you with a soundtrack to your life that blends in with, makes use of and accompanies your environment. Although it is certainly no typical AR application, the relation between the sounds of the real environment and those produced by the app is so strong that they often seem to melt into a single soundscape.

I will have to try this app while running. I can already hear the sound of my steps on the asphalt evolving, blending into a rhythmical soundscape, slowly displaced by the wind of heavy breathing, interrupted by pitched variations of my sudden greetings whenever I meet another runner. While RjDj and successor apps like Inception and Dimensions [3] are a rather recent phenomenon, the idea of remixing the sonic environment is not new. The artist Akitsugu Maebayashi has worked with similar concepts for a long time. His portable Sonic Interface [4] was built in 1999, years before mobile phones gained comparable sound-processing abilities. The custom-built device consists of a laptop, headphones and microphones and uses delays, overlapping repetitions and distortions in order to recompose ambient sounds in urban space. The resulting soundscapes break the usual synchronicity between what one hears and what one sees. Unsurprisingly, Maebayashi is not the only one who has been exploring sound-based augmentations of the environment early on. In fact, audio augmentations of our environment have quite a history of their own. Unfortunately, they are less known in the context of AR and are often not even considered to be part of AR history.

"Walk!", my virtual trainer gives in to my exhaustion and I slow down. However, my thoughts keep racing. Quickly, they approach the early 1990s: Tom Caudell is believed to have coined the term Augmented Reality. It describes a head-worn display that superimposes visual information onto real objects [5]. In Caudell's case, the new AR system helps workers assemble cables into an aircraft at Boeing. What usually goes unnoticed is that around the same time, Janet Cardiff started recording her so-called audio walks. Those walks are designed for a certain walking route and confront the listener with instructions such as "Go towards the brownish green garbage can. Then there's a trail off to your right. Take the trail, it's overgrown a bit. There's an eaten-out dead tree. Looks like ants." [6]. While the listener navigates the space, he gets to listen to edited mixes of pre-recorded sounds, which blend in with the present sounds of the environment. Cardiff's virtual recorded soundscapes mimic the real physical one in order to create a new world as a seamless combination of the two [7]. By superimposing an additional virtual world onto our existing one, and thereby creating a new, mixed reality, Cardiff's sound art explores one of the key concepts of AR. And Cardiff is not alone with this idea; as early as 1987, Cilia Erens introduced sound walks, soundscapes and sound panoramas in the Netherlands. In contrast to Cardiff, she forgoes spoken content and uses largely unmixed everyday sounds. Yet, the effect is similar; they create a new reality within existing realms, a form of augmented reality [8]. Clearly, the developments in non-visual AR were in no way inferior to the development of its visual counterpart. Taking slow steps, I imagine being on such a walk right now: listening to instructions on which route to take, where to look; superimposed footsteps here, sounds recorded there, on this path earlier, maybe altered with special effects. I imagine those sounds mixing in with the naturally present sounds of the river, bikes, and the occasional mopeds passing by.

"Run!", my trainer, whom I decide to call Tom, puts an abrupt end to this walk. The fact that AR sound art like Cardiff's and Erens' walks is not usually mentioned in the context of AR leaves me wondering what else we miss. After Tom's instruction, my music fades back in. The song is intended to get me to run even faster. After my footsteps have adapted to the new rhythm, it hits me: these instructions about how fast to run, the information about my heart rate, distance covered and calories burned, and options such as racing against a virtual running partner in real physical space: this is just like AR. In fact, my virtual running trainer shares most of the characteristics commonly found in AR applications. It adds another layer of content to my running. It is interactive and operates in real-time. Just like many other GPS-based AR applications, it reacts to my position in the world. Most importantly, Tom fulfills my own, personal requirements for an AR experience: something is added (the instructions), something is augmented (the running), and most importantly, there is a relationship between the two.

So why are virtual Tom and his colleagues not considered to be AR? Perhaps because there are also numerous differences between running apps and common AR applications. To begin with, the running app does not augment the environment. Rather, it augments an activity: my running. And to be honest, despite the fact that Tom follows my every move, chasing a virtual competitor or running with a virtual trainer still feels like they are running on my phone while I have to tackle the real road. What is more, location-based AR applications usually display content related to the user's absolute position in the world. Tom, on the other hand, is only interested in the change of my position over time.

When another runner passes me slowly, my heart rate drops. I wonder whether it might be his heart rate that is mistakenly reported back to me. I am astonished that, without the sensor's help, I cannot even accurately perceive such basic and vital facts as my very own heart rate. Maybe this is far-fetched, but in that respect, the running app relates to the kind of AR applications which allow us to perceive things about the world that we normally cannot perceive, such as seeing heat, feeling magnetic fields or hearing ultra-high frequencies [9].

"Stop!", apparently my position has changed enough. My run is over. The result: 583 kcal burned, 5 miles run and the revelation that the combination of the virtual and the real encompasses much more than just adding virtual visual objects to the real physical environment. There is a whole field of augmented activities as well! I cannot wait to jam with virtual bands, to try augmented eating or to take an augmented nap. As if to approve, my heart rate makes a last excited jump. Who knows, in the future Tom might learn from existing AR. He might then have a look at my environment and direct my turns so that I discover new routes, point out sights or, when needed, help me find a shortcut home. Considering current developments in lightweight AR glasses, I guess it cannot be long until we can also see our virtual competitor passing by, are asked to design avatars representing our personal best time in races against other runners, and are challenged to chase visual virtual spooks. I would not mind that. And I bet that that is when augmented running will truly be considered to be AR.

References
[1] An article about Radioscape by Edwin van der Heide is featured in this magazine on pp. 18-23.
[2] Theo Watson, Audio Space (2005), http://www.theowatson.com/site_docs/work.php?id=15
[3] For RjDj, Inception and Dimensions see http://rjdj.me/
[4] Akitsugu Maebayashi, Sonic Interface (1999), http://www2.gol.com/users/m8/installation.html and http://www.v2.nl/archive/works/sonic-interface
[5] T. P. Caudell and D. W. Mizell, "Augmented Reality: An Application of Heads-Up Display Technology to Manual Manufacturing Processes", Proceedings of the 1992 IEEE Hawaii International Conference on System Sciences, 1992, pp. 659-669.
[6] Janet Cardiff, Forest Walk (1991), http://www.cardiffmiller.com/artworks/walks/forest.html
[7] Janet Cardiff, Introduction to the Audio Walks, http://www.cardiffmiller.com/artworks/walks/audio_walk.html
[8] Cilia Erens, The Audible Space, http://www.cilia-erens.nl/cilia-erens2/?lang=en
[9] The course Perceptualization, taught by Edwin van der Heide and Maarten Lamers as part of the MSc Media Technology programme at Leiden University, discusses such translations of information to our human modalities. See http://mediatechnology.leiden.edu/programme/curriculum/perceptualization/ and http://www.maartenlamers.com/PZ/
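As a rough illustration of the kind of processing mentioned above, delays and overlapping repetitions layered over ambient sound, here is a minimal feedback delay line in Python. The parameter values (0.3 s delay, 0.5 feedback, a 50/50 mix) are arbitrary choices for this sketch, not the actual settings of Sonic Interface or RjDj:

```python
import numpy as np

def feedback_delay(x, sr=44100, delay_s=0.3, feedback=0.5, mix=0.5):
    """Mix a signal with delayed, decaying repetitions of itself."""
    d = int(sr * delay_s)             # delay length in samples
    y = np.copy(x).astype(float)
    for i in range(d, len(x)):
        y[i] += feedback * y[i - d]   # each echo feeds the next one
    return (1 - mix) * x + mix * y    # blend dry input with the echoes

# A 1-second 440 Hz tone standing in for the microphone input.
sr = 44100
t = np.arange(sr) / sr
mic = np.sin(2 * np.pi * 440 * t)
out = feedback_delay(mic, sr)
```

Fed with a live microphone buffer instead of the synthetic tone, the same few lines would already produce the overlapping, slowly decaying repetitions the article describes.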

Augmented Reality: A Story


Dirk Vis
Augmented Reality (AR) is used for many literary purposes: secret pages can contain additional information that is only visible on smartphones; promotional texts can be linked to gable stones and bus stops; a book can be scattered all over the world virtually; et cetera. However, AR can also be used as a medium to tell a story. An AR story consists of separate parts that each tell a story of their own, are integrated into a wider story arc, and interact with the physical reality of the reader. The text below is an exploration of the narrative possibilities of AR.
All of your decisions are stored: all of your purchases, your travel destinations and your doubts are recorded. Not only far away on Google's server farms, but also nearby, where they can be seen best: in your face.

Mike writes at night. He is working on the code and the interface for an application. He writes non-stop. He is making an application that calculates how your face will look when you get older. He has forgotten whether it was his idea or Ada's. They invented it as a joke, just as they came up with everything else. But this night, in a single night, he will write the code that will earn him money.

The afternoon before, Ada and he were standing by his door. She was on the sidewalk; he on the doorstep. In her hands she held her sketchbook. It was the last thing she still had to collect, the last thing left of her in his house.

- People are scared of the idea, he said, you are too, that everything is recorded by Google, by cameras, but they forget, you forget, that every misstep is already stored in the place where it hurts the most: in your face. I'm going to build that application: the future face one.

- You're crazy, she said. There's a reason nobody can see how we will look when we are older. True youth cannot know what it means to be old.

Reading is a complex task, but the reality of it is simple: the reader sees a text on paper, in clay or in pixels, and reads. Imagine if the text had eyes and scanned the reader back as he read. The text records the movements of the reader and responds. The reader finds his way through the text with movements of the head. The story is different and the effect of the story is different because the position, the facial features and the expressions of the readers differ.

Mike tests his work on himself that night. The program scans his face. The color and texture of his skin are linked to his personal data from the cloud.
His medical records, credit card information and favourite music are combined, supplemented and extrapolated with the facial scan. He writes his code with fervour. He is constructing the interface, which the user will read later. Sometimes he is afraid of what he does; that things might arise that he'd rather not see,

much like how Ada sees things that she would rather not. Illusions, nocturnal animals and fears that remain invisible to others, she sees. She has started to draw them.

From the porch door he touched Ada's cheek. Probably for the last time, he thought.

- Your skin registers the amount of wine you drink, cigarettes you smoke, men you use. Cigarettes make your radiance fade. Literally. You may think, my light will dim as I get old anyway, but old women can shine. You say that your genes determine how you look, but you decide whether to smoke, and so you too decide your future face. Whatever you do, even if it's nothing at all, it is engraved in your face.

The city is quiet. Only the taxi drivers and bats are still awake. Mike holds his face in front of the test version of his application. The camera scans the bags under his eyes as he writes: The system compares colors, patterns and textures with data from others to determine your lifestyle and future face.

Ada was still standing right there on the sidewalk. Mike held his face close to hers.

- You think you can get away with this. He saw the skin around her eyes, of which he knew every square millimeter. - ...but your wrinkles will tell your tale. It will be written plainly on your face for all the world to see, no matter how small it is. In Saudi Arabia adultery carries the death penalty; some tribes paint a purple spot on the face of the adulteress, which remains for months. I'm going to make that program that we thought of once, that shows your future face. Somewhere on your face it will

be printed that you wanted to be alone, that you could not make the decision yourself, and that you cheated on me. You think you have solved it and got rid of it, but everyone will read it in your eyes, your skin, your nose: do not trust her.

Ada and Mike stared silently at each other. He touched the tip of her nose with his forefinger. That was his favourite place. She had asked him once to draw lines of altitude on her face, like they are on a map. Her nose was the highest mountain. The lines gave a cross-sectional view that made her face look like a layer cake.

- I know you hate my programming. You think I make gadgets that contribute nothing to society, but imagine if you could see your future face, the effects of a night out; if you could be your own Tamagotchi, shape your own true face. Plastic surgery will shrivel by comparison.

Mike lets the program scan a height map of his face, so that a virtual 3D model can be made. He continues his work, inflamed. The light is returning outside. Based on his personal data from the Internet, the program calculates what his face will look like in five, ten or twenty years. His nose is finished first. Noses never stop growing, as if they record all the smells of a lifetime and all the faces that came close. The program creates a virtual model and, as Mike looks into the camera, it superimposes the computer-generated future face over his own. It's as if he's talking to a 60-year-old Mike.

- Finish your application, one Mike says to the other. He says it himself, but it looks as if his elder counterpart is speaking. Finish your program, show Ada and then go outside.

Dirk Vis
Dirk Vis (1981) studied Beeld & Taal (Image & Language) at the Rietveld Academy and Design at the Sandberg Institute. He published Bestseller (2009) in a limited edition, made electronic poems with K. Michel, is an editor of De Gids and teaches Interactive Media at the Royal Academy of Fine Arts. www.dirkvis.net


Image by Daniel Disselkoen

Unspecialize! The more you know the less you see


Image by Mareike Bode

Im looking out of the window of a tram. Daniel Disselkoen, the graduate from the Royal Academy of Art Im about to meet, lives in the Statenkwartier a part of The Hague I dont know yet. Its a nice district and although there is nothing spectacular about this neighborhood, I enjoy the view of streets I havent seen before. A few minutes later I meet Daniel in a small caf and he tells me more about this area. He grew up in this neighborhood. After studying law and philosophy in Groningen for four years, he moved back to the very same street he was born in. That was

During my studies I took the same tram to art academy on an almost daily basis. On this ride there simply was nothing new to see and hence no motivation to look outside. I realized how easily it becomes boring when you live in a place you know so well.
Leaving the familiar behind, Daniel spends several months abroad. First in America, at the Minneapolis College of Art and Design, then in Japan, doing an internship at the advertisement agency Wieden+Kennedy.

A portrait of Daniel Disselkoen


Hanna Schraffenberger
54

four years ago and marked the beginning of his graphic design studies at the art academy. Just recently he graduated. It is his knowledge of the city that has motivated his search for new perspectives and inspired several of his works.

When you dont know a place you look around and spot all those new and interesting things. But the better you get to know the area the less you look around. While commuting in Japan I
55

paid a lot of attention to my surroundings. I realized that there are three main groups of passengers: those staring at their phone, those who read, and those who sleep or look down. Its a general phenomenon that when people know the area they dont look outside.
Upon returning home, Daniel decides to take matters into his own hands and sets out to make his regular route to the academy interesting again. The result is Man-eater, the simplest augmented reality game I've ever seen. Without the use of phones, headsets or computers, merely relying on two stickers, the game adds an additional layer on top of our view, changes our focus and allows us to experience our well-known surroundings in a new way. The first sticker shows a little monster, the so-called Man-eater. It is placed on the window of the tram. The second sticker is a manual, placed on the headrest right in front of the potential player. The manual is hard to overlook and states the four simple rules:

1. Close one eye and look to the right
2. With the Man-eater, eat as many heads of pedestrians as possible
3. Time to play: between two stops of the tram
4. If you haven't eaten enough heads, start again at the next stop

The first level challenges you to eat at least 3 heads, the third level asks for 12. Daniel is aware that this won't always be possible.

"Sometimes the circumstances outside make the game completely unplayable with those rules. For example, if there is simply no one outside then you can't play, or if there are far too many people it will be too easy. But I think the simplicity is also the beauty of the game."

The online world calls the game the real-world version of Pac-Man. However, the arcade game was not what inspired Daniel.

"I used to play this game with dots on the window. I think there are quite a lot of people who played a game like this when they were kids. Those people enjoy rediscovering it now. Others like it because it is entirely new to them."

His story is convincing. Indeed, Man-eater makes me relive my own rides to school, pretending the bird poop on the window was Super Mario who had to jump from passing car to car. I like the beautiful and unexpected update to the childhood game and appreciate that Daniel turned it into a visually appealing version. However, reliving childhood memories wasn't Daniel's original intention with the game. His goal is to make people notice the outside world again.

"Traveling by tram, I just think this is a great moment for looking around and experiencing your environment in a new way. With the game, I want to provide a fresh perspective, give them a new experience of the city they know so well."

Contrary to what one might think, Daniel doesn't think that people always have to look around.

"I don't necessarily think it is bad that people don't look outside. I enjoy those rides where you do nothing at all. The last thing I want is to force commuters into another thing they have to do while they travel. I think in the tram or in a train your mind can be satisfied with the fact that you are traveling. So you are free to do nothing at all, your mind can reverberate, you can just let your thoughts travel as well. But for those that don't look around anymore because they don't expect something interesting to see, I made the Man-eater to let them explore the familiar in a new way."

Besides the fact that he doesn't want to push people to do something in the tram, Daniel has more reservations about whether or not to place the stickers.

"Very often I don't like street art like graffiti and stickers. Too many times it's just somebody tagging his name. I think when you push your work in public places, you have to be mindful of your audience."

Daniel's unobtrusive solution: in case the stickers are not appreciated or even considered vandalism, they can easily be removed thanks to his use of removable glue. However, not all aspects of the project's realization are so commendable. In order to test how his project is perceived and get the natural reaction of those playing the game, he filmed the tram passengers sitting next to the sticker without them knowing about it.

"I wanted to know how people react by shooting first, and ask for permission to use the footage afterwards. At first, it seemed like the passengers were not playing the game. Before playing, people looked around cautiously to see whether someone is watching them. Only when they didn't feel observed, they felt comfortable playing it. Consequently, I couldn't look at them the whole time. So I just sat there and filmed them with my phone."

The dilemma whether and how to observe people with or without their knowledge is also well known in science. Once you tell somebody that you are observing him, he might behave differently. If you don't tell, there are ethical considerations to take into account. Daniel's approach paid off but also gave rise to doubts about his method.

"From my short glances, it seemed like they were not playing the game. Only when I checked out the video footage, I noticed the little movement of their heads: they were playing it after all. However, a few people noticed that I was filming them and apparently felt really bad about it. They stopped playing, sat somewhere else or even stepped out of the tram. I felt terrible about it."

Image by Daniel Disselkoen

His close observations of commuters, his interest in their behavior and their reactions to the game not only led to some valuable insights regarding the success of Man-eater, but also inspired his newest project.

"Just like I had noticed before in Japan, we also have those different types of commuters: those reading, those looking at their phone or tablet, and those sleeping or looking down. During my observations, I noticed it's the people staring at their phone or tablet who did not play Man-eater at all. They just sat in the tram, looking at their phone, not even noticing a sticker right in front of them. That observation made me want to create something that turns the tablet or phone into a device that makes you look around and take part in your surroundings, rather than isolate yourself from them."

Indeed, his newest project does just that. Or to be precise: it will do just that, as soon as it is published in the App Store. Right now, Daniel is in contact with programmers, who will turn his concept into an actual app. Until then, the details remain a secret. However, from what I could deduce, the upcoming app will be focused on a similar idea.

"I noticed it is not only knowing an area well that makes you perceive less about it. Knowledge in general makes you see your environment in a certain way. I'm a designer and the more I've learned about design, the more I started to perceive the world in terms of design. Of course, I am exaggerating when I say that, looking at the world, I see typefaces, patterns, grids, packages, posters and advertisements. At the same time, I miss other things about the world. For example, I might only see the package and not its content. To generalize and exaggerate a bit more: a biologist might spot plants, an architect might focus on buildings and a psychologist might perceive more about people. What you perceive about the world is influenced by what you are specialized in. My app is intended to free people from their specialized view of the world and will provide them with another way to look at it."

"The underlying question I asked myself is: can we change the way we look at our environment and how we experience it? With Man-eater, I have shown it is possible. With the new app, I take the concept of providing a new experience of the surroundings even further. This game can be played everywhere. I would really like it if people started using it in their room, continued outside, looked around, and then were driven to also discover different places."

I am curious how he intends to alter my view of the world this time. To achieve this, the app will basically use everything a phone or tablet has to offer: it will make use of the modern phone's computational capabilities and mobility, it will use the camera as well as information about the user's position, make use of the Internet connection, use the fact that people always have it with them and incorporate social aspects. Hearing about technical elements such as spatial awareness and the use of the camera, I expect an AR application. However, this much is for sure: Daniel did not start out with the intent to make an AR app.

"I don't have a strong opinion about what AR is. I don't have a fixed definition of it. For me, AR is a bit like labels or branding in advertisement. It's something you put on top, and suddenly, people experience a product in a different way. I would never start out thinking I should make something with Augmented Reality, and then come up with a project. But in retrospect, I can say that the Man-eater can be seen as an Augmented Reality project. I don't know whether the new project will fit your definition of AR, but it will definitely let people perceive reality differently."

In contrast to Man-eater, which manages to augment the outside with as little as two stickers, this app sounds rather technical. I wonder to what degree technology served as an inspiration.

"Of course, you do need to know something about the material you are working with. But it's only because I realized that people don't look around anymore when they are busy with their phones and tablets, that I thought it was an interesting subject and medium to use. I wanted to know more about it, so I got a tablet. Knowing its functionality and possibilities allowed me to come up with the final idea. But the inspiration was not the technology, but my observation of people being completely immersed in it. For me technology is not a target, but a tool. I think when you know too much about something technical, you don't come up with ideas that create something new. You then might come up with a technical improvement or technical innovation or something like that. If you know everything from one subject, it becomes harder and harder to see the big picture and to come up with a new idea."

Ordering our second round of coffees, I have a pretty clear idea that with his work, Daniel wants to create something new and intends to confront us with another perspective on the world, an ambition he shares with many artists. However, given all our talking about apps, games, travelling and trams, I am not sure yet whether he considers his games works of art.

"I would call it applied arts. I like it if what I do also has a specific function which is not abstract. Of course the work and shape can be abstract. But I like it when I can see if it works. If people look outside, play the Man-eater and enjoy it, it works. This concrete functionality is an important aspect of my work."

Finishing our drinks, we talk about his current work: right now he is designing a crazy bridge, which presumably will never be built. We talk about his other works, including documentaries about Mall-Walkers and 47-second-long interviews held in the middle of Japan's busiest intersection. We talk about the commercial potential of his app, and calculate the hours needed to realize it. Quickly, the time has come to pay for our coffees. Usually, at this point in an interview, I ask my dialog partners a last question about their plans for the future. I already know that Daniel will be working hard on getting his app done. I ask about his plans anyway.

"I think I might want to work in advertising. It is one of the few areas where you can come up with an idea and it doesn't matter which medium you use. You can adjust the medium to your idea. Ideally, I want to first come up with an idea, then choose the suitable medium and then show it to a client. A good agency where you can work like that, that's a place where I want to work."

I thank Daniel for the interview. On my way back home, there's no Man-eater to keep me company. The effect of seeing something new is wearing off and my interest in looking outside is fading quickly. So I take a look around inside the tram. Daniel's right: there are the ones with the phones, the readers and the sleepers. But there are also quite a few people gazing out the window. Just when I get suspicious, I notice the way they pronounce Scheveningen. There's no doubt about it: they are German tourists. I'm not sure whether locals really have lost their interest in the outside world. However, I have to yield a point to Daniel's observations. What we perceive is shaped by what we know. Knowing German makes me spot Germans on the tram. What I have learned about Daniel is most probably similarly shaped by my knowledge and specialization. Given my own background in creative science, I see the scientist in Daniel: a young observer, driven by the question how we can change what we perceive about the world. I see his games as a series of experiments. And if his app will let me perceive things differently, I can't wait to take part and try it out myself.

Daniel's website: www.danieldisselkoen.nl


Augmented Prototyping: Augmented Reality to support the design process


Jouke Verlinden
In the last decade, new Augmented Reality techniques have emerged to visualize and interact with physical shapes and related artifact knowledge. This enables so-called Augmented Prototyping approaches, which combine physical models with principles of projection or video mixing to establish a dynamic prototype at relatively low cost. For example, Nam and Lee documented the prototyping setup for an interactive tour guide: a physical mockup was equipped with microswitches to control a computer simulation, while the output was merged either by a see-through Head Mounted Display (Figure 1, left) or by a projector (Figure 1, right). In both cases, physical cues such as proportions, key layout and grasping control can be combined with alternate screen graphics and workflow.

Figure 1. Two Augmented Prototyping scenarios (Nam and Lee, 2003).

Interactive Augmented Prototyping (IAP) requires imaging technologies (output) and sensor technologies (input) to establish an interactive spatial experience. Furthermore, AP technology embraces existing physical prototyping methods and can include virtual prototyping/simulation tools. In comparison to traditional physical prototyping, this combination of the physical and digital realms offers a highly dynamic model that can cover engineering and aesthetic aspects simultaneously. The physical interaction allows a tactile and haptic dialogue between participants and the artefact model.

This article draws upon my doctoral research, in which I apply AR techniques to the field of industrial design. It illustrates the benefits and systems by summarizing a number of installations found in academia, and then elaborates on an investigation of what the true impact of such techniques could be in practice.

IAP as design support in literature

Since the inception of AR, speculative design support scenarios for IAP have been devised. For example, Bimber et al. (2001) predicted five application scenarios without much detail: augmented design review, hybrid assembling, hybrid modelling/sketching, visual inspection of moulded parts and hybrid ergonomic design. We surveyed the existing IAP applications, which cover a mixture of domains and employ various display types (head-mounted displays, video mixing or see-through, and so forth). A detailed characterisation of these IAP systems can be found in Table 1, including domain, objective and interactivity. In terms of design domains, the applications cover information appliances, automotive, architecture and factory planning, while some systems propose a general-purpose IAP system.

| Publication | System function | Objective | Interactivity | AR display type |
|---|---|---|---|---|
| Klinker et al. (2002) Fata Morgana | Presentation | Presentation of concept cars | Virtual object on turntable, user moves around | HMD video mixing |
| Cheok et al. (2002) | Geometric modeling | Generating curves and surfaces | Index finger is tracked, creation of control points in air | HMD video mixing |
| Fiorentino et al. (2002) Spacedesign | Geometric modeling | Surface modeling | Free-form surface modeling, inclusion of physical models | HMD video mixing |
| Bimber et al. (2001) Augmented Engineering | Interactive painting | Several scenarios | Supporting sketching on mirror, grasp physical objects | Semi-transparent screen |
| Bandyopadhyay et al. (2001) Dynamic shader lamps | Interactive painting | Painting on physical objects | Moving object and paintbrush, selecting color from virtual palette | Fixed multiple projectors (projector-based AR) |
| Verlinden (2003a) | Interactive painting | Exploring component features on CNC or clay models | Moving object on turntable, change texture/paint by menu | Fixed projector |
| Verlinden (2003b) | Interactive painting | 3D sketching and RP | Change texture/paint | Fixed projector |
| Rauterberg et al. (1998) Built-it | Layout design | Factory planning: layout check, collaborative reviews | Moving objects in 2D plane | Tabletop projection |
| Underkoffler and Ishii (1999) URP | Layout design | Interactive simulation for urban architecture: reflections/shadows/wind | Moving objects in 2D plane | Tabletop projection |
| Fründ et al. (2003) | Layout design | Support automotive modeling | Moving components in 3D space (rotation, translation, scaling) | Video mixing HMD |
| Verlinden (2004a) | Layout design | Nightclub layout with pedestrian flow simulation | Small-scale physical objects | Fixed projector |
| Verlinden (2004a) | Layout design | Kitchen layout, full scale | Full-scale cardboard cabinets and voice control | Multiple fixed projectors |
| Nam and Lee (2003) | Simulate information appliances | Usability assessment of a digital tour guide | Operating the switches while grasping the object | Video mixing HMD and fixed projector |
| Nam (2005) | Simulate information appliances | Dialogue definition and evaluation | Sketching screens and screen transitions, operating the switches while grasping the object | Embedded screen and fixed projector |
| Kanai (2005) | Simulate information appliances | Usability assessment of a remote control | Operating switches | Fixed projector |
| Verlinden (2004b) | Simulate information appliances | Handheld voice recorder | Handheld mockup with projection | Fixed projector |

Table 1: Topical overview of Augmented Prototyping research.
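Many of the systems in Table 1 drive a fixed projector that has to be registered to a planar face of the physical mockup. As a rough illustration of how such a registration can be computed (a generic computer-vision sketch, not the method of any of the cited systems; the point values below are invented), a planar mapping from model coordinates to projector pixels can be estimated from four manually aligned correspondences with a direct linear transform:

```python
import numpy as np

def fit_homography(model_pts, proj_pts):
    """Estimate the 3x3 homography H mapping planar model coordinates
    (e.g. millimetres on a mockup face) to projector pixels, from >= 4
    point correspondences, via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(model_pts, proj_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # H is the null vector of A, i.e. the smallest right singular vector.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Map one model point through H into projector pixel coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical correspondences: corners of a 200 x 100 mm mockup face and
# the projector pixels that were manually aligned to them.
model = [(0, 0), (200, 0), (200, 100), (0, 100)]
pixels = [(310, 205), (905, 212), (898, 508), (303, 500)]

H = fit_homography(model, pixels)
# Any point on the face can now be textured at the correct pixel:
print(project(H, (100, 50)))  # centre of the face
```

With the mapping in hand, any texture or annotation defined in model coordinates can be pre-warped so that it lands on the right spot of the mockup, which is the essence of the "carefully aligned" projections mentioned above.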

Five different scenarios were identified by inspecting the design activities: presentation, geometric modeling, interactive painting, layout design and simulating information appliances. These are discussed in the following subsections.

Presentation

The main takeaway of most AR systems is that product visualizations can be shared with other stakeholders in the design process (e.g. higher management or potential users). Such presentation systems have been specifically devised in automotive design. For example, Klinker et al. (2002) investigated the presentation of (virtual) concept cars in a typical showroom by observing the behavior of designers and by presenting some proof-of-concept examples. The resulting system is shown in Figure 2.

Figure 2. Example of a presentation system (Klinker, 2002).

Instead of head-mounted displays, projector-based AR systems that employ mockups and projectors have also been documented, for example the system shown in Figure 3. This setup is on demonstration at the High Tech Automotive Campus in Helmond, the Netherlands, and comprises a full-scale model of a racing car and three carefully aligned video projections. In both cases described above, the interaction is passive, merely to inspect designs that are modeled in separate applications.

Figure 3. Full-scale car mockup with projection (HTAS, 2009).

Geometric modelling

Some geometric modelling tools that originated in VR were adapted to AR. For example, Cheok et al. (2002) presented a see-through AR system that tracks the index finger with ARToolkit (Kato and Billinghurst, 1999). The user can generate curves and surfaces that float in the air. As opposed to regular VR, this enables awareness of phenomenological space, which is relevant for most product design activities. However, the system provides no tactile feedback, as no physical objects are included. Furthermore, the interaction is difficult to scale up to multiple users at a single location, as the movement envelopes of such tracking systems are small. A similar system was presented by Fiorentino et al. (2002), who adapted their free-form VR modelling application to work with see-through display technologies and infrared 3D tracking. Again, interaction takes place in mid-air, and although physical objects can be included, these are only used to project texture maps.

Interactive painting

Interactive painting systems such as dynamic shader lamps demonstrate the advantages of digital drawing on physical objects (Bandyopadhyay et al., 2001). Based on Raskar's shader lamps technique, a white object is illuminated by a collection of video projectors from different angles. A tracked wand acts as a drawing tool: when in contact with the object's surface, strokes are captured and rendered in an airbrush effect. As it copies natural drawing on objects, this establishes an easy-to-use interface that has been positively evaluated by kids and graphic artists. A restriction of such interactive painting systems is that the shape of the physical object cannot differ from that of the virtual object; the haptic and visual display of the virtual object would then be misaligned with the physical object.

Figure 4. Dynamic shader lamps system in use (Bandyopadhyay et al., 2001).

More intricate progress in software techniques allows sketching on arbitrary surfaces from various distances (Cao et al., 2006). Specific applications have been developed for customizing ceramic plates, cf. Figure 5.

Figure 5. blueBrush (van den Berg, 2006).

Layout design

Layout design systems like URP (Underkoffler and Ishii, 1999) and Built-it (Rauterberg et al., 1998) offer a number of fixed physical components that can be reconfigured on a planar surface. In URP, the components represent buildings; the augmentation focuses on the simulation of light reflection, shadows and wind, and simulation results such as flow fields are directly projected in 2D on the components themselves, cf. Figure 6. The Built-it system supports the layout of assembly lines in a similar way; simple data are projected on top of the blocks, while a large view on the wall shows the resulting manufacturing plant in a 3D perspective. Both systems exhibit the potential of using physical design components as user interfaces: the parts are managed by direct manipulation and a design can be reconfigured by multiple hands/users simultaneously. Furthermore, the light reflection and other simulation modules, in combination with this tangible interface, show how physical spatial reasoning and computational simulation can be combined. In contrast, an augmented modelling system developed at a German car manufacturer deployed virtual components on a physical global shape (Fründ et al., 2003). Modelling operations were limited to component placement (translation, orientation, scaling), while a Pinch Glove supported the dialogue.

Figure 6. URP system displaying an interactive wind simulation (Underkoffler and Ishii, 1999).

Simulating information appliances

Examples of this type of design support are primarily proposed for the design of information appliances, referring to consumer products that include electronics, e.g. mobile phones, MP3 players, etc. In this case, augmentation is used to overlay graphics or other types of visual feedback on a physical model and to simulate navigational behaviour. Much emphasis is put on measuring button interaction to assess the usability of a design by capturing and time-stamping click events. As presented in the introduction of this article, Nam and Lee documented the design evaluation of a hand-held information appliance. They used ARToolkit to track the position and orientation of the object; the optical marker is attached to its back. A video-based AR HMD and projector-based AR were compared and evaluated; the projector-based AR display reportedly performed more accurately in the interaction tests. Nam (2005) expanded this simulation to the interaction modelling of both screen and buttons for a hand-held tour guide through the use of state transition graphs. Although the aspect of button interaction is well elaborated, the modelling of shape and its features is limited (e.g. the location of the buttons); support to track layout modifications is lacking, and this has to be performed manually. Kanai et al. (2007) developed a usability assessment tool to rapidly test interaction with mockups by RFID technology and video projection. The level of interactivity with the prototype is even higher when the user can model and interact with a product simulation: simple tags can be glued on foam mockups and the user dons a glove with an RF antenna.

Figure 7. Thermostat mockup with RFID-based interaction (Kanai et al., 2007).
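The combination of state transition graphs with time-stamped click events can be sketched in a few lines (the screens, buttons and class names below are hypothetical, not those of the published systems):

```python
import time

# Hypothetical screen flow for a hand-held tour guide mockup, expressed
# as a state transition graph: (current screen, button) -> next screen.
TRANSITIONS = {
    ("home", "select"): "map",
    ("map", "select"): "detail",
    ("map", "back"): "home",
    ("detail", "back"): "map",
}

class ApplianceSimulation:
    """Drives the overlay graphics for a physical mockup and logs every
    button press with a timestamp, for later usability analysis."""

    def __init__(self, start="home"):
        self.screen = start
        self.log = []  # (timestamp, screen, button, next_screen)

    def press(self, button):
        # Unknown button/screen combinations leave the screen unchanged.
        nxt = TRANSITIONS.get((self.screen, button), self.screen)
        self.log.append((time.time(), self.screen, button, nxt))
        self.screen = nxt
        return nxt  # the screen image to overlay on the mockup

sim = ApplianceSimulation()
sim.press("select")   # home -> map
sim.press("select")   # map -> detail
sim.press("back")     # detail -> map
print(sim.screen)     # map
```

Replaying the logged timestamps against the transition graph is what allows usability measures such as time-per-task to be derived from a mockup session.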

Assessing the impact of IAP in industry


To assess the possible impact of such systems, we interviewed 13 top design and engineering agencies in the Netherlands (Verlinden et al. 2010). We targeted senior project managers in various domains, ranging from well-known interior and furniture designers to studios in automotive and product design. With each participant, we showed a short demonstration of IAP and a 90-minute semi-structured interview. We asked their opinion regarding the strengths and weaknesses of IAP. The responses are summarized in Table 4, ordered by the number of participants that expressed these. The most prominent strengths are in line with the envisioned benefits: allowing to explore multiple design variants and to communicate with the client and other stakeholders. On the other hand, primary weaknesses signify the effort needed to realize such prototypes and the resulting quality.

Strength Strong idea, Fills a gap in current design process, different than present methods. Exploring/Presenting many variations rapidly without making it costly. Flexible with fabrics and lay-out, styling-tool. Bring ideas rapidly to the customer .

# 5 4 3 3 1 1 1

Weakness Might take too much effort to realize (labor intensive, complex) . Quality of the projection vs regular finished models (e.g. wood finish). Physical model is required, also needs updated when applied in a new situation. Only 2,5 D (just a skin). Requires a lot of knowledge/might be difficult. Tactility not the same as end version (interior). User can occlude the projection. Cannot support 1:1 scale for large products/

# 4 3 3 2 2 1 1 1 1

Characterizing the IAP support scenarios


The aforementioned design support scenarios can be characterized by three activities: browsing, interaction with the artefacts behaviour and alteration of the model. In the table below, a brief summary is shown.

Might be spectacular esp. with a handheld version. Good for acquisition/marketing.

Type

Browsing View model from all sides either handheld or fixed to the environment View model from all sides either handheld or fixed to the environment View model from all sides either handheld or fixed to the environment View layout of multiple components Manual handling (physical mockups)

Interaction with behavior

Alteration of model

Easy to use.

Presentation

Adaptation of textures/ materials/ annotations Creation of new geometry (curves/surfaces) Change assembly (2D or 3D) Button/screen layout

setups. Possibly wrong perception of the product by client.

Interactive painting

Table 3: Expressed IAP strengths and weaknesses and their occurrences.

Geometric modeling

Real-time rendering of environment (simulations) Digital product interaction

Then we asked about the envisioned benefits of the IAP to their own design process. The results are depicted in Figure 4. The participants mention as most important benefits: Communication with external parties, Facilitate user testing process, facilitate concept development, improve insights, and reduce errors/mistakes. Concerning the method, three main issues need addressing:

i) does variability in a design matches the physical virtual division, ii) what is the trade-off between speed versus quality, iii) how does this affect the emancipation of other stakeholders. The first is related to product domain and activity the fit with some (like website or kitchen design) is less obvious. The second concern touches the need for speed versus the need for credible prototypes,

Layout design Simulating information appliances

Table 2. Characterization of IAP design support scenarios from literature.

66

67

which differ based on customer and studio tradition. The last demarcates the role of the designer in the design process and the power that clients and other stakeholders have in the process to explore alternatives that might be undesirable.

turers, engineers and other experts. It remains difficult to bridge the differences in knowledge, skills and attitudes among the stakeholders. When considering decision making during design process, physical models can also be regarded as boundary objects - as interfaces between the stakeholders in transition between design phases (Smulders et al., 2008). These boundary objects encompass both product specifications and argumentation, providing a platform to create shared insight and to freeze the status of a product design for later use. The authors argue that miscommunication and misinterpretation are often related to the characteristics of the used boundary objects, which should bridge the interfaces between design phases and the related discourse domains. There is some evidence that the inclusion of prototypes enhances the performance of collaborative engineering teams (Yang, 2004)(McGarry, 2005). Virtual prototypes such as 3D renderings and virtual reality models do provide a good insight; yet have challenges in providing a proper perception of context, scale and proportions (Kuutti et al., 2001). Because the end result is typically a physical object, a materialization of the idea is better approachable than technical drawings or specifications and often more eco-

During the presentation, audio and video feeds of all IAP systems are captured, while interaction with the pens and navigation through the presentation are captured as notes; input events are stored as well in the segment index, similar to the Where Were We System (Minneman and Harrison,1993). The augmented prototyping technologies allow a fusion of combine audio, video, model changes, and user annotations. This results in a large, data warehouse of multimedia indexed by semantic tags (decisions, tool usage, camera switching and the like). These sessions can be inspected later to refine and reflect on the decisions and planned design activities. To our knowledge, this endeavour is the first to capture experiences by recording all channels of augmented reality sessions. In our current implementation, we have made a handheld system, based on a picoprojector, a webcam and a small UMPC tablet (Figure 10), running a customized version of ARToolkit. The AR cube the desktop version- is packed in a flightcase equipped with a ultra-short throw projector and a specialized IR tracking system (Personal Space Technologies) and a large TabletPC running VRmeer based on OpenSceneGraph. It can be placed on a table and be up and running in less than 10 minutes (Figure 11). Figure 10. Working prototype of our handheld IAP system Figure 9. Interaction concept of IAP as a design review system.

Figure 8: Envisioned benefits of the I/O Pad (n=13). The results show that almost all design and engineering firms extensively use models during design reviews, and they consider IAP to have a potential to avoid miscommunication. However, they perceive the inclusion of such new technology as time consuming which might not be worth the risk of adopting such systems.

nomical to fabricate. Based on the functions that other IAP systems portray, we hypothesized that a key element of augmentation is not just to enrich a physical mockup with additional product information but to add support for design reviews. Design reviews represent formal discussions between the stakeholders of a design process, and are key in decision making during the design process (Huet et al., 2007). Our initial concept of the IAP Design Review system was been devised to support synchronous, co-located meetings that typically do not employ advanced recording techniques. The interaction concept is shown in Figure 9: allowing to host presentations and discussions on design alternatives while using handheld and large projector-based AR systems to add information to the models as described in the previous sections.

Conclusions
Although some commercial AR solutions have emerged, most inspiration can be drawn from the IAP installations created in academia. As design support tools, these systems showcase the power of tangible computing as natural and embodied interaction. By inspecting the collection three different interaction characteristics to support design emerged: browsing, interaction with the artefacts behaviour, and alteration of the model. In assessing the impact of IAP, Dutch design studios were surveyed. Their reactions are positive; Most of the proposed hardware solutions are already available. However, software is regarded as the missing link, while the proper configuration of the overall solution concept is difficult to grasp at this moment. Based on this feedback and other empirical studies, a design review concept was developed, incorporating two different projector-based AR systems: handheld and a larger desktop model. Pilot studies and additional demonstrations show promising results. Figure 11. AR Cube in use with several 3D printed objects.

Augmenting the design discourse


Product design is never a solitary process: often fellow designers are involved in projects, while the act of design requires collaboration with clients, prospective users, marketers, manufac-


References

- Bandyopadhyay, D., Raskar, R. and Fuchs, H. (2001) 'Dynamic shader lamps: painting on movable objects', Proceedings of the International Symposium on Augmented Reality (ISMAR), pp. 207-216.
- Bimber, O., Stork, A. and Branco, P. (2001) 'Projection-based augmented engineering', Proceedings of the International Conference on Human-Computer Interaction (HCI 2001), Vol. 1, pp. 787-791.
- Bochenek, G.M., Ragusa, J.M. and Malone, L.C. (2001) 'Integrating virtual 3-D display systems into product design reviews: some insights from empirical testing', International Journal of Technology Management, Vol. 21, Nos. 3-4, pp. 340-352.
- Cao, X. and Balakrishnan, R. (2006) 'Interacting with dynamically defined information spaces using a handheld projector and a pen', Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), pp. 225-234.
- Cheok, A.D., Edmund, N.W.C. and Eng, A.W. (2002) 'Inexpensive non-sensor based augmented reality modeling of curves and surfaces in physical space', Proceedings of ISMAR '02, pp. 273-274.
- Fiorentino, M., de Amicis, R., Monno, G. and Stork, A. (2002) 'Spacedesign: a mixed reality workplace for aesthetic industrial design', Proceedings of ISMAR '02, pp. 86-96.
- Fründ, J., Gausemeier, J., Matysczok, C. and Radovski, R. (2003) 'Cooperative design support within automobile advance development using augmented reality technology', Proceedings of CSCW in Design, pp. 492-497.
- Kanai, S., Horiuchi, S., Shiroma, Y., Yokoyama, A. and Kikuta, Y. (2007) 'An integrated environment for testing and assessing the usability of information appliances using digital and physical mock-ups', Lecture Notes in Computer Science, Vol. 4563, pp. 478-487.
- Kato, H. and Billinghurst, M. (1999) 'Marker tracking and HMD calibration for a video-based augmented reality conferencing system', Proceedings of the International Workshop on Augmented Reality (IWAR '99), pp. 85-94.
- Klinker, G., Dutoit, A.H., Bauer, M., Bayer, J., Novak, V. and Matzke, D. (2002) 'Fata Morgana: a presentation system for product design', Proceedings of ISMAR '02, pp. 76-85.
- Kuutti, K., Battarbee, K., Säde, S., Mattelmäki, T., Keinonen, T., Teirikko, T. and Tornberg, A. (2001) 'Virtual prototypes in usability testing', Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS-34), 3-6 January, Vol. 5.
- McGarry, B. (2005) Things to Think With: Understanding Interactions with Artefacts in Engineering Design, PhD thesis, University of Queensland, School of Information Technology and Electrical Engineering.
- Minneman, S.L. and Harrison, S.R. (1993) 'Where were we: making and using near-synchronous, pre-narrative video', Proceedings of ACM Multimedia '93, pp. 207-214.
- Nam, T-J. (2005) 'Sketch-based rapid prototyping platform for hardware-software integrated interactive products', Proceedings of CHI '05, pp. 1689-1692.
- Nam, T-J. and Lee, W. (2003) 'Integrating hardware and software: augmented reality based prototyping method for digital products', Proceedings of CHI '03, pp. 956-957.
- Rauterberg, M., Fjeld, M., Krueger, H., Bichsel, M., Leonhardt, U. and Meier, M. (1998) 'BUILD-IT: a planning tool for construction and design', Video Program of CHI '98, pp. 177-178.
- Smulders, F.E., Lousberg, L. and Dorst, K. (2008) 'Towards different communication in collaborative design', International Journal of Managing Projects in Business, Vol. 1, No. 3, pp. 352-367.
- Underkoffler, J. and Ishii, H. (1999) 'Urp: a luminous-tangible workbench for urban planning and design', Proceedings of CHI '99, pp. 386-393.
- Van den Berg (2006) Project Light Blue, website: http://studiolab.ide.tudelft.nl/vandenberg/lightblue.html
- Verlinden, J., de Smit, A. and Horváth, I. (2004a) 'Case-based exploration of the augmented prototyping dialogue to support design', Proceedings of TMCE 2004, pp. 245-254.
- Verlinden, J., van den Esker, W., Wind, L. and Horváth, I. (2004b) 'Qualitative comparison of virtual and augmented prototyping of handheld products', Proceedings of Design 2004, pp. 533-538.
- Verlinden, J.C., de Smit, A., Horváth, I., Epema, E. and de Jong, M. (2003a) 'Time compression characteristics of the augmented prototyping pipeline', Proceedings of Euro-uRapid '03, p. A/1.
- Verlinden, J.C., de Smit, A., Peeters, A.W.J. and van Gelderen, M.H. (2003b) 'Development of a flexible augmented prototyping system', Journal of WSCG, Vol. 11, No. 3, pp. 496-503.
- Verlinden, J.C., Horváth, I. and Nam, T-J. (2009) 'Recording augmented reality experiences to capture design reviews', International Journal on Interactive Design and Manufacturing, Vol. 3, No. 3, pp. 189-200.
- Yang, M.Y. (2004) 'An examination of prototyping and design outcome', Proceedings of DETC '04, Paper No. DETC2004-57552.

Links to YouTube videos of these systems

- Bandyopadhyay et al., Dynamic Shader Lamps: http://www.youtube.com/watch?v=qfWdMZIo4Cg
- Fiorentino's Spacedesign: http://www.youtube.com/watch?v=GOMx_sytCmU
- Verlinden's AP videos: http://youtu.be/F-3BrBfigEw
- Tek-Jin Nam's sketch-based rapid prototyping: http://www.youtube.com/watch?v=WK4h9Goa7qc
- Van den Berg's Bluebrush prototype: http://www.youtube.com/watch?v=DvVl9-C2RU0
- Fjeld et al.'s Build-It video for CHI '98: http://www.ibiblio.org/openvideo/video/chi/chi98_03_m1.mpg
- Underkoffler's URP and other Luminous Room demos: http://vimeo.com/2235474


Image by Oculus VR

The revival of Virtual Reality?


Wim van Eck
In the early 90s, virtual reality (VR) was the next big thing. Popularised by movies such as The Lawnmower Man, it was expected that soon everybody would be immersed in virtual worlds using head-mounted displays (HMDs) and CAVEs. The industry invested enormous amounts of money, but never managed to deliver affordable hardware that lived up to people's high expectations. The technology simply wasn't ready yet. VR soon became a niche product, only used by industries that could afford it, such as the army and research institutes. Since Augmented Reality (which is even more technologically demanding) seems to have been widely accepted, we can safely say goodbye to VR. Or can we? In August this year, a VR enthusiast named Palmer Luckey started a Kickstarter project which promises a truly immersive virtual reality headset, the Oculus Rift. While the best current consumer headsets only offer a 45 degree horizontal and a 52 degree diagonal field of view, Palmer's design succeeds in offering a stunning 90 degrees horizontal and 110 degrees diagonal field of view. This means that instead of seeing the equivalent of a large screen at a couple of meters distance, you can hardly see the edges of the screen anymore. Combined with an ultra-low latency head tracker with 6 degrees of freedom, you should finally be able to experience VR as it was originally meant to be, at an affordable price of $275. Palmer's Kickstarter project was quickly backed by some of the most influential game developers, such as John Carmack (id Software), Gabe Newell (Valve) and Cliff Bleszinski (Epic Games), who were completely sold after experiencing an early prototype of the Rift. After a month the project was backed by almost 10,000 people and raised $2,437,430, nearly ten times the $250,000 goal.
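To put those field-of-view numbers in perspective: the width of a flat screen that would exactly fill a given horizontal FOV at a given viewing distance follows from basic trigonometry (width = 2 · d · tan(FOV/2)). A minimal sketch of that calculation; the function name and the two-meter viewing distance are our own illustrative choices, not figures from the Rift's specifications:

```python
import math

def apparent_width(fov_deg, distance_m):
    """Width (in meters) of a flat screen that fills a horizontal
    field of view of `fov_deg` degrees at `distance_m` meters."""
    return 2 * distance_m * math.tan(math.radians(fov_deg) / 2)

# At 2 m, a 45-degree FOV corresponds to a screen about 1.66 m wide;
# doubling the FOV to 90 degrees more than doubles that, to 4 m.
print(round(apparent_width(45, 2.0), 2))  # 1.66
print(round(apparent_width(90, 2.0), 2))  # 4.0
```

Because the tangent grows faster than its angle, going from 45 to 90 degrees widens the apparent screen by well over a factor of two, which is why the Rift's display seems to have no visible edges.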
The first batch of headsets is mostly meant for developers, so they can already develop projects for it, while a further improved consumer version of the Rift is expected to go on sale at the end of 2013. Popular game engines such as Unity and the Unreal Engine have promised support for the Rift, and the first compatible games are currently being announced. Many are already saying the Rift will be the next big innovation in gaming.

At the AR Lab, we are also very curious about this headset. Not only for VR purposes, but also to see if we can turn it into an Augmented Reality headset, just like how we turned the Sony HMZ-T1 into Marty (see page 14-17 of this magazine). We backed the project and we will receive two prototypes of the headset in December. We will keep you posted on our experiences!

Website: http://oculusvr.com


Contributors
Wim van Eck
Royal Academy of Art (KABK) w.vaneck@kabk.nl

Maarten Lamers
Leiden University lamers@liacs.nl

Esmé Vahrmeijer
Royal Academy of Art (KABK) e.vahrmeijer@kabk.nl

Vincent Hui
Ryerson University, Department of Architectural Science

Vincent Hui teaches a variety of courses within the Department of Architectural Science at Ryerson University in Toronto, Canada, ranging from design studios to advanced architectural computing and digital fabrication.

Wim van Eck is the 3D animation specialist of the AR Lab. His main tasks are developing Augmented Reality projects, supporting and supervising students and creating 3D content. His interests are, among others, real-time 3D animation, game design and creative research.

Maarten Lamers is assistant professor at the Leiden Institute of Advanced Computer Science (LIACS) and board member of the Media Technology MSc program. Specializations include social robotics, bio-hybrid computer games, scientific creativity, and models for perceptualization.

Esmé Vahrmeijer is the graphic designer and webmaster of the AR Lab. Besides her work at the AR Lab, she is a part-time student at the Royal Academy of Art (KABK) and runs her own graphic design studio, Ooxo. Her interests are in graphic design, typography, web design, photography and education.

Edwin van der Heide


Leiden University evdheide@liacs.nl

Barbara Nordhjem
http://nordhjem.net

Ferenc Molnár
Photographer info@baseground.nl

Jouke Verlinden
Delft University of Technology j.c.verlinden@tudelft.nl

Edwin van der Heide is an artist and researcher in the field of sound, space and interaction. Besides running his own studio, he is a part-time assistant professor at Leiden University (LIACS / Media Technology MSc programme) and heads the Spatial Interaction Lab at the ArtScience Interfaculty of the Royal Conservatoire and Arts Academy in The Hague.

Ferenc Molnár is a multimedia artist based in The Hague since 1991. In 2006 he returned to the KABK to study photography, and that's where he started to experiment with AR. His focus is on the possibilities and the impact of this new technology as a communication platform in our visual culture.

Jouke Verlinden is assistant professor at the section of computer aided design engineering at the Faculty of Industrial Design Engineering. With a background in virtual reality and interaction design, he leads the Augmented Matter in Context lab, which focuses on the blend between bits and atoms for design and creativity.

Barbara Nordhjem is a PhD student at the Visual Neuroscience Group at the University Medical Center in Groningen. She is interested in visual perception and how humans are able to extract the most useful information from the environment in different situations.

Martin Sjardijn
http://www.sjardijn.com/

Pieter Jonker
Delft University of Technology P.P.Jonker@tudelft.nl

Maaike Roozenburg
Royal Academy of Art (KABK) m.roozenburg@kabk.nl

Dirk Vis
Royal Academy of Art (KABK) dirkvis@arlab.nl

Martin Sjardijn is a painter, sculptor, digital and conceptual artist. He was born in The Hague, The Netherlands, where he also studied Fine Arts at the Royal Academy of Art and, for some years, Cultural Sciences and Philosophy. At the Royal Academy of Art he teaches virtual modeling for autonomous art.

Pieter Jonker is Professor at Delft University of Technology, Faculty of Mechanical, Maritime and Materials Engineering (3mE). His main interests and fields of research are real-time embedded image processing, parallel image processing architectures, robot vision, robot learning and Augmented Reality.

Yolande Kolstee
Royal Academy of Art (KABK) Y.Kolstee@kabk.nl

Maaike Roozenburg is a designer. She develops concepts, projects and products on the border of heritage, design and visual communication. She founded Studio Maaike Roozenburg, in which she combines a fascination for history with high-tech materials and techniques and traditional crafts. Besides her work at the studio, Roozenburg teaches at the Post Graduate Course of Industrial Design at the Royal Academy of Art.

Dirk Vis (1981) studied Beeld & Taal (Image & Language) at the Rietveld Academy and Design at the Sandberg Institute. He published Bestseller (2009) in limited edition, made electronic poems with K. Michel, is editor of De Gids and teaches Interactive Media at the Royal Academy of Fine Arts.

Special thanks
We would like to thank Mariana Kniveton, Reba Wesdorp, Tama McGlinn and last but not least the Stichting Innovatie Alliantie (SIA) and the RAAK (Regionale Aandacht en Actie voor Kenniscirculatie) initiative of the Dutch Ministry of Education, Culture and Science.

Guest Contributors

Esther de Graaff


www.estherdegraaff.nl

Hanna Schraffenberger
Leiden University hkschraf@liacs.nl

Hanna Schraffenberger works as a researcher and PhD student at the Leiden Institute of Advanced Computer Science (LIACS) and at the AR Lab in The Hague. Her research interests include interaction in interactive art and (non-visual) Augmented Reality.

Yolande Kolstee has been head of the AR Lab since 2006. She holds the post of Lector (Dutch for researcher at a professional university) in the field of Innovative Visualisation Techniques in higher Art Education at the Royal Academy of Art, The Hague.

Esther de Graaff studied art history with a focus on contemporary art. She now works as a project manager, curator, writer and coordinator, supporting artists, curators and institutions in organizing various projects, both in content and in organization. It is her mission to bring culture to the heart of our lives.

Next Issue
The next issue of AR[t] will be out in the second quarter of 2013.

