
Augmented Reality

Ankit Bhatnagar (0702EC081004) Bharat Dhanotiya (0702EC081010) Bhawik Kotadia (0702EC081012)

Augmented Reality

Augmented Reality Definition
Augmented Reality vs. Virtual Reality
Visual Display Systems for AR
Video Keying and Image Registration
System Design Issues
Augmented Reality Applications

Augmented Reality Definition


Augmented Reality is a growing area within virtual reality research. An Augmented Reality system generates a composite view for the user: a combination of the real scene viewed by the user and a virtual scene generated by the computer, which augments the real scene with additional information.

Augmented Reality Definition


Typically, the real-world visual scene in an AR display is captured by video or directly viewed. Most current AR displays are designed using see-through HMDs, which allow the observer to view the real world directly with the naked eye. If video is used to capture the real world, one may use either an opaque HMD or a screen-based system to view the scene.

AR vs. VR
Virtual Reality: a computer-generated, interactive, three-dimensional environment in which a person is immersed. (Aukstakalnis and Blatner, 1992)
A virtual environment is a computer-generated three-dimensional scene which requires high-performance computer graphics to provide an adequate level of realism. The virtual world is interactive: a user requires real-time response from the system to be able to interact with it in an effective manner. The user is immersed in this virtual environment.

AR vs. VR
VR: the user is completely immersed in an artificial world and becomes divorced from the real environment. The generated world consists entirely of computer graphics. VR strives for a totally immersive environment: the visual, and in some systems the aural, senses are under the control of the system. In contrast, an AR system augments the real-world scene, so the user maintains a sense of presence in that world. The virtual images are merged with the real view to create the augmented display.

AR vs. VR
For some applications, it may be desirable to use as much of the real world as possible in the scene, rather than creating a new scene entirely from computer imagery. For example, in medical applications the physician must view the patient to perform surgery, and in telerobotics the operator must view the remote scene in order to perform tasks. A main motivation for the use of AR relates to the computational resources necessary to generate and update a computer-generated scene. In VR, the more complex the scene, the more computational resources are needed to render it. AR can maintain the high level of detail and realistic shading that one finds in the real world.

AR vs. VR
No simulator sickness. Vertigo and dizziness introduced by sensory mismatch within the display environment can be a problem when one uses an HMD to view a virtual world. If the task is only to show annotations over the real world, this mismatch is much less of a problem because the real scene remains the user's visual reference.

Visual Display System for AR


Hardware for displaying visual images
A position and orientation sensing system
Hardware for combining the computer graphics and video images into one signal
The associated system software

Visual Display System for AR


There are two main ways in which the real world and the computer-generated imagery may be combined to form an augmented scene.
Direct viewing of the real world with overlaid computer-generated imagery as an enhancement. In this case, the real world and the CG images are combined optically.
Combining camera-captured video of the real world with CG imagery, viewed using either an opaque HMD or a screen-based display system.

Visual Display System for AR


There are two basic types of AR system:
Opaque HMD or screen-based AR. These systems can be used to view local or remote video views of real-world scenes, combined with overlaid CG. The viewing of a remote scene is an integral component of telepresence applications.
Transparent HMD AR. This system allows the observer to view the real world directly, using half-silvered mirrors with CG electronically composited into the image. An advantage is that the real world can be directly viewed and manipulated.

Visual Display System for AR

Video Keying
Relevant when an opaque HMD with video input is used to create an AR scene. Video and synthetic images are mixed using a video keyer to form an integrated scene. Video keying is a process widely used in television, film production, and CG (e.g., weather reports). When using video keying to design AR scenes, one signal contains the foreground image and the other contains the background image. The keyer combines the two signals to produce a composited video output.

Video Keying
Keying can be done using composite or component video signals.
A composite video signal contains information about color, luminance, and synchronization, combining three pieces of information into one signal. With component video, luminance and synchronization are combined, but chroma information is delivered separately.

Video Keying
Chroma keying involves specifying a desired foreground key color. Foreground areas containing the keying color are then electronically replaced with the background image, so the background shows through the foreground wherever the foreground contains the key color. Blue is typically used for chroma keying (chroma-key blue) because it rarely shows up in human skin tones.
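
As a rough illustration of the idea, here is a minimal chroma-key sketch over two image arrays. The NumPy representation, the key color, and the tolerance value are illustrative assumptions, not part of any particular AR system.

```python
import numpy as np

def chroma_key(foreground, background, key_rgb=(0, 0, 255), tolerance=80):
    """Replace foreground pixels close to the key color with background pixels.

    foreground, background: H x W x 3 uint8 arrays of the same shape.
    key_rgb: the chroma-key color (chroma-key blue here).
    tolerance: maximum RGB distance still treated as the key color (assumed value).
    """
    fg = foreground.astype(np.int16)
    key = np.array(key_rgb, dtype=np.int16)
    dist = np.linalg.norm(fg - key, axis=-1)   # per-pixel distance from the key color
    mask = dist < tolerance                    # True where the key color is detected
    out = foreground.copy()
    out[mask] = background[mask]               # background shows through keyed areas
    return out
```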

Video Keying
If a video image of the real world is chosen as the foreground image, parts of the scene that should show the computer-generated world must appear in blue. In contrast, if video of the real world is chosen as the background image, the computer-generated environment will be located in the foreground.

Video Keying
A luminance keyer works in a similar manner to a chroma keyer; however, a luminance keyer inserts the background image wherever the foreground luminance values are below a certain threshold. Luminance and chroma keyers accomplish the same function, but a chroma keyer can produce a sharper key and offers greater flexibility, whereas a luminance keyer typically gives lower resolution and less flexibility.
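
A matching sketch of a luminance key under the same illustrative assumptions (NumPy arrays, an arbitrary threshold): wherever the foreground luminance falls below the threshold, the background pixel is used instead.

```python
import numpy as np

def luminance_key(foreground, background, threshold=40):
    """Insert background pixels wherever foreground luminance is below threshold."""
    # Approximate luminance from RGB using Rec. 601 weights.
    luma = (0.299 * foreground[..., 0] +
            0.587 * foreground[..., 1] +
            0.114 * foreground[..., 2])
    mask = luma < threshold
    out = foreground.copy()
    out[mask] = background[mask]
    return out
```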

Z-keying
[Figure: schema of the z-key method]
Z-keying
The figure above is a schema of the z-key method. The z-key method requires that both input images carry depth information (a depth map). The z-key switch compares the depth of the two images at each pixel and connects the output to whichever image is nearer to the camera at that pixel. As a result, real and virtual objects can occlude each other correctly. This kind of merging is impossible with the chroma-key method, even when it is supplemented by positioning devices such as magnetic or acoustic sensors, since those devices provide only a gross measurement of position.
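
The per-pixel comparison can be sketched as follows; the array-based depth maps and the tie-break toward the real image are illustrative assumptions, since real z-keying hardware operates on video signals.

```python
import numpy as np

def z_key(real_img, real_depth, virtual_img, virtual_depth):
    """For each pixel, output the image whose surface is nearer to the camera.

    real_depth, virtual_depth: H x W arrays of distance from the camera
    (smaller means nearer). Correct mutual occlusion falls out of the comparison.
    """
    nearer_real = real_depth <= virtual_depth        # True where the real pixel wins
    return np.where(nearer_real[..., None], real_img, virtual_img)
```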

Image Registration
It is required that the computer-generated images accurately register with the surroundings in the real world. In certain applications, image registration is crucial. In terms of developing scenes for AR displays, the problem of image registration, i.e. positioning the synthetic objects within the scene in relation to real objects, is both a difficult and an important technical problem to solve.

Image Registration
With applications that require close registration, accurate depth information has to be retrieved from the real world in order to carry out the calibration of the real and synthetic environments. Without accurate knowledge of the geometry of the real world and the computer-generated scene, exact registration is not possible.
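
One way to make the geometric requirement concrete is the pinhole projection used to place a virtual point over the real scene. The intrinsic matrix, pose values, and helper function below are assumed for illustration; they do not come from the slides.

```python
import numpy as np

def project_point(point_world, R, t, K):
    """Project a 3-D world point into pixel coordinates.

    R, t: rotation (3x3) and translation (3,) of the world-to-camera transform,
          typically supplied by the head tracker and calibration.
    K:    3x3 camera intrinsic matrix from calibration.
    """
    p_cam = R @ point_world + t      # world -> camera coordinates
    p_img = K @ p_cam                # camera -> homogeneous image coordinates
    return p_img[:2] / p_img[2]      # perspective divide -> pixel (u, v)

# Assumed example: a 640x480 camera with a 500-pixel focal length,
# camera at the world origin, virtual object 2 m in front of it.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
print(project_point(np.array([0.1, 0.0, 2.0]), np.eye(3), np.zeros(3), K))
```

If the tracked pose R, t or the calibration K is wrong, the projected pixel moves and the synthetic object slides off its real-world anchor, which is exactly the registration error described above.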

System Design Issues


The main issues are frame rate, update rate, system delays, and the range and sensitivity of the tracking sensors. Frame rate is a hardware-controlled variable determining the number of images presented to the eye per second. AR displays which show stereo images alternately to the left and right eye typically use a scan-rate doubler to transmit 120 frames per second, so that each eye has an effective frame rate of 60 Hz.

System Design Issues


Sensor sensitivity. The head-tracking requirements for AR displays are demanding: a tracker must be accurate to a small fraction of a degree in orientation and a few millimeters in position. Errors in head orientation (pitch, roll, yaw) affect image registration more than errors in position (x, y, z), leading to the more stringent requirements for head-orientation tracking. Positional tracking errors of no more than 1 to 2 mm are the maximum tolerable for an AR system.
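
To see why orientation errors dominate, a back-of-the-envelope calculation (with an assumed angular error and viewing distance) converts a head-orientation error into displacement of the overlaid graphics.

```python
import math

def registration_offset(angular_error_deg, distance_m):
    """Lateral misregistration caused by a head-orientation error,
    for an object at the given distance from the viewer."""
    return distance_m * math.tan(math.radians(angular_error_deg))

# Assumed example: a 0.5 degree orientation error for an object 2 m away
# shifts the overlay by roughly 17 mm, far larger than a 1-2 mm position error.
print(registration_offset(0.5, 2.0) * 1000)  # offset in millimetres
```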

System Design Issues


Integrated Mental Model
[Figure: the user integrates a mental model of the synthetic environment, driven by virtual-world stimuli (auditory, haptic, visual), with a mental model of the real environment, driven by real-world stimuli (auditory, haptic, visual).]

Augmented Reality Applications
Medical
Entertainment
Military Training
Engineering Design
Robotics and Telerobotics
Manufacture, Maintenance, and Repair
Consumer Design

References
http://www.cs.rit.edu/~jrv/research/ar/
Virtual Environments and Advanced Interface Design, edited by Woodrow Barfield and Thomas A. Furness III
