
PHYSICS ASSIGNMENT

NICOL PRISM

A Nicol prism is a type of polarizer, an optical device used to produce a polarized beam of light from an unpolarized beam.

It is made in such a way that it eliminates one of the two refracted rays by total internal reflection: the O-ray is eliminated and only the E-ray is transmitted through the prism. It was the first type of polarizing prism to be invented, in 1828 by William Nicol (1770–1851) of Edinburgh. It consists of a rhombohedral crystal of Iceland spar (a variety of calcite) that has been cut at an angle of 68° with respect to the crystal axis, cut again diagonally, and then rejoined as shown using, as a glue, a layer of transparent Canada balsam.

Unpolarized light enters through the left face of the crystal, as shown in the diagram, and is split into two orthogonally polarized, differently directed, rays by the birefringence property of the calcite. One of these rays (the ordinary or o-ray) experiences a refractive index of no = 1.658 in the calcite, and it undergoes total internal reflection at the calcite–glue interface because its angle of incidence at the glue layer (refractive index n = 1.55) exceeds the critical angle for the interface. It passes out the top side of the upper half of the prism with some refraction, as shown. The other ray (the extraordinary ray or e-ray) experiences a lower refractive index (ne = 1.486) in the calcite and is not totally reflected at the interface because it strikes the interface at a sub-critical angle. The e-ray merely undergoes a slight refraction, or bending, as it passes through the interface into the lower half of the prism. It finally leaves the prism as a ray of plane polarized light, undergoing another refraction as it exits the far right side of the prism.

The two exiting rays have polarizations orthogonal (at right angles) to each other, but the lower, or e-ray, is the more commonly used for further experimentation because it is again traveling in the original horizontal direction, assuming that the calcite prism angles have been properly cut. The direction of the upper ray, or o-ray, is quite different from its original direction because it alone suffers total internal reflection at the glue interface as well as a final refraction on exit from the upper side of the prism. Nicol prisms were once widely used in microscopy and polarimetry, and the term "using crossed Nicols" (abbreviated as XN) is still used to refer to the observing of a sample placed between orthogonally oriented polarizers. In most instruments, however, Nicol prisms have been replaced by other types of polarizers such as Polaroid sheets and Glan–Thompson prisms.

Basic Principle

The working of the Nicol prism is based on the behaviour of light incident on its surface. When an ordinary ray of light is passed through a calcite crystal, it is broken up into two rays:

an ordinary ray (O-ray), which is polarized and has its vibrations perpendicular to the principal section of the crystal, and an extraordinary ray (E-ray), which is polarized and whose vibrations are parallel to the principal section of the prism. If, by some optical means, one of the two rays is eliminated, the ray emerging through the crystal will be plane polarized. In the Nicol prism the ordinary ray is eliminated, and the extraordinary ray, which is plane polarized, is transmitted through the prism.

Construction

A calcite crystal whose length is three times its breadth is taken. Let ADFGBC be such a crystal having ABCD as a principal section of the crystal with ∠BAD = 71°.

The end faces of the crystal are cut in such a way that they make angles of 68° and 112° in the principal section instead of 71° and 109°. The crystal is then cut into two pieces from one blunt corner to the other along a plane perpendicular to both the principal section and the end faces, and the two halves are cemented back together with a layer of Canada balsam. The relevant refractive indices are:
1. Refractive index of calcite for the O-ray, no = 1.658
2. Refractive index of Canada balsam, n = 1.55
3. Refractive index of calcite for the E-ray, ne = 1.486

Thus we see that the Canada balsam is optically denser than calcite for the E-ray and rarer for the O-ray. Finally, the crystal is enclosed in a tube blackened inside.

Unpolarised light incidence

When a ray SM of unpolarised light parallel to the face AD is incident on the face AB of the prism, it splits up into two refracted rays, the ordinary ray and the extraordinary ray. Both the O-ray and the E-ray are plane polarized, the vibrations of the O-ray being perpendicular to the principal section of the crystal, while those of the E-ray lie in the principal section. The ordinary ray, in going from calcite to Canada balsam, travels from an optically denser medium to a rarer one.

As the length of the calcite crystal is large, the angle of incidence at the calcite–balsam surface for the ordinary ray is greater than the critical angle. Therefore, when the O-ray is incident on the calcite–balsam surface it is totally reflected and is finally absorbed by the side AD, which is blackened. The extraordinary ray travels from an optically rarer medium to a denser medium; it is therefore not affected by the calcite–balsam surface and is transmitted through the prism. This E-ray is plane polarized, with vibrations in the principal section parallel to the shorter diagonal of the end face of the crystal. Thus, by means of the Nicol prism we are able to get a single beam of plane polarized light, and the Nicol prism can be used as a polarizer.

Limitations

When the angle of incidence at the crystal surface is increased, the angle of incidence at the calcite–balsam surface decreases. When the angle S0MS becomes greater than 14°, the angle of incidence at the calcite–balsam surface becomes less than the critical angle. In this position the ordinary ray is also transmitted through

the prism along with the extraordinary ray, so the light emerging from the Nicol prism will not be plane polarized. When the angle of incidence at the crystal surface is decreased, the extraordinary ray makes a smaller angle with the optic axis and, as a result, its refractive index increases, because the refractive index of calcite for the E-ray is different in different directions through the crystal, being minimum when the E-ray travels at right angles to the optic axis and equal to that of the O-ray when it travels along the optic axis. The E-ray may then also suffer total internal reflection at the balsam layer, and no light emerges from the prism.
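To make the total internal reflection condition concrete, the following short Python sketch (not part of the original assignment) computes the critical angle at the calcite–balsam interface from the refractive indices quoted above.

```python
import math

# Refractive indices quoted earlier in this section
n_o = 1.658       # calcite, ordinary ray
n_e = 1.486       # calcite, extraordinary ray
n_balsam = 1.55   # Canada balsam layer

# A critical angle exists only when light travels from the denser to the rarer
# medium. For the O-ray (calcite denser than balsam), total internal reflection
# occurs once the angle of incidence at the balsam layer exceeds this value.
theta_c = math.degrees(math.asin(n_balsam / n_o))
print(f"O-ray critical angle at the calcite-balsam interface: {theta_c:.1f} deg")

# For the E-ray the balsam is the denser medium (1.55 > 1.486), so no critical
# angle exists and the E-ray is simply refracted through the balsam layer.
print("E-ray: n_balsam > n_e, so total internal reflection cannot occur")
```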

Nicol prism as analyzer

Consider two Nicol prisms arranged coaxially one after another. When a beam of unpolarized light is incident on the first prism P, the emergent beam is plane polarized, with its vibrations in the principal section of the first prism; this prism is called the polarizer. When the principal sections of both prisms are parallel, the intensity of the emergent light is maximum. But when the principal sections are at right angles to each other, the intensity of the emergent light is minimum, i.e., no light is transmitted through the second prism. Here the first prism produces plane polarized light and the second prism detects and analyses it, so the second prism acts as the analyzer.
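The dependence of the transmitted intensity on the angle between the two principal sections follows Malus's law, which the text does not state explicitly. A minimal illustrative sketch, assuming an arbitrary input intensity:

```python
import math

def transmitted_intensity(i0, angle_deg):
    """Malus's law: intensity passed by the analyzer when its principal
    section makes angle_deg with that of the polarizer."""
    return i0 * math.cos(math.radians(angle_deg)) ** 2

i0 = 1.0  # relative intensity of the plane polarized light leaving the polarizer
for angle in (0, 30, 45, 60, 90):
    print(f"{angle:3d} deg -> relative transmitted intensity {transmitted_intensity(i0, angle):.2f}")
# 0 deg  (parallel principal sections) -> maximum transmission
# 90 deg (crossed Nicols)              -> essentially no light transmitted
```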

STIMULATED EMISSION

In optics, stimulated emission is the process by which an atomic electron (or an excited molecular state) interacting with an electromagnetic wave of a certain frequency may drop to a lower energy level, transferring its energy to that field. A photon created in this manner has the same phase, frequency, polarization, and direction of travel as the photons of the incident wave. This is in contrast to spontaneous emission, which occurs without regard to the ambient electromagnetic field. However, the process is identical in form to atomic absorption, in which the energy of an absorbed photon causes an identical but opposite atomic transition: from the lower level to a higher energy level. In normal media at thermal equilibrium, absorption exceeds stimulated emission because there are more electrons in the lower energy states than in the higher energy states. However, when a population inversion is present, the rate of stimulated emission exceeds that of absorption, and a net optical amplification can be achieved. Such a gain medium, along with an optical resonator, is at the heart of a laser or maser. Although they lack a feedback mechanism, laser amplifiers and superluminescent sources also function on the basis of stimulated emission. Stimulated emission was a theoretical discovery by Einstein[1] within the framework of quantum mechanics, wherein the emission is described in terms of photons that are the quanta of the EM field. Stimulated emission can also be described classically, however, without reference to either photons or the quantum mechanics of matter.[2]

Overview

Electrons and how they interact with electromagnetic fields are important in our understanding of chemistry and physics. In the classical view, the energy of an electron orbiting an atomic nucleus is larger for orbits further from the nucleus of an atom. However, quantum mechanical effects force electrons to take on discrete positions in orbitals. Thus, electrons are found in specific energy levels of an atom, two of which are shown below:

When an electron absorbs energy either from light (photons) or heat (phonons), it receives that incident quantum of energy. But transitions are only allowed between discrete energy levels such as the two shown above. This leads to emission lines and absorption lines. When an electron is excited from a lower to a higher energy level, it will not stay that way forever. An electron in an excited state may decay to a lower energy state which is not occupied, according to a particular time constant characterizing that transition. When such an electron decays without external influence, emitting a photon, that is called "spontaneous emission". The phase associated with the photon that is emitted is random. A material with many atoms in such an excited state may thus result in radiation which is very spectrally limited (centered around one wavelength of light), but the individual photons would have no common phase relationship and would emanate in random directions. This is the mechanism of fluorescence and thermal emission. An external electromagnetic field at a frequency associated with a transition can affect the quantum mechanical state of the atom. As the electron in the atom makes a transition between two stationary states (neither of which shows a dipole field), it enters a transition state which does have a dipole field, and which acts like a small electric dipole, and this dipole oscillates at a characteristic frequency. In response to the external electric field at this frequency, the probability of the atom entering this transition state is greatly increased. Thus, the rate of transitions between two stationary states is enhanced beyond that due to spontaneous emission. Such a transition to the higher state is called absorption, and it destroys an incident photon (the photon's energy goes into powering the increased energy of the higher state). A transition from the higher

to a lower energy state, however, produces an additional photon; this is the process of stimulated emission.
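A rough way to see why absorption dominates at thermal equilibrium is to compare level populations with the Boltzmann factor. The sketch below is illustrative only; the 2 eV energy gap and room temperature are assumed values, not taken from the text.

```python
import math

k_B = 8.617e-5   # Boltzmann constant, eV/K
delta_E = 2.0    # assumed energy gap of a visible transition, eV (not from the text)
T = 300.0        # assumed temperature, K

# At thermal equilibrium the ratio of upper- to lower-level populations is the
# Boltzmann factor. For an optical transition it is vanishingly small, which is
# why absorption far outweighs stimulated emission without a population inversion.
ratio = math.exp(-delta_E / (k_B * T))
print(f"N_upper / N_lower at {T:.0f} K for a {delta_E} eV gap: {ratio:.1e}")
```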

SPONTANEOUS EMISSION

Spontaneous emission is the process by which a light source such as an atom, molecule, nanocrystal or nucleus in an excited state undergoes a transition to a state with a lower energy (e.g., the ground state) and emits a photon. Spontaneous emission of light, or luminescence, is a fundamental process that plays an essential role in many phenomena in nature and forms the basis of many applications, such as fluorescent tubes, older television screens (cathode ray tubes), plasma display panels, lasers (for startup; normal continuous operation works by stimulated emission instead) and light emitting diodes.

Theory

Spontaneous transitions could not be explained within the framework of the old quantum theory, that is, a theory in which the atomic levels are quantized but the electromagnetic field is not. In fact, using the machinery of so-called "first-quantized" quantum mechanics and computing the probability of spontaneous transitions from one stationary state to another, one finds that it is zero. In order to explain spontaneous transitions, quantum mechanics must be extended to a "second-quantized" theory, wherein the electromagnetic field is quantized at every point in space. Such a theory is known as a quantum field theory; the quantum field theory of electrons and electromagnetic fields is known as quantum electrodynamics.

In quantum electrodynamics (or QED), the electromagnetic field has a ground state, the QED vacuum, which can mix with the excited stationary states of the atom (for more information, see Ref. [2]). As a result of this interaction, the "stationary state" of the atom is no longer a true eigenstate of the combined system of the atom plus electromagnetic field. In particular, the electron transition from the excited state to the electronic ground state mixes with the transition of the electromagnetic field from the ground state to an excited state, a field state with one photon in it. Spontaneous emission in free space depends upon vacuum fluctuations to get started.[2][3] Although there is only one electronic transition from the excited state to ground state, there are many ways in which the electromagnetic field may go from the ground state to a one-photon state. That is, the electromagnetic field has infinitely more degrees of freedom, corresponding to the different directions in which the photon can be emitted. Equivalently, one might say that the phase space offered by the electromagnetic field is infinitely larger than that offered by the atom. Since one must consider probabilities that occupy all of phase space equally, the combined system of atom plus electromagnetic field must undergo a transition from electronic excitation to a photonic excitation; the atom must decay by spontaneous emission. The time the light source remains in the excited state thus depends on the light source itself as well as its environment. Imagine trying to hold a pencil upright on the end of your finger. It will stay there if your hand is perfectly stable and nothing perturbs the equilibrium. But the slightest perturbation will make the pencil fall into a more stable equilibrium position. Similarly, vacuum fluctuations cause an excited atom to fall into its ground state. In spectroscopy one can frequently find that atoms or molecules in the excited states dissipate their energy in the absence of any external source of photons. This is not spontaneous emission, but is actually nonradiative relaxation of the atoms or molecules caused by the fluctuation of the surrounding molecules present inside the bulk.

RUBY LASER A ruby laser is a solid-state laser that uses a synthetic ruby crystal as its gain medium. The first working laser was a ruby laser made by Theodore H. "Ted" Maiman at Hughes Research Laboratories on May 16, 1960.[1][2] Ruby lasers produce pulses of visible light at a wavelength of 694.3 nm, which is a deep red color. Typical ruby laser pulse lengths are on the order of a millisecond.

Design

A ruby laser most often consists of a ruby rod that must be pumped with very high energy, usually from a flashtube, to achieve a population inversion. The rod is often placed between two mirrors, forming an optical cavity, which oscillates the light produced by the ruby's fluorescence, causing stimulated emission. Ruby is one of the few solid-state lasers that produce light in the visible range of the spectrum, lasing at 694.3 nanometers, in a deep red color, with a very narrow linewidth of 0.53 nm.[3] The ruby laser is a three-level solid-state laser. The active laser medium (laser gain/amplification medium) is a synthetic ruby rod that is energized through optical pumping, typically by a xenon flashtube. Ruby has very broad and powerful absorption bands in the visual spectrum, at 400 and 550 nm, and a very long fluorescence lifetime of 3 milliseconds. This allows for very high energy pumping, since the pulse duration can be much longer than with other materials. While ruby has a very wide absorption profile, its conversion efficiency is much lower than that of other media.[3]
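As a quick numerical check of this three-level picture, the photon energies for the 694.3 nm output and the 400 nm and 550 nm pump bands quoted above can be computed from E = hc/λ. The short sketch below is illustrative and not part of the original text.

```python
# Photon energy E = h*c/lambda for the wavelengths quoted in the text.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

for label, wavelength_nm in (("ruby output", 694.3),
                             ("blue pump band", 400.0),
                             ("green pump band", 550.0)):
    energy_eV = h * c / (wavelength_nm * 1e-9) / eV
    print(f"{label:16s} {wavelength_nm:6.1f} nm -> {energy_eV:.2f} eV")

# The pump photons (roughly 2.3-3.1 eV) lie well above the ~1.8 eV laser photon;
# the excess energy is released in fast non-radiative decay to the upper laser level.
```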

In early examples, the rod's ends had to be polished with great precision, such that the ends of the rod were flat to within a quarter of a wavelength of the output light, and parallel to each other to within a few seconds of arc. The finely polished ends of the rod were silvered; one end completely, the other only partially. The rod, with its reflective ends, then acts as a Fabry–Pérot etalon (or a Gires–Tournois etalon). Modern lasers often use rods with antireflection coatings, or with the ends cut and polished at Brewster's angle instead. This eliminates the reflections from the ends of the rod. External dielectric mirrors then are used to form the optical cavity. Curved mirrors are typically used to relax the alignment tolerances and to form a stable resonator, often compensating for thermal lensing of the rod.[3][4] Ruby also absorbs some of the light at its lasing wavelength. To overcome this absorption, the entire length of the rod needs to be pumped, leaving no shaded areas near the mountings. The active part of the ruby is the dopant, which consists of chromium ions suspended in a sapphire crystal. The dopant often comprises around 0.05% of the crystal, and is responsible for all of the absorption and emission of radiation. Depending on the concentration of the dopant, synthetic ruby usually comes in either pink or red.[3][4]

Applications

One of the first applications for the ruby laser was in rangefinding. By 1964, ruby lasers with rotating-prism Q-switches became the standard for military rangefinders, until the introduction of more efficient Nd:YAG rangefinders a decade later. Ruby lasers were used mainly in research.[5] The ruby laser was the first laser used to optically pump tunable dye lasers and is particularly well suited to excite laser dyes emitting in the near infrared.[6] Ruby lasers are rarely used in industry, mainly due to low efficiency and low repetition rates. One of the main industrial uses is drilling holes through diamond.[5] Ruby lasers have declined in use with the discovery of better lasing media. They are still used in a number of applications where short pulses of red light are required. Holographers around the world produce holographic portraits with ruby lasers, in sizes up to a meter square. Because of its high pulsed power and good coherence length, the red 694 nm laser light is preferred to the 532 nm green light of frequency-doubled Nd:YAG, which often requires multiple pulses for large holograms.[7] Many non-destructive testing labs use ruby lasers to create holograms of large objects such as aircraft tires to look for weaknesses in the lining. Ruby lasers were used extensively in tattoo and hair removal, but are being replaced by alexandrite and Nd:YAG lasers in this application.

HELIUM NEON LASER

A helium–neon laser, or HeNe laser, is a type of gas laser whose gain medium consists of a mixture of helium and neon inside of a small bore capillary tube, usually excited by a DC electrical discharge.

Construction and operation

The gain medium of the laser, as suggested by its name, is a mixture of helium and neon gases, in approximately a 10:1 ratio, contained at low pressure in a glass envelope. The gas mixture is mostly helium, so that helium atoms can be excited. The excited helium atoms collide with neon atoms, exciting some of them to the state that radiates 632.8 nm. Without helium, the neon atoms would be excited mostly to lower excited states responsible for non-laser lines. A neon laser with no helium can be constructed, but it is much more difficult without this means of energy coupling. Therefore, a HeNe laser that has lost enough of its helium (e.g., due to diffusion through the seals or glass) will most likely not lase at all, since the pumping efficiency will be too low.[5] The energy or pump source of the laser is provided by a high-voltage electrical discharge passed through the gas between electrodes (anode and cathode) within the tube. A DC current of 3 to 20 mA is typically required for CW operation. The optical cavity of the laser usually consists of two concave mirrors, or one plane and one concave mirror: one having very high (typically 99.9%) reflectance, and the output coupler mirror allowing approximately 1% transmission.

The red HeNe laser wavelength of 633 nm has an actual vacuum wavelength of 632.991 nm, or about 632.816 nm in air. The wavelengths of the lasing modes lie within about 0.001 nm above or below this value, and the wavelengths of these modes shift within this range due to thermal expansion and contraction of the cavity. Frequency-stabilized versions enable the wavelength of a single mode to be specified to within 1 part in 10⁸ by the technique of comparing the powers of two longitudinal modes in opposite polarizations.[6] Absolute stabilization of the laser's frequency (or wavelength) as fine as 2.5 parts in 10¹¹ can be obtained through the use of an iodine absorption cell.[7]
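The longitudinal mode structure mentioned here can be estimated from the cavity length: the mode spacing of a linear cavity is c/(2L). The sketch below is illustrative; the cavity lengths are assumed values, and the 1.5 GHz Doppler gain width is the figure quoted later in this section.

```python
c = 2.998e8                 # speed of light, m/s
gain_bandwidth_hz = 1.5e9   # Doppler-broadened gain width quoted later in the text

# Longitudinal mode spacing (free spectral range) of a linear cavity is c / (2 * L).
for cavity_length_m in (0.15, 0.30, 0.50):     # assumed typical cavity lengths
    mode_spacing_hz = c / (2 * cavity_length_m)
    modes_in_band = gain_bandwidth_hz / mode_spacing_hz
    print(f"L = {cavity_length_m * 100:.0f} cm: mode spacing {mode_spacing_hz / 1e6:.0f} MHz, "
          f"~{modes_in_band:.1f} modes under the gain curve")
```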

Energy level diagram of a HeNe laser

The mechanism producing population inversion and light amplification in a HeNe laser plasma[8] originates with inelastic collision of energetic electrons with ground-state helium atoms in the gas mixture. As shown in the accompanying energy level diagram, these collisions excite helium atoms from the ground state to higher energy excited states, among them the 2³S₁ and 2¹S₀ long-lived metastable states. Because of a fortuitous near coincidence between the energy levels of the two He metastable states and the 3s₂ and 2s₂ (Paschen notation[9]) levels of neon, collisions between these helium metastable atoms and ground-state neon atoms result in a selective and efficient transfer of excitation energy from the helium to the neon. This excitation energy transfer process is given by the reaction equations:

He*(2³S₁) + Ne(¹S₀) → He(¹S₀) + Ne*(2s₂) + E

and

He*(2¹S₀) + Ne(¹S₀) + E → He(¹S₀) + Ne*(3s₂)

where (*) represents an excited state, and E is the small energy difference between the energy states of the two atoms, of the order of 0.05 eV or 387 cm⁻¹, which is supplied by kinetic energy. Excitation energy transfer increases the population of the neon 2s₂ and 3s₂ levels manyfold. When the population of these two upper levels exceeds that of the corresponding lower-level neon state, 2p₄, to which they are optically connected, population inversion is present. The medium

becomes capable of amplifying light in a narrow band at 1.15 µm (corresponding to the 2s₂ to 2p₄ transition) and in a narrow band at 632.8 nm (corresponding to the 3s₂ to 2p₄ transition). The 2p₄ level is efficiently emptied by fast radiative decay to the 1s state, eventually reaching the ground state. The remaining step in utilizing optical amplification to create an optical oscillator is to place highly reflecting mirrors at each end of the amplifying medium so that a wave in a particular spatial mode will reflect back upon itself, gaining more power in each pass than is lost due to transmission through the mirrors and diffraction. When these conditions are met for one or more longitudinal modes, then radiation in those modes will rapidly build up until gain saturation occurs, resulting in a stable continuous laser beam output through the front (typically 99% reflecting) mirror. The gain bandwidth of the HeNe laser is dominated by Doppler broadening rather than pressure broadening due to the low gas pressure, and is thus quite narrow: only about 1.5 GHz full width for the 633 nm transition.[6][10] With cavities having typical lengths of 15 cm to 50 cm, this allows about 2 to 8 longitudinal modes to oscillate simultaneously (however, single longitudinal mode units are available for special applications). The visible output of the red HeNe laser, its long coherence length, and its excellent spatial quality make this laser a useful source for holography and as a wavelength reference for spectroscopy. A stabilized HeNe laser is also one of the benchmark systems for the definition of the meter.[7] Prior to the invention of cheap, abundant diode lasers, red HeNe lasers were widely used in barcode scanners at supermarket checkout counters. Laser gyroscopes have employed HeNe lasers operating at 0.633 µm in a ring laser configuration. HeNe lasers are generally present in educational and research optical laboratories.

Applications

Red HeNe lasers have many industrial and scientific uses. They are widely used in laboratory demonstrations in the field of optics in view of their relatively low cost and ease of operation compared to other visible lasers producing beams of similar quality in terms of spatial coherence (a single-mode Gaussian beam) and long coherence length (however, since about 1990 semiconductor lasers have offered a lower-cost alternative for many such applications). A consumer application of the red HeNe laser is the LaserDisc player, made by Pioneer. The laser is used in the device to read the optical disk.

HOLOGRAPHY

Holography is a technique which enables three-dimensional images to be made. It involves the use of a laser, interference, diffraction, light intensity recording and suitable illumination of the recording. The image changes as the position and orientation of the viewing system changes in exactly the same way as if the object were still present, thus making the image appear three-dimensional.

The holographic recording itself is not an image; it consists of an apparently random structure of either varying intensity, density or profile.

How holography works

Recording a hologram

Reconstructing a hologram

Close-up photograph of a hologram's surface. The object in the hologram is a toy van. It is no more possible to discern the subject of a hologram from this pattern than it is to identify what music has been recorded by looking at a CD surface. Note that the hologram is described by the speckle pattern, rather than the "wavy" line pattern.

Holography is a technique that enables a light field, which is generally the product of a light source scattered off objects, to be recorded and later reconstructed when the original light field is no longer present, due to the absence of the original objects.[20] Holography can be thought of as somewhat similar to sound recording, whereby a sound field created by vibrating matter like musical instruments or vocal cords is encoded in such a way that it can be reproduced later, without the presence of the original vibrating matter.

Laser

Holograms are recorded using a flash of light that illuminates a scene and then imprints on a recording medium, much in the way a photograph is recorded. In addition, however, part of the light beam must be shone directly onto the recording medium; this second light beam is known as the reference beam. A hologram requires a laser as the sole light source. Lasers can be precisely controlled and have a fixed wavelength, unlike sunlight or light from conventional sources, which contain many different wavelengths. To prevent external light from interfering, holograms are usually taken in darkness, or in low-level light of a different colour from the laser light used in making the hologram. Holography requires a specific exposure time (just like photography), which can be controlled using a shutter, or by electronically timing the laser.

Apparatus A hologram can be made by shining part of the light beam directly onto the recording medium, and the other part onto the object in such a way that some of the scattered light falls onto the recording medium. A more flexible arrangement for recording a hologram requires the laser beam to be aimed through a series of elements that change it in different ways. The first element is a beam splitter that divides the beam into two identical beams, each aimed in different directions:

One beam (known as the illumination or object beam) is spread using lenses and directed onto the scene using mirrors. Some of the light scattered (reflected) from the scene then falls onto the recording medium. The second beam (known as the reference beam) is also spread through the use of lenses, but is directed so that it doesn't come in contact with the scene, and instead travels directly onto the recording medium.

Several different materials can be used as the recording medium. One of the most common is a film very similar to photographic film (silver halide photographic emulsion), but with a much higher concentration of light-reactive grains, making it capable of the much higher resolution that holograms require. A layer of this recording medium (e.g. silver halide) is attached to a transparent substrate, which is commonly glass, but may also be plastic.

Process

When the two laser beams reach the recording medium, their light waves intersect and interfere with each other. It is this interference pattern that is imprinted on the recording medium. The pattern itself is seemingly random, as it represents the way in which the scene's light interfered with the original light source, but not the original light source itself. The interference pattern can be said to be an encoded version of the scene, requiring a particular key (that is, the original light source) in order to view its contents. This missing key is provided later by shining a laser, identical to the one used to record the hologram, onto the developed film. When this beam illuminates the hologram, it is diffracted by the hologram's surface pattern. This produces a light field that is identical to the one originally produced by the scene and scattered onto the hologram. The image this effect produces in a person's retina is known as a virtual image.

Holography vs. photography

Holography may be better understood via an examination of its differences from ordinary photography:

A hologram represents a recording of information regarding the light that came from the original scene as scattered in a range of directions rather

than from only one direction, as in a photograph. This allows the scene to be viewed from a range of different angles, as if it were still present. A photograph can be recorded using normal light sources (sunlight or electric lighting) whereas a laser is required to record a hologram. A lens is required in photography to record the image, whereas in holography, the light from the object is scattered directly onto the recording medium. A holographic recording requires a second light beam (the reference beam) to be directed onto the recording medium. A photograph can be viewed in a wide range of lighting conditions, whereas holograms can only be viewed with very specific forms of illumination. When a photograph is cut in half, each piece shows half of the scene. When a hologram is cut in half, the whole scene can still be seen in each piece. This is because, whereas each point in a photograph only represents light scattered from a single point in the scene, each point on a holographic recording includes information about light scattered from every point in the scene. Think of viewing a street outside your house through a 4 ft x 4 ft window, and then through a 2 ft x 2 ft window. You can see all of the same things through the smaller window (by moving your head to change your viewing angle), but you can see more at once through the 4 ft window. A photograph is a two-dimensional representation that can only reproduce a rudimentary three-dimensional effect, whereas the reproduced viewing range of a hologram adds many more depth perception cues that were present in the original scene. These cues are recognized by the human brain and translated into the same perception of a three-dimensional image as when the original scene might have been viewed. A photograph clearly maps out the light field of the original scene. The developed hologram's surface consists of a very fine, seemingly random pattern, which appears to bear no relationship to the scene it recorded.

Physics of holography

For a better understanding of the process, it is necessary to understand interference and diffraction. Interference occurs when two or more wavefronts are superimposed. Diffraction occurs whenever a wavefront encounters an object. The process of producing a holographic reconstruction is explained below purely in terms of interference and diffraction. It is somewhat simplified but is accurate enough to provide an understanding of how the holographic process works. For those unfamiliar with these concepts, it is worthwhile to read the respective articles before reading further in this article.

Plane wavefronts

A diffraction grating is a structure with a repeating pattern. A simple example is a metal plate with slits cut at regular intervals. A light wave incident on a grating is split into several waves; the direction of these diffracted waves is determined by the grating spacing and the wavelength of the light. A simple hologram can be made by superimposing two plane waves from the same light source on a holographic recording medium. The two waves interfere, giving a straight-line fringe pattern whose intensity varies sinusoidally across the medium. The spacing of the fringe pattern is determined by the angle between the two waves and by the wavelength of the light. The recorded light pattern is a diffraction grating. When it is illuminated by only one of the waves used to create it, it can be shown that one of the diffracted waves emerges at the same angle as that at which the second wave was originally incident, so that the second wave has been 'reconstructed'. Thus, the recorded light pattern is a holographic recording as defined above.
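The fringe spacing of two interfering plane waves is given by the standard two-beam interference result d = λ / (2 sin(θ/2)), where θ is the full angle between the beams; the formula itself is not stated in the text. A small illustrative sketch, assuming a 632.8 nm recording wavelength:

```python
import math

wavelength_nm = 632.8   # assumed recording wavelength (HeNe red line)

# Fringe spacing of two plane waves of the same wavelength crossing at a full angle theta:
# d = lambda / (2 * sin(theta / 2))
for angle_deg in (1, 10, 30, 60):
    d_nm = wavelength_nm / (2 * math.sin(math.radians(angle_deg) / 2))
    print(f"beam angle {angle_deg:2d} deg -> fringe spacing {d_nm / 1000:.2f} um")
```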

Point sources

Sinusoidal zone plate

If the recording medium is illuminated with a point source and a normally incident plane wave, the resulting pattern is a sinusoidal zone plate, which acts as a negative Fresnel lens whose focal length is equal to the separation of the point source and the recording plane. When a plane wavefront illuminates a negative lens, it is expanded into a wave which appears to diverge from the focal point of the lens. Thus, when the recorded pattern is illuminated with the original plane wave, some of the light is diffracted into a diverging beam equivalent to the original plane wave; a holographic recording of the point source has been created. When the plane wave is incident at a non-normal angle, the pattern formed is more complex, but still acts as a negative lens provided it is illuminated at the original angle.
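In the paraxial approximation the interference of the point-source wave and the plane wave gives rings whose intensity varies as 1 + cos(π r² / (λ z)), with z the source-to-plate separation (equal to the focal length mentioned above). The sketch below is illustrative; the wavelength and distance are assumed values.

```python
import math

wavelength = 632.8e-9   # assumed recording wavelength, m
z = 0.20                # assumed point-source to plate distance, m (= focal length of the zone plate)

def relative_intensity(r):
    # Paraxial path difference between the spherical and the plane wave is r**2 / (2 * z),
    # so the recorded intensity varies sinusoidally with r**2: a sinusoidal zone plate.
    phase = 2 * math.pi / wavelength * r**2 / (2 * z)
    return 0.5 * (1 + math.cos(phase))

# Radii of the first few bright rings (where the phase difference is a multiple of 2*pi)
for m in range(1, 4):
    r_m = math.sqrt(2 * m * wavelength * z)
    print(f"bright ring {m}: r = {r_m * 1e3:.2f} mm, relative intensity {relative_intensity(r_m):.2f}")
```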

Complex objects

To record a hologram of a complex object, a laser beam is first split into two separate beams of light. One beam illuminates the object, which then scatters light onto the recording medium. According to diffraction theory, each point in the object acts as a point source of light, so the recording medium can be considered to be illuminated by a set of point sources located at varying distances from the medium. The second (reference) beam illuminates the recording medium directly. Each point source wave interferes with the reference beam, giving rise to its own sinusoidal zone plate in the recording medium. The resulting pattern is the sum of all these 'zone plates', which combine to produce a random (speckle) pattern as in the photograph above. When the hologram is illuminated by the original reference beam, each of the individual zone plates reconstructs the object wave which produced it, and these individual wavefronts add together to reconstruct the whole of the object beam. The viewer perceives a wavefront that is identical to the wavefront scattered from the object onto the recording medium, so that it appears to him or her that the object is still in place even if it has been removed. This image is known as a "virtual" image, as it is generated even though the object is no longer there.

Hologram classifications

There are three important properties of a hologram which are defined in this section. A given hologram will have one or other of each of these three properties, e.g. we can have an amplitude-modulated, thin, transmission hologram, or a phase-modulated, volume, reflection hologram.

Amplitude and phase modulation holograms

An amplitude modulation hologram is one where the amplitude of light diffracted by the hologram is proportional to the intensity of the recorded light. A straightforward example of this is photographic emulsion on a transparent substrate. The emulsion is exposed to the interference pattern, and is subsequently developed, giving a transmittance which varies with the intensity of the pattern: the more light that fell on the plate at a given point, the darker the developed plate at that point. A phase hologram is made by changing either the thickness or the refractive index of the material in proportion to the intensity of the holographic interference pattern. This is a phase grating and it can be shown that when such a plate is illuminated by the original reference beam, it reconstructs the original object wavefront. The efficiency (i.e. the fraction of the illuminated beam which is converted to reconstructed object beam) is greater for phase than for amplitude modulated holograms.

Thin holograms and thick (volume) holograms

A thin hologram is one where the thickness of the recording medium is much less than the spacing of the interference fringes which make up the holographic recording. A thick or volume hologram is one where the thickness of the recording medium is greater than the spacing of the interference pattern. The recorded hologram is now a three-dimensional structure, and it can be shown that incident light is diffracted by the grating only at a particular angle, known as the Bragg angle.[27] If the hologram is illuminated with a light source incident at the original reference beam angle but with a broad spectrum of wavelengths, reconstruction occurs only at the wavelength of the original laser used. If the angle of illumination is changed, reconstruction will occur at a different wavelength and the colour of the re-constructed scene changes. A volume hologram effectively acts as a colour filter.

Transmission and reflection holograms

A transmission hologram is one where the object and reference beams are incident on the recording medium from the same side. In practice, several more mirrors may be used to direct the beams in the required directions. Normally, transmission holograms can only be reconstructed using a laser or a quasi-monochromatic source, but a particular type of transmission hologram, known as a rainbow hologram, can be viewed with white light. In a reflection hologram, the object and reference beams are incident on the plate from opposite sides of the plate. The reconstructed object is then viewed from the same side of the plate as that at which the re-constructing beam is incident. Only volume holograms can be used to make reflection holograms, as only a very low intensity diffracted beam would be reflected by a thin hologram.

NUMERICAL APERTURE

In optics, the numerical aperture (NA) of an optical system is a dimensionless number that characterizes the range of angles over which the system can accept or emit light. By incorporating index of refraction in its definition, NA has the property that it is constant for a beam as it goes from one material to another, provided there is no optical power at the interface. The exact definition of the term varies slightly between different areas of optics. Numerical aperture is commonly used in microscopy to describe the acceptance cone of an objective (and hence its light-gathering ability and resolution), and in fiber optics, in which it describes the cone of light accepted into the fiber or exiting it.

General optics In most areas of optics, and especially in microscopy, the numerical aperture of an optical system such as an objective lens is defined by

NA = n sin θ

where n is the index of refraction of the medium in which the lens is working (1.0 for air, 1.33 for pure water, and up to 1.56 for oils; see also list of refractive indices), and θ is the half-angle of the maximum cone of light that can enter or exit the lens. In general, this is the angle of the real marginal ray in the system. Because the index of refraction is included, the NA of a pencil of rays is an invariant as a pencil of rays passes from one material to another through a flat surface. This is easily shown by rearranging Snell's law to find that n sin θ is constant across an interface.

In air, the angular aperture of the lens is approximately twice this value (within the paraxial approximation). The NA is generally measured with respect to a particular object or image point and will vary as that point is moved. In microscopy, NA generally refers to object-space NA unless otherwise noted. In microscopy, NA is important because it indicates the resolving power of a lens. The size of the finest detail that can be resolved is proportional to λ/(2 NA), where λ is the wavelength of the light. A lens with a larger numerical aperture will be able to visualize finer details than a lens with a smaller numerical aperture. Assuming quality (diffraction-limited) optics, lenses with larger numerical apertures collect more light and will generally provide a brighter image, but will provide shallower depth of field. Numerical aperture is used to define the "pit size" in optical disc formats.[1]
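As an illustration of the λ/(2 NA) resolution estimate, the sketch below uses assumed, typical objective NA values that are not taken from the text:

```python
wavelength_nm = 550.0   # assumed mid-visible (green) wavelength

# Smallest resolvable detail scales as lambda / (2 * NA), as stated above.
for description, na in (("dry 10x objective", 0.25),
                        ("dry 40x objective", 0.75),
                        ("oil-immersion 100x objective", 1.40)):
    resolution_nm = wavelength_nm / (2 * na)
    print(f"{description:28s} NA {na:.2f} -> ~{resolution_nm:.0f} nm finest detail")
```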

Numerical aperture versus f-number

Numerical aperture of a thin lens.

Numerical aperture is not typically used in photography. Instead, the angular aperture of a lens (or an imaging mirror) is expressed by the f-number, written f/# or N, which is defined as the ratio of the focal length f to the diameter of the entrance pupil D:

N = f/D

This ratio is related to the image-space numerical aperture when the lens is focused at infinity.[2] Based on the diagram at the right, the image-space numerical aperture of the lens is

NAi = n sin θ = n sin[arctan(D/(2f))] ≈ n D/(2f)

thus

N ≈ 1/(2 NAi)

assuming normal use in air (n = 1).

The approximation holds when the numerical aperture is small, but it turns out that for well-corrected optical systems such as camera lenses, a more detailed analysis shows that N is almost exactly equal to 1/(2 NAi) even at large numerical apertures. As Rudolf Kingslake explains, "It is a common error to suppose that the ratio [D/2f] is actually equal to tan θ, and not sin θ ... The tangent would, of course, be correct if the principal planes were really plane. However, the complete theory of the Abbe sine condition shows that if a lens is corrected for coma and spherical aberration, as all good photographic objectives must be, the second principal plane becomes a portion of a sphere of radius f centered about the focal point, ..."[3] In this sense, the traditional thin-lens

definition and illustration of f-number is misleading, and defining it in terms of numerical aperture may be more meaningful.

Working (effective) f-number

The f-number describes the light-gathering ability of the lens in the case where the marginal rays on the object side are parallel to the axis of the lens. This case is commonly encountered in photography, where objects being photographed are often far from the camera. When the object is not distant from the lens, however, the image is no longer formed in the lens's focal plane, and the f-number no longer accurately describes the light-gathering ability of the lens or the image-side numerical aperture. In this case, the numerical aperture is related to what is sometimes called the "working f-number" or "effective f-number". The working f-number is defined by modifying the relation above, taking into account the magnification from object to image:

Nw = 1/(2 NAi) ≈ (1 − m) N

where Nw is the working f-number, m is the lens's magnification for an object a particular distance away, and the NA is defined in terms of the angle of the marginal ray as before.[2][4] The magnification here is typically negative; in photography, the factor is sometimes written as 1 + m, where m represents the absolute value of the magnification; in either case, the correction factor is 1 or greater. The two equalities in the equation above are each taken by various authors as the definition of working f-number, as the cited sources illustrate. They are not necessarily both exact, but are often treated as if they are. The actual situation is more complicated, as Allen R. Greenleaf explains: "Illuminance varies inversely as the square of the distance between the exit pupil of the lens and the position of the plate or film. Because the position of the exit pupil usually is unknown to the user of a lens, the rear conjugate focal distance is used instead; the resultant theoretical error so introduced is insignificant with most types of photographic lenses."[5] Conversely, the object-side numerical aperture is related to the f-number by way of the magnification, tending to zero for a distant object.
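A brief illustrative calculation of the working f-number, using the (1 + |m|) correction factor described above; the f-number and magnifications are assumed values:

```python
def working_f_number(f_number, magnification_abs):
    """Effective f-number when the magnification is no longer negligible:
    N_w = (1 + |m|) * N, as described in the text."""
    return (1 + magnification_abs) * f_number

N = 2.8   # assumed marked f-number of the lens
for m in (0.0, 0.5, 1.0):   # |m| = 1 corresponds to 1:1 (macro) reproduction
    n_w = working_f_number(N, m)
    print(f"|m| = {m:.1f}: working f-number {n_w:.1f}, image-space NA ~ {1 / (2 * n_w):.3f}")
```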

Laser physics In laser physics, the numerical aperture is defined slightly differently. Laser beams spread out as they propagate, but slowly. Far away from the narrowest

part of the beam, the spread is roughly linear with distance; the laser beam forms a cone of light in the "far field". The relation used to define the NA of the laser beam is the same as that used for an optical system,

NA = n sin θ

but θ is defined differently. Laser beams typically do not have sharp edges like the cone of light that passes through the aperture of a lens does. Instead, the irradiance falls off gradually away from the center of the beam. It is very common for the beam to have a Gaussian profile. Laser physicists typically choose to make θ the divergence of the beam: the far-field angle between the propagation direction and the distance from the beam axis at which the irradiance drops to 1/e² times the peak (on-axis) irradiance. The NA of a Gaussian laser beam is then related to its minimum spot size by

NA ≈ λ0 / (π w0)

where λ0 is the vacuum wavelength of the light, and 2w0 is the diameter of the beam at its narrowest spot, measured between the 1/e² irradiance points ("full width at 1/e² maximum of the intensity"). This means that a laser beam that is focused to a small spot will spread out quickly as it moves away from the focus, while a large-diameter laser beam can stay roughly the same size over a very long distance.
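To illustrate the trade-off between waist size and divergence, the sketch below evaluates NA = λ0/(π w0) for a few assumed waist radii at an assumed 632.8 nm wavelength:

```python
import math

wavelength = 632.8e-9   # assumed vacuum wavelength, m

# For a Gaussian beam the far-field half-angle divergence (the NA) relates to the
# waist radius w0 by NA = lambda / (pi * w0): small waists diverge quickly.
for waist_radius in (10e-6, 100e-6, 1e-3):   # assumed waist radii, m
    na = wavelength / (math.pi * waist_radius)
    # Small-angle approximation: NA ~ theta in radians, so convert for readability.
    print(f"w0 = {waist_radius * 1e6:7.1f} um -> NA {na:.4f} (~{math.degrees(na):.3f} deg half-angle)")
```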

Fiber optics

A multi-mode fiber of index n1 with cladding of index n2.

A multi-mode optical fiber will only propagate light that enters the fiber within a certain cone, known as the acceptance cone of the fiber. The half-angle of this cone is called the acceptance angle, θmax. For step-index multimode fiber, the acceptance angle is determined only by the indices of refraction of the core and the cladding:

NA = n sin θmax = √(ncore² − nclad²)

where ncore is the refractive index of the fiber core, and nclad is the refractive index of the cladding. While the core will accept light at higher numerical

apertures (higher angles), those rays will not totally reflect off the core–cladding interface, and so will not be transmitted to the other end of the fiber. When a light ray is incident from a medium of refractive index n onto the core of index ncore at the maximum acceptance angle, Snell's law at the medium–core interface gives

n sin θmax = ncore sin θr

where θr is the angle of the refracted ray inside the core.

From the geometry of the above figure we have:

sin θr = sin(90° − θc) = cos θc

where

θc = arcsin(nclad / ncore)

is the critical angle for total internal reflection.

Substituting cos θc for sin θr in Snell's law we get:

n sin θmax = ncore cos θc

By squaring both sides,

n² sin² θmax = ncore² cos² θc = ncore² (1 − sin² θc) = ncore² − nclad²

Solving, we find the formula stated above:

NA = n sin θmax = √(ncore² − nclad²)

This has the same form as the numerical aperture in other optical systems, so it has become common to define the NA of any type of fiber to be

NA = √(ncore² − nclad²)

where ncore is the refractive index along the central axis of the fiber. Note that when this definition is used, the connection between the NA and the acceptance angle of the fiber becomes only an approximation. In particular, manufacturers often quote "NA" for single-mode fiber based on this formula, even though the acceptance angle for single-mode fiber is quite different and cannot be determined from the indices of refraction alone.
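A short illustrative calculation of the fiber NA and acceptance angle from the formula above; the core and cladding indices are assumed, typical step-index values, not taken from the text:

```python
import math

n_core = 1.48     # assumed step-index core refractive index
n_clad = 1.46     # assumed cladding refractive index
n_outside = 1.0   # launching from air

# Fiber NA from the formula derived above.
na = math.sqrt(n_core**2 - n_clad**2)
# Maximum acceptance half-angle in air follows from n_outside * sin(theta_max) = NA.
theta_max_deg = math.degrees(math.asin(na / n_outside))
print(f"NA = {na:.3f}, acceptance half-angle in air = {theta_max_deg:.1f} deg")
```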

The number of bound modes, the mode volume, is related to the normalized frequency and thus to the NA. In multimode fibers, the term equilibrium numerical aperture is sometimes used. This refers to the numerical aperture with respect to the extreme exit angle of a ray emerging from a fiber in which equilibrium mode distribution has been established.
