
Advanced Rendering Techniques

1. Introduction
   1.1 The theory of light
   1.2 Photons, Global Illumination and Final Gather
   1.3 Knowing your render engine
2. Lighting techniques
   2.1 Interior lighting and exterior lighting, and rendering different times of day
   2.3 After effects: glows, lens flares, neon etc.
   2.4 Studio setup: product rendering
   2.5 HDRI rendering
   2.6 Caustics
3. Materials
   3.1 Unwrap UVW and Pelt Wrapping
   3.2 Advanced materials
   3.3 Reflections and controlling colour bleed
4. Cameras
   4.1 Camera setup
   4.2 Depth of field and motion blur
   4.3 Compositing
   4.4 NPR (Non-Photorealistic Rendering)
5. Summary and final project

1.1 The Theory of Light

Light consists of a whole spectrum of colours, transmitted as waves of differing wavelengths. Visible light is actually a very narrow band of the complete wavelength spectrum, which also includes X-rays, with very short wavelengths, and radio waves, with very long wavelengths. Just beyond the visible range lies ultraviolet, which is harmful to human skin. Then come the visible colours: violet, blue, green, yellow, orange and red, after which comes the next invisible band, infrared, which we experience as heat.

Bear in mind that the paint or ink spectrum differs significantly from the visible light spectrum. Printing typically uses the CMYK colour set (Cyan, Magenta, Yellow and Black), while computer graphics use the RGB set (Red, Green and Blue). The difference is that paint mixes colours subtractively, while light mixes them additively: combining all paint colours produces black paint, while combining all light colours produces white light. To print an accurate image, we need to match the image we created additively in RGB to the output, which will be composed subtractively in CMYK. This is why accurate gamma settings are important.

Another property of light that heavily influences computer graphics is the inverse square law. This law explains how light fades over distance, and applies to all types of radiation. A light's luminosity (its energy emission per second) does not change; however, as light travels further from its source it covers more area, and this is what makes it lose intensity, fading according to the reciprocal of the square of the distance, like spreading butter over too much bread. This phenomenon is also called attenuation. Because we work with artificial lights in CG, the natural inverse square law tends to be too restrictive, so our lights are more flexible.
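The inverse-square falloff described above can be sketched numerically. This is an illustrative calculation only; `intensity_at` and its arguments are not renderer parameters:

```python
def intensity_at(luminosity, distance):
    """Received intensity falls off with the reciprocal of the squared distance."""
    return luminosity / (distance ** 2)

# Doubling the distance quarters the intensity.
near = intensity_at(100.0, 2.0)  # 25.0
far = intensity_at(100.0, 4.0)   # 6.25
```

This is the "natural" decay that CG lights relax: a renderer can substitute user-defined start, mid and end points of decay, or no decay at all.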
We can specify a start point of decay, a midpoint and an end point, or even specify that the light has no decay at all.

Light obeys the simple law of reflection: the angle of reflection equals the angle of incidence, measured relative to the surface's normal at the point of incidence. This law is simulated in CG using a rendering process called raytracing. Refraction describes how light bends when passing between transparent and semi-transparent materials, and obeys Snell's law, which determines the extent of refraction as light passes from one material to another. This bending causes the distortion you can see by looking through a lens. The Index of Refraction (IoR) is the value that determines how much light bends when passing through an object, and is calculated by dividing the speed of light in a vacuum by the speed of light in the material.

Light qualities include:

Intensity: The most obvious and perceptible quality. The light with the strongest intensity in the scene is known as the dominant or key light, and it casts the strongest shadows. A light's intensity is controlled by its colour and its multiplier (brightness value), along with its attenuation, or decay.

Colour: The colour of light is incredibly important to the realism of a scene. For natural light, colour is determined by time of day and season; for artificial lights, by the type of globe. Because we can manipulate artificial lights freely in CG, adjusting the colour of a light also affects the mood of the scene.

Softness: Soft light is widespread in the real world, due to the sky and the scattering of light it creates. In CG we tend to use hard lights too often, as hard settings are the default for most lights. Such hard light and stark shadows are hardly ever seen in the real world, so this needs to be adjusted for a realistic scene.

Throw: The manner in which a light's illumination is shaped or patterned, whether by a lampshade or fitting around the source, curtains over a window obstructing natural light in an interior scene, or clouds in the sky. We can achieve throw in two ways: by modelling the obstructing object, or by specifying the shape in the light's parameters, for example using a texture map to create shadows of leaves.

Animation: Animated light can be seen in winking car indicators, flickering neon signs, a fading sunset, the glow of a television screen or the twinkling of stars. Apart from the light actually moving, its intensity, colour, softness and throw can all change, and all of these changes can be animated.

Shadows: Shadows are incredibly important in describing a light, and matter for a scene's realism, consistency, object relationships and overall composition. Don't think of shadows as something things get lost in, but rather as an attribute that can be used to define and fine-tune an object in a scene.

Motivation: Lights can also be categorised by how they operate in the scene in terms of their motivation. Logical lights correspond to actual light sources such as a lamp or torch. Pictorial lights are placed for purely aesthetic reasons, and generally add to the drama, realism and detail of a scene.

1.2 Introduction to Photons, Indirect Illumination and Final Gather

1.2.1 Photons

Photons used for GI and caustics are two-dimensional points in 3D space: bundles of red, green and blue (RGB) light energy emitted from a light source. Each photon carries only a portion of the energy of the light source; the total RGB colour energy of each light is divided by the number of photons it emits. Brighter lights thus emit more photons.
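The energy split just described can be sketched as follows. The function and its names are illustrative, not part of any renderer's API:

```python
def photon_energy(light_rgb_energy, photon_count):
    """Each emitted photon carries an equal share of the light's total RGB energy."""
    r, g, b = light_rgb_energy
    return (r / photon_count, g / photon_count, b / photon_count)

# A light with 1000 units of white energy emitting 10000 photons:
share = photon_energy((1000.0, 1000.0, 1000.0), 10000)  # (0.1, 0.1, 0.1)
```

A brighter light (more total energy) emitting the same number of photons would give each photon a proportionally larger share, which is why brighter lights are made to emit more photons instead.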
Photons are reflected off coloured surfaces in the scene, transporting colour energy from surface to surface. Photons coming directly from the light source carry no indirect illumination value yet, so no photon is stored at the first contact with a surface; however, the effect of that photon reflecting from the surface is taken into account. The photon takes on a new colour energy based on the first surface, but leaves no energy behind. This new colour energy is reflected into the scene and continues along a new vector. When the photon strikes the next surface, it is stored on that surface and is represented by a coloured circle. Mental ray then spawns a new photon with the colour energy reflected from the last surface, which continues on to the next surface. The colour energy of the new photon depends on the properties of the diffusely reflective surface and the colour energy of the incoming photon. With every new bounce, the photon is stored with information that combines the colour of the incoming photon and the properties of the reflecting surface.

For an object to reflect a colour, the light source and the surface material must both contain that colour. Pure black surfaces do not reflect light or photon energy. In real life, however, absolute pure black does not exist outside of black holes, and neither do 100% saturated colours occur naturally, so normally keep saturation levels below 0.9 and closer to 0.5.

1.2.2 Overview of Indirect Illumination

With GI, you need photon generators (lights) and objects enabled to produce and use GI. By default, all objects are set to both receive and generate GI photons; you can disable this for individual objects through their object properties.

Indirect illumination is bounced light within an environment. In the real world, this accounts for most of the light you see, and simulating it accurately is essential to creating realistic renders. Indirect illumination is the light in your scene that is reflected off and refracted through objects, propagating from surface to surface. In the real world nearly all objects reflect light and become sources of indirect illumination: the light reflecting from a surface 'bleeds' onto other surfaces, affecting their illumination and colour. Direct illumination, by contrast, consists of light rays that shine directly onto a surface from a light source and then bounce directly into the camera; it does not account for any further bouncing. Indirect illumination in mental ray only considers the after-effect of direct lighting, so there must be at least one light source in your scene.

Final Gather (FG) is a method of collecting indirect or bounced illumination from your scene. It improves the realism and brightness of a rendered image. FG works by looking from a rendered surface out into the scene to collect indirect illumination from other surfaces. It originated as a clean-up process for GI.

Global Illumination (GI) is often used as a generic term for indirect illumination; in mental ray, however, it is a specific type of indirect illumination. With GI, mental ray shoots photons into the scene from light sources. The photons bounce from surface to surface, picking up colour from each surface and leaving behind the light and energy collected at that point, following the inverse-square law of decay over distance. It is almost like painting with light: where each photon lands it leaves a little information behind, and picks up information that it carries to the next surface it hits, and so on until its energy is depleted. GI is usually combined with FG to obtain the best result.
Ambient Occlusion (AO) is a short-range detail enhancement option for materials. AO samples the area around a rendered sample with random rays to see how much of the sample is occluded from receiving ambient light, and darkens the sample based on how much occlusion occurs. In other words, AO darkens portions of surfaces according to how much ambient light they would be likely to receive, given occlusion by surrounding geometry and their own forms. AO effectively describes how ambient light is distributed around objects that occlude each other and themselves. With mental ray, AO is baked into your texture and has to be activated for each individual material in the scene. AO is also known as contact shadows.
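The sampling idea can be sketched as follows. Here `is_occluded` is a stand-in for a real ray-versus-scene hit test, not a mental ray call:

```python
import random

def ambient_occlusion(is_occluded, num_rays=64, seed=0):
    """Estimate how shielded a sample point is from ambient light.

    is_occluded(u) stands in for casting a random ray and testing for a hit.
    Returns a shading factor: 1.0 = fully open, 0.0 = fully occluded.
    """
    rng = random.Random(seed)
    hits = sum(1 for _ in range(num_rays) if is_occluded(rng.random()))
    return 1.0 - hits / num_rays

# A point where roughly 40% of random directions hit nearby geometry:
factor = ambient_occlusion(lambda u: u < 0.4)
```

The resulting factor would then be multiplied into the ambient term of the shaded sample, darkening crevices and contact areas.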

Caustics are the result of the concentration of photon energy passing through or reflecting off a surface, such as a magnifying glass or the surface of water.

1.2.3 Final Gather

FG is used alone for outdoor scenes, and in combination with GI for indoor scenes. FG appears as a three-pass, low-resolution, pixelated 'rendering' that occurs prior to the actual rendering pass; this visible FG pass allows you to quickly evaluate your lighting solution before committing to the full render. FG is an additional step that increases the number of rays used to calculate global illumination.

Mental ray starts the FG process by projecting points from your viewpoint into the scene. Where such an eye ray hits a surface, FG forces it to bounce away (the first bounce). Each bounce is then followed: if the ray encounters another object it bounces again (the second bounce), and so forth, propagating through the scene before decaying. As these FG rays travel through the scene, some decay over distance, some of the light is absorbed by the surfaces it strikes, and some is recoloured and continues to propagate. FG rays follow the inverse-square law for light to produce natural decay. As enough bounces are made, the cumulative indirect light colour and intensity are stored in the FG point. The higher the FG density, the more rays mental ray shoots into the scene, and the more accurate and refined the indirect lighting solution will be.

FG exists in 3D space as point-cloud data attached to the surfaces in the scene, which can be illustrated with the diagnostics tool. Visual diagnostics are a colour representation of data that is internal to the renderer and not directly visible in a rendered image. The FG visual diagnostics show as small green points; each green point marks a location where mental ray determined there was a change in contrast and a need for additional FG data.
FG Settings

The colour swatch allows you to tint the colour of the FG illumination; however, the default settings usually produce the most accurate results.

FG allows you to use presets from Draft to High; however, you will get more accurate results by adjusting the controls manually. Higher settings increase the level of illumination, reduce splotchiness and create softer, more realistic shadows, but they also increase rendering times. More accurate shadows can be achieved by using FG in conjunction with Ambient Occlusion, and it is usually not necessary to set FG above the Medium preset. Remember that all indirect illumination settings work in conjunction with each other, each serving a specific purpose and directly affecting the others; if the settings of one tool are getting too high without the desired result, try adjusting the settings of another to compensate.

Project Points and Divide Camera Path by Num. Segments: There are two settings in this drop-down. Project FG Points From Camera Position is best for stills, and calculates FG on a full rendered frame; the second method reduces artifacts when rendering animations.

Initial FG Point Density controls how many FG points are generated for your scene; these are the green dots shown in the diagnostics tool. The total number of FG points placed in the scene is tied not only to this setting but also to the size of your rendered image, so larger outputs store more FG points: a large image with low settings gives a similar result to a small image with medium settings.

Rays per FG Point is the number of samples each FG point shoots out into the scene to collect indirect illumination; the more samples, the more accurate the estimate of indirect illumination. Because some rays might hit very bright areas in the scene, you can use the Noise Filtering option in the Advanced settings to remove exceptionally bright rays and help smooth the final rendering.

Interpolate Over Num. FG Points averages a specific quantity of FG points around a sample that is being rendered.
The higher the value, the more diffuse the result.

Diffuse Bounces and Weight: A direct light shoots out rays of light, which effectively counts as one bounce of indirect illumination, because a direct light source must first illuminate the location where the ray strikes a surface for it to bounce to our FG point. Each additional bounce increases rendering time; two or three are usually sufficient. The Weight value adjusts the contribution of diffuse bounces to FG points: if adding diffuse bounces smooths out the FG result but brightens the scene too much, the Weight value lets you bring the additional brightness back down.

FG Advanced Settings

Noise Filtering typically helps reduce blotchy FG results by filtering out FG rays that are too bright compared to the majority.

Trace Depth settings limit how often an FG ray can reflect or refract in a scene; the limits are automatically adjusted as diffuse bounces are increased.

Use Falloff (Limits Ray Distance) allows you to limit how far an FG ray will travel in your scene, speeding up the process. This is particularly useful for outdoor scenes, where a ray could otherwise reflect off into space and take up calculation time unnecessarily.

FG Point Interpolation (averaged): Obsolete.
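The Noise Filtering idea above can be sketched as discarding gather samples far brighter than the rest. The threshold rule here is purely illustrative, not mental ray's actual filter:

```python
def filter_bright_rays(brightness_values, factor=3.0):
    """Drop samples whose brightness exceeds factor x the overall average."""
    avg = sum(brightness_values) / len(brightness_values)
    return [b for b in brightness_values if b <= factor * avg]

# One extreme ray (90.0) among mostly dim samples gets filtered out:
kept = filter_bright_rays([1.0, 2.0, 1.5, 90.0, 2.5])  # [1.0, 2.0, 1.5, 2.5]
```

Averaging only the kept samples removes the single hot ray that would otherwise produce a bright splotch in the FG solution.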

Reusing FG and GI data

Computing indirect illumination data can take up most of the rendering time, but the data can be saved (cached) and reused as often as needed, which reduces rendering time, particularly for animations.

Typically you would calculate the FG data at a small image resolution, then render the final output at a larger size using the read-only cached FG data saved earlier. Alternatively, compute FG data at set intervals along a camera path to 'spray' the scene with FG data, then reuse this data to reduce animation render time. A single file is best for walk-throughs and still images; one file per frame is best for animated objects.

Calculate FG/GI and Skip Final Rendering is almost always used with the Reuse options to save the data to a file for later rendering, because no actual rendering is performed. Remember that FG data is view dependent: each change in view forces mental ray to calculate the missing data.

Adjusting an object's Final Gather options: Scene geometry includes properties that control how the object interacts with mental ray. To access these properties, right-click the object and select Object Properties. High-density FG points on small objects populating a storeroom shelf, for example, may not be useful, so rendering time can be saved by ignoring these small objects with the Pass Through option. You can also use this option to have FG pass through an object with a highly saturated colour to a less saturated one: for example, if too much green is being reflected off an exterior grass object into a white interior scene, you can set the grass object to pass through.

1.2.4 Global Illumination

Global illumination is a diffuse transfer of illumination between surfaces: a large-scale effect that creates indirect illumination at scales often many times larger than a metre. This is accomplished through the transfer of illumination between surfaces using points of light energy and colour called photons, which are emitted from light sources.
Photons mimic the way indirect illumination works in the real world, and when combined with FG they produce the highest quality images in the least amount of time. GI works from light sources that emit photons into the scene, bouncing them around to deposit indirect illumination on the surfaces they strike. This illumination is typically cleaned up using FG, then combined with direct illumination and surface finishes to produce the final render. GI is usually essential for interior scenes; it is generally not needed for exterior scenes, although it can be very beneficial in some.

GI Settings

The Multiplier and colour swatch allow you to control the brightness of the photons, or to tint their colour; the default white produces accurate results.

Maximum Number of Photons per Sample controls the number of photons collected at render time around a particular rendered sample. The photons collected are averaged to give a final GI value to the sample, and the final illumination depends on the averaged photons' interaction with the surface finish and the direct illumination at that point. Large numbers can significantly increase rendering time; 500 photons are usually sufficient.

Maximum Sampling Radius sets a limit on the averaging and influence of a GI photon, effectively setting the size of each photon. The larger the radius, the more blurring takes place, which can result in flat-looking images.

Merge Nearby Photons is a memory-saving setting. It merges photons that are within the specified distance of each other, smoothing the GI result and reducing the number of photons in a given area, which results in a larger effective photon radius. The energy of the merged photon is the sum of the photons merged; if the radius is too large, bright spots can emerge.

Optimise for Final Gather (Slower GI) performs additional processing to store brightness information about neighbouring photons for fast lookup by FG. It can make GI take much longer, but the additional time is usually offset by the speed-up in FG.

Trace Depth limits how many times photons are reflected and refracted in your scene.

Average GI Photons per Light is an average across all lights, not necessarily the total number of photons shot per individual light. Lights that produce more energy, and thus have more influence over the illumination of the scene, shoot more photons, dividing their energy into a greater number of smaller-energy photons. This balances photon strength between a strong light source and a weak one in the same scene, preventing uneven lighting. Note that this is also not the number of photons emitted by a light but the number of photons stored on the geometry: mental ray will continue to shoot photons from the source until this number is reached.

To save memory, you can do the following:
- Use Merge Nearby Photons with a relatively small radius value
- Disable photon generation in lights which do not contribute to indirect illumination
- Disable lights in the scene that are not visible in the view

Using Visual Diagnostics Modes with GI

The visual diagnostics modes for GI produce colour images in blue, cyan, green, yellow and red, where each colour represents a different photon quantity or irradiance value. (Irradiance refers to the power received per unit area.) Blue indicates zero irradiance, and red indicates the highest values. There are two modes:

Photon Density is a representation of the scene with hotter colours showing higher densities. This is useful for checking the overall coverage of photons, and helps determine whether photon quantities need adjusting to ensure all surfaces are evenly covered.

Irradiance is defined as the area density of flux, measured in watts per square metre. This mode shows the relative brightness of the stored photons.

Using GI

While adjusting the initial GI settings, it is important to disable FG until you have a reasonable distribution of photons in the scene: FG is a clean-up process for GI and can mask irregularities in it. The workflow is to start with good direct lighting, then set up the GI, and finally turn on FG.

Correcting the bright area around lights: A common problem with both GI and FG is that the area directly adjacent to a light source can become very bright and blown out in intensity. With GI, much of the problem is caused by photons bouncing back into an area that is also receiving direct illumination from the light, pushing the illumination to the extreme. Remedy this by:
- Moving the photometric light outside the fixture. Using self-illuminated globes on lights with FG can give you the soft glow you expect around the fixture, and moving the light eliminates photon reflection back onto the wall.
- Changing the light to a diffuse or spot type to direct photons away from the fixture.
- Changing the object properties of the interior of the light fixture so that it does not interact with photons.

1.3 Knowing your render engine: Introduction to mental ray

Scanline rendering is an algorithm for visible-surface determination in 3D computer graphics that works on a row-by-row basis rather than polygon-by-polygon or pixel-by-pixel. All of the polygons to be rendered are first sorted by the top y coordinate at which they first appear; then each row, or scan line, of the image is computed using the intersection of the scan line with the polygons at the front of the sorted list, while the sorted list is updated to discard no-longer-visible polygons as the active scan line advances down the picture. The main advantage of this method is that sorting vertices along the normal of the scanning plane reduces the number of comparisons between edges. Another advantage is that it is not necessary to translate the coordinates of all vertices from main memory into working memory: only vertices defining edges that intersect the current scan line need to be in active memory, and each vertex is read in only once. Main memory is often very slow compared to the link between the central processing unit and cache memory, so avoiding re-accessing vertices in main memory can provide a substantial speed-up.

Raytrace rendering:

In computer graphics, ray tracing is a technique for generating an image by tracing the path of light through the pixels of an image plane and simulating the effects of its encounters with virtual objects. The technique is capable of producing a very high degree of visual realism, usually higher than that of typical scanline rendering methods, but at a greater computational cost. This makes ray tracing best suited to applications where the image can be rendered slowly ahead of time, such as still images and film and television special effects, and poorly suited to real-time applications like computer games where speed is critical. Ray tracing can simulate a wide variety of optical effects, such as reflection, refraction, scattering and chromatic aberration. Ray tracing must be enabled to create indirect illumination, reflections, refractions, ray-traced shadows (as opposed to shadow-mapped shadows) and depth of field.

Mental ray is a hybrid renderer that uses both techniques intelligently where necessary, giving the highest quality results as efficiently as possible. It can combine the scanline, raytrace and Fast Rasterizer methods, the latter being a special-case option for use with motion blur.

Bucket rendering: The image is divided into squares, called buckets, and each processor core in the machine takes a bucket and processes that portion of the rendering before moving on to the next available bucket. This also allows for distributed rendering: distributed bucket rendering allocates the work to machines on a network for faster processing.

1.3.1 Understanding render settings

Quality always comes at a price: the higher the settings, the better the result, but usually at the cost of increased render time. The trick is to know your settings well enough to create a quality image without using excessive resources or time; this is a fine balance to achieve.

A Common tab settings

B File output: Do not render to .jpg, as there is always loss of quality. Typically use PNG or TIFF files, where you can select the dpi. Output size determines the actual size of the image, and also greatly affects rendering time and resource usage.
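The bucket subdivision described above can be sketched as tiling the image; in a real renderer each tile would be handed to the next free core or network machine:

```python
def make_buckets(width, height, bucket_size=48):
    """Split an image into (x, y, w, h) tiles; edge tiles may be smaller."""
    buckets = []
    for y in range(0, height, bucket_size):
        for x in range(0, width, bucket_size):
            w = min(bucket_size, width - x)
            h = min(bucket_size, height - y)
            buckets.append((x, y, w, h))
    return buckets

tiles = make_buckets(100, 60, bucket_size=48)  # 6 tiles covering the whole frame
```

Because the tiles are independent, a scheduler can dispatch them to however many cores or machines are available, which is exactly what makes distributed bucket rendering practical.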

Renderer tab settings

Scanline cannot produce reflections, refractions, shadows, depth of field or indirect illumination; if this method is used during a mental ray render, mental ray automatically ray traces those features. Scanline can prevent certain effects from operating properly. If you have memory issues you can disable the scanline renderer; however, disabling it also disables the Fast Rasterizer. It is recommended to disable scanline for high-polygon scenes where depth of field is not used.

The Fast Rasterizer can produce soft, fast motion-blur effects. This setting disables the primary Samples per Pixel Spatial Contrast settings and uses Shades per Pixel instead. This option generally gives good quality results, but at the cost of speed.

Raytrace Acceleration: BSP (Binary Space Partitioning)

For mental ray to produce a rendering, it must cast a ray from the camera into the scene along the camera's line of sight. Potentially, it must test the path of that ray against all geometry in your scene to see whether the ray intersects any face; testing all geometry in a scene, however, would take an enormously long time. To simplify this process, mental ray divides the geometry in the scene into small partitions, so that a ray only needs to be checked against the small partition of faces it intersects. This process is called Binary Space Partitioning, and using the BSP settings correctly can decrease rendering time by up to a few hours for very complex scenes. A scene is divided into partition trees. A partition tree consists of faces that generally point in the same direction, within +/- 90 degrees, and as many faces as the Size parameter permits; a ray striking part of a partition tree is compared against the rest of that tree.

BSP by default has two parameters, Size and Depth. Increasing the Size value reduces memory consumption; however, it may significantly increase render time, because more triangles are managed in each partition and each ray has to consider all the faces in that partition. A larger partition size requires fewer overall partitions and less Depth, and may reduce memory consumption. Decreasing the Size parameter means more partitions, more Depth required and more memory consumed; the benefit is fewer triangles per partition and a potential speed-up in ray-tracing performance, but too small a value can also cause delays, since the additional partitions must be created and managed. Increasing the Depth value can reduce rendering time for scenes with large triangle counts; however, mental ray will use more memory and render preprocessing time, and very large values may negatively affect render time.

The newer BSP2 mode is recommended for large, complex scenes. It has no settings; mental ray manages the partitioning automatically. It might negatively affect smaller scenes, so use it with care.

Reflections and refractions: The Trace Depth settings affect render time as well as the quality and quantity of reflections. These settings can be increased if you find black areas inside transparent objects, but keep them to a minimum to prevent long render times. Max Trace Depth controls the combined number of reflections and refractions that a ray can take: a value of 6 could, for example, combine 2 reflections and 4 refractions, or 1 refraction and 5 reflections, depending on the requirements of the scene. Max Reflections and Max Refractions cap the quantities of each individually.

Subset Pixel Rendering causes only the pixels on the selected object to be rendered. The object is still rendered within the full context of your scene, with full reflections, refractions and shadows on the selected objects, but the rendering is limited to the outline of the selected object.
This is very useful for refining a rendering and creating variations of an object, for example a change of material, without having to render the entire scene. Remember, though, that any reflections of the new material visible on other surfaces in the scene will be incorrect.

Shadows and Displacement controls the way shadows are calculated for a particular ray, and allows shadow maps to be cached on the hard drive for quick reuse.

The options in the Mode drop-down list: Simple is the default mode and calls shadow shaders in a random order. Sort causes mental ray to call shadow shaders in order as a ray traverses the scene; it is provided for use by third-party shadow shaders and not used otherwise. Segments is intended for use with volume shaders, and might cause issues in your scene where shadows are treated as if inside a volumetric effect when they are not. The Rebuild option is used mainly in animations and causes shadow maps to be regenerated on each frame. Be careful when saving the file: once the shadows are saved, they will not adjust if you change the animation of objects or make any other changes to the scene.

Global Displacement Settings: Displacement works by subdividing a surface and then displacing the tessellated surface a determined distance, depending on the value and intensity of the pixels in the displacement map.
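The subdivide-and-offset idea can be sketched as scaling each vertex offset by the map value at that point. The function name and values are illustrative only:

```python
def displace(height_sample, max_displacement):
    """Offset a tessellated vertex along its normal, scaled by the map value.

    height_sample is the displacement-map pixel value in [0, 1]:
    0 (black) gives no offset, 1 (white) gives the full max_displacement.
    """
    return height_sample * max_displacement

# Black, mid-grey and white map pixels with Max Displacement set to 10 units:
offsets = [displace(v, 10.0) for v in (0.0, 0.5, 1.0)]  # [0.0, 5.0, 10.0]
```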

Max Displacement limits the distance of displacement: black pixels produce no displacement, and white pixels displace the surface by this specified distance.

C The Processing Tab

Memory options: Placeholder Objects allow mental ray to manage your individual geometry objects as empty bounding boxes (placeholders) until that geometry is needed by a bucket. This allows the translator to swap geometry in and out of the scene to conserve memory. It is very useful for distributed rendering over a network, as it greatly improves network utilisation: instead of every machine on the network receiving a full copy of the geometry, memory usage on server machines is reduced. The mental ray Map Manager causes mental ray to read bitmaps from disk only when they are needed. It then converts them into a mental ray bitmap format and holds them in memory. The advantage is that mental ray can remove a map from memory when space is needed. Conserve Memory causes mental ray to work harder to minimise memory use, at the expense of time. If you are getting memory allocation errors, try this option. Geometry Caching saves preprocessed, translated geometry to the hard drive for fast reuse, which is mostly useful in animations. Material Override allows you to replace all materials in your scene with a single material. It is typically used to accelerate test renders to check lighting solutions, or to check ambient occlusion. Visual diagnostics allow you to see a rendered colour overlay that represents the behind-the-scenes values of processes, to assist you in adjusting settings.

Gamma Correction

Gamma correction is an intensity adjustment applied to an image to compensate for non-linearity in print and display devices. This is an essential setting which must be configured at the start of a scene. Gamma correction specifically influences the brightness of a rendering. The gamma correction factor is a simple numerical value that describes the shape of a correction curve applied to the intensity of an image. Input gamma correction decreases the intensity of the midtones in the bitmap images you use in surface materials, and output gamma correction increases the intensity of the midtones in the file you save to your computer. A value of 1.0 gives no correction; less than 1 darkens the midtones of an image, and greater than 1 brightens the midtones.

To adjust Gamma settings:
> Customise > Preferences > Gamma and LUT Correction.
> Enable Gamma and LUT Correction.
> Blur your eyes slightly and adjust the gamma value spinner until the inner and outer squares are the same intensity on the monitor. This value will differ from monitor to monitor.
> Enable Affect Colour Selectors and Affect Material Editor. This is critical to give you a WYSIWYG colour display in material previews.
> Set both Input Gamma and Output Gamma to 2.2. These two settings ensure that images are read from and written to your bitmap files correctly.

Be careful in the following cases, where Input and Output Gamma settings must be overridden for particular images:

> When a file already contains gamma information, e.g. Targa or PNG files
> HDRI files (High Dynamic Range Images)
> Bump, normal or displacement maps
> When using the mental ray Map Manager
> When saving Alpha channel information
> When using logarithmic exposure control
> When using Backburner to render to strips
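The correction curve described above is simply a power function. This small Python sketch (a simplified illustration, not 3ds Max's internal code) shows how a 2.2 output gamma brightens midtones while leaving pure black and pure white untouched:

```python
def gamma_correct(value, gamma=2.2):
    """Apply gamma correction to a normalised intensity in [0, 1].
    gamma > 1 brightens midtones, gamma < 1 darkens them, 1.0 is no change.
    Black (0.0) and white (1.0) are unaffected by any gamma value."""
    return value ** (1.0 / gamma)
```

With the default 2.2, a mid-grey value of 0.5 comes out at roughly 0.73, which is exactly the midtone brightening described above.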

Antialiasing Sample Quality

Sampling quality (antialiasing) settings are a group of mental ray settings that affect the quality of the output image. Aliasing is missing or incorrect information (uneven edges) in a rendered image, caused by undersampling of bitmaps and rendered elements. Undersampling causes distortion of the bitmap, especially in high-contrast areas such as a black and white checker pattern. The process of sampling takes samples at locations around and on the edges of the pixel and uses the filter setting to combine these samples into a single pixel colour. The samples-per-pixel setting can be accessed from the Render Setup dialog box, under the Renderer tab, or by using the slider to scroll through presets under the Image Precision settings of the Rendered Frame Window. Remember that these settings will impact render time; do not push them up unless it is absolutely necessary.

The filter settings determine the algorithm used to blend adjacent samples and pixels:
Box is the default setting and is very rough, but very quick.
Gauss is based on a soft bell curve, good for animations to reduce flickering and scintillation.
Triangle is fast and good for draft rendering.
Mitchell is based on a steeper bell curve, and is best for still images.
Lanczos is a bell-curve sharpening filter where samples furthest away from the pixel centre have less effect. This is great for final renders, but it is also the slowest method.
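The Box filter, for instance, simply averages every sample that falls inside the pixel with equal weight. A minimal sketch of that idea (an illustration, not mental ray's implementation):

```python
def box_filter(samples):
    """Combine a list of RGB sample colours into one pixel colour by
    giving every sample equal weight, as the Box filter does."""
    n = len(samples)
    # Average each colour channel independently across all samples.
    return tuple(sum(channel) / n for channel in zip(*samples))
```

The softer filters (Gauss, Mitchell, Lanczos) differ only in that the weight of each sample depends on its distance from the pixel centre instead of being constant.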

Sample Rate visual diagnostics is a tool to help visualise exactly how the Samples Per Pixel settings work. It can be accessed under the Diagnostics panel of the Processing tab. Remember to switch this off again for normal rendering. If the minimum setting is 1, each pixel will be sampled at least once. If the minimum is 1/4, only 1 out of every 4 pixels will be sampled. If the minimum is 4, each pixel will be sampled four times. Samples can never be larger than a pixel, and every pixel is always subdivided by the same amount. A minimum setting of 1 is usually sufficient, and for test renders you need no more than 4. The same goes for maximum sampling: a setting of 16 will limit sampling to no more than 16 samples per pixel. Areas that need sampling, typically those with high contrast between colours, will receive more attention; flat colour planes will be sampled less. You want mental ray to use the highest sampling setting only where it needs additional detail, and not waste time on flat-looking areas.
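The adaptive behaviour described above can be sketched as follows: the sample count for a pixel grows in steps between the minimum and maximum, and a pixel is only refined while its local contrast stays high. This is a simplified illustration of the idea, not mental ray's actual algorithm:

```python
def samples_for_pixel(contrast, threshold=0.05, min_samples=1, max_samples=16):
    """Keep quadrupling the sample count while local contrast exceeds the
    threshold, never exceeding max_samples. Flat areas stay at the minimum."""
    samples = min_samples
    while contrast > threshold and samples < max_samples:
        samples *= 4          # each refinement subdivides the pixel 2 x 2
        contrast /= 4.0       # assume refinement reduces apparent contrast
    return samples
```

A flat colour plane (contrast near zero) stays at the minimum of 1 sample, while a high-contrast edge climbs to the maximum of 16, which is the behaviour the diagnostic overlay visualises.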

2. Lighting techniques

Lighting is arguably one of the most important aspects of your scene. Great materials and wonderful geometry will all be for nothing if you have a poorly lit scene.

Lighting an outdoor scene: using FG in conjunction with mr Sky, you generally need relatively low FG settings to obtain a good-quality result.

Improving dark interiors and entryways: an issue that usually arises when rendering an exterior scene is that interior spaces and deep-set entrances render unnaturally dark. Adding an mr Sky Portal to the openings brings in outdoor illumination and gives a more realistic appearance to the rendering. Alternatively, you can add photometric lights inside to fill the dark areas. Compared to strong daylight, a standard bulb will have no effect, so you need to push up its settings for it to compete successfully.

HDRI Rendering

Using Low and High Dynamic Range Image Formats: dynamic range in images refers to the range of colour values that a bitmap can represent. LDRI formats are the most common and include .jpg and .png. HDRI formats include the Radiance Image File and OpenEXR (EXR) formats. All HDRI data is stored in a floating-point format and can represent RGB values of virtually any magnitude.

LDR images used in a material should always be gamma corrected, while HDR images should not be gamma corrected on file input or output, as this is intrinsically handled by the format. HDRIs are often used as spherical environment maps, giving brilliant colour to reflections in rendered objects. You can also use HDRIs to produce image-based lighting effects, accurately replicating the lighting of the location where the HDRI was captured, by adding it to the sky map.
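The practical difference between the two format families can be seen in what happens to a very bright value, such as a sun pixel. A hedged sketch in Python (the function names are illustrative, not part of any image library):

```python
def store_ldr(value):
    """8-bit LDR storage: intensity is clamped to [0, 1] and quantised
    to 256 levels, so anything brighter than white is lost."""
    clamped = min(max(value, 0.0), 1.0)
    return round(clamped * 255)

def store_hdr(value):
    """Floating-point HDR storage: the value is kept as-is, however
    bright, which is what makes image-based lighting accurate."""
    return float(value)
```

A pixel five times brighter than white clips to 255 in an LDR file but survives as 5.0 in an HDR file, which is why HDR environment maps can drive realistic lighting and reflections.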
