In this tutorial we explain what HDR rendering is and how you can use it to control the brightness of the final render, easily getting rich and outstanding pictures.
This tutorial answers the following questions:
- What is HDR?
- What does LDRI mean?
- How to post-process HDRIs using Photoshop?
- How to render HDRIs from V-Ray?
- Which image format is the best to save HDR renders?
- How to deal with overbrights in and near the window?
- How to lighten the dark rendering?
This tutorial explains how and why we render images with a high dynamic range. Before we start, it is important to note that this tutorial does not limit you to any particular 3d software. It is equally useful for 3ds Max & V-Ray users and for anyone who wants to learn the practical side of using HDRI techniques for rendering photorealistic images but works in other software.
In fairness, we should mention our tutorial series about the V-Ray renderer settings, where we have repeatedly referred to HDRI rendering. Now it is time to explore HDRI rendering in detail.
For some time now, the term HDR has been quite popular in computer graphics, so there is a little confusion about the meaning of this abbreviation and its spelling. Actually, the principle is very simple to understand. In all such terms, only the first three letters, HDR, are important: they mean High Dynamic Range. The letters that come after do not really matter. For example, HDR may stand on its own as a term meaning high dynamic range in any sense, or it can be supplemented with the word 'images', giving 'HDR Images' or HDRIs, which means images with a high dynamic range. You may also see in many publications, including this one, the word HDRI, where the last letter means 'imaging'. All these terms are synonyms and are used depending on context.
So, a High Dynamic Range Image is an image with a high dynamic range of colors, where the brightness of one pixel may differ from another a lot. In other words, the brightness level of the most lit area of the image may differ very significantly from that of the darkest area of the same image.
As you can see, the definition itself doesn't convey the full picture, and the purpose of HDR technology is hard to imagine yet. But that is only until you come across the problems of images with a low dynamic range, called LDRIs. Let's look at a typical example of the LDR image problem from photography. Some of our readers have already run into such problems, and the next chapter will seem very familiar to them. For everyone else, the next chapters will help them realize the importance of using HDR for creating attractive images.
Often, when photographing outdoors, shooting an evenly lit composition is just impossible, as we are unable to force the natural illumination to work in the direction that suits us. As a result, we get a photo that is very bright in one place or too dark in another.
Take a look at the following photo.
This photograph shows a pirate's skull against the light-blue sky above a bay.
“Which bay?” you may ask. Well, the one on whose shore the skull lies; the one which is overbright, or as photographers say, overexposed by the bright sun.
The fact is that a photo camera, like any other light-perceiving device, works in a limited brightness range. This means that light information whose brightness is lower than this range is stored as black, and information with higher brightness is recorded as white.
This is exactly what has happened to our photo. The working range of the camera was shifted toward dark perception to make the main object, the skull, bright. But because of this shift the background turned out overexposed, as the camera with this range perceived and stored the well-lit areas in the distance as white.
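As a rough illustration of this limited working range, here is a minimal pure-Python sketch. It is not how a real sensor works, and the luminance numbers are hypothetical, chosen only to mimic a dark foreground and a much brighter background:

```python
def capture_ldr(scene_luminance, range_min, range_max):
    """Simulate a camera with a limited working range:
    luminance below range_min records as black (0.0),
    luminance above range_max records as white (1.0),
    and values in between map linearly onto 0..1."""
    pixels = []
    for lum in scene_luminance:
        t = (lum - range_min) / (range_max - range_min)
        pixels.append(min(1.0, max(0.0, t)))
    return pixels

# Hypothetical scene luminances: dark skull foreground, very bright bay behind it
scene = [0.2, 0.5, 4.0, 6.0]

# Range shifted toward the dark end: the skull (0.2, 0.5) is captured,
# but the bright background (4.0, 6.0) clips to pure white.
print(capture_ldr(scene, 0.0, 1.0))   # [0.2, 0.5, 1.0, 1.0]
```

Once the background pixels collapse to 1.0, the difference between "bright" and "very bright" is gone from the file, which is exactly the problem explored below.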
Now we have an overexposed photograph. What can we do to see the desired background instead of a white overexposed spot?
It would seem there is nothing easier than that: just correct the photo by darkening the background area.
Let's try this. We open the photo in Photoshop and lower the brightness to see what's behind the skull, using the Exposure function, for example.
Unfortunately, instead of the promised bay and sky, all we see is a dull gray color in place of the former white. Even the partially visible leaves at the sides haven't become clearer. But why don't we see the details in the bright area, even after making it darker?
This happens because the photo simply does not contain those details. There is literally white color there: the white pixels stored in the overexposed areas, which were beyond the camera's range of perception. No manipulation or processing of this picture can reveal any of the details hidden by the excess light.
However, as provident photographers, we have made one more shot from the same angle, but this time the perceiving range of the camera was adjusted to record the bright colors.
Take a look at it.
Finally, we can see the bay and the sky; even the contours of the vegetation on the sides become evident. But the pirate's skull is too dark, or as it is called in photography, underexposed. The photo camera with this brightness range stored the dark color values of the foreground objects as black.
If we try to alter the problematic area of the photo as we did in the previous case, but this time by lightening it, this won't give us a good result either.
At best, we reveal the rough outlines of the main object and, as a rule, the noise and artifacts that always hide in the dark zones. We must admit that this is nothing like what we expect from a good photograph.
Some of you already know how we can get the desired image, having both photos. Apparently, we can now combine these photos: the first one gives us the well-lit skull, and the second one provides a detailed background.
Having done that, we get a satisfactory image with clearly visible dark and light areas, despite the initial technical limits of the photo camera.
Let's analyze what happened when we combined the two photos with low dynamic range to see all the details in a single image.
For convenience, without any specific numbers or exact analogy, we can imagine the brightness range of the photographed composition, to which the camera can be adjusted, as a gradient scale. The photos shown are situated roughly on this scale: one in the dark area and the second in the light. The curly brackets coming out from each photo show the limited range covered by the camera at one moment.
By combining these two limited ranges of available brightness levels, we get one wide range. Now we can take the details of the relatively dark foreground and the details of the relatively bright background from the combined high dynamic range to get a single, uniformly lit image.
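The merging step can be sketched numerically. The following pure-Python example is a simplification, not the actual algorithm of any HDR tool: for each pixel it keeps only the shots where that pixel was neither crushed to black nor clipped to white, divides out the exposure to return to scene luminance, and averages. All pixel and exposure values are hypothetical:

```python
def merge_to_hdr(ldr_shots):
    """Merge several LDR exposures of the same scene into one HDR pixel list.
    Each shot is a pair (exposure_multiplier, pixels in 0..1)."""
    n = len(ldr_shots[0][1])
    hdr = []
    for i in range(n):
        samples = []
        for exposure, pixels in ldr_shots:
            v = pixels[i]
            if 0.0 < v < 1.0:  # usable: neither crushed to black nor clipped to white
                samples.append(v / exposure)
        hdr.append(sum(samples) / len(samples) if samples else 0.0)
    return hdr

# Two hypothetical shots of the same four pixels (true luminances 0.2, 0.5, 4.0, 6.0):
bright_exposure = (1.0,   [0.2, 0.5, 1.0, 1.0])      # foreground ok, background clipped
dark_exposure   = (0.125, [0.0, 0.0625, 0.5, 0.75])  # background ok, foreground crushed

print(merge_to_hdr([bright_exposure, dark_exposure]))  # [0.2, 0.5, 4.0, 6.0]
```

The merged list recovers the full scene luminances, including values well above 1.0 that neither shot could store on its own.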
So this is the essence and usefulness of HDR. Having information from initially different brightness ranges, we can use it to get a single image with all the details we want.
Returning to our photo example, you might suppose that LDR images can be combined by simply cutting and pasting some details over the others. Well, we could do it this way, but such a technique of merging lighting ranges is irrational and can hardly be called professional. Quite another thing is to merge a few LDR images into a single HDR image. This is the method used in computer graphics nowadays.
The usual picture formats are not suitable for recording and storing light data of a high dynamic range. The problem is that regular LDR image formats can store only a limited range of brightness levels due to their technical peculiarities. For example, the well-known JPEG stores the brightness data in 8 bits, so it can hold only 256 levels of brightness (2^8 = 256). There are also 16-bit formats, which can store many more levels, but they are limited too and cannot provide enough brightness reserve for further recovery of the needed details.
Quite another situation is with the 32-bit image formats. They can store a practically unlimited dynamic range of brightness levels: 32 bits allow storing colors as so-called floating-point numbers, which cover an enormous range of values. That is the reason to store HDR images in 32-bit graphic file formats.
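The difference in storage can be shown with a few lines of Python. The "brightness 6.0" value below is a hypothetical HDR pixel several times brighter than white:

```python
import struct

# 8-bit channel: 2**8 = 256 discrete brightness levels, nothing above 1.0 survives
levels_8bit = 2 ** 8
print(levels_8bit)  # 256

def store_8bit(value):
    """Quantize a 0..1 brightness into one of 256 levels; anything above 1.0 clips."""
    clipped = min(1.0, max(0.0, value))
    return round(clipped * 255)

def store_float32(value):
    """Round-trip a value through a 32-bit float, as an HDR format would store it."""
    return struct.unpack('f', struct.pack('f', value))[0]

print(store_8bit(6.0))     # 255 -- the real brightness is lost
print(store_float32(6.0))  # 6.0 -- preserved
```

Strictly speaking, a 32-bit float is not infinite either, but its range (roughly up to 10^38) is so vast that for brightness storage it is effectively unlimited.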
Now we can continue with our photo example, since we know what HDRI literally is.
To capture as much light data as possible from the shot scene, we have made a number of LDR photos with different exposure settings.
Having a few LDR shots made with the camera adjusted to different exposures, the only thing left to do is merge their brightness data into a single HDR image. This is a very simple task, as there is a lot of software capable of merging to HDR automatically or semi-automatically. We have used Adobe Photoshop's built-in Merge to HDR function in 32-bit mode, which can be found in File > Automate > Merge to HDR. Thereby, we got an HDR version of our photograph.
Now that we have the HDR image, we have to do some afterwork on it. Namely, we need to select the excessively dark areas and lighten them, and vice versa, select the overbright areas to recover the hidden details. Such a method of merging LDR images is preferable, as it allows accurate soft selections of the problematic areas without any muddle with layers, files, and their mutual arrangement. The HDR workflow makes post-processing of images a very simple and clear task that almost anyone can do well.
At first glance, the technique of obtaining HDRIs from a renderer may seem similar to the one used in photography. It would seem that we need to render a few regular visuals from the same camera with different exposures and then merge them into a single HDR image. Again, we could do it this way, but the advantage of working with a modern rendering engine is that it isn't burdened by the drawbacks of photo technology. The renderer does its calculations in a high dynamic range, i.e. it is HDR initially. And the main feature that follows from this fact is that the render engine allows saving the results directly to an HDRI format.
What does this give us? This feature frees us from the exhausting quest of collecting a number of images with different exposures, as in photography. Moreover, the dynamic range of brightness values stored by two, three, or ten LDR images is still limited, while the HDR render is limited only by the actual brightness levels present in the scene. It contains all the brightness levels, just as in the real world.
As you may guess, the HDR methods used to recover the hidden details of a photograph in a 2d editor, described in the previous chapters, are suitable for renders as well. Which 3d artist who has rendered an interior scene hasn't faced the problem of the light source in a window creating large overbrights on the ceiling, window opening, and curtains? Such a problem cannot be solved without altering the lighting of the entire scene and making the illumination of other areas worse, as is often done by “dull” color mapping settings. Who hasn't met the situation when one area of the final rendering comes out too dark and lightening it is impossible without re-rendering the whole image after changing the power of the scene lights?
HDR rendering leaves these problems behind.
Yes, the rendering seen in the frame buffer still has all the same drawbacks as before. But all of them can be easily eliminated in a few minutes by editing the saved HDR render in Photoshop. The post-processed render will no longer have overexposed spots or excessively dark zones.
We often see the delusion that rendering is completely unlike the real-life situation. For example, some say that the overbright and excessive-darkness problems described above are solely the renderer's fault. They say a renderer that creates such images is simply not physically correct and that we should do a better job with the scene lighting. But, in fact, those who say so are wrong. Moreover, the limited dynamic range effect is present not only in computer imaging.
Even when we look with the naked eye at a very bright object in a dark surrounding, our eyes have to adapt to the brightness of this object to see it normally. And as we begin to look at a darker part of our surroundings, the eyes again adapt to the new level of illumination.
You can perform an experiment: go outdoors and look at the clear sky for a moment, then go back inside and look around. The room may seem very dark, even though it was well lit a moment ago. This happens because the eye, adapted to the very bright sky, cannot normally perceive the illumination of a relatively dark room. In other words, the dynamic range of human eye perception is also limited. The eye, of course, will later adapt to the room illumination, but this process takes some time.
Try to remember what the typical pirate from tales and fantasy films looks like. He is a one-legged man with a hook instead of a hand, who wears the most typical attribute of a sea dog, the black eyepatch. Many think this patch covers the injury from the battle in which the pirate lost his eye while boarding another vessel.
There is no doubt that there were one-eyed pirates, but there is also a plausible explanation of the pirate's eyepatch: it could have been an instrument as important as a spyglass or compass, even for two-eyed sailors.
The eyepatch is useful when one needs to see well in a dark hold after descending from the bright ship's deck. Coming down, the sailor moves the black patch away to uncover the eye, which is immediately ready to perceive details in a dark environment.
This once again shows that the limited brightness range is not unique to ordinary photography or regular rendering; the human eye also perceives the world in LDR. By covering one eye from the high brightness levels, we can use it for seeing dark things without waiting for adaptation. Using this trick, the sailor combines different dynamic ranges of brightness to see details better. We do the same using HDR techniques in imaging.
So, when the main scene in your image looks well lit, for example the room interior, but the window is burned out by sunlight, do not blame yourself for an incorrect lighting setup. In most cases this is a normal situation, and it is physically correct.
However, despite the theoretical correctness, this does not always look apt or artistic, as is expected from a professional photograph or 3d rendering. That is when skills in handling HDR images become useful.
V-Ray has only one function responsible for the dynamic range of the render: Clamp output. It can be found in the V-Ray:: Color mapping rollout on the V-Ray tab of the Render Setup window. Clamp output cuts off the high brightness values that go beyond the RGB space and thus improves the calculation of some render effects. But this is true only if we render in the usual dynamic range, a regular LDR image. In the case of HDR, enabled clamping does us a disservice by cutting off the real full brightness values. That is why we need to turn off the Clamp output checkbox for rendering HDRI with full 32-bit colors.
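To illustrate what Clamp output does to the buffer, here is a hypothetical Python sketch. It is not V-Ray's code, just the clamping idea applied to made-up pixel values:

```python
def saved_buffer(pixel_values, clamp_output=False):
    """Sketch of the effect of the Clamp output checkbox: with clamping on,
    everything above 1.0 collapses to 1.0 and the extra dynamic range is
    gone before the file is even written."""
    if clamp_output:
        return [min(1.0, v) for v in pixel_values]
    return list(pixel_values)

buffer = [0.3, 1.0, 2.5, 6.0]
print(saved_buffer(buffer, clamp_output=True))   # [0.3, 1.0, 1.0, 1.0]
print(saved_buffer(buffer, clamp_output=False))  # [0.3, 1.0, 2.5, 6.0]
```

With clamping on, the 2.5 and 6.0 pixels become indistinguishable from plain white, and no amount of post-processing could tell them apart later.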
That is all the V-Ray setup needed for HDR rendering. But before rendering HDRIs, we need to make one important thing clear.
A feature of all HDR formats is that the data they store is scene-referred. Thus, the brightness value of each pixel in an HDR format is saved linearly, just as it originally was. If we draw an analogy with photography, then whatever brightness level the sensor of the digital camera perceived is what gets stored in the 32-bit file. This may seem obvious and not worth extra explanation. But the problem is hidden in the fact that when we organize a correct and comfortable workflow in 3ds Max and V-Ray, we always apply gamma correction to our images at the rendering stage. For more on this subject, please read the article on why we need Gamma 2.2 in 3ds Max. Although we save a gamma-corrected image to the 32-bit format, Photoshop, for example, considers it linear. Consequently, our image will be subject to gamma correction once more. As a result, instead of the expected normal image as it was in the V-Ray frame buffer, we will see a very bright picture.
Those who have thoroughly read our previous tutorials, namely the tutorial about antialiasing and color mapping settings in V-Ray, may notice that V-Ray has the Don't affect colors (adaptation only) function, which is designed for simple switching to a linear (gamma 1.0) workflow. Why wouldn't we use it at such an appropriate moment? Unfortunately, this function is too blunt: it turns off the influence of the Gamma parameter as well as the Color mapping type on our rendering. This eliminates the possibility of trouble-free switching to the theoretically correct 1.0 gamma setting in HDR rendering. In the overwhelming majority of cases, obtaining satisfactory photorealistic images using the standard Linear multiply color mapping method is unreasonably hard.
In practice, the convenience of working with the usual color mapping settings is decisive. That is why, after we open the gamma-corrected render saved to a 32-bit format, we need to perform a reverse gamma correction in Photoshop. How this is done is explained in the following chapters.
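A tiny numeric example of this double correction, assuming a simple power-law gamma of 2.2 (real color management is more involved, so treat this as a sketch of the idea only):

```python
# A linear scene value, as V-Ray computed it:
linear = 0.5

# At render time the workflow applies gamma 2.2 for display:
gamma_corrected = linear ** (1 / 2.2)
print(round(gamma_corrected, 3))   # 0.73 -- brighter than the linear value

# Photoshop assumes the 32-bit file is linear and applies display gamma again:
double_corrected = gamma_corrected ** (1 / 2.2)
print(round(double_corrected, 3))  # 0.867 -- washed out, too bright

# Reverse correction: raising the pixel back to the power 2.2
# restores the intended look:
restored = gamma_corrected ** 2.2
print(round(restored, 3))          # 0.5 again
```

This is why the washed-out look appears, and why a single reverse-gamma adjustment layer is enough to undo it.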
But first off let's look into the settings of the 32-bit file format available in 3ds Max for saving renders.
After the rendering is complete, we need to save it to an HDRI file format. For example, an HDR render can be saved in a modern, actively developed format called OpenEXR. When saving to this format, 3ds Max shows a window with many options, whose number and exact names vary between 3ds Max versions. Despite the differences, the main options are available in all versions, and they must be set as follows.
Format is an option that sets how precisely the color data is stored. Choose Float or Full Float – 32 bits per channel to save HDR colors with the highest accuracy. Other values may reduce the size of the saved file at the cost of slight color errors.
Compression defines the type of file compression, which reduces the file size by standard compression methods. Here we have to select a ZIP method of any type. Some of the other compression methods don't compress the file much, and the others (lossy) compress the file with significant quality loss. Neither fits our needs.
Type (or just the channels list) is the list of color channels that will be stored in the file. We need to choose R, G, B, and Alpha, or RGBA as a single option. Turning off any of the mentioned channels or selecting the Mono option will limit the color information of our file, so choosing them is not reasonable in the majority of imaginable situations.
All other options should stay as they are; their default values are correct and need altering only in special cases, the description of which is beyond the subject of this publication.
Now, after clicking the OK button, we have the HDR render in a true 32-bit format.
Let's get back to the purpose for which we made the HDR file: the subsequent editing and recovery of needed details that couldn't be captured in the limited dynamic range of an LDR image. For a clear demonstration of such a situation in 3d, we have created a simple scene and rendered it. The scene itself is an attic with a bed in front of the window.
As you can see, as a result of the geometrical features of this room and the small size of the window, which is the only light source in this composition, the whole interior is badly lit and, at the same time, there is a great white spot on all the surfaces adjoining the window. It is especially noticeable on the window frame and the recess in the left wall. This is a typical problem of LDR renderings. To solve it, the HDR image must be edited as described above. This time we'll look into this in detail. You can follow all the described actions using the original files, attached among the main steps further on.
To start the postwork, we must save the render to the EXR format and open it in Photoshop.
Please note that if your rendering was made with gamma 2.2, the reverse gamma correction must be done before any other processing. This can easily be done by applying the adjustment layer Layer ▹ New Adjustment Layer ▹ Exposure ▹ OK and moving the slider called Gamma Correction to the right until it reaches the value 0.45.
This action restores the look of our HDR image to the one we saw at the renderer's output, and it will be our starting point for post-processing. If you used the linear workflow with gamma 1.0 and your rendering looks fine right after opening, you don't need to perform the described reverse correction; just skip this paragraph.
Here is the sample initial image, containing the HDR rendering and a reverse gamma-correction layer (as our render was made with gamma 2.2). Feel free to download and study it.
One thing to mention about files is that the *.exr format cannot store adjustment layers and masks. But this is not a problem, because the edited EXR can be saved to the *.tif or *.psd formats, which fully support work with layers as well as 32-bit color depth since the Adobe Photoshop CS2 version. That is why the above and following files are saved in 32-bit TIFF format.
A description of all the possible options in post-processing of photographs is beyond the subject of this tutorial and belongs more to 2d than 3d. However, to help you easily follow the directions, we explain one of the most usable and obvious ways of altering the problematic zones of a 3d rendering in Photoshop.
The first thing we need to do is examine the bad areas of our HDR image. Let's start with the excessively bright ones. In the example presented here, the most significant overbright is at the actual window opening, so we need to dim that area. At the very bottom of the LAYERS panel, find the black/white circle icon called Create new fill or adjustment layer, click it, and choose the Exposure item in the appearing list. The adjustment layer is created. Right after we do this, the parameters Exposure, Offset, and Gamma Correction become available in the tab of the appeared panel. By moving the Exposure slider to the left, we darken the entire image and also recover the missing details in the bright spots. Adjust the Exposure until all desired details show up in the brightest area of the rendering. Once that is done, we need to limit the effect of this layer and remove the darkening from the other, normally lit (or dark) areas of the render. With the Exposure layer selected, fill its mask with black using Edit ▹ Fill ▹ Use: Black. The darkening effect completely disappears. Now, having a fully black mask, we can begin to literally paint the darkening back in, restoring the effect in the problematic zones. Take a soft brush of white color, set its Opacity in the panel above to about 5%, and paint over the bright places, periodically releasing the mouse button. The more times you paint over the white spots with the brush, the darker they become. During this process you may find that the overall darkening effect needs to be more or less significant; feel free to alter the Exposure value on the ADJUSTMENTS tab, as it is available at any moment. After finishing the painted darkening, our rendering is free from excessively bright areas.
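Numerically, the Exposure layer plus its painted mask boils down to a per-pixel blend between the original image and an exposure-shifted copy. The sketch below is a simplification of what Photoshop does, and the pixel and mask values are hypothetical:

```python
def apply_masked_exposure(pixels, mask, exposure_stops):
    """Blend each pixel toward an exposure-shifted copy of itself.
    Mask values: 0.0 = effect hidden (black mask), 1.0 = full effect (white).
    An exposure shift of s stops multiplies brightness by 2**s."""
    factor = 2.0 ** exposure_stops
    return [p * (1.0 - m) + (p * factor) * m for p, m in zip(pixels, mask)]

# Hypothetical HDR scanline: normal room pixels plus a blown-out window pixel (8.0)
row  = [0.3, 0.6, 8.0, 0.4]
# Mask painted white only over the window area:
mask = [0.0, 0.0, 1.0, 0.0]

# Darken by 3 stops (factor 1/8) only where the mask was painted:
print(apply_masked_exposure(row, mask, -3.0))  # [0.3, 0.6, 1.0, 0.4]
```

Because the HDR pixel really stores 8.0 rather than a clipped 1.0, the darkening brings back a meaningful value instead of a flat gray, which is the whole point of working in 32 bits.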
You can look at the adjustment layer made to obtain the above result by opening the following source file.
Like this, in just a few minutes we have completely removed the ugly overbright from the rendering without any manipulation of the initial scene lighting setup or the renderer's color mapping settings.
Take a look at the picture again: there are quite dark areas behind the wooden ceiling beam and under the bed. They just beg us to lighten them. Let's continue the work on the HDR 3d rendering.
This is done similarly. Add one more Exposure adjustment layer and alter the brightness of the dark areas. To lighten the dark places on an HDR rendering, the most convenient parameter is Gamma Correction. Fill the layer mask with black and softly paint in the problematic areas, as before.
The file below contains the source adjustment layers, including the last one that lightens the unpresentable dark zones.
That's all. The only thing left to do is convert our 32-bit image to a usual LDR format. This is done by the command Image ▹ Mode ▹ 8 Bits/Channel. In the warning about the color depth change that appears, click the Merge button. Then, in the HDR Toning panel, choose Method: Exposure and Gamma and click OK.
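The conversion itself can be sketched as scale, gamma, clip, and quantize. This is only a rough approximation of an Exposure and Gamma style tone mapping, not Photoshop's exact math, with an assumed neutral exposure of 0 and a display gamma of 2.2:

```python
def tone_map_8bit(hdr_pixels, exposure=0.0, gamma=2.2):
    """Rough 'Exposure and Gamma' style HDR-to-LDR conversion:
    scale by 2**exposure, apply display gamma, clip to 0..1,
    then quantize to 8 bits (0..255)."""
    out = []
    for p in hdr_pixels:
        v = p * (2.0 ** exposure)
        v = max(0.0, v) ** (1.0 / gamma)
        out.append(round(min(1.0, v) * 255))
    return out

# Hypothetical HDR pixels: black, mid-gray, white, and a super-bright 6.0
hdr = [0.0, 0.5, 1.0, 6.0]
print(tone_map_8bit(hdr))  # [0, 186, 255, 255]
```

Note that after this step anything still above 1.0 finally clips to 255, which is why all the exposure and gamma fixes have to happen while the image is still in 32-bit mode.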
You can see how simply we can deal with any overbright and with the shadow density in any needed area of the image without re-rendering. In most cases, even accurate tuning of the exposure at the 3d stage is not needed. This frees us from messing with the Multiplier in Color mapping or the film speed of the VRayPhysicalCamera. All this can be done after rendering, in a 2d editor. Moreover, the dynamic range of HDRIs is often enough to slightly alter the brightness of the light sources and the objects they light.
HDR rendering means full control over the brightness of your 3d-generated images.
We hope that after reading this tutorial you will be able to use HDRI to enhance the visual quality of your photorealistic 3d renderings with ease.