Normalmaps, How to Make Good Ones

From The DarkMod Wiki
Revision as of 15:12, 27 July 2007 by Springheel (talk | contribs)

Copied from http://homepage.ntlworld.com/gerald.hunt1/website/nfhwta.htm, by Greg Hunt

First off, to explain normalmaps from an artist's perspective: a normalmap alters the perception of a lit object via per-pixel lighting, rather than the bog-standard per-vertex lighting used on a low-polygon model without one. The technique requires an RGB texture containing the information needed to define surface normals; this texture containing surface normal data is called a normal map. The red, green and blue channels of the normal map dictate how lighting affects the X, Y and Z axes of the normal vector's orientation.
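To make the channel mapping concrete, here is a minimal sketch (my own illustration, not from the original article) of the usual 8-bit encoding: each normal component in [-1, 1] is remapped into a colour channel in [0, 255].

```python
# A minimal sketch (assumed convention: tangent-space normals, 8-bit
# channels, component range [-1, 1] mapped to [0, 255]).

def normal_to_rgb(nx, ny, nz):
    """Encode a unit normal vector as an 8-bit RGB pixel."""
    # Remap each component from [-1, 1] to [0, 255].
    return tuple(int((c * 0.5 + 0.5) * 255) for c in (nx, ny, nz))

# A normal pointing straight out of the surface becomes the familiar
# light-blue colour of a "flat" normalmap pixel.
print(normal_to_rgb(0.0, 0.0, 1.0))  # (127, 127, 255)
```

This is why an undisturbed normalmap appears predominantly light blue: every flat pixel encodes the straight-out normal (0, 0, 1).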

(img to come)

Let us assume the above image is a greyscale heightmap: the lighter the value, the higher the point of elevation above the surface; the darker the value, the deeper the recess into the surface. In a game engine this would look convincing if the surface were perpendicular to your line of sight and the lighting were static. Take either condition away, however, and the bumpmap loses its realistic impact, because it carries no vector orientation information.
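A tiny illustration (with hypothetical values) of that limitation: in a greyscale heightmap, two pixels on opposite-facing slopes can share the same height value, so height alone cannot tell the lighting which way each slope faces.

```python
# A one-pixel-wide "ridge" of heights (hypothetical values).
heights = [0.0, 0.5, 1.0, 0.5, 0.0]

# Finite-difference slope at each interior pixel:
# positive = rising to the right, negative = falling.
slopes = [(heights[i + 1] - heights[i - 1]) / 2.0
          for i in range(1, len(heights) - 1)]
print(slopes)  # [0.5, 0.0, -0.5]

# Pixels 1 and 3 have the same height (0.5) but opposite slopes; a
# normalmap stores that direction, a plain heightmap does not.
```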


(img to come)

This is how the same bumpmap looks when converted into a normalmap. In a normalmap texture, each pixel represents a normal (in the mathematical sense), and you can freely dictate the direction you want it to point. Each direction is represented by a colour, and this is where the RGB channels work together to create the illusion of geometry under lighting. In the image above, the light blue pixels (R 127, G 127, B 255) represent normals that are perpendicular to your vision. Pink pixels represent normals facing right, green pixels normals facing down, purple pixels normals facing up, and dark blue/green pixels normals facing left. Yet in the original greyscale image they all share the same RGB value, so the implied normals (which don't actually exist in a greyscale image) all react identically to the lighting, even if the light source sits west of the elevation.
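Decoding works the same way in reverse. This sketch (again my own illustration, using the common tangent-space convention; the exact up/down meaning of the green channel varies between engines) recovers the normal direction from a pixel's colour:

```python
# Sketch: decode an 8-bit normalmap pixel back to a normal vector
# (assumed mapping: [0, 255] -> roughly [-1, 1] per channel).

def rgb_to_normal(r, g, b):
    """Decode an 8-bit RGB pixel into an approximate normal vector."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# The pink pixel described above decodes to a normal pointing almost
# entirely along +X, i.e. facing right.
print(rgb_to_normal(255, 127, 127))
```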


So here is where the process begins. Go back to the heightmap we started with and duplicate it twice, so we have 3 layers of the same image.

- On the bottom layer, apply a Gaussian Blur with a pixel radius of 6, then use the Filter->Other->Offset tool to shift the image by half the texture's resolution, making sure that "Wrap Around" is checked. Repeat this step (blur, then offset), then Gaussian Blur it once more. You should now have offset the image twice and blurred it 3 times.

- On the 2nd layer, repeat the previous step but with a Gaussian Blur pixel radius of 3. You can leave the top layer alone for now.

- Run the bottom layer through the NVIDIA normal map plug-in with a scale value of 15-30 (depending on the texture). Run the 2nd layer through with a scale value of 9-12, also checking "Du/Dv" in the filter type options, then do the same with the top layer but with a scale value of 5.

- Set the top and 2nd layers' blending mode to "Overlay", tweak their opacity as you see fit, then flatten the image.
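The steps above can be sketched in numpy/scipy as follows. This is a rough translation, not the exact Photoshop/NVIDIA-plug-in output: `gaussian_filter` with `mode="wrap"` stands in for the blur-plus-offset trick (it blurs tileably in one go), `np.gradient` stands in for the plug-in's height-to-normal conversion, and summing and renormalising the layers approximates the "Overlay" blend. The scale values are picked from the ranges given above.

```python
# Sketch of the three-layer heightmap-to-normalmap workflow described
# above (assumed stand-ins: gaussian_filter(mode="wrap") for the
# blur + offset trick, np.gradient for the NVIDIA plug-in, and a
# normal-sum for the "Overlay" blend).
import numpy as np
from scipy.ndimage import gaussian_filter

def height_to_normals(height, scale):
    """Convert a heightmap (2-D float array) to unit normals (H, W, 3)."""
    dy, dx = np.gradient(height * scale)
    normals = np.dstack((-dx, -dy, np.ones_like(height)))
    return normals / np.linalg.norm(normals, axis=2, keepdims=True)

def layered_normalmap(height):
    # Three detail layers: heavy blur / strong scale for broad shapes,
    # medium blur for mid-scale detail, no blur for fine detail.
    layers = [
        height_to_normals(gaussian_filter(height, 6, mode="wrap"), 20.0),
        height_to_normals(gaussian_filter(height, 3, mode="wrap"), 10.0),
        height_to_normals(height, 5.0),
    ]
    # Summing the layer normals and renormalising approximates
    # overlay-blending the separate normalmap layers.
    combined = sum(layers)
    return combined / np.linalg.norm(combined, axis=2, keepdims=True)

hm = np.zeros((64, 64))
hm[24:40, 24:40] = 1.0  # a raised square plateau as test input
nm = layered_normalmap(hm)
print(nm.shape)  # (64, 64, 3)
```

The result is a float normal per pixel; encoding it to an 8-bit image for the engine is the usual `(n * 0.5 + 0.5) * 255` remap per channel.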