HUMAN SKIN SIMULATION
Class
GAM400B
Instructor
Jon Sanchez
Language
C++ and Vulkan
About
The idea of this class was to do a personal study of a graphics technique while learning Vulkan to implement it. I decided to tackle the real-time human skin rendering technique presented in the book GPU Gems 3, which covers subsurface scattering and realistic rendering of skin. It was my first time using Vulkan, so I also took this as an opportunity to learn how modern graphics APIs differ from OpenGL and what advantages they offer for faster rendering.
Features
- Simulation of human skin (based on 3 layers).
- Subsurface scattering simulation using a combination of diffusion profiles.
- Texture-space diffusion and stretch maps.
- Pre-scattering and post-scattering.
- Translucent shadow maps (TSM).
- Skin reflectance simulation using the Beckmann distribution function.
Brief explanation
To make the skin look realistic while staying efficient, we will simulate it using a 3-layer model (the oily layer, the epidermis, and the dermis). When light reaches the skin, the first layer reflects approximately 6% of it, and since it is a very rough surface, we need a very detailed normal map and an accurate BRDF model. The model we are going to use is the one described by Kelemen/Szirmay-Kalos, which offers more reliable reflections at grazing angles than the Phong model. The model consists of the following:
- A Fresnel term computed using the Fresnel-Schlick approximation with an assumed index of refraction of 1.4 (which gives a reflectance at normal incidence of 0.028).
- The half vector instead of the normal vector, as it gives better results for grazing angles.
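As a quick sketch, the Fresnel term above can be written like this in C++ (the function name is my own; 0.028 follows from IOR 1.4 via ((1.4 − 1)/(1.4 + 1))²):

```cpp
#include <cmath>

// Schlick's approximation of the Fresnel reflectance.
// hDotV is the dot product of the (normalized) half vector and view vector;
// f0 is the reflectance at normal incidence (0.028 for skin, IOR 1.4).
float fresnelSchlick(float hDotV, float f0) {
    float exponential = std::pow(1.0f - hDotV, 5.0f);
    return exponential + f0 * (1.0f - exponential);
}
```

At normal incidence (hDotV = 1) this returns f0, and it rises toward 1 at grazing angles, which is exactly the behavior the model wants for skin.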
After that, instead of the famous Cook-Torrance model, we are going to use the Beckmann distribution function explained in the book, which is the following:

D(θ) = exp(−tan²θ / α²) / (α² · cos⁴θ)

where:
- θ will be the angle between the normal and the half vector.
- α will be the roughness.
However, this is expensive to compute at runtime, so the authors precomputed the results and stored them in a texture, varying the cosine of the angle between the half vector and the normal along one UV axis and the roughness along the other.
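A minimal CPU-side sketch of that bake, assuming the encoding used in the chapter (store 0.5 · D^0.1 so the large dynamic range fits an 8-bit texture; the shader undoes this with pow(2 · value, 10)) and a texture size of my choosing:

```cpp
#include <cmath>
#include <vector>

// Beckmann distribution evaluated from cos(theta_h) and roughness m.
float beckmann(float nDotH, float m) {
    float tanA = std::tan(std::acos(nDotH));
    return std::exp(-(tanA * tanA) / (m * m)) / (m * m * std::pow(nDotH, 4.0f));
}

// Bake the distribution into a 2D table indexed by (cos(theta_h), m).
// The 0.5 * pow(D, 0.1) encoding compresses the huge range into [0, 1].
std::vector<float> bakeBeckmannTexture(int size) {
    std::vector<float> tex(size * size);
    for (int y = 0; y < size; ++y) {
        float m = (y + 0.5f) / size;          // roughness in (0, 1)
        for (int x = 0; x < size; ++x) {
            float nDotH = (x + 0.5f) / size;  // cos(theta_h) in (0, 1)
            tex[y * size + x] = 0.5f * std::pow(beckmann(nDotH, m), 0.1f);
        }
    }
    return tex;
}
```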
Once assembled, the resulting specular BRDF will be the following (with h = L + V being the unnormalized half vector):

f_spec = D(θ_h) · F(H, V) / (h · h)

scaled by a specular intensity ρ_s and by N · L when computing the final specular light.
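As a sketch, the whole specular term might look like this in C++ (the Vec3 helpers, the direct Beckmann evaluation in place of the texture lookup, and the rhoS intensity parameter are my own simplifications):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(dot(v, v));
    return {v.x / l, v.y / l, v.z / l};
}

static float beckmann(float nDotH, float m) {
    float tanA = std::tan(std::acos(nDotH));
    return std::exp(-(tanA * tanA) / (m * m)) / (m * m * std::pow(nDotH, 4.0f));
}
static float fresnelSchlick(float hDotV, float f0) {
    float e = std::pow(1.0f - hDotV, 5.0f);
    return e + f0 * (1.0f - e);
}

// Kelemen/Szirmay-Kalos specular: D * F divided by the squared length of
// the *unnormalized* half vector, weighted by N.L and an intensity rhoS.
float kelemenSpecular(Vec3 N, Vec3 L, Vec3 V, float m, float rhoS) {
    float nDotL = dot(N, L);
    if (nDotL <= 0.0f) return 0.0f;       // surface faces away from the light
    Vec3 h = add(L, V);                    // unnormalized half vector
    Vec3 H = normalize(h);
    float D = beckmann(dot(N, H), m);
    float F = fresnelSchlick(dot(H, V), 0.028f);
    return std::max(nDotL * rhoS * D * F / dot(h, h), 0.0f);
}
```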
To start with subsurface scattering, we need to define one of the core concepts used in this method: diffusion profiles. In physics, a diffusion profile is the outcome of an experiment in which a white beam of light hits a flat surface perpendicularly: it measures how much light exits the surface (i.e., is scattered back rather than absorbed) as a function of the distance from the point of entry, and each color channel has its own profile. This method takes the diffusion profiles measured for a 3-layer skin model in a previous paper, and approximates the rendering of curved skin by taking into account only the distance between two points of the surface. To compute the shape of the diffusion profiles, we will use a sum of Gaussian functions, taking advantage of the property that the convolution of two Gaussians is still a Gaussian. Here are the six Gaussians used in this method to approximate the 3-layer model (note that the weights of each channel add up to 1, as we want the scattered light to average to white and let a texture decide the color of the skin):
| Variance (mm^2) | Red | Green | Blue |
|---|---|---|---|
| 0.0064 | 0.233 | 0.455 | 0.649 |
| 0.0484 | 0.100 | 0.336 | 0.344 |
| 0.187 | 0.118 | 0.198 | 0 |
| 0.567 | 0.113 | 0.007 | 0.007 |
| 1.99 | 0.358 | 0.004 | 0 |
| 7.41 | 0.078 | 0 | 0 |
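The profiles in the table can then be evaluated as a weighted sum of normalized 2D Gaussians; a sketch, using G(v, r) = exp(−r² / 2v) / (2πv) and function names of my own:

```cpp
#include <array>
#include <cmath>

constexpr float kPi = 3.14159265f;

// Normalized 2D Gaussian with variance v (mm^2) at radial distance r (mm).
float gaussian(float v, float r) {
    return std::exp(-(r * r) / (2.0f * v)) / (2.0f * kPi * v);
}

// Variances and per-channel weights of the six Gaussians from the table.
constexpr std::array<float, 6> kVariance = {0.0064f, 0.0484f, 0.187f,
                                            0.567f, 1.99f, 7.41f};
constexpr std::array<std::array<float, 3>, 6> kWeight = {{
    {0.233f, 0.455f, 0.649f},
    {0.100f, 0.336f, 0.344f},
    {0.118f, 0.198f, 0.000f},
    {0.113f, 0.007f, 0.007f},
    {0.358f, 0.004f, 0.000f},
    {0.078f, 0.000f, 0.000f},
}};

// Diffusion profile R(r) for one channel (0 = red, 1 = green, 2 = blue).
float diffusionProfile(int channel, float r) {
    float sum = 0.0f;
    for (int i = 0; i < 6; ++i)
        sum += kWeight[i][channel] * gaussian(kVariance[i], r);
    return sum;
}
```

Note how red dominates at large distances (the wide Gaussians only have red weight), which is what produces the characteristic red glow around shadow edges on skin.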
After diffusion profiles are defined, we are going to use a technique called Texture-Space Diffusion, which consists of unwrapping a 3D model onto a 2D plane to store light computations into a texture called an irradiance map.
Once the computations are stored, we will convolve that irradiance map to simulate the light scattering, and the shape of the blur will be the shape of the diffusion profile. We will have one irradiance map for each Gaussian that was used to fit the shape of the profile, the first one being unblurred. After performing all the blurs, we will combine them with the same linear combination that we used to fit the Gaussians to the diffusion profile.
As the human head we are rendering is curved, distances in world space do not match distances in texture space because of UV distortion, so we need to compute a stretch map and use it when blurring the irradiance maps. The computation is easy: we unwrap the 3D model as before, but this time we compute the inverse of the partial derivatives of the world-space coordinates with respect to U and V (we may need to scale the result into [0, 1]). One possible problem is that the stretch-map values are constant across each triangle of the mesh, so we may need to blur these textures to avoid artifacts, and we will do one blur per Gaussian too, so we will end up with the same number of stretch maps as irradiance maps.
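A CPU-side sketch of the stretch computation, with finite differences standing in for the shader's partial derivatives (the function name, texel layout, and scale parameter are assumptions):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Stretch { float u, v; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }

// worldPos holds width*height texels of world-space positions from the
// unwrapped mesh. The stretch is the inverse of the world-space distance
// covered by one texel step in U and in V; 'scale' is a tuning constant
// that maps the values into [0, 1]. (Bounds checks omitted for brevity.)
Stretch stretchAt(const std::vector<Vec3>& worldPos, int width,
                  int x, int y, float scale) {
    Vec3 p  = worldPos[y * width + x];
    Vec3 du = sub(worldPos[y * width + x + 1], p);    // finite difference in U
    Vec3 dv = sub(worldPos[(y + 1) * width + x], p);  // finite difference in V
    return { scale / length(du), scale / length(dv) };
}
```

Regions where the UVs are compressed (large world-space step per texel) get a small stretch value, which widens the blur there to keep the scattering radius uniform in world space.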
To decide the color of the skin, we will rely on its diffuse texture. The problem is: when do we apply it? We have two base options, and we will end up using a combination of them.
The first option is to compute the irradiance maps with white light, scatter it, and then multiply the result by the diffuse texture. This avoids blurring high-frequency details, but produces no color bleeding.
The second option is to compute the irradiance maps with the diffuse color already applied, at the cost of blurring its high-frequency details.
We will combine the two methods, taking the diffuse color into account at both stages (irradiance-map computation and final assembly) using a mix value of our choice: the irradiance maps are multiplied by diffuse^mix and the final assembly by diffuse^(1−mix), so the product restores the full diffuse color.
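One way to implement the mix is to split the albedo between the two stages with an exponent, so the two parts multiply back to the full diffuse color (mixValue and the function names are my own):

```cpp
#include <cmath>

struct RGB { float r, g, b; };

static RGB powRGB(RGB c, float e) {
    return { std::pow(c.r, e), std::pow(c.g, e), std::pow(c.b, e) };
}

// Part of the albedo applied before scattering, in the irradiance map...
RGB preScatterAlbedo(RGB albedo, float mixValue) {
    return powRGB(albedo, mixValue);
}
// ...and the remainder applied after scattering, at final assembly.
// preScatterAlbedo * postScatterAlbedo == albedo per channel, so the
// albedo's energy is split between the stages, not duplicated.
RGB postScatterAlbedo(RGB albedo, float mixValue) {
    return powRGB(albedo, 1.0f - mixValue);
}
```

With mixValue = 0, only post-scatter color is used (sharp detail, no bleeding); with mixValue = 1, only pre-scatter (full bleeding, blurred detail); values in between trade one for the other.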
One of the things that we have to take into account for a realistic output is energy conservation, so we will make the diffuse term depend on the specular term: the light entering the surface is attenuated by a factor of 1 − ρ_s · ρ_d, where ρ_d is the total specular reflectance for the given angle of incidence and roughness, so whatever energy leaves as specular reflection is removed from what scatters below the surface.
Same as with the Beckmann texture, this is expensive to compute every frame, so we will store the results in a texture and index it by the cosine of the angle between the normal and the light vector, and by the roughness.
Finally, we still need to model what happens when light passes completely through a surface, as it does in thin regions like the ears and the nostrils; for that we will use a modification of translucent shadow maps (TSM). When computing the shadow maps, we will store the Z coordinate in perspective space, the absolute value of the Z coordinate in camera space, and the UVs. With this data, when computing the irradiance maps we can sample the TSM and check whether the point being rendered faces toward or away from the light; if it faces away, we store a thickness value (the distance between the point where the light enters the surface and the point where it exits) in the alpha channel of the irradiance map. When computing the lighting for this case, we need the scattered light of the neighborhood around the point where the light enters, and since we are already computing the scattering with the irradiance maps, this reduces to a weighted sum in the final assembly.
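The back-facing check and thickness write can be sketched as follows (the function and parameter names are assumptions; zHere and zTsm are the absolute Z values in the light camera's space, of the shaded point and of the light-facing point stored in the TSM):

```cpp
#include <cmath>

// Value written to the irradiance map's alpha channel.
// nDotL < 0 means this texel faces away from the light, so the light
// entered the surface on the far side: the difference between our depth
// and the TSM's stored entry depth is the through-surface thickness.
float irradianceAlpha(float nDotL, float zHere, float zTsm) {
    if (nDotL >= 0.0f) return 0.0f;  // front-facing: lit directly, no thickness
    return std::abs(zHere - zTsm);   // back-facing: distance light traveled
}
```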
Here is the paper I wrote about it with a more in-depth explanation of the process: Human Skin Simulation
References
Screenshots