Weighted Gaussian blurring for denoising

Simple denoising for real-time raytracers (or any other noisy application) using weighted Gaussian blurring

Denoising example

Idea

Real-time path/raytracers produce very noisy images which do not look nice. While there are significantly better techniques available, for example ones based on machine learning, I will be discussing a much simpler way of denoising using a weighted Gaussian blur.

Basic blurring

This technique relies on selectively blurring the noisy data, so let's start off by just blurring the whole screen.

For this example I will be using a two-pass Gaussian blur where we first blur all the pixels horizontally and then vertically. For our case this is beneficial because later on it would be much harder to weight the blur correctly if we used a square kernel.

For the first step we will loop over all the pixels in the image and average each one with its horizontal neighbors. For this average we use Gaussian weighting, which creates a nicer blur than a plain box average.

The weight follows the standard Gaussian distribution, which is calculated as

G(x) = 1 / (σ · √(2π)) · e^(−x² / (2σ²))

where x is the distance from the center pixel and σ is the standard deviation, which controls the spread of the blur.

This is a pretty lengthy formula with a square root and exponentials, so it is better to precalculate it and put it in an array.

While I don't recommend calculating this at runtime, an implementation of this formula in code would look something like this:

const float PI = 3.14159265;
const float sigma = 1.0; // This controls the spread of the Gaussian

float weights[33]; // can't be const since we write to it below

for (int i = 0; i < 33; i++) { // In this case i is the distance used in the formula
    // We can also write this as 1.0 / (sqrt(2.0*PI*sigma*sigma)) which is identical but slower to calculate
    float coeff = 1.0 / (sqrt(2.0 * PI) * sigma);
    float exponent = -(i * i) / (2.0 * sigma * sigma);
    weights[i] = coeff * exp(exponent); // exp(x) does e^x
}

Now let's look at the code that implements the actual blurring. I will be using a GLSL shader, but you can use anything you like.

First we start off by blurring in one direction from the center of every pixel:

const float weights[33] = float[]( 
 0.0702, 0.0699, 0.0691, 0.0676, 0.0657, 0.0633, 0.0605, 0.0573, 
 0.0539, 0.0502, 0.0464, 0.0426, 0.0387, 0.0349, 0.0312, 0.0277, 
 0.0244, 0.0213, 0.0184, 0.0158, 0.0134, 0.0113, 0.0095, 0.0079, 
 0.0065, 0.0053, 0.0043, 0.0035, 0.0028, 0.0022, 0.0017, 0.0013, 0.0010
);

vec2 offset = 1.0 / vec2(resolutionX,resolutionY); // step in UV space for moving 1 pixel
const int size = 32; // radius of the blur; this should be the weights array size - 1 (the extra entry is the center pixel)

vec2 direction = vec2(1,0); // currently just horizontal
vec4 centerColor = texture(c, uv);
float totalWeight = 0.0;
vec3 totalColor = vec3(0.0);

//Loop over all the pixels in the +x direction
for(int i = 0; i <= size; i++) {
    float weight = weights[i]; // Get our current weight
    vec2 neighborUV = uv + direction*i*offset; // Get the UV coords of the neighbor
    if (neighborUV.x > 1.0 || neighborUV.x < 0.0 || neighborUV.y > 1.0 || neighborUV.y < 0.0) break; // Don't sample outside of the screen
    totalColor += texture(c,neighborUV).xyz * weight; // Add neighbor color to total
    totalWeight += weight; // Add weight to total
}

color = totalColor / totalWeight; // calculate average by dividing by the total weight used

This gets us a blur in one direction. To get the full horizontal blur, we can repeat the for loop in the negative direction, replacing direction with its negative (and starting that second loop at i = 1 so the center pixel isn't counted twice):

vec2 neighborUV = uv + -direction*i*offset; // Get the UV coords (negative direction)

If you did it correctly you should get something like this:

Before

⬆ Input | Output ⬇

After

To get the full blur you have to run the filter over the image twice, switching the direction from horizontal to vertical.

so:

vec2 direction = vec2(1,0); // currently just horizontal

Would become

vec2 direction;
if (horizontal) direction = vec2(1,0); // in the second pass we set horizontal to false
else direction = vec2(0,1);

You can't just add all four loops to the first pass, because the vertical blur would then be sampling the original unblurred pixels instead of the horizontally blurred result.

So you need to draw this first horizontal blur to a texture and then blur it again, but vertically.
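
To make that concrete, here is a minimal sketch of the whole separable blur as one fragment shader, under the assumption that the host sets a `horizontal` uniform and provides the screen size as `resolutionX`/`resolutionY` uniforms (these names are placeholders; adapt them to your own setup):

#version 330 core

uniform sampler2D c;         // texture to blur: the scene on pass 1, the pass-1 result on pass 2
uniform float resolutionX;   // screen width in pixels
uniform float resolutionY;   // screen height in pixels
uniform bool horizontal;     // true for the first pass, false for the second

in vec2 uv;
out vec4 fragColor;

const int size = 32;
const float weights[33] = float[](
    0.0702, 0.0699, 0.0691, 0.0676, 0.0657, 0.0633, 0.0605, 0.0573,
    0.0539, 0.0502, 0.0464, 0.0426, 0.0387, 0.0349, 0.0312, 0.0277,
    0.0244, 0.0213, 0.0184, 0.0158, 0.0134, 0.0113, 0.0095, 0.0079,
    0.0065, 0.0053, 0.0043, 0.0035, 0.0028, 0.0022, 0.0017, 0.0013, 0.0010
);

void main() {
    vec2 offset = 1.0 / vec2(resolutionX, resolutionY); // step in UV space for moving 1 pixel
    vec2 direction = horizontal ? vec2(1, 0) : vec2(0, 1);

    float totalWeight = 0.0;
    vec3 totalColor = vec3(0.0);

    // Walk outward from the center in both directions.
    // The negative side starts at i = 1 so the center pixel is only counted once.
    for (int side = 0; side < 2; side++) {
        vec2 dir = (side == 0) ? direction : -direction;
        for (int i = (side == 0) ? 0 : 1; i <= size; i++) {
            vec2 neighborUV = uv + dir * float(i) * offset;
            if (neighborUV.x > 1.0 || neighborUV.x < 0.0 ||
                neighborUV.y > 1.0 || neighborUV.y < 0.0) break; // don't sample outside the screen
            float weight = weights[i];
            totalColor += texture(c, neighborUV).xyz * weight;
            totalWeight += weight;
        }
    }

    fragColor = vec4(totalColor / totalWeight, 1.0);
}

On the host side you render the scene into a texture, run this shader over it with horizontal set to true into a second texture, and then run it again with horizontal set to false to get the fully blurred result.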

This gives us the full blur:

Blurred

Weighting

While simply blurring gets rid of the noise, it also gets rid of all the detail that makes the scene readable.

To get around this we need to weight our blur by more information. The best candidate for this is information from the G-buffer (geometry buffer). This includes scene normals, world positions and depth.
I will only be using the normals for this example but you will get better results by using a mixture of them and tweaking their influence until it looks good.

Normals

As you can see, big changes in the normals give a good estimate of the edges in our scene, which we do not want to blur across.
So we will add a parameter to our function that weights the blur by the change in normals:

const float edgeFallSpeed = 6.0; // controls how quickly the weight falls off across normal changes
vec4 centerColor = texture(c, uv);
float totalWeight = 0.0;
vec3 totalColor = vec3(0.0);
float edgeWeight = 1.0;
vec3 lastNormal;

//Loop over all the pixels in the positive direction
for(int i = 0; i <= size; i++) {

    float weight = weights[i]; // Get our current weight
    vec2 neighborUV = uv + direction*i*offset; // Get the UV coords
    if (neighborUV.x > 1.0 || neighborUV.x < 0.0 || neighborUV.y > 1.0 || neighborUV.y < 0.0) break; // Don't sample outside of the screen
    vec3 normal = texture(N, neighborUV).xyz; // Get the current normal
    if (i != 0) { // at the center we have nothing to compare to, so skip it
        float diff = 1.0 - dot(lastNormal, normal); // Difference with the normal of the previous pixel
        edgeWeight -= diff * edgeFallSpeed;
        if (edgeWeight < 0.0) break; // stop sampling once we no longer contribute
        weight *= edgeWeight;
    }
    lastNormal = normal;
    totalColor += texture(c,neighborUV).xyz * weight; // Add neighbor color to total
    totalWeight += weight;
}

color = totalColor / totalWeight;

You can also see here why we split the blur into separate directions: it lets us stop the blur independently on each side of the pixel.

This makes our blur stop at hard edges and blur less across steep changes, which makes edges visible again.

Blur with clear edges

As mentioned before, you can improve this by adding more data to the weighting, like the difference in depth or world position between pixels.
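
As a sketch of what that could look like, a depth term can be folded into the same edge weight inside the loop. Here `D` is assumed to be a sampler containing linear depth, and `lastDepth` and `depthFallSpeed` are hypothetical names in the spirit of `lastNormal` and `edgeFallSpeed` above:

// Sketch only: an extra depth term next to the normal term inside the blur loop.
// D, lastDepth and depthFallSpeed are assumed/hypothetical names.
float depth = texture(D, neighborUV).x; // linear depth of the neighbor
if (i != 0) {
    float normalDiff = 1.0 - dot(lastNormal, normal);
    float depthDiff  = abs(depth - lastDepth);
    edgeWeight -= normalDiff * edgeFallSpeed + depthDiff * depthFallSpeed;
    if (edgeWeight < 0.0) break; // stop sampling once we no longer contribute
    weight *= edgeWeight;
}
lastNormal = normal;
lastDepth  = depth;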

When applied to a noisy image it dramatically reduces noise

Noisy image denoised

Choosing the right data

As keen-eyed readers might have noticed, while this technique removes noise it also removes details like textures.
There are some ways to get around that.

One good solution is to weight the blurring by the difference in color. In my case the noise is mainly in the lighting, causing pixels to become brighter or darker; the brightness changes a lot, but the hue of the color doesn't. We can check the difference in hue either by calculating the actual hue or, as a simpler approach, by treating the color as a direction and checking the change in that direction.

We can do this using the dot product

float colorDiff;
// Cant check black pixels so assume they are the same color
if(length(lastColor) < 0.001 || length(color) < 0.001) colorDiff = 0.0;
else 
{
    colorDiff = 1.0-abs(dot(normalize(lastColor),normalize(color)));  // Get the difference in color (hue)
}
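
The resulting colorDiff can then be subtracted from the edge weight in the same way as the normal difference, for example:

edgeWeight -= colorDiff * colorFallSpeed; // colorFallSpeed is just another hypothetical tuning constant, like edgeFallSpeed
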
Colors slightly preserved

A better solution is to not blur your detailed textures at all. In most cases not all of your data is noisy; in my case only the lighting is noisy, while the colors/textures of the frame aren't.

Because of this, if you split the noisy data from the clean data, you can denoise only the noisy part and recombine it with the rest to get a better final image.
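
As a minimal sketch of that recombination step, assuming the renderer writes albedo and raw lighting to separate targets (the `albedoTex` sampler and `blurredLighting` value are placeholder names):

// Sketch: denoise only the lighting, then multiply the clean albedo back in.
vec3 albedo   = texture(albedoTex, uv).xyz; // clean textures/colors, never blurred
vec3 lighting = blurredLighting;            // result of the weighted blur above
color = albedo * lighting;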

Final denoising
Blending seams between objects

Idea

When putting meshes/objects together in games, the seam between them can look very jarring. This is especially the case with objects used for terrain. By merging and blending the edges together, seams look much smoother and more realistic.

Preview

But how do we blend objects together in a game engine? First let's think about how you would create a transition in 2D.
To create a transition in 2D you would overlap the two images and then smoothly blend across the edge so that one image transitions into the other.

Preview

In 3D this creates a problem, though, since we cannot overlap our pixels like that without rendering the scene in a much more complex way.
To solve this problem we mirror each side of the image across the seam and then blend; that way we do have color available on the other side of the seam.

Preview

Seam detection

This technique relies on knowing where the seams between meshes are, so we need a way to detect them.
We could rely on G-buffer info, such as checking whether the surface normals change, but that would also smooth intended seams on the meshes themselves. The technique I used is generating an ID for every object. There are a lot of ways to do this; since I am implementing this in Unity, the simplest way is adding a render pass.

For this pass I override every object's shader with one that hashes the object's position into a color as an ID, then draw that pass to a render texture.

// ObjectIDShader
// HLSL:

float Hash(float3 p)
{
    p = frac(p * 0.3183099 + 0.1);
    p *= 17.0;
    return frac(p.x * p.y * p.z * (p.x + p.y + p.z));
}

half4 frag(Varyings input) : SV_Target
{
    float3 normal = normalize(input.normalWS);
    float3 objectPosition = unity_ObjectToWorld._m03_m13_m23;
    float hashValue = Hash(objectPosition);
    return half4(hashValue, 0, 0, 1);
}

If you draw this to the screen, you should get something like this:

Preview

Now that we have an ID per object, detecting seams is pretty straightforward. In a post-process shader we run a kernel that detects whether the ID changes:

// SeamDetectionPostProcess
// HLSL:

int _KernelSize;
float _KernelRadius;
half4 Frag(Varyings input) : SV_Target
{	
    float2 UV = float2(input.uv.x, 1-input.uv.y); // In my case I had to flip the UV

    // Depth and color of the current pixel
    float4 sceneColor = SAMPLE_TEXTURE2D(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, UV);
    float sceneDepth = Linear01Depth(SAMPLE_TEXTURE2D(_ObjectDepthTexture, sampler_ObjectDepthTexture, UV).r, _ZBufferParams);
    
    // The ID we set in the first pass
    float4 objectIDColor = SAMPLE_TEXTURE2D(_ObjectIDTexture, sampler_ObjectIDTexture, UV);
    float4 color = sceneColor;

    //Kernel, loop over all the pixels in a radius
    for(int x = -_KernelSize; x <= _KernelSize; x++) {
        for(int y = -_KernelSize; y <= _KernelSize; y++) {
            // Divide by scenedepth to keep sample size consistent over distance
            float2 offset = float2(x,y)*_KernelRadius/sceneDepth/_KernelSize;
            float2 SampleUV = UV + offset; // UV location of the pixel being tested

            // Test the id of the offset pixel against the middle pixel
            float4 id = SAMPLE_TEXTURE2D(_ObjectIDTexture, sampler_ObjectIDTexture, SampleUV);

            if (id.x != objectIDColor.x) {
                color = float4(1,0,0,1); // Set the color to red if there is a seam
            }
        }
    }
    return color;
}

With this you basically have an outline shader:

Preview

Blending

Now comes the actual blending.
To smoothly blend the materials we would need some overlapping texture data, which you don't really have access to in a post process. To work around this we mirror what's on the other side of the seam.

For this we use our kernel shader: we find the point of symmetry and mirror the pixels across it. To find this point we look for the closest offset at which the object ID changes.

// SeamMirrorPostProcess
// HLSL:

int _KernelSize; 

float _KernelRadius;

half4 Frag(Varyings input) : SV_Target
{	
    float2 UV = float2(input.uv.x, 1-input.uv.y); // In my case I had to flip the UV

    // Depth and color of the current pixel
    float4 sceneColor = SAMPLE_TEXTURE2D(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, UV);
    float sceneDepth = Linear01Depth(SAMPLE_TEXTURE2D(_ObjectDepthTexture, sampler_ObjectDepthTexture, UV).r, _ZBufferParams);
    // The ID we set in the first pass
    float4 objectIDColor = SAMPLE_TEXTURE2D(_ObjectIDTexture, sampler_ObjectIDTexture, UV);

    float2 seamLocation = float2(0,0); // Default to current pixel if we don't find a seam
    float minimumDistance = 9999999;

    // Kernel, loop over all the pixels in a radius
    for(int x = -_KernelSize; x <= _KernelSize; x++) {
        for(int y = -_KernelSize; y <= _KernelSize; y++) {
            // Divide by scenedepth to keep sample size consistent over distance
            float2 offset = float2(x,y)*_KernelRadius/sceneDepth/_KernelSize;
            float2 SampleUV = UV + offset; // UV location of the pixel being tested

            // Test the id of the offset pixel against the middle pixel
            float4 id = SAMPLE_TEXTURE2D(_ObjectIDTexture, sampler_ObjectIDTexture, SampleUV);
            if (id.x != objectIDColor.x) 
            {
                float squareDistance = dot(offset,offset); // Same as offset.x*offset.x + offset.y*offset.y
                if (squareDistance < minimumDistance)
                {
                    minimumDistance = squareDistance; // Found a new closest seam
                    seamLocation = offset;
                }
            }
        }
    }
    // Mirror the pixel on the other side of the seam
    float4 otherColor = SAMPLE_TEXTURE2D(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, UV + seamLocation * 2);
    return otherColor;
}

This should produce a trippy effect where both sides of the seam are mirrored. I split the image into halves with and without the effect for clarity.

Preview

If instead of directly mirroring we lerp the colors based on the distance from the seam, it becomes a nice transition.
Since we mirror on both sides of the seam we should also adjust the lerp so that the mix is 50% at the seam and 0% at the edge of our search radius.

// SeamBlendPostProcess
// HLSL:
int _KernelSize;
float _KernelRadius;

half4 Frag(Varyings input) : SV_Target
{	
// ... seam-search kernel from the previous shader ...

// Get the pixel on the other side of the seam
float4 otherColor = SAMPLE_TEXTURE2D(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, UV + seamLocation * 2 );

// This is the maximum distance at which our kernel can find a seam
float maxSearchDistance = _KernelRadius / sceneDepth;
// We start from 0.5 so the mix is 50% right on the seam
float weight = saturate(0.5 - sqrt(minimumDistance) / maxSearchDistance);
return lerp(sceneColor, otherColor, weight);
}

This should give us a nicely blended seam:

Preview

Conditions

There is one problem with our transitions.
They happen even if the objects aren't actually next to each other, so the tops of the rocks are also transitioned.
We can fix this by adding a simple depth check to our weight:

// SeamBlendPostProcess
// HLSL:

int _KernelSize;
float _KernelRadius;
float _DepthFalloff;

half4 Frag(Varyings input) : SV_Target
{	
    // ... seam-search kernel from the previous shader ...

    // Get the pixel on the other side of the seam
    float4 otherColor = SAMPLE_TEXTURE2D(_CameraOpaqueTexture, sampler_CameraOpaqueTexture, UV + seamLocation * 2 );

    float otherDepth = Linear01Depth(SAMPLE_TEXTURE2D(_ObjectDepthTexture, sampler_ObjectDepthTexture, UV + seamLocation * 2).r, _ZBufferParams);
    float depthDiff = abs(otherDepth - sceneDepth);

    // This is the maximum distance at which our kernel can find a seam
    float maxSearchDistance = _KernelRadius / sceneDepth;

    // We start from 0.5 so the mix is 50% right on the seam
    float spatialWeight = saturate(0.5 - sqrt(minimumDistance) / maxSearchDistance);
    float depthWeight = saturate(1.0 - depthDiff / _DepthFalloff * _KernelRadius);
    float totalWeight = (depthWeight + spatialWeight) * 0.5;
    return lerp(sceneColor, otherColor, totalWeight);
}

This creates our final effect

Preview

Improvements

In this article I went over a fairly simple way to implement this, but it is very slow and has some artifacts.

There are a lot of improvements still to be made which I will not cover in this article; here are some examples:

  • Compute: there are a lot of performance benefits available if you port this to a compute shader; for example, by running a cheap masking pass first you only have to run the expensive search on pixels near edges.
  • Better searching: a square search kernel is a very limited way of searching, and there are better approaches, like a spiral pattern that refines over multiple iterations.
  • Custom IDs: using material property blocks you can set custom IDs for objects, allowing you to control what blends with what, or even the blend sizes.
  • Normal-based filtering: adding a check for normal differences fixes transitions happening around corners or at odd angles.
  • Multiple-seam support: adding support for detecting multiple seams fixes incorrect blending when more than two objects intersect at a point.

Source code/Implementation

An open implementation of this code is available for free on the Unity Asset Store:

Free & Open - Asset Store

There is also a paid version which implements most of the improvements mentioned above, and more.

Pro (Paid) - Asset Store