@MaximeHeckel

Moebius-style post-processing and other stylized shaders

March 26, 2024 / 33 min read

Last Updated: March 26, 2024

Physically Based Rendering (PBR) is often the de facto stylistic choice for 3D developers, as tools and frameworks, for the most part, optimize for realistic materials, shadows, and lighting out of the box. However, in an industry constantly pushing for realism, adopting Non-Photorealistic Rendering (NPR) or stylized shaders while still following physical principles can really make your project stand out.

There's been a recent resurgence of distinct artistic directions through the use of stylized shaders in many games, a trend that has only grown since Zelda: Breath of the Wild came out. While Hi-Fi Rush or the Persona series are great examples of Non-Photorealistic Rendering, a few indie games out there (e.g. Sable and Synergy) are doing wonders by emulating a flavor of hand-drawn art style I particularly enjoy: that of legendary artist Jean Giraud, a.k.a. Moebius.

So far, I've only ever tried reproducing physically accurate lighting effects or basing my scenes in the "real world" with no styling at all, and I wanted to see whether reproducing some aspects of Jean Giraud's visual art in a WebGL environment was possible. Luckily, @UselessGameDev made a great video on reproducing the Moebius style. In it, he deconstructs the key characteristics that make the Moebius style so unique and rebuilds each of them in Unity. After carefully studying the subjects covered in that video and spending a couple of days heads-down on the problem, I managed to build what is to me a very satisfying Moebius post-processing pass on top of a React Three Fiber scene. It features not only all the elements of Moebius' style (outlines, crosshatched shadows, and specular highlights) but also elements usually featured in his more sci-fi-focused comic books: deserts, spaceships, and alien planets looming in the distance.

In this article, we will deconstruct this art style and reproduce it step-by-step as a post-processing pass that you can put on top of any React Three Fiber scene. We will learn how to output hand-drawn-looking outlines by leveraging an edge detection filter and stylize our final render while preserving physical elements like lighting and shadows with an extra touch of creativity. Finally, we'll look at some of the shortcomings of this render pipeline and how I worked around them in my own creations.

Setting up the stage for post-processing

We want to apply the Moebius style to a React Three Fiber scene as a whole, not just as a material on the individual meshes that compose it. Knowing that, we'll need to use a post-processing pass that can apply this style to an already rendered scene. A post-processing pass acts like an "image filter" on top of your scene, letting you apply any modifications you'd like. @react-three/postprocessing has many effects built in, like Bloom, Vignette, or Depth of Field, which you can find on many popular 3D websites. However, none of the prebuilt ones are even close to what we want to achieve, so we'll have to build it ourselves by implementing our own Pass based on a custom shader.

Unfortunately, @react-three/postprocessing does not allow for custom passes, so we'll need to go "vanilla" and use more Three.js-flavored code for this part. You may be familiar with how Three.js projects set up their post-processing pipeline using EffectComposer and the many flags that need to be turned on for each pass to take effect, but thanks to @react-three/drei's Effects component, this entire setup is now a bit simpler:

Our scene and a skeleton post-processing pass

import { Effects } from '@react-three/drei';
import { extend } from '@react-three/fiber';
import { useRef } from 'react';
import { Pass } from 'postprocessing';
import * as THREE from 'three';

class MoebiusPass extends Pass {
  constructor(args) {
    super();
  }

  dispose() {}

  render(renderer, writeBuffer, readBuffer) {}
}

extend({ MoebiusPass });

const MoebiusScene = () => {
  const mesh = useRef();
  const ground = useRef();

  const lightPosition = [10, 10, 10];

  return (
    <>
      <directionalLight
        castShadow
        position={lightPosition}
        intensity={4.5}
        color="#fff"
        target={ground.current}
      />
      <mesh castShadow receiveShadow position={[-1, 2, 1]}>
        <sphereGeometry args={[1, 32, 32]} />
        <meshStandardMaterial color="orange" />
      </mesh>
      <mesh
        castShadow
        receiveShadow
        rotation={[0, Math.PI / 3, 0]}
        position={[2, 0.75, 2]}
      >
        <boxGeometry args={[1.5, 1.5, 1.5]} />
        <meshStandardMaterial color="hotpink" />
      </mesh>
      <mesh
        ref={ground}
        receiveShadow
        rotation={[-Math.PI / 2, 0, 0]}
        position={[0, 0, 0]}
      >
        <planeGeometry args={[10, 10, 100, 100]} />
        <meshStandardMaterial color="white" />
      </mesh>
      <Effects>
        <moebiusPass />
      </Effects>
    </>
  );
};

// ...

You can see in the code snippet above that:

  1. We created a MoebiusPass class that extends Pass. This class will let us define everything we need for our Moebius style.
  2. We "extended" React Three Fiber's namespace with our MoebiusPass, allowing us to use it as a standard JSX component.
  3. Finally, we used our newly created pass inside the Effects component.

Now that we wired this together, let's create a custom shader for our post-processing pass to render on the screen. We'll first create a very simple shader to make sure that everything functions as intended:

Sample post-processing pass using a custom shader material

//...
import { FullScreenQuad } from 'three-stdlib';

const moebiusShader = {
  uniforms: {
    tDiffuse: { value: null },
  },
  vertexShader: `
    varying vec2 vUv;

    void main() {
      vUv = uv;

      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    varying vec2 vUv;
    uniform sampler2D tDiffuse;

    void main() {
      vec2 uv = vUv;
      vec4 color = texture2D(tDiffuse, uv);

      gl_FragColor = vec4(color.r + 0.2, color.g, color.b, color.a);
    }
  `,
};

class MoebiusPass extends Pass {
  constructor(args) {
    super();

    this.material = new THREE.ShaderMaterial(moebiusShader);
    this.fsQuad = new FullScreenQuad(this.material);
  }

  dispose() {
    this.material.dispose();
    this.fsQuad.dispose();
  }

  render(renderer, writeBuffer, readBuffer) {
    this.material.uniforms.tDiffuse.value = readBuffer.texture;

    if (this.renderToScreen) {
      renderer.setRenderTarget(null);
      this.fsQuad.render(renderer);
    } else {
      renderer.setRenderTarget(writeBuffer);
      if (this.clear) renderer.clear();
      this.fsQuad.render(renderer);
    }
  }
}
//...

Much like I explored last year in my blog post on render targets, our pass consists of a simple FullScreenQuad that overlays the scene and a material that takes the underlying scene as a texture (tDiffuse) and modifies it. In this case, you should see that our base scene now has a red tint, as we increased the intensity of the red channel in the fragment shader of our post-processing pass.

Drawing outlines with shaders for our React Three Fiber scene

With our post-processing pass now implemented, we can start making our way toward building some of the key aspects of the Moebius style. One of the first striking elements of this art style is its consistent outlines, as most of Jean Giraud's creations are drawings. Thus, we will need to find a way to draw outlines around all the objects of our React Three Fiber scene, and to do that, we'll need to implement an edge detection filter.

An edge detection filter identifies points in our scene where the brightness changes sharply, for instance, between the pixels of an object in the foreground and the background, letting us place an outline around each object quite accurately. In our case:

  • We'll implement a Sobel filter, a type of edge detection filter.
  • We'll feed our filter both the Depth data and Normal data of our scene and combine the results obtained.

Getting the Depth Buffer of our scene

Why would we need the "depth data" of our scene? This data tells us on which plane each object of our scene lies, which lets us find any "depth discontinuities" and highlight where the outer outlines of the overall scene should be.

First things first, let's work our way towards getting the depth information of our scene. We can use our good ol' useFBO hook to create a render target, but this time, we will also pass a depthTexture to it and flip the depthBuffer option to true.

Setting up our depthTexture

const depthTexture = new THREE.DepthTexture(
  window.innerWidth,
  window.innerHeight
);

const depthRenderTarget = useFBO(window.innerWidth, window.innerHeight, {
  depthTexture,
  depthBuffer: true,
});

We then use our render target within our useFrame hook to render the current scene onto it and pass its resulting depthTexture to our post-processing pass.

Passing the Depth data of the scene to our post-processing pass

const MoebiusScene = () => {
  //...
  const { camera } = useThree();

  useFrame((state) => {
    const { gl, scene, camera } = state;

    gl.setRenderTarget(depthRenderTarget);
    gl.render(scene, camera);

    gl.setRenderTarget(null);
  });

  return (
    <>
      {/*...*/}
      <Effects>
        <moebiusPass
          args={[
            {
              depthRenderTarget,
              camera,
            },
          ]}
        />
      </Effects>
    </>
  );
};
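Note that the fragment shader we're about to write reads a few new uniforms: tDepth, cameraNear, cameraFar, and, a bit later, resolution. The demos in this article handle that wiring internally, but here's a minimal sketch of what the updated constructor could look like given the args we just passed. The uniform names match the shader snippets below; everything else here is my own assumption, not the article's exact code.

Sketch: wiring the depth uniforms in the MoebiusPass constructor

class MoebiusPass extends Pass {
  constructor({ depthRenderTarget, camera }) {
    super();

    // Assumed wiring: forward the depth texture and the camera's
    // near/far planes to the uniforms consumed by readDepth below.
    this.material = new THREE.ShaderMaterial({
      ...moebiusShader,
      uniforms: {
        tDiffuse: { value: null },
        tDepth: { value: depthRenderTarget.depthTexture },
        cameraNear: { value: camera.near },
        cameraFar: { value: camera.far },
        resolution: {
          value: new THREE.Vector2(window.innerWidth, window.innerHeight),
        },
      },
    });
    this.fsQuad = new FullScreenQuad(this.material);
  }

  // dispose() and render() stay as before
}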

We can now try to visualize our depth texture by rendering it through the fragment shader of our custom post-processing pass. However, we'll first need to know how to "read" this texture within our fragment shader, as this is not your typical sampler2D data that you can read with texture2D in GLSL. Luckily, @mattdesl provided a great example of how to read depth textures in a demo he built for the Three.js documentation.

With this function, we obtain the value of the "depth" of the pixel represented by a grayscale color, which we can then return as an output of our fragment shader:

Fragment shader returning the depth of every pixel represented by a grayscale color

#include <packing>
varying vec2 vUv;
uniform sampler2D tDiffuse;
uniform sampler2D tDepth;
uniform float cameraNear;
uniform float cameraFar;

float readDepth( sampler2D depthTexture, vec2 coord ) {
  float fragCoordZ = texture2D( depthTexture, coord ).x;
  float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
  return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}

void main() {
  vec2 uv = vUv;

  float depth = readDepth( tDepth, vUv );

  gl_FragColor = vec4(vec3(depth), 1.0);
}

The demo below showcases the output of our post-processing pass returning the "depth data" of our scene 👇

Getting the Normal Buffer of our scene

While the depth data gives us information on the outer boundaries of our scene, the Normal data will help us find the inner outlines within it:

  • If our edge detection filter finds higher differences in normals, that will mean there's a very sharp corner, thus a very pronounced outline.
  • If the difference in normals is not that significant, we'll get a more subtle outline, or no outline under a certain threshold.

If you read my blog post on Caustics, you might be familiar with the exercise of "capturing the normals" in a render target. While this is a valid technique, it is a bit too "brute force" for our post-processing use case: we'd have to manually apply a NormalMaterial to each mesh composing our scene 😮‍💨. But we don't have time for that and have better things to do! Instead, we'll override the entire scene with a NormalMaterial (or CustomNormalMaterial in our case, because we'll need to tweak it in later parts) using the scene.overrideMaterial property:

Example of usage of the overrideMaterial property of the scene object

scene.matrixWorldNeedsUpdate = true;
scene.overrideMaterial = NormalMaterial;

As with the depth texture, we'll define a dedicated render target for our Normal data, render our scene to it while outputting its Normal data, and pass the result as an argument to our post-processing pass.

Passing the Normal data of our scene to our post-processing pass

const MoebiusScene = () => {
  const normalRenderTarget = useFBO();
  //...

  useFrame((state) => {
    const { gl, scene, camera } = state;

    gl.setRenderTarget(depthRenderTarget);
    gl.render(scene, camera);

    const originalSceneMaterial = scene.overrideMaterial;
    gl.setRenderTarget(normalRenderTarget);

    scene.matrixWorldNeedsUpdate = true;
    scene.overrideMaterial = CustomNormalMaterial;

    gl.render(scene, camera);
    scene.overrideMaterial = originalSceneMaterial;

    gl.setRenderTarget(null);
  });

  return (
    <>
      {/*...*/}
      <Effects>
        <moebiusPass
          args={[
            {
              depthRenderTarget,
              normalRenderTarget,
              camera,
            },
          ]}
        />
      </Effects>
    </>
  );
};

Finally, if needed, we can visualize the Normal data by returning it as output of our custom post-processing shader pass. There are no tricks here: we can use the standard texture2D function to read the data of our "Normal texture."

Fragment shader from our post-processing pass outputting the Normal data of the scene

varying vec2 vUv;
uniform sampler2D tNormal;

void main() {
  vec2 uv = vUv;

  vec3 normal = texture2D(tNormal, vUv).rgb;

  gl_FragColor = vec4(normal, 1.0);
}

The resulting scene is visible below 👇 with just a few lines of code, we got the Normal data of our entire scene!

Edge detection with Sobel filters

We have all the data necessary to detect edges and draw the outlines of our scene! As stated in the introduction of this section, we'll use an edge detection filter and, more specifically, a Sobel filter for this task.

As we read the pixels of our scene in our post-processing pass, we'll use this filter to detect sharp changes in intensity (intensity here can be a difference in depth or normals). The Sobel filter has a few properties that will help us:

  • It's a convolution filter: we'll perform the edge detection by sliding a small 3x3 matrix (also known as a kernel) over each pixel.
  • The matrix itself has specific values (also known as weights) specifically set to detect differences in pixel intensity.
  • Those weights are applied to the pixel itself and its neighbors.
  • It has two kernels: one to detect edges along the x-axis and one along the y-axis.

The matrices are defined as follows:

Sx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

Sy = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

You may ask how we're supposed to apply this matrix/kernel to our pixel. Luckily, the math involved is not too complex here, and we can visualize it easily:

  • For each pixel, we'll "overlay" the center of the kernel.
  • We'll multiply each value of the kernel with the value of the pixel it overlays.
Diagram showcasing the process of applying the Sobel matrix on a given kernel

By adding up the result of this operation, we get a gradient or "rate of change," which corresponds to the difference in intensity between the pixels processed by the filter (overlaid by the matrix, to keep the explanation as visual as possible). If that output is non-zero and above a certain threshold, we've detected a potential edge! The widget below aims to showcase this with some simple examples:

Diagram showcasing how each of x-direction/y-direction Sobel Matrices respectively detect vertical/horizontal edges
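To make the arithmetic concrete, here's a small worked example with made-up depth values: take a 3x3 patch where the top two rows lie on a far object (depth 0.2) and the bottom row on a near one (depth 0.9), and apply the Sy kernel to it.

Sy kernel          depth patch
[-1 -2 -1]         [0.2 0.2 0.2]
[ 0  0  0]    ·    [0.2 0.2 0.2]
[ 1  2  1]         [0.9 0.9 0.9]

sum = (-1 - 2 - 1) * 0.2 + (0 + 0 + 0) * 0.2 + (1 + 2 + 1) * 0.9
    = -0.8 + 0.0 + 3.6
    = 2.8 -> large gradient: a horizontal edge runs through this patch

Applying Sx to the same patch sums to zero, which is expected: there's no vertical edge here. A perfectly uniform patch would yield zero for both kernels, meaning no edge at all.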

Code-wise, this is how we can define and apply the Sobel filter in our fragment shader:

Applying the Sobel filter to our Depth data

//...
const mat3 Sx = mat3( -1, -2, -1, 0, 0, 0, 1, 2, 1 );
const mat3 Sy = mat3( -1, 0, 1, -2, 0, 2, -1, 0, 1 );

void main() {
  vec2 texel = vec2(1.0 / resolution.x, 1.0 / resolution.y);
  float outlineThickness = 3.0;
  vec4 outlineColor = vec4(0.0, 0.0, 0.0, 1.0);
  vec4 pixelColor = texture2D(tDiffuse, vUv);

  // Getting the value of each pixel of our kernel -> depth11 is our current pixel
  float depth00 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(-1, 1));
  float depth01 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(-1, 0));
  float depth02 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(-1, -1));

  float depth10 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(0, -1));
  float depth11 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(0, 0));
  float depth12 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(0, 1));

  float depth20 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(1, -1));
  float depth21 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(1, 0));
  float depth22 = readDepth(tDepth, vUv + outlineThickness * texel * vec2(1, 1));

  // Calculating the output of the filter applied to each pixel of the kernel
  float xSobelValue = Sx[0][0] * depth00 + Sx[1][0] * depth01 + Sx[2][0] * depth02 +
    Sx[0][1] * depth10 + Sx[1][1] * depth11 + Sx[2][1] * depth12 +
    Sx[0][2] * depth20 + Sx[1][2] * depth21 + Sx[2][2] * depth22;

  float ySobelValue = Sy[0][0] * depth00 + Sy[1][0] * depth01 + Sy[2][0] * depth02 +
    Sy[0][1] * depth10 + Sy[1][1] * depth11 + Sy[2][1] * depth12 +
    Sy[0][2] * depth20 + Sy[1][2] * depth21 + Sy[2][2] * depth22;

  // Combining both vertical and horizontal outputs to detect all edges
  float gradientDepth = sqrt(pow(xSobelValue, 2.0) + pow(ySobelValue, 2.0));

  float outline = gradientDepth;

  vec4 color = mix(pixelColor, outlineColor, outline);

  gl_FragColor = color;
}

Here, we apply the Sobel filter to the depth data we acquired in the previous section. If we try to visualize the output of this post-processing pass, this is what we get:

We only see the outer outline of the scene! To get the complete set of outlines, we must apply the same Sobel filter to our Normal data and then combine its resulting gradient with the gradient we just obtained from the depth data. The code to do this is luckily not that different, albeit repetitive, so you may want to abstract it (I sketch one way to do so right after the snippet below):

Applying the Sobel filter to both our Depth data and Normal data

//...
float luma(vec3 color) {
  const vec3 magic = vec3(0.2125, 0.7154, 0.0721);
  return dot(magic, color);
}

void main() {
  //...
  // Get the grayscale output of each pixel of our kernel
  float normal00 = luma(texture2D(tNormal, vUv + outlineThickness * texel * vec2(-1, -1)).rgb);
  float normal01 = luma(texture2D(tNormal, vUv + outlineThickness * texel * vec2(-1, 0)).rgb);
  // ... and so on

  float xSobelNormal =
    Sx[0][0] * normal00 + Sx[1][0] * normal10 + Sx[2][0] * normal20 +
    Sx[0][1] * normal01 + Sx[1][1] * normal11 + Sx[2][1] * normal21 +
    Sx[0][2] * normal02 + Sx[1][2] * normal12 + Sx[2][2] * normal22;

  float ySobelNormal =
    Sy[0][0] * normal00 + Sy[1][0] * normal10 + Sy[2][0] * normal20 +
    Sy[0][1] * normal01 + Sy[1][1] * normal11 + Sy[2][1] * normal21 +
    Sy[0][2] * normal02 + Sy[1][2] * normal12 + Sy[2][2] * normal22;

  float gradientNormal = sqrt(pow(xSobelNormal, 2.0) + pow(ySobelNormal, 2.0));

  float outline = gradientDepth + gradientNormal;

  vec4 color = mix(pixelColor, outlineColor, outline);

  gl_FragColor = color;
}
Diagram showcasing how combining the outputs of the Sobel filters for both Depth and Normal data gives us the outlines for the entire scene and each mesh within it
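If you do abstract the repeated sampling, one option is to rebuild the Sobel weights from the loop offsets instead of writing out all nine taps twice. Here's a sketch of that refactor (my own, equivalent in spirit to the snippets above; it assumes readDepth, luma, texel, and outlineThickness are in scope as defined earlier):

Possible abstraction of the Sobel sampling into a single loop

vec2 sobelDepth = vec2(0.0);
vec2 sobelNormal = vec2(0.0);

for (int i = -1; i <= 1; i++) {
  for (int j = -1; j <= 1; j++) {
    vec2 coord = vUv + outlineThickness * texel * vec2(i, j);
    // Sobel weights rebuilt from the offsets:
    // wx yields -2..2 column weights, wy the -2..2 row weights
    float wx = float(i) * (2.0 - abs(float(j)));
    float wy = float(j) * (2.0 - abs(float(i)));

    float depth = readDepth(tDepth, coord);
    float normal = luma(texture2D(tNormal, coord).rgb);

    sobelDepth += vec2(wx, wy) * depth;
    sobelNormal += vec2(wx, wy) * normal;
  }
}

float gradientDepth = length(sobelDepth);
float gradientNormal = length(sobelNormal);
float outline = gradientDepth + gradientNormal;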

If we run the code we just implemented, our current post-processing pipeline will output our original scene, overlaid with an outline surrounding each mesh!

Stylized outlines, shadows, and lighting patterns

From the looks of the render of our scene in the previous demo, it's clear that the outlines alone are not enough to convey the art style of Jean Giraud. In his video, @UselessGameDev highlights a couple of specificities of the artist's technique that make his creations so unique:

  1. The outlines are supposed to be hand-drawn. Ours are perfectly straight, which doesn't give that "handmade" feel.
  2. The shadows in Moebius' creations follow a crosshatched pattern: the darker the shadow, the denser the pattern. Currently, our scene still renders the standard "physically based" shadows.
  3. The artist also usually depicts specular lighting as plain white dots surrounded by an outline.

In this part, we'll examine and re-implement each of these specificities in our custom post-processing shader pass and try our hand at other stylized shaders along the way.

Giving our outlines a hand-drawn look in GLSL

We want our post-processing effect to make our scene look like a drawing, and those perfectly straight outlines won't cut it. Let's make them look more wiggly by applying a slight displacement using:

  • Some trigonometric functions like sin or cos.
  • A wiggle factor for us to increase or decrease the frequency and amplitude of the outline.
  • A hash function to apply some irregularities to the outline.
vec2 displacement = vec2(
  hash(gl_FragCoord.xy) * sin(gl_FragCoord.y * frequency),
  hash(gl_FragCoord.xy) * cos(gl_FragCoord.x * frequency)
) * amplitude / resolution.xy;

We defined our displacement as a pair of sinusoidal functions along the x- and y-axes. We kept the frequency low, at 0.08, as we want our outline to look like elongated waves. For the randomness, I picked up a simple hash function from this Shadertoy project by markjarzynski. Now, we need to add this displacement vec2 to each depth and normal buffer pixel processed in our Sobel filter:

Adding displacement when reading the Depth and Normal data for our outlines to appear hand-drawn

//...
float hash(vec2 p) {
  vec3 p3 = fract(vec3(p.xyx) * .1031);
  p3 += dot(p3, p3.yzx + 33.33);

  return fract((p3.x + p3.y) * p3.z);
}

void main() {
  //...
  vec2 displacement = vec2(
    hash(gl_FragCoord.xy) * sin(gl_FragCoord.y * frequency),
    hash(gl_FragCoord.xy) * cos(gl_FragCoord.x * frequency)
  ) * amplitude / resolution.xy;

  vec4 pixelColor = texture2D(tDiffuse, vUv);

  // Apply the displacement when reading the value of each pixel in the kernel
  float depth00 = readDepth(tDepth, vUv + displacement + outlineThickness * texel * vec2(-1, 1));
  float depth01 = readDepth(tDepth, vUv + displacement + outlineThickness * texel * vec2(-1, 0));
  // and so on...

  float normal00 = luma(texture2D(tNormal, vUv + displacement + outlineThickness * texel * vec2(-1, -1)).rgb);
  float normal01 = luma(texture2D(tNormal, vUv + displacement + outlineThickness * texel * vec2(-1, 0)).rgb);
  // and so on...
}

This displacement is subtle, yet it changes the entire look and feel of the scene. It now starts looking more like a drawing!

Subtle but efficient displacement applied to the outlines of our scene.

The demo below implements the displacement we just added and allows you to tweak the outline through:

  • The frequency of the sinusoid functions.
  • The amplitude of the wiggles.

Adding custom shadow patterns to our post-processing effect

To achieve a convincing Moebius style, we want to apply a custom shadow pattern to the shadows and the darker areas of the scene overall. That means that those patterns need to be coded within our post-processing fragment shader, where we'll need to:

  1. Detect whether a given pixel is part of a "dark" area.
  2. Color the pixel based on a given pattern.

To detect whether a pixel is part of a "dark" area of the overall scene, we can rely on the luminance of the pixel returned by the luma function we used earlier in this project to grayscale our Normal data. That is far from a bulletproof way to achieve this, but it's "good enough".

float pixelLuma = luma(pixelColor.rgb);

With this, we can build a simple tonal shadow pattern: if the luma of a given pixel falls under a specific threshold, we change its color to a darker shade. Below is an example where:

  • We declare a set of luma thresholds under which we change the pixel color.
  • Each threshold has a specific color: the lower the luma, the darker the color shade.

Custom tonal shadow pattern applied to pixels under a certain luma threshold

//...

if (pixelLuma <= 0.35) {
  pixelColor = vec4(0.0, 0.0, 0.0, 1.0);
}

if (pixelLuma <= 0.45) {
  pixelColor = pixelColor * vec4(0.25, 0.25, 0.25, 1.0);
}

if (pixelLuma <= 0.6) {
  pixelColor = pixelColor * vec4(0.5, 0.5, 0.5, 1.0);
}

if (pixelLuma <= 0.75) {
  pixelColor = pixelColor * vec4(0.7, 0.7, 0.7, 1.0);
}

//...

However, there's a catch! Our post-processing pass applies this shadow pattern to every pixel on the screen, including the background! That means that if we were to choose a background color with a low enough luminance, it would end up shaded. To solve this, we can reuse the depth buffer we introduced in the first part and use it to only apply the shadow pattern to objects that are close enough to the camera:

Making our tonal shadow pattern take into account the depth value of the pixel

//...
float depth = readDepth(tDepth, vUv);

if (pixelLuma <= 0.35 && depth <= 0.99) {
  pixelColor = vec4(0.0, 0.0, 0.0, 1.0);
}

if (pixelLuma <= 0.45 && depth <= 0.99) {
  pixelColor = pixelColor * vec4(0.25, 0.25, 0.25, 1.0);
}

if (pixelLuma <= 0.6 && depth <= 0.99) {
  pixelColor = pixelColor * vec4(0.5, 0.5, 0.5, 1.0);
}

if (pixelLuma <= 0.75 && depth <= 0.99) {
  pixelColor = pixelColor * vec4(0.7, 0.7, 0.7, 1.0);
}
//...

That gives us a first simple custom shadow pattern we can use as a foundation to build more complex ones in the next part!

Crosshatched and raster shadow patterns

We'll define our Moebius shadow following what @UselessGameDev established it in his video:

  • The lightest shadow will have diagonal stripes.
  • The second threshold will add vertical stripes.
  • Finally, the darkest shadows will have an extra set of horizontal stripes.

To draw the stripes, we can use GLSL's modulo function mod that lets us color pixels at a regular interval. For instance:

Crosshatched shadow pattern implemented by drawing stripes at given luma threshold

//...
float modVal = 8.0;

if (pixelLuma <= 0.35 && depth <= 0.99) {
  if (mod((vUv.y + displacement.y) * resolution.y, modVal) < 1.0) {
    pixelColor = outlineColor;
  }
}
if (pixelLuma <= 0.55 && depth <= 0.99) {
  if (mod((vUv.x + displacement.x) * resolution.x, modVal) < 1.0) {
    pixelColor = outlineColor;
  }
}
if (pixelLuma <= 0.80 && depth <= 0.99) {
  if (mod((vUv.x + displacement.x) * resolution.y + (vUv.y + displacement.y) * resolution.x, modVal) <= 1.0) {
    pixelColor = outlineColor;
  }
}
//...
  1. vUv.x * uResolution.x converts the UV coordinate to a pixel coordinate on the screen.
  2. The modulo function mod returns the remainder of the pixel's position along the x-axis divided by a specific value of 8.0.
  3. The <= 1.0 condition is true for the first 2 pixels (0 and 1) of every 8-pixel set. The value 1.0 corresponds to the outline thickness we established earlier: the larger the value, the bigger the shadow stripe.
  4. Thus, for every 8 pixels along the x-axis, we will set the first 2 pixels to black, creating a pattern of vertical stripes.

The widget below showcases this pattern on a small set of pixels for you to fully grasp the logic. It defines the full crosshatched shadow pattern with three luminance thresholds. As you decrease the luma value, more stripe patterns appear on the screen.

if ((pixelLuma <= 0.45
    && mod(vUv.y * uResolution.y, 8.0) <= 1.0) ||
    (pixelLuma <= 0.55
    && mod(vUv.x * uResolution.x, 8.0) <= 1.0) ||
    (pixelLuma <= 0.65
    && mod(vUv.x * uResolution.y +
    vUv.y * uResolution.x, 8.0) <= 1.0)) {
  pixelColor = outlineColor;
}

Similarly, while working on my Moebius-style post-processing effect, I explored other shadow patterns like Raster, which I found used in this Shadertoy scene from cdyk. This pattern creates a grid of circles in the darker regions of our scene, and the size of those circles grows as the luma value decreases. This alternative shadow pattern is also featured in the widget above for you to visualize.
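I won't reproduce cdyk's code here, but the gist of such a raster pattern can be sketched as follows (a simplified take of mine, not the Shadertoy code verbatim): measure the distance from each pixel to the center of its grid cell, and paint the pixel dark whenever that distance falls under a radius that grows as the luma decreases. The rasterSize knob below is an assumption.

Sketch of a raster shadow pattern driven by luma

// Assumed knob: grid cell size in pixels
float rasterSize = 8.0;

vec2 pixelCoord = (vUv + displacement) * resolution;
// Distance from this pixel to the center of its grid cell
vec2 cell = mod(pixelCoord, rasterSize) - 0.5 * rasterSize;
float dist = length(cell);

// The darker the pixel, the bigger the dot
float radius = (1.0 - pixelLuma) * 0.5 * rasterSize;

if (pixelLuma <= 0.75 && depth <= 0.99 && dist <= radius) {
  pixelColor = outlineColor;
}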

If we add the concepts and techniques we just explored to our post-processing custom shader pass, we get the following render output 👇:

Comparison of our scene before and after applying a crosshatched shadow pattern to the darker areas.

Custom Stylized Lighting

We have outlines. We have shadows. But we're still missing one final aspect for our Moebius-style post-processing pass to feel complete: hand-drawn specular reflections! As showcased in @UselessGameDev's video, those reflections are pretty distinctive in Moebius' drawings:

  • They are plain white.
  • They also have outlines.

The second point is the most interesting one: if we need our specular to have an outline, we'll need to figure out a way for it to get caught by our Sobel filter. That means that, unlike our shadows, we'll have to render those specular highlights upstream of the post-processing shader pass.

@UselessGameDev mentions that we can achieve this by doing the necessary computation and coloring of the specular in our Normal material. (Hence why I said in the first part that using a custom shader material may come in handy at some point 😁)

Below is the updated code of the vertex shader of our custom Normal material:

varying vec3 vNormal;
varying vec3 eyeVector;

void main() {
  vec4 worldPos = modelMatrix * vec4(position, 1.0);
  vec4 mvPosition = viewMatrix * worldPos;

  gl_Position = projectionMatrix * mvPosition;

  vec3 transformedNormal = normalMatrix * normal;
  vNormal = normalize(transformedNormal);
  eyeVector = normalize(worldPos.xyz - cameraPosition);
}

And here's the updated code of its fragment shader:

varying vec3 vNormal;
varying vec3 eyeVector;
uniform vec3 lightPosition;

const float shininess = 600.0;
const float diffuseness = 0.5;

vec2 phong() {
  vec3 normal = normalize(vNormal);
  vec3 lightDirection = normalize(lightPosition);
  vec3 halfVector = normalize(eyeVector - lightDirection);

  float NdotL = dot(normal, lightDirection);
  float NdotH = dot(normal, halfVector);
  float NdotH2 = NdotH * NdotH;

  float kDiffuse = max(0.0, NdotL) * diffuseness;
  float kSpecular = pow(NdotH2, shininess);

  return vec2(kSpecular, kDiffuse);
}

void main() {
  vec3 color = vec3(vNormal);
  vec2 phongLighting = phong();

  float specularLight = phongLighting.x;
  float diffuseLight = phongLighting.y;

  if (specularLight >= 0.25) {
    color = vec3(1.0, 1.0, 1.0);
  }

  gl_FragColor = vec4(color, diffuseLight);
}

Notice how:

  • When the specular is over a certain threshold, the color output of the fragment shader is vec3(1.0), i.e. white. Otherwise, we still return the standard Normal color data.
  • I also sneaked in an extra bit of data in place of the alpha: the diffuse light.

For that last point, I included the amount of diffuse light when checking the luminance of a pixel, scaled by an arbitrary factor (you know me at this point if you've read my shader content: there's always a hacky number somewhere), to ensure that areas with a darker color but lit by our light do not show too many shadow patterns.

Tweaking our pixelLuma value with the diffuseLight obtained from our Normal data

float diffuseLight = normal.a;
float pixelLuma = luma(pixelColor.rgb + diffuseLight * 0.65);

The white specular reflections that result from this updated code stand out enough from the standard Normal colors to get detected by our Sobel filter, giving us lovely outlines around them! The demo below showcases the result of this final touch to our Moebius post-processing pass, which now features:

  • Hand-drawn outlines
  • Crosshatched shadow patterns with consistent displacement and thickness
  • Specular reflections

A Moebius-worthy WebGL scene and some final thoughts

Render of the spaceship mesh (originally made by Sousinho) used in the final scene with a custom version of our Moebius post-processing pipeline returning a blueprint style output

As a conclusion to this post, I wanted to cover some aspects of the Moebius-inspired React Three Fiber scene I showcased on Twitter in early March 2024. Applying the post-processing pass we built throughout this article was interesting, as it allowed me to uncover some weaknesses of the implementation that I think are worth mentioning.

My original idea was to build a sort of homage to Jean Giraud's sci-fi drawings and his obsession with desert landscapes by featuring a spaceship speeding over sand dunes with planets looming in the distance. Here's a list of aspects of this scene worth highlighting:

  1. The spaceship was found on Sketchfab and was originally made by Sousinho. I ran it through gltf-transform to compress some of the textures and the mesh, bringing it from 30MB down to about 1.6MB.
  2. The motion is 80% fake. The hover of the spaceship is real, but besides that, it's completely static. I wanted the scene to feel fast but also be able to run perpetually. To avoid building a gigantic plane for the desert, I simply introduced some Perlin noise in my ground mesh, moving in the opposite direction. (I'm proud of that one 😄)
  3. The material of the ground was tricky to build. I wanted to displace it with Perlin noise to create rolling dunes while keeping the lighting model/shadows. I ended up basing it on MeshStandardMaterial and modifying the vertex shader using the onBeforeCompile method. Below, you'll see an excerpt of that vertex shader code:

Excerpt of my implementation of GroundMaterial

class GroundMaterial extends THREE.MeshStandardMaterial {
  constructor() {
    super();
    this.uniforms = {
      uTime: { value: 0.0 },
    };

    this.onBeforeCompile = (shader) => {
      shader.uniforms = {
        ...shader.uniforms,
        ...this.uniforms,
      };

      shader.vertexShader =
        `
        // Adding extra code here: Perlin noise, orthogonal vectors, displacement,...
        ` + shader.vertexShader;

      shader.vertexShader = shader.vertexShader.replace(
        '#include <clipping_planes_vertex>',
        `#include <clipping_planes_vertex>

        // Adding my own displacement and normal recalculation here
        `
      );
    };
  }
}

If you've read Shining a light on Caustics with Shaders and React Three Fiber, you may notice something familiar in that code (fully featured below): I'm using the same Normal recomputation logic. Doing this allows the shadows and lights to account for the Perlin noise displacement of the terrain. That is why you see the shadow cast by the spaceship onto the ground follow the shape of the dunes as the scene moves!
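To give you an idea of what goes into those two placeholder comments, here's a condensed sketch of the displacement and normal recomputation, following the neighbor-sampling approach from the Caustics article. It assumes a noise function is prepended to the shader and that the plane's local z-axis points up; the exact shader-chunk variables you write back to (and the injection point) depend on your three.js version, so treat this as a rough guide rather than the final code.

Sketch of the injected displacement and normal recomputation

// Displace the vertex along its up axis with scrolling Perlin noise
vec3 displacedPosition = position;
displacedPosition.z += noise(position.xy * 0.5 + vec2(uTime, 0.0));

// Sample two orthogonal neighbors and displace them the same way
float offset = 0.01;
vec3 neighborA = position + vec3(offset, 0.0, 0.0);
vec3 neighborB = position + vec3(0.0, offset, 0.0);
neighborA.z += noise(neighborA.xy * 0.5 + vec2(uTime, 0.0));
neighborB.z += noise(neighborB.xy * 0.5 + vec2(uTime, 0.0));

// Rebuild the normal from the two displaced tangent vectors
vec3 tangent = normalize(neighborA - displacedPosition);
vec3 bitangent = normalize(neighborB - displacedPosition);
vec3 displacedNormal = normalize(cross(tangent, bitangent));

// Write back into the material's pipeline variables (assumed names)
transformed = displacedPosition;
objectNormal = displacedNormal;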

At this moment, however, I hit a roadblock:

  • We're displacing the ground mesh with Perlin noise.
  • We're recomputing the normals so that the shadows and lighting model of MeshStandardMaterial follow that displacement.
  • However, our scene's Normal data (used to draw outlines) does not contain this displacement information.
Comparison of our rendered scene with displaced ground mesh VS what the Normal data used by the Sobel Filter looks like.

Thus, the Sobel filter had no way to draw all the outlines of the displaced ground mesh 🫠 which, from its point of view, was nothing more than a flat plane. That is a clear pitfall of our implementation, and moreover one I only noticed while building the final showcase of this project...

@Cody_J_Bennett gave me a couple of alternatives, like deriving the normals from the depth data or using a G-buffer, but those seemed a bit too daunting for me at this stage (and, to be honest, I was quite tired as well), so I ended up taking the easy way out: manually traversing the scene and assigning the ground its own custom Normal material that takes the displacement into account.

Traversing the scene manually to override materials with more flexibility

//...
const materials = [];

gl.setRenderTarget(normalRenderTarget);

scene.traverse((obj) => {
  if (obj.isMesh) {
    materials.push(obj.material);
    if (obj.name === 'ground') {
      obj.material = GroundNormalMaterial; // <- takes displacement into account
      obj.material.uniforms.uTime.value = clock.elapsedTime;
      obj.material.uniforms.lightPosition.value = lightPosition;
    } else {
      obj.material = CustomNormalMaterial; // <- the standard normal material we built earlier
      obj.material.uniforms.lightPosition.value = lightPosition;
    }
  }
});

gl.render(scene, camera);

scene.traverse((obj) => {
  if (obj.isMesh) {
    obj.material = materials.shift();
  }
});
//...

I'm not the proudest of that one 😅, but it works: the displaced ground now features outlines in the right places, and the whole scene comes together beautifully!

Comparison of our rendered scene with displaced ground mesh VS the displaced GroundNormalMaterial and CustomNormalMaterial passed to the Sobel filter.

The demo below features all the techniques and workarounds I mentioned in this last part and showcases a reconstitution of my take on what a Moebius-worthy scene for the web should look like.

Through this attempt at reproducing the Moebius style, we learned how to build very pleasing stylized shaders and apply them via post-processing, complete with custom shadows, outlines, and lighting patterns! I hope this exercise gave you the confidence and inspiration to go beyond the physically based and emulate different art styles through your shader work.

Hope you'll have a lot of fun playing with this effect! In the meantime, I'll continue exploring other kinds of stylized shaders and post-processing effects that have been on my bucket list for way too long 😌.

Liked this article? Share it with a friend on Twitter or support me to take on more ambitious projects to write about. Have a question, feedback or simply wish to contact me privately? Shoot me a DM and I'll do my best to get back to you.

Have a wonderful day.

– Maxime
