```glsl
precision mediump float;

uniform float time;
uniform vec2 resolution;

void main(void) {
    // Centre the coordinates on the screen and correct for aspect ratio.
    vec2 pos = 2.0 * gl_FragCoord.xy / resolution.y
             - vec2(resolution.x / resolution.y, 1.0);
    float d = length(pos) * 10.0;

    // Three phase-shifted cosines drive the cycling colour rings;
    // 1.0 + cos(...) lies in [0, 2], so dividing by 2.0 maps it to [0, 1].
    gl_FragColor = vec4(1.0 + cos(time * 0.97 + d),
                        1.0 + cos(time * 0.59 + d),
                        1.0 + cos(-0.83 * time + d),
                        2.0) / 2.0;
}
```
3D pixel blur render
In my journey through 3D rendering and shader programming, I have often wanted to craft unique post-processing effects. I was drawn to a common challenge in the field: bridging the gap between 3D environments and screen-space shaders. I wanted a design where pixels are aware of their 3D context without leaning on resource-intensive techniques like ray-casting. My approach took shape as a two-pass rendering system that transfers three-dimensional data through the RGBA channels of meshes. The rendering workflow has two phases: first, the 3D scene is rendered to a texture, with each mesh writing its 3D attributes into the RGBA channels instead of its final colour; second, a full-screen fragment shader reads those channels back to drive 2D masks and distortions.
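The first pass might look something like the sketch below. This is not the exact shader from the project; it assumes hypothetical `vWorldPos` and `vDepth` varyings supplied by a matching vertex shader, and an arbitrary choice of which attribute goes into which channel.

```glsl
// Hypothetical pass-1 fragment shader: instead of shading the mesh,
// encode per-pixel 3D attributes into the RGBA channels for later use.
precision mediump float;

varying vec3 vWorldPos;  // assumed: world-space position from the vertex shader
varying float vDepth;    // assumed: view-space depth, normalized to [0, 1]

void main(void) {
    // Pack height and depth into channels the second pass will read back;
    // alpha flags "this pixel belongs to a mesh" for masking.
    gl_FragColor = vec4(
        clamp(vWorldPos.y * 0.5 + 0.5, 0.0, 1.0), // R: encoded height
        vDepth,                                    // G: encoded depth
        0.0,                                       // B: free for another attribute
        1.0                                        // A: coverage mask
    );
}
```

The key design choice is that the first pass never produces a final image; it produces a per-pixel data buffer disguised as colour.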
The video snippet above showcases the shader I devised for 3D meshes. The upper half of the recording may look unimpressive at first, yet those visuals carry the essential data for the subsequent post-processing. The camera renders the raw RGBA 3D scene to a texture, which a fragment shader then refines: it reads the RGBA channels to apply 2D masks and distortions grounded in the 3D attributes of the objects. Thus, for lack of a better title, the '3D-Pixel-Noise-Blur' shader was born!
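A second pass in this spirit could be sketched as follows. Again this is an illustrative assumption, not the project's actual shader: `uScene`, `uTime`, and `uResolution` are hypothetical uniform names, and the hash-based noise is a common cheap stand-in for a proper noise texture.

```glsl
// Hypothetical pass-2 (post-processing) fragment shader: sample the RGBA
// texture produced by pass 1 and let the encoded 3D data drive a
// screen-space noise distortion.
precision mediump float;

uniform sampler2D uScene;   // assumed: render target holding the pass-1 output
uniform float uTime;
uniform vec2 uResolution;

// Cheap hash-based pseudo-noise from a 2D coordinate.
float hash(vec2 p) {
    return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}

void main(void) {
    vec2 uv = gl_FragCoord.xy / uResolution;
    vec4 data = texture2D(uScene, uv);

    // Distort the lookup more strongly for pixels encoded as farther away
    // (G channel = depth), gated by the coverage mask in alpha.
    float amount = data.g * data.a * 0.02;
    vec2 offset = (vec2(hash(uv + uTime), hash(uv - uTime)) - 0.5) * amount;

    gl_FragColor = texture2D(uScene, uv + offset);
}
```

Because the distortion strength comes from the encoded channels rather than a uniform, the effect varies per pixel with the 3D scene, which is precisely the "3D-aware screen-space" behaviour described above.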