So I’ve been working on my screen space reflections ( SSR ) and have been trying to eliminate artifacts. The next step will be to somehow make it more physically based, because currently I just scale the strength of the reflection linearly with the roughness ( sorta ).
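For context, a linear roughness-based falloff like the one described might look something like this. This is just a minimal sketch in Python rather than shader code, and the clamp range is a made-up tuning knob, not the engine’s actual values:

```python
def ssr_strength(roughness, max_roughness=1.0):
    """Linearly fade the reflection contribution as roughness rises.

    max_roughness is a hypothetical tuning knob; a more physically based
    version would instead blur/filter the reflection by roughness rather
    than just fading it out.
    """
    t = min(max(roughness / max_roughness, 0.0), 1.0)
    return 1.0 - t

# A mirror-like surface reflects fully, a rough one barely at all.
print(ssr_strength(0.0))  # 1.0
print(ssr_strength(0.5))  # 0.5
```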
Sponza Scene ( Yes, again ) ( Oh, and I decreased the intensity, although it’s configurable by the user, as the previous intensity was WAY too high ):
PS: Notice the weird line artifact below the arches; I still have to figure out what causes that, along with a few other artifacts. :) And I forgot to disable fog, so the colors are a bit dimmed down.
Another testing scene, the direction of the sun is very low on purpose to enhance the reflections. This is the scene WITHOUT SSR:
Then, WITH SSR:
And, as always, that’s it!
So I finally got a basic implementation of Screen Space Reflections ( SSR ). Aside from the fact that it’s screen space, and some artifacts, it’s actually OK. Now, you may wonder why the title is as follows:
“We must all accept yoshi_lol as our lord and true saviour!”
I based my implementation on the article from Casual Effects:
However, I ran into a few conversion problems going from GLSL to HLSL, beyond just the syntax conversion. That’s where yoshi_lol ( a user from GameDev.net ) came in: he gave me his implementation, and from there I saw how he converted it to D3D-HLSL. Thanks yoshi_lol! So now we must accept him as our true lord and saviour. :)
Screenshots! ( There are many artifacts, it’s a very early implementation, so there are many areas that look really messed up! )
And that’s about it!
Until next time!
This isn’t really a new big update, just me talking a bit and showing some pictures. :)
Well, as the title suggests, I wanted to play with trees to see how my shading model handles vegetation together with GI. Even though I’ve disabled any frustum culling, the performance is not too bad. However, there’s still LOTS of work to do on the shading model for surfaces where light is guaranteed to pass through, so the images might be a bit weird…
There are also a few problems with my volumetric lighting. Currently I find the vector between the world space position and the world space position of the pixel, but if the ray is TOO long, then what? I know there’s some really nice research published by Intel that describes large-scale outdoor volumetric lighting, but I’m not going to dive into that right now as it’s a lot of work.
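One common workaround for the too-long-ray problem ( just a sketch of the idea, not what the engine actually does ) is to cap the marched distance at some maximum before stepping along it:

```python
def clamp_ray(origin, target, max_dist):
    """Return a march end point no farther than max_dist from origin.

    origin/target are 3-tuples; max_dist is a hypothetical tuning value
    chosen per scene, not anything from the actual implementation.
    """
    v = [t - o for o, t in zip(origin, target)]
    length = sum(c * c for c in v) ** 0.5
    if length <= max_dist or length == 0.0:
        return target  # ray is short enough, march it all
    scale = max_dist / length
    return tuple(o + c * scale for o, c in zip(origin, v))

# A ray to a surface 100 units away gets capped at 50 units.
print(clamp_ray((0, 0, 0), (100, 0, 0), 50.0))  # (50.0, 0.0, 0.0)
```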
So, as people want to see pictures, I give you pictures!
Now, for the fun of it, why not render 6000 trees!
So something I never got a basic implementation of was color grading, so today I decided to put together a rough implementation. There’s still lots to work on; it’s based on NVIDIA’s post complement sample (http://developer.download.nvidia.com/shaderlibrary/webpages/shader_library.html).
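I won’t reproduce NVIDIA’s exact math here, but a generic grading pass in the same spirit looks roughly like this ( saturation plus a color filter; all parameter names are made up for illustration, this is not the sample’s code ):

```python
def grade(rgb, saturation=1.0, color_filter=(1.0, 1.0, 1.0)):
    """Very rough color grade: desaturate toward luminance, then tint."""
    r, g, b = rgb
    # Rec. 709 luminance weights
    luma = 0.2126 * r + 0.7152 * g + 0.0722 * b
    # saturation = 0 collapses to grayscale, 1 leaves the color alone
    graded = [luma + saturation * (c - luma) for c in (r, g, b)]
    # per-channel tint, e.g. a warm filter would boost red slightly
    return tuple(c * f for c, f in zip(graded, color_filter))

# Full desaturation collapses every channel to the luminance value.
print(grade((1.0, 0.0, 0.0), saturation=0.0))
```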
Color Grading DISABLED vs ENABLED:
And that’s all, until next time! And enough about the damn white/gold/purple/brown/etc… dress!
So one topic that we all hear over and over is VOLUMETRIC LIGHTING ( caps intended ). Why? Because it’s so damn awesome. Why not? Because it can get expensive depending on the hardware. So after countless tries I scrapped the code I’d been wanting to shoot, then resurrect, then shoot again, and just wrote what made sense, and it worked!
The implementation is actually really simple; in simple terms I did it like this ( I haven’t optimized it yet, e.g. I should do it all in light view space ):
// Number of raymarch steps
steps = 50
// Reconstructed world space position of the surface
positionWS = GetPosition();
// World space position of the pixel ( the ray start )
rayWS = GetWorldSpacePixelPos();
// Ray between the surface world space position and the pixel world space position
v = positionWS - rayWS;
vStep = v / steps;
color = ( 0, 0, 0 )
for i = 0 to steps
{
    rayWS += vStep;
    // Transform rayWS into view space and projection space
    rayWSVS = ...
    rayWSPS = ...
    // Does this position receive light?
    occlusion = GetShadowOcclusion(..., rayWSPS);
    // Do some fancy math about energy
    energy = ... * occlusion * ...
    color += energy.xxx;
}
return color * gLightColor;
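The loop above can be sketched on the CPU to make the flow concrete. This is plain Python, the shadow test is a stand-in toy rather than a real shadow-map lookup, and the energy term is deliberately simplified to a constant per step:

```python
def volumetric_light(start, end, steps, in_light, light_color,
                     energy_per_step=1.0):
    """March from start to end, accumulating energy at lit sample points.

    in_light(p) stands in for the shadow-map occlusion lookup; in the
    real shader this would be a comparison against the shadow map in
    the light's projection space.
    """
    v = [(e - s) / steps for s, e in zip(start, end)]
    ray = list(start)
    accum = 0.0
    for _ in range(steps):
        ray = [r + d for r, d in zip(ray, v)]       # step along the ray
        occlusion = 1.0 if in_light(ray) else 0.0   # 1 = lit, 0 = shadowed
        accum += energy_per_step * occlusion
    return tuple(accum * c for c in light_color)

# Roughly half the ray (x > 0.5) is lit, so about half the total
# energy accumulates, tinted by the light color.
result = volumetric_light((0, 0, 0), (1, 0, 0), 50,
                          lambda p: p[0] > 0.5, (1.0, 0.8, 0.6),
                          energy_per_step=1.0 / 50)
```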
Results: ( It’s not done yet )
That’s all! Until next time!
Last Entry: http://www.gamedev.net/blog/1882/entry-2260844-got-new-particle-rendering-up-and-running-simulation-next/
So I got the basic backbone of the simulation system up and running. The simulation happens in a compute shader, and everything just works out, which is great! :) So to test it out I put two low-intensity point masses a bit apart from each other, and this was the result.
The next step will be to stretch the particles based on velocity for a fake motion-blur effect, and then to allow the particles to collide with the objects around them.
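The two-point-mass test can be mimicked on the CPU with a minimal Euler-integration sketch. This is Python standing in for the compute shader, and the constants ( dt, g, eps ) are made up for illustration:

```python
def step_particles(particles, velocities, masses_pos, masses,
                   dt=0.016, g=1.0, eps=1e-3):
    """One Euler step: accumulate inverse-square pulls from each point mass.

    particles/velocities are lists of [x, y, z]; eps softens the force
    near a mass to avoid division by zero.
    """
    for i, (p, v) in enumerate(zip(particles, velocities)):
        acc = [0.0, 0.0, 0.0]
        for mp, m in zip(masses_pos, masses):
            d = [a - b for a, b in zip(mp, p)]           # toward the mass
            dist2 = sum(c * c for c in d) + eps          # softened distance^2
            inv = g * m / (dist2 ** 1.5)                 # inverse-square pull
            acc = [a + c * inv for a, c in zip(acc, d)]
        velocities[i] = [a + b * dt for a, b in zip(v, acc)]
        particles[i] = [a + b * dt for a, b in zip(p, velocities[i])]

# One particle between two equal masses drifts toward the nearer one.
pts = [[0.4, 0.0, 0.0]]
vel = [[0.0, 0.0, 0.0]]
step_particles(pts, vel, [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)], [1.0, 1.0])
```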
GIF: ( It’s a bit large )
Until next time!
So thanks to this awesome community I got my particle system up and running using structured buffers. My next step will be simulations with it; my goal is to make the particle simulation in my world entirely GPU-based.
GIF: ( It’s a bit large )
That’s all, until next time!