So, what do distance field equations look like? And how do we solve them?
Mewler: I am glad it did. As for the AO we do something like this:
Code:
for (float i = 0.; i < 5.; ++i) {
colour -= vec3(i * k - f(p + n * i * k)) / pow(2., i);
}
Where: i is the index of the current sample, colour is the resultant colour from your shading code, k is a scalar constant for fine tuning, f is again your distance function, p is the position being shaded, and n is the normal at that point. The division at the end gives an exponential decay of the AO's effect: as a sample gets farther away from the shading point along the normal, it has less effect. You can remove it, but it adds a nice touch to the whole thing.
Here's a bonus: if you replace n with the unit direction vector from the sample point towards the light, you will end up with the shadowing coefficient, meaning that you'll be calculating the shadow factor for that point. :)
Also, the "compare the result to the actual distance" thing from IQ's presentation simply means: i * k - f(p + n * i * k)
Enjoy :)
Exponential distance steps worked nicely for me: I've used {2,3,5,9,17}[i]*k instead of i*k.
I often go i*i, and remove the exponential decay. It's sort of importance sampling :)
This thread is just great. :)
Union of two fields:
Code:
d = min(field1(p), field2(p));
Intersection of two fields:
Code:
d = max(field1(p), field2(p));
What does the complement look like?
I tried:
Code:
// A \ B
float cut(float a, float b) {
return mix(a - b, a, step(REPS, b));
}
Works for cutting a sphere out of a cube... but it seems not to be 100% correct in other cases.
Isn't it d = max(a, -b)?
(That's for signed distance fields.)
Sweet. So simple :)
I told you it's very simple at Evoke ;)
Hope you got my email with the RenderMonkey sphere marcher. But it seems not, as I put an example of min(a,-b) in there ;)
Now visit that other thread and come up with something kewl, spiced up with that usual kewl mercury design ;)
hardy :D I have to work now.
Thanks, I'm just looking it over - it's really damn well documented! :)
The only thing I have got so far from reading the first half of zeno.pdf is that the interior part of a signed distance function, where f(p) < 0, is maybe important for destructible/deformable physics. Am I right?
"Sphere tracing" will never be realtime without some other techniques. Cast a ray at a sinc(x) function from the side at y=0.15, or some other worst case. What if I want to raymarch ocean waves while looking towards the sunrise/sunset? (By the way, it is a good idea to start a demo at sunrise and end it at sunset.) OK, maybe in 10 years I could see that in realtime, but currently polygonal tessellation gives a faster implementation that uses electricity effectively, while sphere tracing creates beautiful images but is not electricity-efficient.
Anyway, I also have some questions. The algorithm uses p or ro as the camera position, v or rd as the ray direction, and d or h as the current step size, which is equal to the minimal geometric distance to the nearest object, because we must not penetrate any solid object; this way we eventually find the intersection pos with the object's surface. The next question that arises for me: I have always read about 5 to 30 rays, but here we cast only one ray? Another question is about distant objects with large depth values: pos is an exact point on the surface of some object, so how do we do trilinear filtering for far objects? And third, I don't understand how light positioning and calculation are done.
t is the sum of all the h steps, so t is the depth-buffer z-value.
Ah, I see: one ray to detect the objectID, 5 rays to do Ambient Occlusion, 6 rays to do Shadows. But from this position, and in which direction?
Were these normalizing coefficients (vec3) found experimentally, by recompile and run?
Quote:
rgb = vec3(spe) + rgb * (ao*vec3(0.25,0.30,0.35) + dif*vec3(1.95,1.65,1.05));
which position*
Quote:
The only thing I have got so far from reading the first half of zeno.pdf is that the interior part of a signed distance function, where f(p) < 0, is maybe important for destructible/deformable physics. Am I right?
It's important for determining whether you are inside an object or not. And you can use larger step widths with signed distance functions (if you step through the surface, the march will step back out to it).
Quote:
"Sphere tracing" will never be realtime without some other techniques
That is just wrong man :)
And it's completely different to polygonal rendering.
Please read/use the other threads.
http://www.pouet.net/topic.php?which=7931
http://www.pouet.net/topic.php?which=7920
I think the best way to debug GLSL shaders is a software/reference implementation. I have taken iq's g4k_Software as a starting point.
I have replaced the m2xf() function with exp2(), but I think they are not the same.
Code:
float m2xf(float f)
{
_asm fld dword ptr [f]
_asm fld1
_asm fld st(1)
_asm fprem
_asm f2xm1
_asm faddp st(1), st
_asm fscale
_asm fstp st(1)
_asm fstp dword ptr [f]
return f;
}
I do not understand FPU assembler very well; what does the m2xf function do?
Code:
#ifdef GL_ES
precision highp float;
#endif
uniform float time;
uniform vec2 resolution;
uniform vec4 mouse;
uniform sampler2D tex0;
uniform sampler2D tex1;
float interesctSphere( const vec3 rO, const vec3 rD, const vec4 sph)
{
vec3 p = rO - sph.xyz;
float b = dot( p, rD );
float c = dot( p, p ) - sph.w*sph.w;
float h = b*b - c;
if( h > 0.0 )
{
h = -b - sqrt( h );
}
return h;
}
float interesctFloor( const vec3 rO, const vec3 rD )
{
return -rO.y/rD.y;
}
//static void calcColor( vec4 & gl_FragColor, const vec4 & gl_FragCoord )
void main()
{
vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution;
vec3 wrd = normalize(vec3(p.x*1.77,p.y,-1.0));
vec3 wro = vec3(0.0,1.0,1.8);
vec4 sphere = vec4(0.0,1.0,0.0,1.0);
bool didHit = false;
float t = 1e20;
float amb = 0.0;
// floor
float t1 = interesctFloor(wro,wrd);
if( t1>0.0 && t1<t )
{
t = t1;
didHit = true;
vec3 pos = wro + t1*wrd;
amb = 0.8*smoothstep(sqrt( pos.x*pos.x + pos.z*pos.z ), 0.0, 2.0);
}
// sphere
float t2 = interesctSphere(wro,wrd,sphere);
if( t2>0.0 && t2<t )
{
t = t2;
didHit = true;
vec3 pos = wro + t2*wrd;
vec3 nor = (pos - sphere.xyz)/sphere.w;
float fre = 1.0+dot(nor,wrd); fre = fre*fre; fre = fre*fre;
amb = clamp( 0.5 + 0.5*nor.y + fre*0.1, 0.0, 1.0 );
}
//
if( didHit )
{
gl_FragColor = vec4( mix( vec3(amb), vec3(1.0), 1.0-exp2(-0.05*t) ), 1.0);
}
else
{
gl_FragColor = vec4(1.0);
}
}
Floor is dark
Compare to original
Hey,
Would someone give me the spike ball function? :< I'm too sleepy to think.
Thanks. :-P
SLeo, m2xf computes 2^x. So, you can do this:
Code:
float m2xf(float f)
{
return powf( 2.0f, f );
}