Comments for post 2011-04-02-realtime-global-illumination #8

Closed
tommadams opened this issue Apr 4, 2020 · 16 comments
Comments for blog post 2011-04-02-realtime-global-illumination


comment imported from WordPress
Miles said on 2011-04-02 20:33:09:

Hi Tom, you have probably already tried it but you can cache the indirect lighting between frames to get multiple bounce effects for "free".

Usually the lighting environment won't be changing fast enough for the latency to be a problem and it should converge pretty quickly.

Looks great though!
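The inter-frame caching Miles describes can be sketched in a few lines. This is a toy illustration, not Tom's implementation: the two-patch scene, form factors, and albedo below are made up. Each frame gathers a single bounce, but feeding the previous frame's cached indirect term back into the gather converges geometrically to the full multi-bounce solution over a handful of frames.

```python
# Toy sketch of inter-frame caching of indirect lighting.
# Hypothetical two-patch scene: the form factors and albedo are made up.
ALBEDO = 0.8
FORM_FACTORS = [[0.0, 0.5],   # F[i][j]: fraction of patch j's light reaching patch i
                [0.5, 0.0]]
direct = [1.0, 0.0]           # only patch 0 is directly lit

def bounce(indirect_prev):
    """One frame's gather: a single bounce of (direct + cached indirect) light."""
    n = len(direct)
    return [ALBEDO * sum(FORM_FACTORS[i][j] * (direct[j] + indirect_prev[j])
                         for j in range(n))
            for i in range(n)]

indirect = [0.0, 0.0]
for frame in range(50):       # error shrinks by ~0.4x per frame, so this converges
    indirect = bounce(indirect)
```

At steady state this reaches the same fixed point as solving the full linear system, which is why the latency Miles mentions is the only real cost.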

Tom Madams said on 2011-04-02 23:14:51:

Thanks!

I've not actually tried inter-frame caching yet, I want to clean the code up a bit first. Well, a lot actually; it's an embarrassment right now. Ideally, each bounce should just be a couple of loops over flattened arrays and then a vertex buffer/light map update.

MJP said on 2011-04-04 10:17:49:

I've been playing around with a similar point-based approach, except I pre-calculate disc-to-disc visibility and do the runtime steps in a compute shader. At the moment I have it down to ~4.1ms for a scene with 20k sample points, and it's completely bandwidth bound so I should be able to get that time lower by packing the data formats a little tighter. I haven't even bothered with the hierarchical stuff yet, since the pre-computed visibility greatly reduces the number of calculations anyway. I was thinking that for scaling it up to bigger scenes, I could do some kind of real-time visibility determination for the sample points so that I only update the indirect lighting for points that actually need it for that frame.

Anyway, very cool stuff! There's definitely a lot of promising avenues to explore!

Tom Madams said on 2011-04-04 12:01:49:

Well it looks like you've got me beat by quite a way. Any chance that there's a blog post about it coming up soon? I'd like to see some pretty pictures!

What kind of test scene are you using? In my rather artificial setup, precomputed visibility only reduces the number of patch-to-patch interactions by about 33%, which doesn't help that much. I can easily imagine that in more real-world scenes there's a much higher rate of occlusion.

MJP said on 2011-04-05 10:18:11:

I think it's a little ways off from being safe for public consumption...it's still riddled with hacks and commented-out code.

The test scene isn't too different from yours...it's a floor with three colored walls, with a sphere, torus, and cone in the center. It really wasn't visibility that helped out a lot for me, it was actually pre-computing the form factors. This let me reject points below a certain threshold, and also kept from having to sample the buffer containing sample point positions + normals + colors in the compute shader.
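The precompute MJP describes can be sketched roughly as follows. This is an illustration, not his code: it uses the disc-to-point form factor approximation from Bunnell's GPU Gems 2 chapter on dynamic ambient occlusion, and the rejection threshold and scene data are made up.

```python
import math

def disc_form_factor(p_r, n_r, p_e, n_e, area_e):
    """Approximate form factor from an emitter disc (p_e, n_e, area_e) to a
    receiver point (p_r, n_r), after Bunnell's disc approximation."""
    d = [p_r[k] - p_e[k] for k in range(3)]
    dist2 = sum(x * x for x in d)
    inv = 1.0 / math.sqrt(dist2)
    cos_e = max(0.0,  sum(n_e[k] * d[k] for k in range(3)) * inv)
    cos_r = max(0.0, -sum(n_r[k] * d[k] for k in range(3)) * inv)
    return area_e * cos_e * cos_r / (math.pi * dist2 + area_e)

# Made-up rejection threshold: pairs below it are dropped at precompute time,
# which is what shrinks the runtime gather.
THRESHOLD = 1e-3

def precompute(points):
    """points: list of (position, normal, area). Returns, per receiver,
    the surviving (emitter index, form factor) pairs."""
    pairs = []
    for i, (p_r, n_r, _) in enumerate(points):
        kept = []
        for j, (p_e, n_e, a_e) in enumerate(points):
            if i == j:
                continue
            f = disc_form_factor(p_r, n_r, p_e, n_e, a_e)
            if f >= THRESHOLD:
                kept.append((j, f))
        pairs.append(kept)
    return pairs

def gather(pairs_i, radiosity):
    """Runtime step for one receiver: a weighted sum over precomputed pairs."""
    return sum(f * radiosity[j] for j, f in pairs_i)
```

Because the form factors are baked into the pair list, the runtime gather never touches the emitter positions or normals, which matches MJP's point about avoiding those buffer reads in the compute shader.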

Robin Green said on 2011-04-05 22:26:06:

Good grief, not the HL2 basis please. Ick.

Tom Madams said on 2011-04-06 09:26:18:

Hah!

You're right of course, looking at the comparison images from "Efficient Irradiance Normal Mapping", the HL2 basis gives worse results than I remember.
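For context, the basis being disparaged here stores irradiance along three fixed tangent-space directions. The basis vectors below are the standard Half-Life 2 ones; the squared-weight reconstruction is one common variant, shown only to illustrate what the comparison images in "Efficient Irradiance Normal Mapping" are evaluating.

```python
import math

# The three orthonormal tangent-space vectors of the Half-Life 2 lighting basis.
S6, S2, S3 = math.sqrt(1 / 6), math.sqrt(1 / 2), math.sqrt(1 / 3)
HL2_BASIS = [(math.sqrt(2 / 3), 0.0, S3),
             (-S6,  S2, S3),
             (-S6, -S2, S3)]

def reconstruct(coeffs, n):
    """Reconstruct irradiance for tangent-space normal n from three
    per-basis-vector lightmap values (one common squared-weight variant)."""
    w = [max(0.0, sum(b[k] * n[k] for k in range(3))) ** 2 for b in HL2_BASIS]
    total = sum(w) or 1.0   # guard against a normal facing away from all three
    return sum(wi * ci for wi, ci in zip(w, coeffs)) / total
```

With a flat normal (0, 0, 1) each dot product is 1/sqrt(3), so the weights are equal and the result is just the average of the three coefficients; the artifacts show up as the normal tilts and only three fixed directions are available to interpolate between.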

Szymon Swistun said on 2011-04-11 12:53:57:

Hey Tom, did I see you are using a "tree" structure in there? We talked about this... I think you are doing it wrong :)

Good stuff though. Would be nice to spread the cost of updating the patches / form factor relationships over several frames (or frequencies) and be able to introduce dynamic geo somehow into the mix, without special casing etc...

Tom Madams said on 2011-04-11 17:44:34:

Things are a lot easier when you know your code never has to survive full production ;)

I'm reworking the whole thing at the moment to use an octree (!) of SH probes, similar to the point-based approximate color-bleeding paper. I'm not sure if the quality will be up to snuff, but I think it would be easier to make dynamic updates work.
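A minimal sketch of the kind of data each probe in such a structure would hold: the first four real spherical harmonic coefficients (bands 0 and 1). Nothing here is specific to Tom's rework; it just shows projecting a directional light into SH and evaluating it back.

```python
import math

def sh_basis(d):
    """First four real spherical harmonic basis functions (bands 0 and 1),
    evaluated for a unit direction d = (x, y, z)."""
    x, y, z = d
    return [0.282095,        # Y_0^0
            0.488603 * y,    # Y_1^-1
            0.488603 * z,    # Y_1^0
            0.488603 * x]    # Y_1^1

def project_direction(d, intensity=1.0):
    """Project a directional (delta) light into 4 SH coefficients."""
    return [intensity * b for b in sh_basis(d)]

def eval_radiance(coeffs, d):
    """Evaluate the SH-encoded signal in direction d."""
    return sum(c * b for c, b in zip(coeffs, sh_basis(d)))
```

Two bands are a very soft representation of the incoming light, which is the quality question Tom raises; the payoff is that four (or nine, with band 2) numbers per color channel interpolate cheaply across an octree.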

chrisintheboxhris said on 2011-12-13 21:58:59:

This may sound like a nooby question, but, what is the usual way to generate the patches of a given scene? I've read a few articles on how GI is computed for each patch or point given an average light intensity computed on renders, piece of cake stuff for me to understand, but nothing mentioning how the surfaces are actually generated. Which probably makes sense, since the lighting is the focus of the article :p

So how do you split your surfaces (before you get to the clustering phase to organize them)? And does your method work for any surfaces or just planar ones like in the Cornell Box?

Your CPU is similar in spec to mine (Core 2 at 2.2GHz) so I've been meaning to try out some GI work to go with my deferred renderer.

Tom Madams said on 2011-12-14 14:58:50:

This is something I've never actually done myself, but I believe a common approach is to perform a UV unwrapping process similar to one used during lightmap generation. The DirectX SDK contains UVAtlas, which I've heard good things about: http://msdn.microsoft.com/en-us/library/windows/desktop/bb206321(v=vs.85).aspx

chris said on 2011-12-14 18:08:52:

I'm more interested in knowing how you did the surface splitting. The results look quite similar to the ones shown here: http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm

I'm actually using XNA, so I can't use the D3DX library's UVAtlas functions. So other than that, I haven't been able to find an algorithm that splits up surfaces into patches.

Tom Madams said on 2011-12-14 21:55:35:

Well, in my test case all the surfaces were quads, so generating the patches was trivial.

Are you really interested in splitting your geometry? These days, I think it's more common to parameterize with textures (you can think of the texels as your patches if that makes it easier to visualize). UVAtlas can run off-line, so it doesn't matter whether you're using XNA or not.
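The trivial all-quads case Tom mentions can be sketched directly: split each quad into a grid of patches, each carrying a center, the quad's normal, and an equal share of the area. The function and its parameters are illustrative, and the edges are assumed orthogonal.

```python
def quad_patches(origin, edge_u, edge_v, normal, nu, nv):
    """Split a quad (origin + s*edge_u + t*edge_v, s,t in [0,1]) into an
    nu x nv grid of patches: (center, normal, area) tuples."""
    # Equal-area split; assumes edge_u and edge_v are orthogonal.
    len_u = sum(c * c for c in edge_u) ** 0.5
    len_v = sum(c * c for c in edge_v) ** 0.5
    patch_area = len_u * len_v / (nu * nv)
    patches = []
    for i in range(nu):
        for j in range(nv):
            s = (i + 0.5) / nu   # sample at the cell center
            t = (j + 0.5) / nv
            center = tuple(origin[k] + s * edge_u[k] + t * edge_v[k]
                           for k in range(3))
            patches.append((center, normal, patch_area))
    return patches
```

For arbitrary meshes this is exactly where the lightmap-style UV parameterization takes over: the texels of the atlas play the role these grid cells play for a quad.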

chrisinthebox said on 2011-12-15 14:59:41:

Ah, I see now. Well, I just prefer to use a managed library, but good to know that about UVAtlas. I'll probably start using quads only, since they really are trivial to map, and you can still do a lot with quads. Thanks for the visualization, that's how I pictured the patches in your screenshots, like texels without a filter.

Akres said on 2015-04-02 13:36:11:

Hi, could i use the image https://imdoingitwrong.files.wordpress.com/2011/04/gi1.png in my thesis? It illustrates nicely the artifact created by simple radiosity.
Thanks :-)

Tom Madams said on 2015-04-02 13:47:18:

Sure!
