Comments for post 2011-04-02-realtime-global-illumination #8
Comments
comment imported from WordPress: Hi Tom, you have probably already tried it, but you can cache the indirect lighting between frames to get multiple-bounce effects for "free". Usually the lighting environment won't be changing fast enough for the latency to be a problem, and it should converge pretty quickly. Looks great though!
comment imported from WordPress: Thanks! I've not actually tried inter-frame caching yet; I want to clean the code up a bit first. Well, a lot actually; it's an embarrassment right now. Ideally, each bounce should just be a couple of loops over flattened arrays and then a vertex buffer/light map update.
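A minimal sketch of how that might fit together, combining the inter-frame caching suggestion with the "loops over flattened arrays" idea. None of this is the post's actual code; `Patch`, `UpdateIndirectBounce`, and the buffer layout are assumptions made for illustration:

```cpp
// Sketch of one indirect-lighting bounce per frame, with last frame's result
// cached and fed back in so multiple bounces accumulate over time.
// Names and data layout are illustrative, not from the original code.
#include <cstddef>
#include <vector>

struct Patch { float albedo[3]; };

// formFactors:  n*n row-major form factors (patch i gathers from patch j).
// direct:       n*3 direct lighting per patch, updated every frame.
// prevIndirect: n*3 indirect result cached from the previous frame.
// currIndirect: n*3 output for this frame (pre-sized by the caller).
void UpdateIndirectBounce(const std::vector<Patch>& patches,
                          const std::vector<float>& formFactors,
                          const std::vector<float>& direct,
                          const std::vector<float>& prevIndirect,
                          std::vector<float>& currIndirect)
{
    const std::size_t n = patches.size();
    for (std::size_t i = 0; i < n; ++i) {
        float gathered[3] = {0.0f, 0.0f, 0.0f};
        for (std::size_t j = 0; j < n; ++j) {
            const float f = formFactors[i * n + j];
            for (int c = 0; c < 3; ++c)
                gathered[c] += f * (direct[j * 3 + c] + prevIndirect[j * 3 + c]);
        }
        for (int c = 0; c < 3; ++c)
            currIndirect[i * 3 + c] = patches[i].albedo[c] * gathered[c];
    }
    // The caller uploads currIndirect to the light map / vertex buffer, then
    // swaps the prev/curr buffers so this result seeds the gather next frame.
}
```

Because the previous frame's indirect result is fed back into the gather, each frame effectively adds one more bounce, so the solution converges toward the multi-bounce answer over a handful of frames at the cost of a little latency when the lighting changes.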
comment imported from WordPress: I've been playing around with a similar point-based approach, except I pre-calculate disc-to-disc visibility and do the runtime steps in a compute shader. At the moment I have it down to ~4.1 ms for a scene with 20k sample points, and it's completely bandwidth bound, so I should be able to get that time lower by packing the data formats a little tighter. I haven't even bothered with the hierarchical stuff yet, since the pre-computed visibility greatly reduces the number of calculations anyway. I was thinking that for scaling it up to bigger scenes, I could do some kind of real-time visibility determination for the sample points, so that I only update the indirect lighting for points that actually need it that frame. Anyway, very cool stuff! There are definitely a lot of promising avenues to explore.
comment imported from WordPress: Well it looks like you've got me beat by quite a way. Any chance that there's a blog post about it coming up soon? I'd like to see some pretty pictures! What kind of test scene are you using? In my rather artificial setup, precomputed visibility only reduces the number of patch-to-patch interactions by about 33%, which doesn't help that much. I can easily imagine that in more real-world scenes there's a much higher rate of occlusion.
comment imported from WordPress: I think it's a little ways off from being safe for public consumption... it's still riddled with hacks and commented-out code. The test scene isn't too different from yours: a floor with three colored walls, with a sphere, torus, and cone in the center. It really wasn't visibility that helped out a lot for me; it was actually pre-computing the form factors. This let me reject points below a certain threshold, and also kept me from having to sample the buffer containing sample point positions + normals + colors in the compute shader.
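A rough sketch of that precomputation step, assuming each sample point is treated as a small oriented disc. The disc-to-disc form factor approximation, the `IsVisible` placeholder, and all names are illustrative rather than taken from either implementation:

```cpp
// Sketch: precompute pairwise form factors between disc-shaped sample points,
// keeping only mutually visible pairs above a threshold in a compact list so
// the per-frame gather never touches the rejected pairs.
#include <cmath>
#include <vector>

struct SamplePoint { float pos[3]; float normal[3]; float area; };
struct Link { int receiver; int emitter; float formFactor; };

// Placeholder visibility test; a real implementation would ray cast against
// the scene during the offline precompute.
static bool IsVisible(const SamplePoint&, const SamplePoint&) { return true; }

std::vector<Link> PrecomputeLinks(const std::vector<SamplePoint>& pts,
                                  float threshold)
{
    std::vector<Link> links;
    for (int i = 0; i < (int)pts.size(); ++i) {
        for (int j = 0; j < (int)pts.size(); ++j) {
            if (i == j) continue;
            const float d[3] = { pts[j].pos[0] - pts[i].pos[0],
                                 pts[j].pos[1] - pts[i].pos[1],
                                 pts[j].pos[2] - pts[i].pos[2] };
            const float r2 = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
            const float r  = std::sqrt(r2);
            // Cosines at the receiver (i) and emitter (j).
            const float cosI =  (d[0]*pts[i].normal[0] + d[1]*pts[i].normal[1] +
                                 d[2]*pts[i].normal[2]) / r;
            const float cosJ = -(d[0]*pts[j].normal[0] + d[1]*pts[j].normal[1] +
                                 d[2]*pts[j].normal[2]) / r;
            if (cosI <= 0.0f || cosJ <= 0.0f) continue;
            // Disc-to-disc approximation; the + area term in the denominator
            // avoids the singularity as r approaches zero.
            const float f = (cosI * cosJ * pts[j].area) /
                            (3.14159265f * r2 + pts[j].area);
            if (f < threshold || !IsVisible(pts[i], pts[j])) continue;
            links.push_back({ i, j, f });
        }
    }
    return links;
}
```

At runtime a single pass over the `links` array (on the CPU or in a compute shader) does all the gathering, which matches the bandwidth-bound behaviour described above: the cost is dominated by streaming that list.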
comment imported from WordPress: Good grief, not the HL2 basis please. Ick.
comment imported from WordPress: Hah! You're right, of course; looking at the comparison images from "Efficient Irradiance Normal Mapping", the HL2 basis gives worse results than I remember.
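For readers who haven't met it: the HL2 ("Radiosity Normal Mapping") basis stores lighting along three fixed tangent-space directions and reconstructs per-pixel diffuse lighting from them. A small sketch of the basis vectors and one common reconstruction, with illustrative names and no claim that this is what either commenter implemented:

```cpp
// The HL2 basis in tangent space (z = surface normal), plus a reconstruction
// that weights the three light map samples by squared clamped cosines
// (one common variant of the technique).
#include <algorithm>

static const float kHL2Basis[3][3] = {
    { -0.40824829f,  0.70710678f, 0.57735027f },  // (-1/sqrt(6),  1/sqrt(2), 1/sqrt(3))
    { -0.40824829f, -0.70710678f, 0.57735027f },  // (-1/sqrt(6), -1/sqrt(2), 1/sqrt(3))
    {  0.81649658f,  0.0f,        0.57735027f },  // ( sqrt(2/3),  0,         1/sqrt(3))
};

// n: tangent-space normal from the normal map; lm: the three light map
// samples for this texel; out: reconstructed diffuse lighting.
void ReconstructHL2(const float n[3], const float lm[3][3], float out[3])
{
    float w[3], wSum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        const float d = std::max(0.0f, n[0]*kHL2Basis[i][0] +
                                       n[1]*kHL2Basis[i][1] +
                                       n[2]*kHL2Basis[i][2]);
        w[i] = d * d;
        wSum += w[i];
    }
    for (int c = 0; c < 3; ++c) {
        out[c] = 0.0f;
        for (int i = 0; i < 3; ++i) out[c] += w[i] * lm[i][c];
        if (wSum > 0.0f) out[c] /= wSum;
    }
}
```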
comment imported from WordPress: Hey Tom, did I see you are using a "tree" structure in there? We talked about this... I think you are doing it wrong :) Good stuff though. Would be nice to spread the cost of updating the patches / form factor relationships over several frames (or frequencies) and be able to introduce dynamic geo somehow into the mix, without special-casing, etc.
comment imported from WordPress: Things are a lot easier when you know your code never has to survive full production ;) I'm reworking the whole thing at the moment to use an octree (!) of SH probes, similar to the point-based approximate color-bleeding paper. I'm not sure if the quality will be up to snuff, but I think it would be easier to make dynamic updates work.
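Not the author's code, but a tiny sketch of what an octree of SH probes might store: each probe accumulates radiance samples into a few low-order spherical harmonic coefficients, and the octree only provides the spatial lookup. Only the L0/L1 bands are shown, and the node layout is invented for illustration:

```cpp
// Sketch of an SH light probe (L0/L1 bands) as might live in octree leaves.
#include <array>
#include <memory>

struct SHProbe {
    float coeff[3][4] = {};  // 4 coefficients (L0 + 3x L1) per color channel

    // Accumulate one radiance sample arriving from unit direction d.
    void AddSample(const float d[3], const float radiance[3], float weight) {
        const float basis[4] = {
            0.282095f,         // Y_0,0
            0.488603f * d[1],  // Y_1,-1
            0.488603f * d[2],  // Y_1,0
            0.488603f * d[0],  // Y_1,1
        };
        for (int c = 0; c < 3; ++c)
            for (int k = 0; k < 4; ++k)
                coeff[c][k] += weight * radiance[c] * basis[k];
    }

    // Evaluate the stored signal in direction d (no cosine convolution here).
    void Evaluate(const float d[3], float out[3]) const {
        const float basis[4] = {
            0.282095f, 0.488603f * d[1], 0.488603f * d[2], 0.488603f * d[0] };
        for (int c = 0; c < 3; ++c) {
            out[c] = 0.0f;
            for (int k = 0; k < 4; ++k) out[c] += coeff[c][k] * basis[k];
        }
    }
};

struct OctreeNode {
    float center[3];
    float halfSize;
    SHProbe probe;                                    // leaves carry the probe
    std::array<std::unique_ptr<OctreeNode>, 8> kids;  // all null for leaves
};
```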
comment imported from WordPress: This may sound like a nooby question, but what is the usual way to generate the patches of a given scene? I've read a few articles on how GI is computed for each patch or point given an average light intensity computed on renders (piece of cake stuff for me to understand), but nothing mentioning how the surfaces are actually generated. Which probably makes sense, since the lighting is the focus of those articles :p So how do you split your surfaces (before you get to the clustering phase to organize them)? And does your method work for any surfaces, or just planar ones like in the Cornell Box? Your CPU is similar in spec to mine (Core 2 at 2.2 GHz), so I've been meaning to try out some GI work to go with my deferred renderer.
comment imported from WordPress: This is something I've never actually done myself, but I believe a common approach is to perform a UV unwrapping process similar to the one used during lightmap generation. The DirectX SDK contains UVAtlas, which I've heard good things about: http://msdn.microsoft.com/en-us/library/windows/desktop/bb206321(v=vs.85).aspx
comment imported from WordPress: I'm more interested in knowing how you did the surface splitting. The results look quite similar to the ones shown here: http://freespace.virgin.net/hugo.elias/radiosity/radiosity.htm I'm actually using XNA, so I can't use the D3DX library's UVAtlas functions. Other than that, I haven't been able to find an algorithm that splits up surfaces into patches.
comment imported from WordPress: Well, in my test case all the surfaces were quads, so generating the patches was trivial. Are you really interested in splitting your geometry? These days, I think it's more common to parameterize with textures (you can think of the texels as your patches if that makes it easier to visualize). UVAtlas can run offline, so it doesn't matter whether you're using XNA or not.
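Since the test scene is all quads, the patch generation really is just a uniform subdivision. A sketch of that (with made-up names), which is also a reasonable mental model for the "texels as patches" view of a lightmap:

```cpp
// Sketch: split a planar quad into an n-by-n grid of patches, each with a
// center, normal, and area. Quad layout and names are illustrative.
#include <vector>

struct PatchSample { float pos[3]; float normal[3]; float area; };

// corner: one corner of the quad; edgeU/edgeV: the two edge vectors;
// normal: shared by every patch since the quad is planar.
std::vector<PatchSample> SubdivideQuad(const float corner[3],
                                       const float edgeU[3],
                                       const float edgeV[3],
                                       const float normal[3],
                                       int n, float quadArea)
{
    std::vector<PatchSample> patches;
    patches.reserve(n * n);
    const float patchArea = quadArea / float(n * n);
    for (int v = 0; v < n; ++v) {
        for (int u = 0; u < n; ++u) {
            const float fu = (u + 0.5f) / float(n);  // patch center in [0,1]
            const float fv = (v + 0.5f) / float(n);
            PatchSample p;
            for (int k = 0; k < 3; ++k) {
                p.pos[k] = corner[k] + fu * edgeU[k] + fv * edgeV[k];
                p.normal[k] = normal[k];
            }
            p.area = patchArea;
            patches.push_back(p);
        }
    }
    return patches;
}
```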
comment imported from WordPress: Ah, I see now. Well, I just prefer to use a managed library, but good to know that about UVAtlas. I'll probably start using quads only, since they really are trivial to map, and there's still a lot that can be made using just quads. Thanks for the visualization; that's how I pictured the patches in your screenshots, like texels without a filter.
comment imported from WordPress: Hi, could I use the image https://imdoingitwrong.files.wordpress.com/2011/04/gi1.png in my thesis? It nicely illustrates the artifact created by simple radiosity.
comment imported from WordPress: Sure!