
Comparing to SoTA #9

Open
aelhakie opened this issue Jan 27, 2025 · 1 comment
@aelhakie

Hello,
First of all, great work with this research! It's easy to use, well documented, and insanely useful!
I was curious to know if you have ever compared this method to the current state of the art, such as SuGaR, RaDe-GS, or Gaussian Opacity Fields?
Thank you!

@Lewis-Stuart-11
Owner

Hi,

Thank you for that :). That's an interesting question. I have not done a direct comparison with these methods, so if someone else has, I encourage them to comment with their results. In the meantime, I can give my two cents on the benefits and drawbacks of our method compared to some of these models.

Firstly, for meshing, these methods produce far superior results. Our solution is fairly basic: we generate points on predicted surfaces and then run surface reconstruction to obtain a final mesh, which can produce less desirable results. These other models instead incorporate meshing into the training process, resulting in much smoother and more accurate meshes.
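For anyone unfamiliar with that kind of pipeline, here is a minimal sketch of the general idea using Open3D's Poisson reconstruction. This is an illustration, not the repository's exact code, and the sphere points are just a stand-in for points sampled on the predicted surfaces:

```python
import numpy as np
import open3d as o3d

# Placeholder "surface" points: a unit sphere, standing in for points
# sampled on the predicted Gaussian surfaces.
pts = np.random.randn(5000, 3)
surface_points = pts / np.linalg.norm(pts, axis=1, keepdims=True)

# Build a point cloud and estimate normals (Poisson needs oriented normals).
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(surface_points)
pcd.estimate_normals()
pcd.orient_normals_consistent_tangent_plane(30)

# Screened Poisson surface reconstruction to obtain the final mesh.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
```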

For point clouds, it depends on what you want. If you only want points on the surface of the scene, then you can use the meshes generated by these models, since they focus on covering the surface and new points can easily be sampled from them. However, if you want to convert the entire 3DGS scene into a point cloud, then our method can do that for you, since it samples from all Gaussians regardless of their position relative to the surface. The benefit is a richer and more comprehensive converted scene, although this is more susceptible to noise. Another benefit is that our approach requires no retraining of the original scene, making it usable on scenes generated by any model that outputs a valid .ply or .splat file.
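To illustrate the "sample from all Gaussians" idea, here is a minimal sketch (again, not the repository's exact code) that draws points from each Gaussian's full covariance, assuming the Gaussians are given as means, log-scales, and wxyz quaternions, as stored in standard 3DGS .ply files:

```python
import numpy as np
from scipy.spatial.transform import Rotation


def sample_gaussians(means, log_scales, quats_wxyz, points_per_gaussian=10):
    """Draw points from N(mean, R S S^T R^T) for every Gaussian in the scene."""
    samples = []
    for mean, log_scale, q in zip(means, log_scales, quats_wxyz):
        # SciPy expects xyzw ordering; 3DGS .ply files store wxyz.
        R = Rotation.from_quat([q[1], q[2], q[3], q[0]]).as_matrix()
        S = np.diag(np.exp(log_scale))  # scales are stored in log space
        cov = R @ S @ S.T @ R.T         # full 3x3 covariance of the Gaussian
        samples.append(np.random.multivariate_normal(mean, cov, points_per_gaussian))
    return np.concatenate(samples)


# Toy usage with two random Gaussians:
means = np.random.randn(2, 3)
log_scales = np.full((2, 3), -3.0)
quats_wxyz = np.array([[1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0]])
points = sample_gaussians(means, log_scales, quats_wxyz)
```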

Hope this helps
