Understanding Eq. 1 and 2 #7
Hi Tengyu, I'm not sure if I fully understand your confusion. I'll try to answer your questions, and let me know if you have further ones.
> the occlusion relationship can change, as two different samples

I can try to give an example to explain why OmniMotion can handle occlusions. Let's say
This can also work, but only if the two points are cycle consistent (co-visible); the loss in Eq. 2, by contrast, can be applied to occluded points as well. We tried the idea of enforcing cycle-consistent points to be mapped to the same canonical location, but it didn't work very well. In fact, you need not only to pull matching points closer but also to push non-matching points farther apart; otherwise a trivial solution is to shrink the canonical space to be infinitely small. But we didn't find a version of this loss that worked robustly either.

Best,
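The pull/push idea described above can be sketched as a simple margin-based loss on canonical coordinates. This is a hypothetical illustration of the described idea, not the loss the authors actually tried; the function name, `margin` parameter, and array shapes are my own assumptions:

```python
import numpy as np

def contrastive_canonical_loss(u_i, u_j, match, margin=1.0):
    """Hypothetical sketch: pull cycle-consistent (matching) canonical
    points together and push non-matching ones at least `margin` apart,
    which blocks the trivial solution of shrinking the whole canonical
    space to a single point.
    u_i, u_j: (N, 3) canonical-space coordinates of paired samples.
    match:    (N,) boolean, True where the pair is cycle consistent."""
    d = np.linalg.norm(u_i - u_j, axis=-1)                        # pairwise distances
    pull = np.where(match, d ** 2, 0.0)                           # attract matches
    push = np.where(~match, np.maximum(0.0, margin - d) ** 2, 0.0)  # repel non-matches
    return float(np.mean(pull + push))
```

Without the `push` term, setting every canonical coordinate to the same point drives the loss to zero, which is exactly the collapse mentioned above.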
Please correct me if my understanding is wrong:
If
Because both
@tengyu-liu thanks for asking these questions, I'm also trying to get my head around this.
My understanding is they don't necessarily get the same colour and density as
According to sections 4.1 and 4.2, I believe that
In section 4.3 it says
so it seems perfectly possible to get different colour and density for the same point in different frames.
@boxraw-tech Tengyu is correct: if two local points map to the same point in the canonical volume, then they are guaranteed to get the same color and density.
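A toy way to see why this guarantee holds: the canonical network takes only the canonical coordinate as input, so two local points mapped to the same coordinate must receive identical outputs, no matter which frame they came from. Here `G` is a deterministic stand-in I made up, not the paper's actual network:

```python
import numpy as np

def G(u):
    """Hypothetical stand-in for the canonical network: it maps a canonical
    coordinate u to (color, density) and depends on nothing else."""
    u = np.asarray(u, float)
    color = 0.5 * (np.tanh(u) + 1.0)   # arbitrary deterministic function of u
    density = float(np.sum(u ** 2))
    return color, density

# Two queries at the same canonical point, e.g. reached from different frames:
c1, s1 = G([0.1, 0.2, 0.3])
c2, s2 = G([0.1, 0.2, 0.3])
# identical canonical coordinate -> identical color and density
```

Per-frame appearance can still differ only if the *canonical coordinates* differ, i.e. through the per-frame bijections, not through the canonical network itself.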
Hi @qianqianwang68 @tengyu-liu, I am still trying to wrap my head around this discussion. Could you help clarify a few points for me, particularly
Following your discussion, assuming $m < n$, the corresponding surface for
I think I have finally figured out the occlusions :). For a given point $p_i$ in the first image, you always get the same positions in the canonical volume, and hence the same colors and densities. Then for the second image you do the alpha compositing to get a single point $x_j$ (that is, the 2D point $p_j$ and its "depth"). You don't get the occlusion state yet. The paper doesn't mention how to obtain the occlusion state, but I think I have found it in the code.
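The compositing step mentioned above can be sketched as standard front-to-back alpha compositing over the $K$ mapped samples. This is a minimal illustration under my reading of Eq. 1 and 2, not the repository's actual code; the function name and shapes are assumptions:

```python
import numpy as np

def composite_mapped_point(x_j_samples, alphas):
    """Sketch of the aggregation described above: given K mapped samples
    x_j^k (ordered front to back along the ray in frame i) and their
    alphas, compute compositing weights w_k and the single predicted
    correspondence x_j = sum_k w_k * x_j^k.
    x_j_samples: (K, 3); alphas: (K,) with values in [0, 1]."""
    # transmittance before each sample: product of (1 - alpha) of all closer samples
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    w = alphas * trans                       # per-sample compositing weight
    x_j = (w[:, None] * x_j_samples).sum(axis=0)
    return x_j, w
```

If the front sample is fully opaque (alpha 1), it receives all the weight and samples behind it contribute nothing, which is how a nearer surface dominates the predicted correspondence.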
Congratulations on this great work! The demo and results are very impressive, and it has been a big hit! I really like the idea of using a quasi-3D representation and ignoring the ambiguities, since they are not important to the problem.
I'm trying to understand Eq. 1 and 2 from the paper and can't understand why we use the same points in the source frame, $x_i^k$, and the target frame, $x_j^k=\mathcal{T}_j^{-1}\circ\mathcal{T}_i(x_i^k)$, and hope I can get some clarifications.
In my understanding, if the points $x_j^k$ are the same points as $x_i^k$ in the canonical frame, then the occlusion relationship would not change across frames, as the camera ray still passes through the same set of points in the same order. Since $\sigma_k$ is stored in $G$ and does not change across frames, I don't understand why OmniMotion can handle occlusions.
So my question is: why do we compute $x_j^k$ as $\mathcal{T}_j^{-1}\circ\mathcal{T}_i(x_i^k)$ instead of sampling from a new ray in the $j$-th frame and mapping that to the same canonical space? Why does the model work so well even though $M_\theta$ cannot change the occlusion relationship?
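The mapping in the question can be made concrete with simple invertible affine maps standing in for the learned bijections $\mathcal{T}_i$ and $\mathcal{T}_j$ (the real $M_\theta$ is an invertible network; these stand-ins are purely illustrative and all names here are mine):

```python
import numpy as np

def make_affine(A, b):
    """Build a toy invertible map (local frame -> canonical) and its inverse.
    Stand-in for a learned bijection T; A must be invertible."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    fwd = lambda x: A @ x + b                  # local frame -> canonical
    inv = lambda u: np.linalg.solve(A, u - b)  # canonical -> local frame
    return fwd, inv

T_i, T_i_inv = make_affine(np.eye(3), [0.0, 0.0, 1.0])   # toy T_i
T_j, T_j_inv = make_affine(2.0 * np.eye(3), [0.0, 0.0, 0.0])  # toy T_j

x_i = np.array([0.5, 0.5, 3.0])   # a sample x_i^k on the ray in frame i
u   = T_i(x_i)                    # its canonical location, shared across frames
x_j = T_j_inv(u)                  # x_j^k = T_j^{-1}(T_i(x_i^k))
```

The key property the sketch exercises is that $x_j^k$ is not re-sampled in frame $j$; it is the frame-$j$ position of the *same* canonical point that the frame-$i$ ray sample landed on.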