I have some questions about MinibatchStatConcatLayer, mostly about the shape of vals in the code.
Let's say the shape of the input x is [b, c, h, w].
After this line:
PyTorch-progressive_growing_of_gans/models/base_model.py
Line 80 in 8337fc9
the shape of vals should be [1, c, h, w].
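For concreteness, here is a quick shape check of that step (a sketch with made-up sizes; I am assuming the linked line is a mean over the batch dimension with keepdim=True, since only the line number is referenced):

```python
import torch

# Made-up sizes; x stands in for the layer input of shape [b, c, h, w].
b, c, h, w = 4, 3, 8, 8
x = torch.randn(b, c, h, w)

# Assumed batch-mean step: average over dim 0, keeping the dimension.
vals = torch.mean(x, dim=0, keepdim=True)
print(vals.shape)  # torch.Size([1, 3, 8, 8]), i.e. [1, c, h, w]
```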
For the case of "all", we should get vals of shape [1, 1, 1, 1], since it is supposed to "average everything --> 1 value per minibatch". However, this line
PyTorch-progressive_growing_of_gans/models/base_model.py
Line 83 in 8337fc9
outputs vals of shape [1, 1, h, w].
I think we should use
vals = torch.mean(vals, keepdim=True)
instead, which you have commented out for an unknown reason.

What is the purpose of this line?
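Circling back to the "all" case above, here is a sketch of two ways to actually reduce a [1, c, h, w] tensor to [1, 1, 1, 1] (note that in recent PyTorch versions, torch.mean only accepts keepdim together with an explicit dim):

```python
import torch

vals = torch.randn(1, 3, 8, 8)  # stand-in for the batch-averaged tensor

# Reduce the remaining dims while keeping all four axes -> [1, 1, 1, 1]
all_avg = vals.mean(dim=(1, 2, 3), keepdim=True)
print(all_avg.shape)  # torch.Size([1, 1, 1, 1])

# Equivalent: reduce everything to a scalar, then restore a 4-D shape
all_avg2 = vals.mean().view(1, 1, 1, 1)
```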
PyTorch-progressive_growing_of_gans/models/base_model.py
Line 89 in 8337fc9
It looks like a no-op ('target_shape = target_shape'), so we still end up with [b, c, h, w].
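For context, minibatch-stat layers of this kind usually broadcast the reduced statistic back over the batch and concatenate it as an extra feature map, which may be what target_shape is meant to drive. A sketch of that general pattern (my assumption about the intent, not the repo's actual code):

```python
import torch

b, c, h, w = 4, 3, 8, 8
x = torch.randn(b, c, h, w)

# One per-pixel statistic across the batch: std over dim 0,
# then averaged over channels -> shape [1, 1, h, w]
stat = x.std(dim=0, keepdim=True).mean(dim=1, keepdim=True)

# Broadcast over the batch (the role a [b, 1, h, w] target shape
# would play) and append it as one extra channel.
stat = stat.expand(b, 1, h, w)
out = torch.cat([x, stat], dim=1)
print(out.shape)  # torch.Size([4, 4, 8, 8])
```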