Bug in the `f` function of `cw.py` when facing negative logits
In the `f` function of `cw.py`, `real` and `other` are computed by

```python
other = torch.max((1 - one_hot_labels) * outputs, dim=1)[0]
real = torch.max(one_hot_labels * outputs, dim=1)[0]
```
However, when the relevant logits are negative (a negative non-target logit for `other`, or a negative target logit for `real`), the masked-out entries are zero and dominate the `max`, so `other` and `real` incorrectly become zero instead of the actual logit values.
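A minimal sketch reproducing the issue, assuming a single sample with all-negative logits and true label 0:

```python
import torch

# One sample, three classes, all logits negative; true label is class 0.
outputs = torch.tensor([[-1.0, -2.0, -3.0]])
labels = torch.tensor([0])
one_hot_labels = torch.eye(3)[labels]  # [[1., 0., 0.]]

# Original computation from cw.py:
# one_hot_labels * outputs == [[-1., 0., 0.]] -> max is 0, not -1
real = torch.max(one_hot_labels * outputs, dim=1)[0]
# (1 - one_hot_labels) * outputs == [[0., -2., -3.]] -> max is 0, not -2
other = torch.max((1 - one_hot_labels) * outputs, dim=1)[0]

print(real.item(), other.item())  # both 0.0, masking the true logits
```

The zeros introduced by the one-hot mask win the `max` whenever every surviving logit is negative, so the loss term `other - real` collapses to zero.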
I suggest the following modification:

```python
other = torch.max((1 - one_hot_labels) * outputs - one_hot_labels * 1e4, dim=1)[0]
real = torch.sum(one_hot_labels * outputs, dim=1)
```
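A quick check of the proposed fix on the same all-negative example (single sample, three classes, true label 0, values chosen for illustration):

```python
import torch

outputs = torch.tensor([[-1.0, -2.0, -3.0]])
labels = torch.tensor([0])
one_hot_labels = torch.eye(3)[labels]  # [[1., 0., 0.]]

# Subtracting a large constant (1e4) at the target position pushes it
# below every other logit, so the max over non-target entries is correct:
# [[-1e4, -2., -3.]] -> max is -2.
other = torch.max((1 - one_hot_labels) * outputs - one_hot_labels * 1e4, dim=1)[0]

# Summing instead of taking the max picks out exactly the target logit,
# since every non-target entry is zeroed: sum([[-1., 0., 0.]]) == -1.
real = torch.sum(one_hot_labels * outputs, dim=1)

print(other.item(), real.item())  # -2.0 and -1.0, the true logit values
```

The `1e4` offset assumes logits never fall below -1e4, which holds for typical classifier outputs.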
Hi @EthanChu7, good question. This bug is already fixed by pull request #168, but it has not been merged into the main branch yet. 👍
Thanks for the reply. I'm satisfied with the modifications in pull #168 and look forward to seeing them in the main branch.