A serious problem: the target domain label is used in the training phase. #13

Open
WUT-YYN opened this issue Jan 13, 2025 · 0 comments
WUT-YYN commented Jan 13, 2025

From the code:

```python
def preprocess_labels(self, source_loader, target_loader):
    trg_y = copy.deepcopy(target_loader.dataset.y_data)
    src_y = source_loader.dataset.y_data
    pri_c = np.setdiff1d(trg_y, src_y)  # get the private classes in the target domain
    mask = np.isin(trg_y, pri_c)        # mark the private samples in the target domain
    trg_y[mask] = -1                    # set the label of each private sample to -1
    return trg_y, pri_c
```

we can see that:

  1. The true labels `trg_y` of the target-domain samples are read directly from `target_loader.dataset.y_data`, and the set of private class labels in the target domain is computed explicitly. In other words, ground-truth target labels are consumed during the training phase.
  2. `trg_y[mask] = -1` marks every "unknown class" sample as `-1`, but the target domain can contain more than one unknown class. How is this handled when all of them collapse into a single label?
  3. The "unknown classes" in the target domain are only detected by the bimodal test, but how can the model actually recognize them? The code contains nothing that learns to represent unknown classes.
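To make point 2 concrete, the masking step can be reproduced on toy label arrays (the arrays below are hypothetical, not taken from the repo): two distinct private classes both end up as the single label `-1`, so they become indistinguishable downstream.

```python
import numpy as np

# Hypothetical toy labels: source has classes {0, 1, 2},
# target additionally has private classes 3 and 4.
src_y = np.array([0, 1, 2, 0, 1])
trg_y = np.array([0, 1, 3, 4, 3, 2])

pri_c = np.setdiff1d(trg_y, src_y)  # classes present only in the target domain
mask = np.isin(trg_y, pri_c)        # True where a target sample is private
trg_y_masked = trg_y.copy()
trg_y_masked[mask] = -1             # every private class collapses to -1

print(pri_c)         # [3 4]  -> two distinct private classes
print(trg_y_masked)  # [ 0  1 -1 -1 -1  2] -> both now share one label
```

This shows both concerns at once: `pri_c` is derived from the true target labels, and the number of private classes is lost after masking.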