Textual Inversion training with negative example by altering loss function? #819
Let's say I want to train a bad/good-hand Textual Inversion embedding. Can I use both positive and negative examples (good and bad hands) instead of only one? I imagine preparing pairs of images that are identical except for the hand (inpainted manually as data preparation). Then, instead of calculating the loss as `loss = mse(target)`, I would calculate it as `loss = mse(positive_target) - mse(negative_target)`, or even `loss = mse(positive_target) - max(mse(target) for target in negative_targets)`. Would that work? Or would it actually be inefficient, so that I'd be better off just training with positive examples?
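A minimal sketch of the proposed loss in PyTorch, assuming a diffusers-style Textual Inversion step where the UNet predicts noise. The names `contrastive_ti_loss`, `model_pred`, `positive_target`, and `negative_targets` are hypothetical placeholders, not variables from any existing training script:

```python
import torch
import torch.nn.functional as F

def contrastive_ti_loss(
    model_pred: torch.Tensor,              # UNet noise prediction for the step
    positive_target: torch.Tensor,         # noise target from the good-hand image
    negative_targets: list[torch.Tensor],  # noise targets from bad-hand variants
) -> torch.Tensor:
    # Pull the prediction toward the positive example.
    positive_loss = F.mse_loss(model_pred, positive_target)

    # Push it away from the negatives; taking the max reproduces the second
    # variant, loss = mse(positive) - max(mse(t) for t in negative_targets).
    # (Using .min() instead would correspond to hardest-negative mining.)
    negative_loss = torch.stack(
        [F.mse_loss(model_pred, neg) for neg in negative_targets]
    ).max()

    # Caveat (my addition, not part of the original proposal): this difference
    # is unbounded below, so the optimizer can lower it indefinitely by drifting
    # away from the negatives. Contrastive losses usually bound the repulsion
    # term with a margin, e.g. positive_loss + F.relu(margin - negative_loss).
    return positive_loss - negative_loss

# Tiny smoke test with random tensors shaped like SD latents.
pred = torch.randn(1, 4, 64, 64, requires_grad=True)
loss = contrastive_ti_loss(
    pred,
    torch.randn(1, 4, 64, 64),
    [torch.randn(1, 4, 64, 64) for _ in range(3)],
)
loss.backward()
```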
Replies: 1 comment

I realize the problem/possible space of "bad hand" might be too large, or perhaps we should just scope it down to "too many fingers" for the sake of discussion.