```python
self.txt2img_pipe.load_textual_inversion(
    EMBEDDING_PATHS, token=EMBEDDING_TOKENS, local_files_only=True
)
textual_inversion_manager = DiffusersTextualInversionManager(self.txt2img_pipe)
self.compel_proc = Compel(
    tokenizer=self.txt2img_pipe.tokenizer,
    text_encoder=self.txt2img_pipe.text_encoder,
    textual_inversion_manager=textual_inversion_manager,
    truncate_long_prompts=False,
)

if prompt:
    conditioning = self.compel_proc.build_conditioning_tensor(prompt)
    if not negative_prompt:
        # an empty negative prompt still needs a conditioning tensor;
        # it can also be arbitrarily long if you want
        negative_prompt = ""
    negative_conditioning = self.compel_proc.build_conditioning_tensor(
        negative_prompt
    )
    [
        prompt_embeds,
        negative_prompt_embeds,
    ] = self.compel_proc.pad_conditioning_tensors_to_same_length(
        [conditioning, negative_conditioning]
    )
...
output = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    guidance_scale=guidance_scale,
    generator=generator,
    num_inference_steps=num_inference_steps,
    **extra_kwargs,
)
```
I'm having some weird issues; all the relevant code is shown above. The negative prompt messes up my image results, almost as if the negatives are getting mixed into the positives.
This happens only when both the prompt and the negative prompt exceed 77 tokens.
`extra_kwargs` does not contain `prompt` or `negative_prompt`, so only the embeds are passed into the pipeline. The pipeline in this case is ControlNet text-to-image.
Is it possible that negatives get mixed into positives in the `pad_conditioning_tensors_to_same_length` function?
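One way to narrow this down is to check the padding step in isolation. Below is a hedged pure-NumPy sketch of what pad-to-same-length conceptually does (this is not compel's actual implementation, and the padding vector choice is an assumption); it pads the shorter tensor along the sequence axis and verifies that neither tensor's original content leaks into the other:

```python
import numpy as np

def pad_to_same_length(a, b, pad_vector):
    """Pad the shorter of two (batch, seq, dim) arrays along the sequence
    axis by repeating a padding vector (conceptual sketch only)."""
    target = max(a.shape[1], b.shape[1])

    def pad(x):
        if x.shape[1] == target:
            return x
        extra = np.broadcast_to(
            pad_vector, (x.shape[0], target - x.shape[1], x.shape[2])
        )
        return np.concatenate([x, extra], axis=1)

    return pad(a), pad(b)

# Hypothetical shapes: a 154-token (two 77-token chunks) positive prompt
# vs. a 77-token negative prompt, with 768-dim embeddings.
rng = np.random.default_rng(0)
pos = rng.standard_normal((1, 154, 768))
neg = rng.standard_normal((1, 77, 768))
pad_vec = np.zeros(768)  # assumption: stand-in for an empty-prompt embedding

pos_padded, neg_padded = pad_to_same_length(pos, neg, pad_vec)
assert pos_padded.shape == neg_padded.shape == (1, 154, 768)
# Padding leaves each tensor's original content untouched:
assert np.array_equal(pos_padded, pos)
assert np.array_equal(neg_padded[:, :77], neg)
```

If the real embeds pass an analogous check (compare `prompt_embeds[:, :77]` against the unpadded `conditioning`), the mix-up is more likely in how the pipeline consumes the long embeds than in the padding itself.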
This is my image with the long negative prompt:
And this is the same seed and same prompt with no negative prompt: