Hi, authors. Your work is amazing. Thank you for your efforts toward safe AIGC.
I have one simple question about the inference stage. The white-box model used in this repository is "runwayml/stable-diffusion-inpainting". If I use only the text-prompt attack to generate a prompt, how can I generate an image from just that prompt? The official Hugging Face repository for "stable-diffusion-inpainting" requires three inputs (prompt, image, and mask image), whereas I only have the optimized prompt.
Thanks for your question. The text encoders used in SD-inpainting and SD are the same. To conduct prompt-only attacks, you can change the repository path to runwayml/stable-diffusion-v1-5 and use the corresponding text-to-image pipeline, which generates an image from the prompt alone, without a mask or original image as inputs.
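A minimal sketch of the suggested switch, assuming the `diffusers` library is installed; the prompt string is a placeholder for your optimized attack prompt, and the model ID is the one named above:

```python
import torch
from diffusers import StableDiffusionPipeline

# Text-to-image pipeline: unlike StableDiffusionInpaintPipeline,
# it needs no `image` or `mask_image` arguments, only a prompt.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to reduce GPU memory
)
pipe = pipe.to("cuda")

# Placeholder: substitute the optimized prompt from the text attack.
prompt = "your optimized adversarial prompt here"
image = pipe(prompt).images[0]
image.save("output.png")
```

Since both checkpoints share the same CLIP text encoder, a prompt optimized against the inpainting model's encoder should transfer directly to this text-to-image pipeline.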