Poor Output Quality Using demo_sample.ipynb with Provided Model Parameters #17
Comments
First, you should use a more detailed description, such as "a red apple on the table". In addition, the category you describe should be relatively common in the ImageNet dataset; apples may not be particularly common there, so you could try replacing "apple" with "car".
I still get the same poor results.
Another thing to note is that you can provide more text descriptions and vary how they are phrased, because both greatly affect the generated output. After all, the model was trained only once on ImageNet; for better results, I think we would need more detailed text and more diverse image data.
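To illustrate the advice above, here is a minimal sketch (plain Python, no model code) that expands a bare class name into several more detailed prompt variants and swaps a rare class for a more common ImageNet one. The template wording is hypothetical, not from this repository:

```python
# Expand a bare class name into several more descriptive prompts.
# Templates below are illustrative examples, not part of the repo's API.
def expand_prompts(class_name):
    templates = [
        "a photo of a {}",
        "a red {} on a wooden table",
        "a close-up photograph of a single {}",
        "a {} in natural daylight, high detail",
    ]
    return [t.format(class_name) for t in templates]

# Swap a less common class ("apple") for a more common one ("car"),
# as suggested above, and try each variant when sampling.
for prompt in expand_prompts("car"):
    print(prompt)
```

Feeding several such variants to the sampler and comparing the outputs is a cheap way to see how sensitive the model is to prompt phrasing.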
The last checkpoint has been updated; you can try it again.
Thank you for providing the pretrained model parameters and the demo_sample.ipynb notebook. However, I encountered an issue where the generated results are of very poor quality and do not match the input text at all. Could you please help me understand and resolve this issue? Here is an example image generated with the text "an apple":