If I train with the author's existing model, it should be 256x256, right? I want to train my own 512x512 model on my own 2K videos. What are the detailed steps I should follow? @chunyu-li Asking for help.
Please refer to issue #12.
Thanks for the reply. I have read that post, but I still have a few questions:
Regarding the first question: after downloading https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5/tree/main/vae, which directory should I put it in? How does your training code currently use this VAE model?
The second question I have already taken care of.
For the third question, how should I modify the SyncNet config files?
For example, in first_stage.yaml, besides changing `resolution: 256` to 512, are there any other adjustments I need to make? In syncnet_16_pixel.yaml, I changed `resolution: 256` to 512; for the `audio_encoder` and `visual_encoder` under `model`, could you provide a configuration suited to training at 512x512?
Thank you very much for your support and help.
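The resolution edits described above might look like the fragment below. This is purely a sketch: only the keys quoted in this thread (`resolution`, `model.audio_encoder`, `model.visual_encoder` in `first_stage.yaml` / `syncnet_16_pixel.yaml`) come from the discussion; the surrounding structure and the comment about the extra downsampling stage are assumptions, not the maintainer's verified settings.

```yaml
# first_stage.yaml (hypothetical sketch -- layout assumed, only the quoted key is known)
resolution: 512   # was 256

# syncnet_16_pixel.yaml (hypothetical sketch)
resolution: 512   # was 256
model:
  audio_encoder:
    # keep the audio branch as in the repo's 256x256 config; the audio
    # input size does not change with the video resolution
    ...
  visual_encoder:
    # assumption: doubling the frame size (256 -> 512) likely requires one
    # extra downsampling stage in the visual branch so its output embedding
    # still matches the audio embedding; exact channel/stride values must
    # come from the maintainer or the repo's actual config
    ...
```

Treat the `...` entries as placeholders to be filled from the repository's real 256x256 configs, not as working values.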