For the past 27 days I have been trying to fix my LoRA training error, but no matter what I try (I even asked ChatGPT), it still isn't fixed. If anyone knows how to correct this error, please let me know (sorry for my bad English).
Folder 75_test: 11 images found
Folder 75_test: 825 steps
max_train_steps = 825
stop_text_encoder_training = 0
lr_warmup_steps = 82
accelerate launch --num_cpu_threads_per_process=2 "train_network.py" --enable_bucket --pretrained_model_name_or_path="D:/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0-pruned.ckpt" --train_data_dir="D:/stable-diffusion/output/image" --resolution=512,512 --output_dir="D:/stable-diffusion/output/model" --logging_dir="D:/stable-diffusion/output/log" --network_alpha="1" --save_model_as=safetensors --network_module=networks.lora --text_encoder_lr=5e-5 --unet_lr=0.0001 --network_dim=8 --output_name="last" --lr_scheduler_num_cycles="1" --learning_rate="0.0001" --lr_scheduler="cosine" --lr_warmup_steps="82" --train_batch_size="1" --max_train_steps="825" --save_every_n_epochs="1" --mixed_precision="fp16" --save_precision="fp16" --cache_latents --optimizer_type="AdamW8bit" --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale
prepare tokenizer
Use DreamBooth method.
prepare images.
found directory D:\stable-diffusion\output\image\75_test contains 22 image files
1650 train images with repeating.
0 reg images.
no regularization images / 正則化画像が見つかりませんでした
[Dataset 0]
batch_size: 1
resolution: (512, 512)
enable_bucket: True
min_bucket_reso: 256
max_bucket_reso: 1024
bucket_reso_steps: 64
bucket_no_upscale: True
[Subset 0 of Dataset 0]
image_dir: "D:\stable-diffusion\output\image\75_test"
image_count: 22
num_repeats: 75
shuffle_caption: False
keep_tokens: 0
caption_dropout_rate: 0.0
caption_dropout_every_n_epoches: 0
caption_tag_dropout_rate: 0.0
color_aug: False
flip_aug: False
face_crop_aug_range: None
random_crop: False
is_reg: False
class_tokens: test
caption_extension: .caption
[Dataset 0]
loading image sizes.
100%|██████████████████████████████████████████████████████████████████████████████████| 11/11 [00:00<00:00, 77.78it/s]
make buckets
min_bucket_reso and max_bucket_reso are ignored if bucket_no_upscale is set, because bucket reso is defined by image size automatically / bucket_no_upscaleが指定された場合は、bucketの解像度は画像サイズから自動計算されるため、min_bucket_resoとmax_bucket_resoは無視されます
number of images (including repeats) / 各bucketの画像枚数(繰り返し回数を含む)
bucket 0: resolution (384, 640), count: 75
bucket 1: resolution (512, 320), count: 75
bucket 2: resolution (512, 448), count: 75
bucket 3: resolution (512, 512), count: 600
mean ar error (without repeats): 0.0284594653226946
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net: <All keys matched successfully>
loading vae: <All keys matched successfully>
Traceback (most recent call last):
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\transformers\modeling_utils.py", line 415, in load_state_dict
return torch.load(checkpoint_file, map_location="cpu")
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\torch\serialization.py", line 705, in load
with _open_zipfile_reader(opened_file) as opened_zipfile:
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\torch\serialization.py", line 242, in __init__
super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\transformers\modeling_utils.py", line 419, in load_state_dict
if f.read(7) == "version":
File "D:\python\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 2077: character maps to <undefined>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\stable-diffusion\kohya\kohya_ss\train_network.py", line 699, in <module>
train(args)
File "D:\stable-diffusion\kohya\kohya_ss\train_network.py", line 126, in train
text_encoder, vae, unet, _ = train_util.load_target_model(args, weight_dtype)
File "D:\stable-diffusion\kohya\kohya_ss\library\train_util.py", line 2545, in load_target_model
text_encoder, vae, unet = model_util.load_models_from_stable_diffusion_checkpoint(args.v2, name_or_path)
File "D:\stable-diffusion\kohya\kohya_ss\library\model_util.py", line 921, in load_models_from_stable_diffusion_checkpoint
text_model = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\transformers\modeling_utils.py", line 2301, in from_pretrained
state_dict = load_state_dict(resolved_archive_file)
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\transformers\modeling_utils.py", line 431, in load_state_dict
raise OSError(
OSError: Unable to load weights from pytorch checkpoint file for 'C:\Users\bisha/.cache\huggingface\hub\models--openai--clip-vit-large-patch14\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff\pytorch_model.bin' at 'C:\Users\bisha/.cache\huggingface\hub\models--openai--clip-vit-large-patch14\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff\pytorch_model.bin'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.
Traceback (most recent call last):
File "D:\python\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "D:\python\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "D:\stable-diffusion\kohya\kohya_ss\venv\Scripts\accelerate.exe\__main__.py", line 7, in <module>
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\accelerate_cli.py", line 45, in main
args.func(args)
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 1104, in launch_command
simple_launcher(args)
File "D:\stable-diffusion\kohya\kohya_ss\venv\lib\site-packages\accelerate\commands\launch.py", line 567, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['D:\\stable-diffusion\\kohya\\kohya_ss\\venv\\Scripts\\python.exe', 'train_network.py', '--enable_bucket', '--pretrained_model_name_or_path=D:/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/anything-v4.0-pruned.ckpt', '--train_data_dir=D:/stable-diffusion/output/image', '--resolution=512,512', '--output_dir=D:/stable-diffusion/output/model', '--logging_dir=D:/stable-diffusion/output/log', '--network_alpha=1', '--save_model_as=safetensors', '--network_module=networks.lora', '--text_encoder_lr=5e-5', '--unet_lr=0.0001', '--network_dim=8', '--output_name=last', '--lr_scheduler_num_cycles=1', '--learning_rate=0.0001', '--lr_scheduler=cosine', '--lr_warmup_steps=82', '--train_batch_size=1', '--max_train_steps=825', '--save_every_n_epochs=1', '--mixed_precision=fp16', '--save_precision=fp16', '--cache_latents', '--optimizer_type=AdamW8bit', '--max_data_loader_n_workers=0', '--bucket_reso_steps=64', '--xformers', '--bucket_no_upscale']' returned non-zero exit status 1.
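The root cause is the innermost exception: "PytorchStreamReader failed reading zip archive: failed finding central directory", raised while transformers loads the cached openai/clip-vit-large-patch14 text encoder from C:\Users\bisha\.cache\huggingface\hub. That error usually means the cached pytorch_model.bin download is truncated or corrupted; the later UnicodeDecodeError and the from_tf=True hint are just follow-on noise from transformers re-opening the broken file. Below is a minimal sketch of one way to confirm this and force a fresh download; it assumes the kohya_ss venv is active and reuses the cache path from the traceback above, so adjust it if yours differs.

```python
# Sketch (not part of the original log): check whether the cached CLIP text
# encoder is corrupted and, if so, force a fresh download. Run inside the
# kohya_ss venv. The cache path is copied from the traceback above.
import torch
from transformers import CLIPTextModel

cached_file = (r"C:\Users\bisha\.cache\huggingface\hub"
               r"\models--openai--clip-vit-large-patch14"
               r"\snapshots\8d052a0f05efbaefbc9e8786ba291cfdf93e5bff"
               r"\pytorch_model.bin")

try:
    # A healthy checkpoint loads cleanly; a truncated download raises the same
    # "failed finding central directory" RuntimeError seen in the log.
    torch.load(cached_file, map_location="cpu")
    print("cached checkpoint looks fine")
except Exception as exc:
    print(f"cached checkpoint is broken: {exc}")
    # force_download=True tells transformers to ignore the broken cache entry
    # and pull the weights again from the Hugging Face Hub.
    CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14",
                                  force_download=True)
    print("re-downloaded openai/clip-vit-large-patch14")
```

Simply deleting the models--openai--clip-vit-large-patch14 folder under C:\Users\bisha\.cache\huggingface\hub and re-running the accelerate launch command should have the same effect, since transformers will re-download the missing files automatically.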