Update LyCORIS options
bmaltais committed Dec 10, 2023
1 parent 50ffd1a commit 27cd481
Showing 5 changed files with 430 additions and 243 deletions.
2 changes: 1 addition & 1 deletion .release
@@ -1 +1 @@
-v22.3.0
+v22.3.1
49 changes: 8 additions & 41 deletions README.md
@@ -202,8 +202,8 @@ This Colab notebook was not created or maintained by me; however, it appears to

I would like to express my gratitude to camenduru for their valuable contribution. If you encounter any issues with the Colab notebook, please report them on their repository.

| Colab | Info |
| ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------ |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/kohya_ss-colab/blob/main/kohya_ss_colab.ipynb) | kohya_ss_gui_colab |

## Installation
@@ -651,6 +651,11 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking b


## Change History
* 2023/12/10 (v22.3.1)
- Add goto button to manual caption utility
- Add missing options for various LyCORIS training algorithms
- Refactor how fields are shown or hidden
- Set the max value for network and convolution rank to 512, except for LyCORIS/LoKr.
* 2023/12/06 (v22.3.0)
- Merge sd-scripts updates:
- `finetune\tag_images_by_wd14_tagger.py` now supports a separator other than `,` via the `--caption_separator` option (see the sketch after this list). Thanks to KohakuBlueleaf! PR [#913](https://github.com/kohya-ss/sd-scripts/pull/913)
@@ -664,42 +669,4 @@ masterpiece, best quality, 1boy, in business suit, standing at street, looking b
- `--ds_ratio` option denotes the ratio of the Deep Shrink. `0.5` means half of the original latent size for the Deep Shrink.
- `--dst1`, `--dst2`, `--dsd1`, `--dsd2` and `--dsr` prompt options are also available.
- Add GLoRA support
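
For illustration, a minimal sketch of the new `--caption_separator` option, assuming the tagger's usual positional image-folder argument (the folder path is a placeholder, not part of this commit):

```
python finetune/tag_images_by_wd14_tagger.py /path/to/images --caption_separator ";"
```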

* 2023/12/03 (v22.2.2)
- Update LyCORIS module to 2.0.0 (https://github.com/KohakuBlueleaf/LyCORIS/blob/0006e2ffa05a48d8818112d9f70da74c0cd30b99/README.md)
- Update LyCORIS merge and extract tools
- Remove unnecessary warning about local pip modules
- Add support for LyCORIS presets
- Add support for LyCORIS Native Fine-Tuning
- Add support for LyCORIS Diag-OFT

* 2023/11/20 (v22.2.1)
- Fix issue with `Debiased Estimation loss` not getting properly loaded from the json file. Oops.

* 2023/11/15 (v22.2.0)
- sd-scripts code base update (a combined command sketch follows this entry):
- `sdxl_train.py` now supports different learning rates for each Text Encoder.
- Example:
- `--learning_rate 1e-6`: train U-Net only
- `--train_text_encoder --learning_rate 1e-6`: train U-Net and two Text Encoders with the same learning rate (same as the previous version)
- `--train_text_encoder --learning_rate 1e-6 --learning_rate_te1 1e-6 --learning_rate_te2 1e-6`: train U-Net and two Text Encoders with different learning rates
- `--train_text_encoder --learning_rate 0 --learning_rate_te1 1e-6 --learning_rate_te2 1e-6`: train two Text Encoders only
- `--train_text_encoder --learning_rate 1e-6 --learning_rate_te1 1e-6 --learning_rate_te2 0`: train U-Net and one Text Encoder only
- `--train_text_encoder --learning_rate 0 --learning_rate_te1 0 --learning_rate_te2 1e-6`: train one Text Encoder only

- `train_db.py` and `fine_tune.py` now support a different learning rate for the Text Encoder. Specify it with the `--learning_rate_te` option.
- To train the Text Encoder with `fine_tune.py`, specify the `--train_text_encoder` option too. `train_db.py` trains the Text Encoder by default.

- Fixed a bug where the Text Encoder was not trained when block lr was specified in `sdxl_train.py`.

- Debiased Estimation loss is added to each training script. Thanks to sdbds!
- Specify `--debiased_estimation_loss` option to enable it. See PR [#889](https://github.com/kohya-ss/sd-scripts/pull/889) for details.
- Training of Text Encoder is improved in `train_network.py` and `sdxl_train_network.py`. Thanks to KohakuBlueleaf! PR [#895](https://github.com/kohya-ss/sd-scripts/pull/895)
- The moving average of the loss is now displayed in the progress bar in each training script. Thanks to shirayu! PR [#899](https://github.com/kohya-ss/sd-scripts/pull/899)
- PagedAdamW32bit optimizer is supported. Specify `--optimizer_type=PagedAdamW32bit`. Thanks to xzuyn! PR [#900](https://github.com/kohya-ss/sd-scripts/pull/900)
- Other bug fixes and improvements.
- kohya_ss gui updates:
- Implement GUI support for the SDXL finetune TE1 and TE2 training LR parameters, and for the non-SDXL finetune TE training LR parameter
- Implement GUI support for Dreambooth TE LR parameter
- Implement Debiased Estimation loss at the bottom of the Advanced Parameters tab.
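
For illustration only, a combined sketch of the options described in this entry; the model, dataset, and output arguments are omitted here and would be required in practice:

```
accelerate launch sdxl_train.py \
  --train_text_encoder --learning_rate 1e-6 \
  --learning_rate_te1 1e-6 --learning_rate_te2 1e-6 \
  --optimizer_type=PagedAdamW32bit \
  --debiased_estimation_loss \
  <model, dataset, and output arguments>
```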

6 changes: 6 additions & 0 deletions library/class_basic_training.py
@@ -115,6 +115,12 @@ def __init__(
                interactive=True,
            )
        with gr.Row():
            self.max_grad_norm = gr.Slider(
                label="Max grad norm",  # gradient-clipping limit, forwarded as --max_grad_norm
                value=1.0,
                minimum=0.0,
                maximum=1.0,
            )
            self.lr_scheduler_args = gr.Textbox(
                label="LR scheduler extra arguments",
                placeholder='(Optional) eg: "lr_end=5e-5"',
5 changes: 5 additions & 0 deletions library/common_gui.py
@@ -710,6 +710,11 @@ def run_cmd_training(**kwargs):
    lr_scheduler_args = kwargs.get('lr_scheduler_args', '')
    if lr_scheduler_args != '':
        run_cmd += f' --lr_scheduler_args {lr_scheduler_args}'

    # Forward the new slider value; an empty string means the option was not set
    max_grad_norm = kwargs.get('max_grad_norm', '')
    if max_grad_norm != '':
        run_cmd += f' --max_grad_norm="{max_grad_norm}"'

    return run_cmd
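
For illustration, a hypothetical call showing how the new value flows through; the argument values below are examples, not part of this commit, and the remaining training kwargs are assumed to fall back to their defaults:

```
# Hypothetical usage: the GUI collects widget values and passes them to
# run_cmd_training, which appends the matching CLI flags to the command.
cmd = run_cmd_training(
    lr_scheduler_args='lr_end=5e-5',
    max_grad_norm=1.0,  # value from the new "Max grad norm" slider
)
# cmd now ends with: --lr_scheduler_args lr_end=5e-5 --max_grad_norm="1.0"
```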


