Question About LoRA Training

#5 opened by Jilas

First off, I just want to say a big thank you for open-sourcing LibreFLUX! It’s a fantastic project, and I’m really excited to dive into it.

I’ve been experimenting with LoRA training on your model and wanted to ask if you have any recommended parameters or specific tools that work best for this. I’ve tried using AI Toolkit, but I ran into an issue where I couldn’t get it to train on text. Do you happen to know why that might be?

Again, thanks so much for your hard work and for sharing it with the community! I really appreciate it and look forward to hearing your thoughts.

AI Toolkit does not implement attention masking; there is an open issue tracking this. Please use SimpleTuner, as specified in the README.md.

edit: This guide here may help you too: https://github.com/AmericanPresidentJimmyCarter/simple-flux-lora-training

You will need to add --flux_attention_masked_training to the config.json.
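SimpleTuner reads its training options from a config.json that maps CLI-style flags to values, so the flag goes in as a key there. A minimal illustrative fragment is below; the model path and other keys shown are assumptions for this example, so check the SimpleTuner docs for the exact option names your version expects:

```json
{
  "--model_family": "flux",
  "--flux_attention_masked_training": "true",
  "--pretrained_model_name_or_path": "jimmycarter/LibreFlux-SimpleTuner"
}
```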

A user recently attempted a finetune of LibreFLUX and found that the custom pipeline caused problems with SimpleTuner, so when finetuning please use this repo instead: https://huggingface.co/jimmycarter/LibreFlux-SimpleTuner

If you have trouble, please come to the SimpleTuner discord server through the link here: https://huggingface.co/terminusresearch

Thanks!

Thank you so much. I will test it out today!
