  1. Doing big training best parameters research is so hard - GitHub

    Dec 1, 2025 · Please fix this issue: I already have an improved default config, and every model requires separate hyperparameter research, which is extremely hard right now

  2. ai-toolkit/notebooks/FLUX_1_dev_LoRA_Training.ipynb at main - GitHub

    The ultimate training toolkit for finetuning diffusion models - ostris/ai-toolkit

  3. feat: Add SageAttention support for Wan models (2-3x training …

    Config mistakes (wrong LR ratios). Includes: three recommended config approaches, training duration guidelines (500-800 steps max per expert), and alternative strategies (sequential training, …

  4. AI Toolkit by Ostris - GitHub

    IMPORTANT NOTE - READ THIS: This is my research repo. I do a lot of experiments in it, and it is possible that I will break things. If something breaks, check out an earlier commit. This repo …

  5. Progressive Alpha Scheduling, Advanced Metrics, and Video

    Config mistakes (wrong LR ratios). Includes: three recommended config approaches, training duration guidelines (500-800 steps max per expert), and alternative strategies (sequential training, …

  6. Does not work with m1 mac · Issue #127 · ostris/ai-toolkit - GitHub

    Also, do not say "no one"; most of the top people in AI are Chinese! Look at all our research papers too.