I really like that you can resume or extend a LoRA training run with LTX Trainer by setting the model.load_checkpoint parameter in the config YAML file, but there is one inconvenience - training steps ...
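For reference, a minimal sketch of what this looks like in the config YAML. Only model.load_checkpoint is taken from the description above; the surrounding keys and the checkpoint path are placeholders and may not match the actual LTX Trainer schema:

```yaml
# Minimal sketch of a training config, assuming a nested "model" section.
# Only model.load_checkpoint comes from the post above; the path below is
# a hypothetical example of a previously saved checkpoint to resume from.
model:
  load_checkpoint: "outputs/my_lora_run/checkpoints/checkpoint-2000"
```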