I really like that you can resume or extend a LoRA training run with LTX Trainer by setting the model.load_checkpoint parameter in the config YAML file, but there is one inconvenience: training steps ...
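
For reference, a minimal sketch of what that looks like in the config YAML. The load_checkpoint key under model is the parameter mentioned above; the path and any other fields shown are placeholders I'm assuming for illustration, not the trainer's full config:

```yaml
# Minimal resume/extend sketch (only model.load_checkpoint is taken from the post; the rest is illustrative)
model:
  # Point at the checkpoint saved by the previous run to continue the LoRA training
  load_checkpoint: "outputs/my_lora_run/checkpoints/checkpoint-2000"  # hypothetical path
```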