LoRA

What is LoRA?

LoRA (Low-Rank Adaptation) is a method for fine-tuning generative AI models with specific styles or concepts while keeping the process efficient and lightweight.

Here’s how it works in simple terms:
- The problem: Fine-tuning a large model like Stable Diffusion to learn new styles or concepts usually needs significant compute and storage.

The LoRA Solution:
- Instead of changing all model parameters, LoRA trains a small subset.
- Think of it as a removable style layer that can be applied only when needed.
- This keeps the base model behavior intact while enabling targeted customization.

Why it’s Cool:
- Efficient: It uses far less memory and is faster than traditional fine-tuning methods.
- Flexible: You can train multiple LoRA "filters" for different styles or concepts and swap them in and out without modifying the base model.
- Compatible: LoRA modules can be shared or reused easily, so artists and developers can collaborate or try out others’ custom styles.

Example Use Case
- Say you want to teach a model to draw in the style of a fictional artist.
- You can train a LoRA on a small set of sample images in that style.
- Once trained, load the LoRA module and the model can generate images in that style.

In short, LoRA helps you teach models new behavior without changing the original model or requiring heavy hardware.
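The efficiency claim can be made concrete with a toy parameter count. For a single d-by-k weight matrix, full fine-tuning trains all d*k numbers, while a rank-r LoRA trains only two small matrices, B (d-by-r) and A (r-by-k). A minimal sketch with illustrative dimensions:

```python
# Toy parameter count: full fine-tune vs. rank-r LoRA for one weight matrix.
d, k = 1024, 1024   # layer dimensions (illustrative)
r = 8               # LoRA rank; typically much smaller than d and k

full_params = d * k         # every weight in the matrix is trained
lora_params = r * (d + k)   # only B (d x r) and A (r x k) are trained

print(f"full: {full_params}, lora: {lora_params}, "
      f"savings: {full_params // lora_params}x")  # 64x fewer parameters here
```

The savings grow with layer size: the larger the matrix, the bigger the gap between d*k and r*(d+k).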

LoRA Types

There are many LoRA types. Common ones include LoRA, DoRA, LoCon, HaDa, gLoRA, LoKR, and LyCORIS.

They differ in:
- Which model components are trained (typically UNet, sometimes text encoder)
- Which layers are trained
- Which algorithm is used to extract LoRA weights

Warning

A LoRA must always match the base model used for its training.
For example, you cannot use an SD 1.5 LoRA with an SD-XL model.

Warning

SD.Next attempts to automatically detect and apply the correct LoRA type.
However, new LoRA types appear regularly. If you find a LoRA that is not compatible, please report it so we can add support for it.

How to use?

  • UI: Open the Networks tab, select a LoRA, and it is added to the prompt.
  • Manual: Type <lora:lora_name:strength> into the prompt yourself, substituting the name of the LoRA you want to use.

Trigger words

Some (but not all) LoRAs associate specific words with concepts during training, so the same words can be used to trigger specific behavior from the LoRA.
SD.Next displays these trigger words in UI -> Networks -> LoRA, but they can also be typed manually in the prompt.

You can combine any number of LoRAs in a single prompt to get the desired output.
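For example, a prompt blending two hypothetical style LoRAs at different strengths (the names here are placeholders):

<lora:watercolor_style:0.8> <lora:film_grain:0.4> a portrait of a knight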

Tip

If you want to automatically apply trigger words/tags to the prompt, you can use the auto-apply feature in Settings -> Networks.

Tip

You can change the strength of a LoRA by changing the number in <lora:name:x.x> to the desired value.

Tip

If you're combining multiple LoRAs, you can also export the combination as a single LoRA via Models -> Extract LoRA.

Advanced

Advanced options let you control where and how LoRA is applied.

Component selection

By default, LoRA is applied to all model components it was trained on. However, you can also specify which component to apply LoRA to by adding :module=xxx to the LoRA tag.

Example:

<lora:test_lora:1.0:module=unet>

would apply the LoRA only to the unet component, regardless of which components the LoRA contains.

This is useful when you have multiple LoRAs and want to apply each one to different parts of the model.

Example:

<lora:firstlora:1.0:low> and <lora:secondlora:1.0:high>

Note: low is shorthand for module=transformer_2 and high is shorthand for module=transformer.

Component weights

Typically, :strength is applied uniformly to all components of the LoRA.
However, you can also specify individual component weights by adding :comp=x.x to the LoRA tag, where comp is the component name (e.g. te for the text encoder, unet for the UNet).

Example:

<lora:test_lora:te=0.5:unet=1.5>

Block weights

Instead of using simple :strength, you can specify individual block weights for LoRA by adding :in=x.x:mid=y.y:out=z.z to the LoRA tag.
Example:

<lora:test_lora:1.0:in=0:mid=1:out=0>
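Conceptually, block weights scale the LoRA's update differently for the input, middle, and output blocks of the UNet. A toy sketch of that idea (illustrative only, not SD.Next's actual code):

```python
# Toy: per-block scaling of a LoRA delta, as in <lora:name:1.0:in=0:mid=1:out=0>.
block_weights = {"in": 0.0, "mid": 1.0, "out": 0.0}
lora_delta = 0.25  # stand-in scalar for the LoRA's learned update

# Each block's effective contribution is its weight times the delta.
effective = {block: w * lora_delta for block, w in block_weights.items()}
print(effective)  # {'in': 0.0, 'mid': 0.25, 'out': 0.0}
```

With these weights, only the middle blocks receive the LoRA's influence; the input and output blocks are left untouched.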

Stepwise weights

LoRA can also be applied with full per-step control by adding step-specific instructions to the LoRA tag. Example:

<lora:test_lora:te=0.1@1,0.6@6>

would apply the LoRA to the text encoder with strength 0.1 starting at step 1, then switch to strength 0.6 at step 6.
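The schedule syntax can be read as a comma-separated list of strength@step pairs. A small illustrative parser (not SD.Next's actual implementation) makes the semantics concrete:

```python
def parse_schedule(spec: str):
    """Parse 'strength@step' pairs, e.g. '0.1@1,0.6@6' -> [(1, 0.1), (6, 0.6)]."""
    pairs = []
    for item in spec.split(","):
        strength, step = item.split("@")
        pairs.append((int(step), float(strength)))
    return sorted(pairs)

def strength_at(schedule, step):
    """Strength in effect at a given sampling step (0.0 before the first entry)."""
    current = 0.0
    for start, strength in schedule:
        if step >= start:
            current = strength
    return current

sched = parse_schedule("0.1@1,0.6@6")
print(strength_at(sched, 3))  # 0.1 -- first entry still in effect
print(strength_at(sched, 6))  # 0.6 -- switched at step 6
```

Each entry stays in effect until the next entry's step is reached.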

Troubleshooting

For any LoRA-related issues, please follow the procedure below:
- set environment variable SD_LORA_DEBUG=true
- start SD.Next as usual and run it until the problem occurs
- create a GitHub issue
- upload the full sdnext.log as well as any console exception messages
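On Linux or macOS, the debug variable can be set for a single run like this (the launch command shown is a placeholder for however you normally start SD.Next):

```shell
# enable verbose LoRA debug logging for this session only
export SD_LORA_DEBUG=true
# then start SD.Next as usual, e.g.:
# ./webui.sh
```

On Windows, use `set SD_LORA_DEBUG=true` in the same console before launching.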

LoRA Loader Settings

SD.Next provides several options for how LoRAs are loaded and applied. All settings are available in Settings -> Extra Networks -> LoRA.

Loading Method

LoRA load using Diffusers method (lora_force_diffusers)
- When disabled (default): Uses SD.Next's native LoRA implementation
- When enabled: Uses HuggingFace Diffusers' built-in LoRA support

Fuse Options

Fusing merges LoRA weights directly into the model instead of applying them on-the-fly. This reduces memory usage since no backup weights are stored.
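The trade-off can be sketched with a toy scalar "weight" (illustrative only; real fusing merges entire weight matrices):

```python
# Toy illustration: on-the-fly vs. fused LoRA application.
base_weight = 1.00   # frozen model weight
lora_delta = 0.25    # stand-in for the LoRA's learned update (B @ A in practice)
strength = 0.8

# On-the-fly: the base weight stays intact and the delta is applied at
# compute time, so the LoRA can be removed by simply dropping the delta.
effective = base_weight + strength * lora_delta

# Fused: the delta is merged into the weight itself and no backup copy is
# kept, which saves memory but means removal requires reloading the model.
fused_weight = base_weight + strength * lora_delta

print(effective, fused_weight)  # same result either way: 1.2
```

Both paths produce identical outputs; they differ only in whether the original weights are preserved in memory.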

LoRA native fuse with model (lora_fuse_native)
- Applies when using the native loading method
- Default: enabled

LoRA diffusers fuse with model (lora_fuse_diffusers)
- Applies when using the Diffusers loading method
- Enables torch.compile compatibility
- Default: disabled

Warning

Fused LoRA effects can persist after removal.
After removing or switching a LoRA while fusing is enabled, you may still see its style in generated images. To fully restore the original model, use System -> Reload Model.

Other Settings

LoRA force reload always (lora_force_reload)
- Forces LoRA to reload from storage on every generation
- Useful for debugging or when LoRA files are being modified externally
- Default: disabled

LoRA memory cache (lora_in_memory_limit)
- Number of LoRAs to keep cached in memory
- Reduces reload time when switching between frequently used LoRAs
- Default: 1