
How to use ControlNet

ControlNet lets you guide image generation with extra conditioning data.

Different ControlNet models influence images in different ways. For example, you can pose people, preserve scene layout with depth, or guide generation from line art. They use a control image, either prepared in advance or generated on the fly, that encodes the information the selected model needs.

Note

ControlNet models can be large.
Using ControlNet decreases generation speed and increases resource usage, especially GPU VRAM.

Choosing a model

Model choice matters: each model controls a different aspect of the result, and each does so with a different strength.

Available models depend on your base model family (SD 1.5, SDXL, SD 3.5, Flux, and others).

Commonly used models:

  • OpenPose
    Used to pose people or characters from "stick-like" pose images that represent eyes, nose, ears, limbs, and sometimes fingers. Effect on composition: Light to medium
  • Depth
    Uses depth maps, where gray intensity represents distance in the scene (white: closest, black: farthest). Useful for preserving spatial layout and scene structure. Effect on composition: Strong
  • Lineart
    Uses existing line art to guide generation. Line art can come from photoreal or anime-style images. Effect on composition: Weak
  • Canny
    Uses edge-detection images that are similar to line art. Useful when line art is hard to create directly. Composition impact depends on model family: SD 1.5 is usually stronger, SDXL is often weaker. Effect on composition: Variable (from weak to strong)
  • Segmentation
    Uses segmented images where different classes are represented by different colors (for example, people in red, buildings in light blue). Helpful when you need cleaner object separation and less concept bleeding. Effect on composition: Medium
  • Tiling
    A special model type that uses a source image to generate larger outputs from tiles. It can replace standard img2img resizing for very large outputs because each tile is generated separately.
  • Union and ProMax
    Special models that combine multiple control types in one model. When using Union or ProMax, also select the control mode.
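The depth-map convention described above (white = closest, black = farthest) can be sketched as a simple normalization. This is an illustrative sketch, not SD.Next code, and the function name is hypothetical:

```python
def depth_to_intensity(distance, d_min, d_max):
    """Map a scene distance to a depth-map pixel value.

    Closest objects (d_min) become white (255); the farthest
    objects (d_max) become black (0), matching the depth-map
    convention used by Depth ControlNet models.
    """
    if d_max == d_min:
        return 255
    t = (distance - d_min) / (d_max - d_min)  # 0.0 = closest, 1.0 = farthest
    return round(255 * (1.0 - t))

# A foreground subject at 1 m and a background wall at 10 m:
# the subject renders white (255), the wall black (0).
```

A real depth preprocessor (for example, a monocular depth-estimation model) produces such a grayscale map directly from your input image.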

How to generate a control image

Control images are required for ControlNet. You can create them in two ways.

Generate control images on the fly

SD.Next can generate a control image from your input image using a preprocessor. The preprocessor depends on the selected model and converts input into the right format.

Preprocessors are additional models, so they consume extra VRAM. If needed, you can unload them or move them to CPU in SD.Next settings. Accuracy also depends on how each preprocessor was trained.

The Canny, Depth, Segmentation, and Lineart preprocessors are recommended if you do not have control images at hand.
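To illustrate what an edge preprocessor such as Canny produces, here is a minimal gradient-threshold sketch in NumPy. This is not the actual Canny algorithm or SD.Next's preprocessor (real Canny also smooths the image, thins edges, and applies hysteresis); it only shows the general idea of turning an input image into a black-and-white control image:

```python
import numpy as np

def simple_edges(gray, threshold=0.5):
    """Crude edge map: normalized gradient magnitude thresholded to 0/255."""
    gy, gx = np.gradient(gray.astype(float))  # per-axis intensity gradients
    mag = np.hypot(gx, gy)                    # gradient magnitude
    if mag.max() > 0:
        mag = mag / mag.max()                 # normalize to 0..1
    return np.where(mag > threshold, 255, 0).astype(np.uint8)

# A tiny image with a vertical brightness edge down the middle:
img = np.zeros((4, 8))
img[:, 4:] = 255
edges = simple_edges(img)  # white pixels mark the edge columns
```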

Use a pre-existing control image

For OpenPose in particular, the preprocessor may not be accurate enough. In that case, you can provide a pre-made control image. This avoids preprocessing VRAM overhead, but requires creating the image yourself.

External tools, such as dedicated pose editors, can also generate ControlNet input images.

Using ControlNet with SD.Next

Note

The following step-by-step guide was created using SD.Next ModernUI. The same options exist in StandardUI, although their location may differ.

First, enable Control by clicking on the "Control" checkbox near the preview area.

Control checkbox near the preview area

A new tab will appear; make sure "ControlNet" is selected.

Control tab with ControlNet selected

Now choose how many units (ControlNet models) to use. One unit is enough for most tasks, but complex workflows may need more. Increase the Units value to add more units. This guide assumes one unit. The workflow is the same with multiple units, but VRAM use increases.

ControlNet unit count selector

Ensure the unit is enabled. If the checkbox under "ControlNet Unit 1" is not checked, enable it.

ControlNet unit enabled checkbox

Select the ControlNet model to use. Click the reload icon to load available models, then choose one from the list. The model is downloaded automatically and becomes available in SD.Next. You can track progress in console output or logs.

ControlNet model reload button

ControlNet model dropdown list

Note

No items will display unless you have loaded a checkpoint first.

If you want to use a preprocessor, select it from the list next to the ControlNet model selector.

ControlNet preprocessor selector

Now choose how strongly ControlNet should affect generation. The "CN" strength slider goes from 0 (no effect) to 1 (full effect). For OpenPose, 1 is often fine. For Depth and Canny, lower values may work better.
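Conceptually, the strength value scales the control signal that the ControlNet adds to the base model's features during sampling. A simplified sketch (not SD.Next internals; the names are illustrative):

```python
def apply_control(base_features, control_residual, strength):
    """Blend the ControlNet's output into the base model's features.

    strength = 0.0 leaves the features unchanged (no effect);
    strength = 1.0 adds the full control residual (full effect).
    """
    return [b + strength * r for b, r in zip(base_features, control_residual)]

base = [0.0, 2.0]
residual = [4.0, -2.0]
half = apply_control(base, residual, 0.5)  # halfway between base and full control
```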

ControlNet strength slider

For advanced use cases, you can also control when ControlNet is active during sampling. Adjust "CN start" and "CN end" (0 = start of sampling, 1 = end).
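The start/end fractions map onto the sampling steps roughly as sketched below. This is a simplified illustration, not SD.Next's exact scheduling code:

```python
def control_active(step, total_steps, cn_start=0.0, cn_end=1.0):
    """Return True if ControlNet should be applied at this sampling step.

    cn_start and cn_end are fractions of the sampling process:
    0.0 = first step, 1.0 = last step.
    """
    frac = step / max(total_steps - 1, 1)
    return cn_start <= frac <= cn_end

# With 20 steps and CN end = 0.5, control applies only to the first half,
# letting the model refine details freely in the remaining steps.
active_steps = [s for s in range(20) if control_active(s, 20, cn_end=0.5)]
```

Ending control early (CN end below 1) is a common way to fix composition while leaving fine details unconstrained.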

ControlNet start and end controls

Click the upper arrow icon to upload your control image. Note: if you do not use a preprocessor, the control image must match the target output aspect ratio.
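A quick way to verify the aspect-ratio requirement before uploading is sketched below. This is a hypothetical helper for illustration, not part of SD.Next:

```python
def matches_aspect(control_size, target_size, tolerance=0.01):
    """Check that a control image's aspect ratio matches the target output.

    Sizes are (width, height) tuples; tolerance allows for small
    rounding differences between resolutions.
    """
    cw, ch = control_size
    tw, th = target_size
    return abs(cw / ch - tw / th) <= tolerance

# A 512x768 control image matches a 768x1152 target (both are 2:3),
# but a square 512x512 control image does not.
```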

Control image upload button

If you selected a preprocessor, you can click the preview icon to see how the image will be preprocessed.

Preprocessor preview button

You can adjust preprocessor-specific settings in the Control settings section.

Control settings section

Tip

If needed, use the "Reset" icon to restore default values.

Once everything is set up, write your prompt, set your image parameters, and hit Generate. You will get a preview of your control image and generation will begin.