---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
  - stable-diffusion
  - stable-diffusion-diffusers
  - image-to-image
  - diffusers
  - controlnet
  - control-lora
---

# ControlLoRA - Head3d Version

ControlLoRA is a neural network structure, extended from ControlNet, that controls diffusion models by adding extra conditions. This checkpoint corresponds to the ControlLoRA conditioned on Head3d.

ControlLoRA uses the same structure as ControlNet, but its core weights come from the UNet and remain unmodified. Only the hint-image encoding layers and the linear and conv2d LoRA layers used for the weight offsets are trained.
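
As a rough sketch of that weight-offset idea (illustrative only, not the repo's actual implementation; the `LoRALinear` name is made up here, and the rank of 128 merely echoes the checkpoint name), a linear LoRA layer keeps the base weight frozen and trains a low-rank additive offset:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank offset: y = Wx + B(Ax)."""
    def __init__(self, base: nn.Linear, rank: int = 128):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)  # the shared base weight stays unmodified
        self.down = nn.Linear(base.in_features, rank, bias=False)  # A: trained
        self.up = nn.Linear(rank, base.out_features, bias=False)   # B: trained
        nn.init.zeros_(self.up.weight)   # the offset starts at zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.up(self.down(x))
```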

The main idea comes from my original ControlLoRA and from the SDXL control-lora.

## Example

1. Clone ControlLoRA from GitHub:

```sh
$ git clone https://github.com/HighCWu/control-lora-v2
```
2. Enter the repo dir:

```sh
$ cd control-lora-v2
```
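3. (Optional) Install the Python dependencies. This step assumes the repository ships a standard `requirements.txt`; adjust to your setup if it does not:

```sh
$ pip install -r requirements.txt
```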
4. Run the code:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionControlNetPipeline, UNet2DConditionModel, UniPCMultistepScheduler
from models.control_lora import ControlLoRAModel

device = 'cuda' if torch.cuda.is_available() else 'cpu'
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

image = Image.open('<Your Conditioning Image Path>')

base_model = "runwayml/stable-diffusion-v1-5"

unet = UNet2DConditionModel.from_pretrained(
    base_model, subfolder="unet", torch_dtype=dtype
)
control_lora: ControlLoRAModel = ControlLoRAModel.from_pretrained(
    "HighCWu/sd-latent-control-dora-rank128-head3d", torch_dtype=dtype
)
# Reuse the UNet's frozen weights as the ControlLoRA's core weights (repo-specific helper)
control_lora.tie_weights(unet)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    base_model, unet=unet, controlnet=control_lora, safety_checker=None, torch_dtype=dtype
).to(device)
# Bind the pipeline's VAE so the conditioning image can be encoded into latent space (repo-specific helper)
control_lora.bind_vae(pipe.vae)

pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Remove if you do not have xformers installed
# see https://huggingface.co/docs/diffusers/v0.13.0/en/optimization/xformers#installing-xformers
# for installation instructions
pipe.enable_xformers_memory_efficient_attention()

# pipe.enable_model_cpu_offload()

# Generate an image guided by the text prompt and the conditioning image
image = pipe(
    "Girl smiling, professional dslr photograph, high quality",
    image,
    num_inference_steps=20,
).images[0]

image.show()
```
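
For reproducible or non-interactive runs, the standard diffusers `generator` argument and PIL saving apply; a minimal sketch (variable and file names are illustrative):

```python
# Reload the conditioning image (the pipeline output above replaced `image`)
cond_image = Image.open('<Your Conditioning Image Path>')

# Fix the random seed so repeated runs produce the same sample
generator = torch.Generator(device=device).manual_seed(0)

result = pipe(
    "Girl smiling, professional dslr photograph, high quality",
    cond_image,
    num_inference_steps=20,
    generator=generator,
).images[0]
result.save("head3d_result.png")  # illustrative output filename
```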

You can find some example images below.

- prompt: a photography of a man with a beard and sunglasses on (images_0)
- prompt: worst quality, low quality, portrait, close-up, inconsistent head shape (images_1)
- prompt: a photography of a man with a mustache and a suit jacket (images_2)