Commit 7177f15 by Adapter (1 parent: 4060377)

Update README.md

Files changed (1): README.md (+35, -26)
README.md CHANGED

T2I Adapter is a network that provides additional conditioning to Stable Diffusion. Each T2I-Adapter checkpoint takes a different type of conditioning as input and is used with a specific base Stable Diffusion checkpoint.

This checkpoint provides sketch conditioning for the StableDiffusionXL checkpoint.

## Model Details
- **Developed by:** T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models
 
| Model Name | Control Image Overview | Control Image Example | Generated Image Example |
|---|---|---|---|
|[TencentARC/t2i-adapter-canny-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-canny-sdxl-1.0)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_canny.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_canny.png"/></a>|
|[TencentARC/t2i-adapter-sketch-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0)<br/> *Trained with [PidiNet](https://github.com/zhuoinoulu/pidinet) edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>|
|[TencentARC/t2i-adapter-lineart-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-lineart-sdxl-1.0)<br/> *Trained with lineart edge detection* | A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_lin.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_lin.png"/></a>|
|[TencentARC/t2i-adapter-depth-midas-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-midas-sdxl-1.0)<br/> *Trained with Midas depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_mid.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_mid.png"/></a>|
|[TencentARC/t2i-adapter-depth-zoe-sdxl-1.0](https://huggingface.co/TencentARC/t2i-adapter-depth-zoe-sdxl-1.0)<br/> *Trained with Zoe depth estimation* | A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_depth_zeo.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_depth_zeo.png"/></a>|
|[Adapter/t2iadapter_openpose_sdxlv1](https://huggingface.co/Adapter/t2iadapter_openpose_sdxlv1)<br/> *Trained with OpenPose bone image* | An [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/openpose.png"/></a>|<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"><img width="64" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/res_pose.png"/></a>|
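
Every adapter in the table drops into the same pipeline; only the checkpoint name and the way the control image is prepared change. As a hedged illustration (not part of the original card), the sketch below loads the canny checkpoint from the table and builds its monochrome edge map with OpenCV; the source URL simply reuses the sample image from the example further down.

```py
# Illustrative sketch only: swaps the canny adapter from the table into the same workflow.
# Assumes opencv-python and numpy are installed; the Canny thresholds (100, 200) are arbitrary choices.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import T2IAdapter
from diffusers.utils import load_image

# canny adapter instead of the sketch adapter used in the example below
canny_adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# build the control image described in the table: white edges on a black background
source = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_sketch.png")
edges = cv2.Canny(np.array(source), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))
```

`canny_adapter` and `control_image` would then take the place of `adapter` and `image` in the pipeline call shown in the example below.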

## Example
 
1. Images are first downloaded into the appropriate *control image* format.
2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/t2i_adapter/pipeline_stable_diffusion_xl_adapter.py#L125).

Let's have a look at a simple example using the [Sketch Adapter](https://huggingface.co/TencentARC/t2i-adapter-sketch-sdxl-1.0).

- Dependency
```py
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter, EulerAncestralDiscreteScheduler, AutoencoderKL
from diffusers.utils import load_image, make_image_grid
from controlnet_aux.pidi import PidiNetDetector
import torch

# load adapter
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-sketch-sdxl-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# load euler_a scheduler and the fp16-safe SDXL VAE
model_id = 'stabilityai/stable-diffusion-xl-base-1.0'
euler_a = EulerAncestralDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

# assemble the SDXL adapter pipeline
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    model_id, vae=vae, adapter=adapter, scheduler=euler_a, torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

# load the PidiNet sketch detector used to build the control image
pidinet = PidiNetDetector.from_pretrained("lllyasviel/Annotators").to("cuda")
```
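
`enable_xformers_memory_efficient_attention()` assumes the xformers package is installed. If it is not, a minimal alternative (an addition here, not part of the card) is to skip that call and lower VRAM usage with CPU offloading instead:

```py
# Optional, assumes accelerate is installed: offload submodules to CPU between forward passes
# instead of relying on xformers attention; `pipe` is the pipeline built above.
pipe.enable_model_cpu_offload()
```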

- Condition Image
```py
url = "https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/org_sketch.png"
image = load_image(url)

# run PidiNet to turn the source photo into a hand-drawn style sketch (white outlines on black)
image = pidinet(
    image, detect_resolution=1024, image_resolution=1024, apply_filter=True
)
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/cond_sketch.png"/></a>

- Generation
```py
prompt = "a robot, mount fuji in the background, 4k photo, highly detailed"
negative_prompt = "extra digit, fewer digits, cropped, worst quality, low quality, glitch, deformed, mutated, ugly, disfigured"

# run the pipeline with the sketch as the adapter's control image
gen_images = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    image=image,
    num_inference_steps=30,
    adapter_conditioning_scale=0.9,
    guidance_scale=7.5,
).images[0]
gen_images.save('out_sketch.png')
```
<a href="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"><img width="480" style="margin:0;padding:0;" src="https://huggingface.co/Adapter/t2iadapter/resolve/main/figs_SDXLV1.0/res_sketch.png"/></a>
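
`make_image_grid` is imported in the dependency block but never used in the card's code. As a small follow-up sketch (the output filename is illustrative), it can place the PidiNet sketch next to the generated image for a quick visual check:

```py
# Side-by-side comparison of the control sketch and the generated image.
# `image` and `gen_images` come from the blocks above; resizing keeps both grid cells the same size.
comparison = make_image_grid(
    [image.resize((512, 512)), gen_images.resize((512, 512))], rows=1, cols=2
)
comparison.save("sketch_vs_result.png")
```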