wuxiaojun committed
Commit 0e424a6
1 Parent(s): 96b7e5c

init readme

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -65,9 +65,9 @@ Our machine evaluation involved a comprehensive comparison of various models. Th
 
 As shown in the figures below, a comparison of different models in Chinese and English text-to-image generation performance is presented. The XL version models, such as SD-XL and Taiyi-XL, show significant improvements over the 1.5 version models like SD-v1.5 and Alt-Diffusion. DALL-E 3 is renowned for its vibrant colors and its ability to closely follow text prompts, setting a high standard. Our Taiyi-XL model, with its photographic style, closely matches the performance of Midjourney and excels in bilingual (Chinese and English) text-to-image generation.
 
-尽管Taiyi-XL可能还未能与商业模型相媲美,但它比当前双语开源模型优越不少。我们认为我们模型与商业模型的差距主要归因于训练数据的数量、质量和多样性的差异。我们的模型仅使用学术数据集和符合版权要求的图文数据进行训练。正如大家所知的,版权问题仍然是文生图和AIGC模型最大的问题。**当然由于数据限制,对于中国人像或者元素我们也希望开源社区进一步数据微调**。
+尽管Taiyi-XL可能还未能与商业模型相媲美,但它比当前双语开源模型优越不少。我们认为我们模型与商业模型的差距主要归因于训练数据的数量、质量和多样性的差异。我们的模型仅使用学术数据集和符合版权要求的图文数据进行训练,未使用Midjourney和DALL-E 3等生成数据。正如大家所知的,版权问题仍然是文生图和AIGC模型最大的问题。**当然由于数据限制,对于中国人像或者元素我们也希望开源社区进一步数据微调**。
 
-Although Taiyi-XL may not yet rival commercial models, it excels among current bilingual open-source models. The gap with commercial models is mainly due to differences in the quantity, quality, and diversity of training data. Our model is trained exclusively on copyright-compliant image-text data. As is well known, copyright issues remain the biggest challenge in text-to-image and AI-generated content (AIGC) models.
+Although Taiyi-XL may not yet rival commercial models, it excels among current bilingual open-source models. The gap with commercial models is mainly due to differences in the quantity, quality, and diversity of training data. Our model is trained exclusively on academic datasets and copyright-compliant image-text data; we do not use AI-generated images from models such as Midjourney or DALL-E 3. As is well known, copyright issues remain the biggest challenge in text-to-image and AI-generated content (AIGC) models. **Given these data limitations, we also hope the open-source community will further fine-tune the model on Chinese portraits and visual elements.**
 
 ![不同模型中文文生图生成性能比较](imgs/zh_compare.png)
 