TriadParty committed on
Commit 6a6db99
1 Parent(s): 17627a9

Update README.md

Files changed (1): README.md (+28 -2)
README.md CHANGED
@@ -7,8 +7,8 @@ language:
  - en
  ---
  ## **Deepsword-34B-Base**

  Introducing **wrath** in the Seven Deadly Sins series of models.
- ![](https://media.discordapp.net/attachments/1088992345824972840/1187269297811247195/dickboy._Chinese_Fangtian_Painting_Halberd_Manufactured_by_Mech_532eefe6-7d75-473c-b5ef-13e1f46bb09e.png?ex=659645b2&is=6583d0b2&hm=51125137c9b25e1f7447c35ea07e891393b374c8072e023b04c0f231a1533cd8 =200x200)
  - Continued pre-training with QLoRA on Yi-34B
  - High-quality martial arts novels
  - Thoughtful cleaning process
@@ -34,4 +34,30 @@ Metric Quantification:

  Apply subjective weighting to the complexity metrics derived from chapter transitions.
  ### 3. Outcome
- By employing these methods, we can effectively filter for higher-quality novels. This refined [dataset](https://huggingface.co/datasets/TriadParty/deepsword) has been shared for further use. The next step is continued pretraining; the specific parameters can be found in my previous model descriptions.
  - en
  ---
  ## **Deepsword-34B-Base**
+ ![f1d09b62cfa0687cf9070ee2a59a2a4.png](https://cdn-uploads.huggingface.co/production/uploads/630c1adea20a5367812196f6/0VTlW9BM-F_cbIF_ww4EP.png)
  Introducing **wrath** in the Seven Deadly Sins series of models.
  - Continued pre-training with QLoRA on Yi-34B
  - High-quality martial arts novels
  - Thoughtful cleaning process

  Apply subjective weighting to the complexity metrics derived from chapter transitions.
  ### 3. Outcome
+ By employing these methods, we can effectively filter for higher-quality novels. This refined [dataset](https://huggingface.co/datasets/TriadParty/deepsword) has been shared for further use. The next step is continued pretraining; the specific parameters can be found in my previous model descriptions.
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630c1adea20a5367812196f6/tOMnutLIoT3ImsocQ5hdt.png)
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630c1adea20a5367812196f6/XNH2opnnJ9ZwV7ACcBcHL.png)
+ As you can see, the zero-shot performance is good. Settings from some of the pretraining novels are embedded very naturally into the characters.
+
+ This model is intended as the base model for an agent that runs scripted murder-mystery (jubensha) game sessions.
+ Key features:
+ 1. Continued pre-training with QLoRA on Yi-34B
+ 2. High-quality wuxia (martial arts) novels
+ 3. A thoughtful cleaning process
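The card defers the actual training parameters to the author's previous model description. Purely for orientation, a continued-pretraining QLoRA setup on Yi-34B might look like the sketch below; every hyperparameter shown (`r`, `lora_alpha`, target modules, etc.) is an illustrative placeholder, not the author's configuration.

```python
# Illustrative QLoRA continued-pretraining setup. Hyperparameters are
# placeholders, NOT the author's; see their previous model card for the
# real values. Requires transformers, peft, bitsandbytes, and a GPU.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit NF4 quantization: the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "01-ai/Yi-34B", quantization_config=bnb, device_map="auto"
)

lora = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.05,  # illustrative values only
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
# Then run a standard causal-LM training loop over the cleaned novel corpus.
```

The low-rank adapters are trained on top of the frozen 4-bit base, which is what makes continued pretraining of a 34B model feasible on modest hardware.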
+
+ Roughly 10 GB of wuxia novels were collected, crawled from various novel websites and private-tracker sites. The raw data contained a great deal of duplication as well as many low-quality novels. To clean it, the following work was done:
+ 1. Define data-quality dimensions. For wuxia novels, the traditionally high-quality works are those of Jin Yong, Gu Long, and Liang Yusheng, and plot complexity is an indispensable ingredient of such works. Plot complexity is also the heart of a scripted murder mystery.
+ 2. Quantify the data-quality dimensions. Since the previous step defined plot complexity as the focus, we can:
+ (1) Summarize each chapter. For English, https://huggingface.co/pszemraj/led-large-book-summary is recommended; for Chinese, Randeng-Pegasus-523M-Summary-Chinese.
+ (2) Encode the plot summaries into vectors with a BERT-based model and define a complexity algorithm: measure the magnitude of the transition between consecutive chapters via cosine similarity or Euclidean distance.
+ (3) Quantify the final metric by applying subjective weighting, based on standard deviation and peak analysis, to the inter-chapter complexity metrics.
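Steps (2) and (3) can be sketched as follows, assuming one embedding vector per chapter summary is already available (e.g. from a BERT encoder). The `w_std`/`w_peak` weights and the top-3 peak window are illustrative stand-ins for the card's "subjective weighting", not the author's actual values.

```python
# Sketch of the complexity metric: cosine distance between consecutive
# chapter-summary embeddings, aggregated by spread (std dev) plus peaks.
import numpy as np

def transition_magnitudes(embeddings: np.ndarray) -> np.ndarray:
    """Cosine distance between consecutive chapter-summary embeddings."""
    a, b = embeddings[:-1], embeddings[1:]
    cos = np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )
    return 1.0 - cos  # larger value = sharper plot transition

def complexity_score(embeddings: np.ndarray,
                     w_std: float = 0.6, w_peak: float = 0.4) -> float:
    """Weighted mix of spread (std dev) and peak transitions.
    The weights are illustrative, not from the model card."""
    t = transition_magnitudes(embeddings)
    # peak analysis: mean of the three sharpest transitions (or the max)
    peak = np.mean(np.sort(t)[-3:]) if len(t) >= 3 else float(np.max(t))
    return w_std * float(np.std(t)) + w_peak * peak

# toy usage: 5 "chapters" with random 8-dim summary embeddings
rng = np.random.default_rng(0)
chapters = rng.normal(size=(5, 8))
score = complexity_score(chapters)
```

A novel whose chapters swing between distant regions of embedding space scores higher than one whose summaries stay semantically close together.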
+ 3. In the end, this selects many novels of decent quality. I have already shared this data. All that remains is to continue pretraining on it; for the specific parameters, see my previous model.
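The final selection step might look like this minimal sketch; `select_top_novels` and the `keep_fraction` cutoff are hypothetical names for illustration, not from the card.

```python
# Illustrative only: rank novels by complexity score, keep the top slice.
def select_top_novels(scores: dict[str, float],
                      keep_fraction: float = 0.3) -> list[str]:
    """Return the highest-scoring novel titles, best first."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    k = max(1, int(len(ranked) * keep_fraction))
    return ranked[:k]

picked = select_top_novels({"a": 0.9, "b": 0.2, "c": 0.7, "d": 0.1})
```

The cutoff is a free parameter: a stricter fraction trades corpus size for average plot complexity.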
+
+ Demonstration:
+
+ See the images above.
+
+ As you can see, the zero-shot performance is good. Settings from some of the novels are embedded very naturally into the characters.