shenzhi-wang committed on
Commit 893526c
1 Parent(s): b0763a4

Update README.md

Files changed (1): README.md (+72, -3)

README.md CHANGED:
---
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
base_model: mistralai/Mistral-7B-Instruct-v0.3
language:
- en
- zh
tags:
- llama-factory
- orpo
---

❗️❗️❗️NOTICE: For optimal performance, we refrain from fine-tuning the model's identity. Thus, inquiries such as "Who are you" or "Who developed you" may yield random responses that are not necessarily accurate.

# Updates

- 🚀🚀🚀 [May 26, 2024] We now introduce [Mistral-7B-v0.3-Chinese-Chat](https://huggingface.co/shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat)! Fine-tuned with full parameters on a mixed Chinese-English dataset of **~100K preference pairs**, it offers **greatly improved Chinese ability** compared to [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), along with strong performance in **mathematics, roleplay, tool use**, and more.

# Model Summary

[Mistral-7B-v0.3-Chinese-Chat](https://huggingface.co/shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat) is an instruction-tuned language model for Chinese and English users, built upon [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), with various abilities such as roleplaying and tool use.

Developed by: [Shenzhi Wang](https://shenzhi-wang.netlify.app) (王慎执) and [Yaowei Zheng](https://github.com/hiyouga) (郑耀威)

- License: [Apache License 2.0](https://choosealicense.com/licenses/apache-2.0/)
- Base Model: [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
- Model Size: 7.25B parameters
- Context Length: 32K

# 1. Introduction

This is **the first model** specifically fine-tuned for Chinese and English users based on [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3). The fine-tuning algorithm used is ORPO [1].

**Compared to the original [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3), our [Mistral-7B-v0.3-Chinese-Chat](https://huggingface.co/shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat) model significantly reduces the issues of "Chinese questions with English answers" and the mixing of Chinese and English in responses.**

[1] Hong, Jiwoo, Noah Lee, and James Thorne. "Reference-free Monolithic Preference Optimization with Odds Ratio." arXiv preprint arXiv:2403.07691 (2024).
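
For reference, the ORPO objective from [1] augments the standard SFT loss with a $\lambda$-weighted odds-ratio term ($\lambda$ is the "orpo beta" listed under the training details below):

$$
\mathcal{L}_{\text{ORPO}} = \mathbb{E}_{(x, y_w, y_l)}\left[\mathcal{L}_{\text{SFT}} + \lambda \cdot \mathcal{L}_{\text{OR}}\right], \qquad \mathcal{L}_{\text{OR}} = -\log \sigma\left(\log \frac{\operatorname{odds}_\theta(y_w \mid x)}{\operatorname{odds}_\theta(y_l \mid x)}\right),
$$

where $\operatorname{odds}_\theta(y \mid x) = \frac{P_\theta(y \mid x)}{1 - P_\theta(y \mid x)}$, and $y_w$, $y_l$ denote the preferred and rejected responses.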

Training framework: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).

Training details:

- epochs: 3
- learning rate: 3e-6
- learning rate scheduler type: cosine
- warmup ratio: 0.1
- cutoff len (i.e., context length): 32768
- orpo beta (i.e., $\lambda$ in the ORPO paper; see the loss sketch after this list): 0.05
- global batch size: 128
- fine-tuning type: full parameters
- optimizer: paged_adamw_32bit
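
To make the role of the orpo beta concrete, below is a minimal, hypothetical PyTorch sketch of the ORPO loss on a batch of preference pairs; `logp_chosen` and `logp_rejected` are assumed to be length-normalized sequence log-probabilities, and `orpo_loss` is an illustrative helper, not the LLaMA-Factory implementation:

```python
import torch
import torch.nn.functional as F

def orpo_loss(logp_chosen: torch.Tensor,
              logp_rejected: torch.Tensor,
              beta: float = 0.05) -> torch.Tensor:
    """Hypothetical sketch of the ORPO objective for one batch.

    logp_chosen / logp_rejected: length-normalized log P(y | x) of the
    preferred and rejected responses under the current policy.
    beta: the lambda weight on the odds-ratio term (0.05 above).
    """
    # log odds(y | x) = log p - log(1 - p), computed from log p.
    log_odds_chosen = logp_chosen - torch.log1p(-torch.exp(logp_chosen))
    log_odds_rejected = logp_rejected - torch.log1p(-torch.exp(logp_rejected))

    # Odds-ratio term: -log sigmoid(log-odds of chosen minus rejected).
    or_term = -F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # SFT term: plain negative log-likelihood of the preferred response.
    sft_term = -logp_chosen

    return (sft_term + beta * or_term).mean()
```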

# 2. Usage

```python
from transformers import pipeline

messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant.",
    },
    # "Briefly introduce what machine learning is." (in Chinese)
    {"role": "user", "content": "简要地介绍一下什么是机器学习"},
]
chatbot = pipeline(
    "text-generation",
    model="shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat",
    max_length=32768,
)
print(chatbot(messages))
```
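
Alternatively, for finer control over generation, here is a minimal sketch using `AutoModelForCausalLM` and the tokenizer's chat template (assuming `torch` and `accelerate` are installed; `max_new_tokens=512` is an illustrative choice):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "shenzhi-wang/Mistral-7B-v0.3-Chinese-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    # "Briefly introduce what machine learning is." (in Chinese)
    {"role": "user", "content": "简要地介绍一下什么是机器学习"},
]

# Build the prompt with the model's chat template and generate a reply.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```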