---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
library_name: transformers
---

Change from Synthia-7B-v1.2 -> Synthia-7B-v1.3: the base model was changed from LLaMA-2-7B to Mistral-7B-v0.1.

All Synthia models are uncensored. Please use them with caution and with the best of intentions. You are responsible for how you use Synthia.

To evoke generalized Tree of Thought + Chain of Thought reasoning, you may use the following system message:
```
Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
```

# Synthia-7B-v1.3
SynthIA (Synthetic Intelligent Agent) 7B-v1.3 is a Mistral-7B-v0.1 model trained on Orca-style datasets. It has been fine-tuned for instruction following as well as for long-form conversations.

<br>

#### License Disclaimer:

This model is released under Apache 2.0 and comes with no warranty or guarantees of any kind.

<br>

## Evaluation

We evaluated Synthia-7B-v1.3 on a wide range of tasks using the [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) from EleutherAI.

Here are the results on the metrics used by the [HuggingFaceH4 Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard):

|**Task**|**Metric**|**Value**|
|:------:|:--------:|:-------:|
|*arc_challenge*|acc_norm|0.6237|
|*hellaswag*|acc_norm|0.8349|
|*mmlu*|acc_norm|0.6232|
|*truthfulqa_mc*|mc2|0.5125|
|**Total Average**|-|**0.6485**|
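
For reference, here is a minimal sketch of how figures like these could be reproduced with the harness's Python API. This snippet is not part of the original card: it assumes a 2023-era release of lm-evaluation-harness that exposes `evaluator.simple_evaluate` and the `hf-causal` backend, and the leaderboard uses a different few-shot setting per task (e.g. 25-shot for ARC), so each task needs its own run.

```python
# Hedged reproduction sketch using EleutherAI's lm-evaluation-harness (assumed 2023-era API).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                                   # plain Hugging Face causal LM backend
    model_args="pretrained=migtissera/Synthia-7B-v1.3",  # model under evaluation
    tasks=["arc_challenge"],                             # one task per few-shot setting
    num_fewshot=25,                                      # leaderboard setting for ARC
    batch_size=4,
)

print(results["results"]["arc_challenge"]["acc_norm"])   # should land near the value reported above
```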

<br>

## Example Usage

### Prompt format:

```
SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation.
USER: How is a rocket launched from the surface of the earth to Low Earth Orbit?
ASSISTANT:
```

### Code example showing how to use this model:

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B-v1.3"
output_file_path = "./Synthia-7B-conversations.jsonl"

# Load the model in half precision, sharded across available GPUs
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)


def generate_text(instruction):
    # Tokenize the full conversation so far and move it to the GPU
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    # Sampling parameters
    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
        )
    # Keep only the newly generated tokens and cut the reply off at any generated "USER:" turn
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    answer = string.split("USER:")[0].strip()
    return f"{answer}"


conversation = f"SYSTEM: Elaborate on the topic using a Tree of Thoughts and backtrack when necessary to construct a clear, cohesive Chain of Thought reasoning. Always answer without hesitation."


while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation} \nUSER: {user_input} \nASSISTANT: "
    answer = generate_text(llm_prompt)
    print(answer)
    conversation = f"{llm_prompt}{answer}"
    json_data = {"prompt": user_input, "answer": answer}

    ## Save your conversation
    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```
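
If GPU memory is limited, one optional variation (not part of the original card; it assumes the `bitsandbytes` package is installed alongside `transformers`) is to load the weights in 8-bit instead of float16:

```python
# Optional: int8 loading to roughly halve GPU memory versus float16.
# Assumes bitsandbytes is installed; generation quality may differ slightly.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Synthia-7B-v1.3"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    device_map="auto",
    load_in_8bit=True,  # replaces torch_dtype=torch.float16 in the example above
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
# generate_text() from the example above works unchanged with this model object.
```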

<br>

#### Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.

<br>

### Citation:

Please kindly cite using the following BibTeX:

```
@misc{Synthia-7B-v1.3,
  author = {Migel Tissera},
  title = {Synthia-7B-v1.3: Synthetic Intelligent Agent},
  year = {2023},
  publisher = {GitHub, HuggingFace},
  journal = {GitHub repository, HuggingFace repository},
  howpublished = {\url{https://huggingface.co/migtissera/Synthia-13B}},
}
```

```
@misc{mukherjee2023orca,
  title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4},
  author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah},
  year={2023},
  eprint={2306.02707},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```