---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

# **SOLAR-Platypus-10.7B-v2**

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
SOLAR-Platypus-10.7B-v2 is an auto-regressive language model based on the SOLAR-10.7B transformer architecture.

**Blog Link**
Blog: [Coming soon...]
Github: [Coming soon...]

**Base Model**
[upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)

**Training Dataset**
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)

## Notice
During training, I used QLoRA.
The lora_r value was 64.
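
For context, below is a minimal QLoRA sketch with `lora_r=64`. Only the rank matches the Notice; every other hyperparameter (4-bit settings, `lora_alpha`, dropout, target modules) is an illustrative assumption, not the configuration actually used.

```python
# Minimal QLoRA sketch: 4-bit base model + LoRA adapters with r=64.
# All values other than r=64 are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_repo = "upstage/SOLAR-10.7B-v1.0"

# Load the base model in 4-bit precision (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    base_repo,
    quantization_config=bnb_config,
    device_map="auto",
)
base_model = prepare_model_for_kbit_training(base_model)

# Attach LoRA adapters; r=64 is the lora_r value from the Notice.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,                         # assumption
    lora_dropout=0.05,                     # assumption
    target_modules=["q_proj", "v_proj"],   # assumption
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```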

## Prompt
```
## Human:

## Assistant:
```
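
As a small illustration, here is one way to wrap a user question in this template; the helper name `build_prompt` is hypothetical, not part of the released code.

```python
def build_prompt(question: str) -> str:
    """Wrap a user question in the ## Human: / ## Assistant: template."""
    return f"## Human:\n{question}\n\n## Assistant:\n"

# Example usage
print(build_prompt("What is the Open-Platypus dataset?"))
```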

# **Model Benchmark**

## Open leaderboard
- See the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SOLAR-Platypus-10.7B-v1 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| SOLAR-Platypus-10.7B-v2 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
58
+ # Implementation Code
59
+ ```python
60
+ ### KO-Platypus
61
+ from transformers import AutoModelForCausalLM, AutoTokenizer
62
+ import torch
63
+
64
+ repo = "kyujinpy/SOLAR-Platypus-10.7B-v2"
65
+ OpenOrca = AutoModelForCausalLM.from_pretrained(
66
+ repo,
67
+ return_dict=True,
68
+ torch_dtype=torch.float16,
69
+ device_map='auto'
70
+ )
71
+ OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
72
+ ```
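
A minimal generation sketch combining the loading code above with the template from the Prompt section; the sampling settings are assumptions, not tuned values.

```python
# Format a question with the ## Human: / ## Assistant: template and generate.
prompt = "## Human:\nSummarize the Open-Platypus dataset.\n\n## Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,   # assumption
    do_sample=True,       # assumption
    temperature=0.7,      # assumption
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```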

---