---
language:
- en
datasets:
- garage-bAInd/Open-Platypus
library_name: transformers
pipeline_tag: text-generation
license: cc-by-nc-sa-4.0
---

# **SOLAR-Platypus-10.7B-v2**  

## Model Details

**Model Developers** Kyujin Han (kyujinpy)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**   
SOLAR-Platypus-10.7B-v2 is an auto-regressive language model based on the LLaMA2 transformer architecture.  

**Blog Link**  
Blog: [Coming soon...]  
Github: [Coming soon...]   

**Base Model**    
[upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0)   

**Training Dataset**    
[garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).  
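The dataset can be pulled with the standard `datasets` API; this is a generic loading sketch, not the author's training script.

```python
from datasets import load_dataset

# Open-Platypus ships a single "train" split.
dataset = load_dataset("garage-bAInd/Open-Platypus", split="train")
print(dataset[0]["instruction"])
```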


## Notice  
During training, I used Q-LoRA.  
The lora_r value is 64.  

## Q-LoRA config
- LoRA_r: 64  
- LoRA_alpha: 16  
- LoRA_dropout: 0.05  
- LoRA_target_modules: [gate_proj, up_proj, down_proj, q_proj, k_proj, v_proj]  
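
Below is a minimal sketch of this Q-LoRA setup using the `peft` and `bitsandbytes` libraries. The training script is not published, so everything beyond the hyperparameters listed above (the 4-bit NF4 quantization, compute dtype, and task type) is an assumption.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization is the usual Q-LoRA choice; the card does not state it.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "upstage/SOLAR-10.7B-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# LoRA hyperparameters exactly as listed in the config above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["gate_proj", "up_proj", "down_proj", "q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```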

## Prompt
```
## Human:

## Assistant:  
```
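
In code, the template can be filled with a small helper; the function name and exact whitespace are illustrative assumptions, not a published spec.

```python
# Illustrative builder for the "## Human: / ## Assistant:" template above;
# the precise spacing is an assumption.
def build_prompt(instruction: str) -> str:
    return f"## Human:\n{instruction}\n\n## Assistant:\n"
```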

# **Model Benchmark**

## Open leaderboard
- Scores are pending on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).  

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- |
| SOLAR-Platypus-10.7B-v1 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| SOLAR-Platypus-10.7B-v2 | NaN | NaN | NaN | NaN | NaN | NaN | NaN |
| [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) | NaN | NaN | NaN | NaN | NaN | NaN | NaN |

  
# Implementation Code
```python
### SOLAR-Platypus
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "kyujinpy/SOLAR-Platypus-10.7B-v2"
# Load the model in fp16 and let Accelerate place it on available devices.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
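
A generation example tying the loaded model to the prompt template above; the sampling parameters are arbitrary illustrations, not recommended values from the card.

```python
# Build a prompt in the "## Human: / ## Assistant:" format and generate a reply.
prompt = "## Human:\nExplain LoRA in one sentence.\n\n## Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,   # illustrative cap on reply length
    do_sample=True,
    temperature=0.7,      # illustrative sampling temperature
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```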

---