---
license: apache-2.0
---
# **🐻‍❄️COKAL-v1_70B🐻‍❄️**  
![img](./COKAL-DPO_bear.png)  

## Model Details

**Model Developers** Seungyoo Lee (DopeorNope)

**Input** The model accepts text input only.

**Output** The model generates text only.

**Model Architecture**  
COKAL-v1_70B is an auto-regressive 70B-parameter language model based on the LLaMA-2 transformer architecture.

**Base Model**  



**Training Dataset**  

- SFT training dataset: [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus)
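
The full training recipe is not published here; as a rough illustration only, the SFT data can be loaded and inspected as below (the column names are an assumption about the dataset schema, not part of this card):

```python
from datasets import load_dataset

# Open-Platypus is an instruction-tuning corpus; rows are assumed to carry
# "instruction" and "output" columns (plus metadata such as the data source).
dataset = load_dataset("garage-bAInd/Open-Platypus", split="train")

example = dataset[0]
print(example["instruction"])  # the prompt side of an SFT pair
print(example["output"])       # the target response
```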

  
**Training**  
I trained the model in an environment with 8× NVIDIA A100 GPUs.


## Implementation Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "DopeorNope/COKAL-v1_70B"

# Load the weights in half precision and shard them across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
model_tokenizer = AutoTokenizer.from_pretrained(repo)
```
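
Once loaded, the model and tokenizer can be used for generation through the standard `transformers` API. A minimal sketch follows; the prompt and sampling parameters are illustrative values, not a format prescribed by the model:

```python
prompt = "List three practical applications of large language models."

# Tokenize the prompt and move the tensors to the model's device.
inputs = model_tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampled generation; max_new_tokens and temperature are example values.
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(model_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```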

---