---
license: apache-2.0
datasets:
- Dongwookss/q_a_korean_futsal
language:
- ko
tags:
- unsloth
- trl
- transformer
---

### Model Name: 풋풋이 (futfut)

#### Model Concept

- 풋풋이 combines LLM fine-tuning with RAG to serve as a friendly assistant chatbot for the futsal domain.
- **Base Model**: [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
- 풋풋이 speaks in the polite Korean '해요' style and ends every reply with '얼마든지 물어보세요~! 풋풋~!' ("Ask me anything~! Futfut~!").

<p align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/66305fd7fdd79b4fe6d6a5e5/7UDKdaPfBJnazuIi1cUVw.png" width="400" height="400">
</p>

### Serving with FastAPI

- Git repo: [Dongwooks](https://github.com/ddsntc1/FA_Chatbot_for_API)
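
Below is a minimal serving sketch of how the model could be exposed through FastAPI. The route name, request schema, and generation settings here are illustrative assumptions; the actual implementation is in the linked repo.

```python
# Hypothetical FastAPI wrapper around the model (route and schema are assumptions).
import torch
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Dongwookss/small_fut_final"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model.eval()

app = FastAPI()

class ChatRequest(BaseModel):
    question: str

@app.post("/chat")
def chat(req: ChatRequest):
    messages = [{"role": "user", "content": req.question}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(
        input_ids, max_new_tokens=512, do_sample=True, temperature=0.6, top_p=0.9
    )
    # Decode only the newly generated tokens, not the prompt
    answer = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return {"answer": answer}
```

Run it with `uvicorn app:app --host 0.0.0.0 --port 8000` (assuming the file is saved as `app.py`).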

#### Summary

- Applied **LoRA** fine-tuning with the **Unsloth** package.
- Trained with the **SFT Trainer** (see the training sketch after this list).
- Training data
  - [q_a_korean_futsal](https://huggingface.co/datasets/Dongwookss/q_a_korean_futsal)
    - Rewritten into the polite '해요' style, with greetings added, to keep the model in concept.
- **Environment**: trained on Colab with an L4 GPU.
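
The following is a hypothetical sketch of that training setup, assuming Unsloth's `FastLanguageModel` API and the older TRL `SFTTrainer` signature. The LoRA ranks, target modules, dataset column names, and hyperparameters are illustrative assumptions, not the values actually used.

```python
# Hypothetical Unsloth + TRL training sketch; hyperparameters are assumptions.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit so LoRA fits on a single L4 GPU
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank and target modules are assumptions)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

dataset = load_dataset("Dongwookss/q_a_korean_futsal", split="train")

def to_text(example):
    # Format each Q/A pair with the chat template; column names
    # ("question"/"answer") are assumptions about the dataset schema.
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```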

  
**Model Load**

```python
# pip install transformers==4.40.0 accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = 'Dongwookss/small_fut_final'

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # load weights in bf16 to save memory
    device_map="auto",           # place layers on available devices
)
model.eval()  # inference mode
```

**Query**

```python
from transformers import TextStreamer

# System prompt: answer only from the provided context,
# and say you don't know when the answer is not in it.
PROMPT = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.
제시하는 context에서만 대답하고 context에 없는 내용은 모르겠다고 대답해'''

instruction = "풋살 경기 규칙을 알려주세요."  # example user question

messages = [
    {"role": "system", "content": PROMPT},
    {"role": "user", "content": instruction},
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Stop on EOS; also stop on <|eot_id|> if the tokenizer defines it
terminators = [tokenizer.eos_token_id]
eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")
if eot_id is not None:
    terminators.append(eot_id)

text_streamer = TextStreamer(tokenizer)  # stream tokens to stdout as they arrive
_ = model.generate(
    input_ids,
    max_new_tokens=4096,
    eos_token_id=terminators,
    do_sample=True,
    streamer=text_streamer,
    temperature=0.6,
    top_p=0.9,
    repetition_penalty=1.1,
)
```
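
The system prompt above tells the model to answer only from the supplied context, which is where the RAG part of the concept comes in: retrieved futsal documents are injected into the prompt at query time. The snippet below is a hypothetical sketch of that injection; the retriever itself (embedding model, vector store) is not described in this card.

```python
# Hypothetical context injection for the RAG setup described above.
# `retrieved_context` would come from a vector store over futsal documents.
def build_messages(question: str, retrieved_context: str) -> list[dict]:
    system = PROMPT + "\n\ncontext:\n" + retrieved_context
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Example with a hand-written "retrieved" passage
context = "풋살은 한 팀당 5명(골키퍼 포함)으로 경기합니다."
messages = build_messages("풋살 한 팀은 몇 명인가요?", context)
```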