hyunwoongko committed
Commit 44a4a79 (parent: 2dca537)

Create README.md

Files changed (1): README.md (new file, +157 lines)
---
language:
- ko
tags:
- pytorch
- causal-lm
license: apache-2.0
---

# Polyglot-Ko-12.8B

## Model Description

Polyglot-Ko is a series of large-scale Korean autoregressive language models made by the EleutherAI polyglot team.

| Hyperparameter       | Value                                                                                                                                   |
|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 12,898,631,680                                                                                                                          |
| \\(n_{layers}\\)     | 40                                                                                                                                      |
| \\(d_{model}\\)      | 5,120                                                                                                                                   |
| \\(d_{ff}\\)         | 20,480                                                                                                                                  |
| \\(n_{heads}\\)      | 40                                                                                                                                      |
| \\(d_{head}\\)       | 128                                                                                                                                     |
| \\(n_{ctx}\\)        | 2,048                                                                                                                                   |
| \\(n_{vocab}\\)      | 30,003 / 30,080                                                                                                                         |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864)                                                                   |
| RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

The model consists of 40 transformer layers with a model dimension of 5,120 and a feedforward dimension of 20,480. The model dimension is split into 40 heads, each with a dimension of 128. Rotary Position Embedding (RoPE) is applied to 64 dimensions of each head. The model is trained with a tokenization vocabulary of 30,003.

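These hyperparameters can be cross-checked against the checkpoint's published configuration. Below is a minimal sketch using `transformers.AutoConfig`, assuming the checkpoint ships a GPT-NeoX-style config; the attribute names are standard `transformers` config fields, and the values in the comments simply restate the table above.

```python
from transformers import AutoConfig

# Download and parse the small config.json for the checkpoint.
config = AutoConfig.from_pretrained("EleutherAI/polyglot-ko-12.8b")

print(config.num_hidden_layers)        # expected: 40     (n_layers)
print(config.hidden_size)              # expected: 5120   (d_model)
print(config.intermediate_size)        # expected: 20480  (d_ff)
print(config.num_attention_heads)      # expected: 40     (n_heads)
print(config.max_position_embeddings)  # expected: 2048   (n_ctx)
print(config.vocab_size)               # embedding vocabulary size; the table lists 30,003 / 30,080
```
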
## Training data

Polyglot-Ko-12.8B was trained on 863 GB of Korean language data (1.2 TB before processing), a large-scale dataset curated by [TUNiB](https://tunib.ai/). The data collection process complied with South Korean laws. This dataset was collected for the purpose of training Polyglot-Ko models, so it will not be released for public use.

| Source                              | Size (GB) | Link                             |
|-------------------------------------|-----------|----------------------------------|
| Korean blog posts                   | 682.3     | -                                |
| Korean news dataset                 | 87.0      | -                                |
| Modu corpus                         | 26.4      | corpus.korean.go.kr              |
| Korean patent dataset               | 19.0      | -                                |
| Korean Q & A dataset                | 18.1      | -                                |
| KcBert dataset                      | 12.7      | github.com/Beomi/KcBERT          |
| Korean fiction dataset              | 6.1       | -                                |
| Korean online comments              | 4.2       | -                                |
| Korean Wikipedia                    | 1.4       | ko.wikipedia.org                 |
| Clova call                          | < 1.0     | github.com/clovaai/ClovaCall     |
| Naver sentiment movie corpus        | < 1.0     | github.com/e9t/nsmc              |
| Korean hate speech dataset          | < 1.0     | -                                |
| Open subtitles                      | < 1.0     | opus.nlpl.eu/OpenSubtitles.php   |
| AIHub various tasks datasets        | < 1.0     | aihub.or.kr                      |
| Standard Korean language dictionary | < 1.0     | stdict.korean.go.kr/main/main.do |

Furthermore, to prevent the model from memorizing and generating personally identifiable information (PII) found in the training data, we masked out the following types of sensitive information in the pre-processing stage:

* `<|acc|>` : bank account number
* `<|rrn|>` : resident registration number
* `<|tell|>` : phone number

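Because these placeholders occur in the training data, the model may occasionally emit them verbatim in generated text. A minimal post-processing sketch for stripping them is shown below; the placeholder list mirrors the bullets above, and the function name and replacement string are assumptions for illustration, not part of any released tooling.

```python
import re

# Masking placeholders listed in the model card; treat as illustrative, not exhaustive.
PII_PLACEHOLDERS = ["<|acc|>", "<|rrn|>", "<|tell|>"]

def strip_pii_placeholders(text: str, replacement: str = "") -> str:
    """Remove PII masking tokens that may surface in generated text."""
    pattern = "|".join(re.escape(tok) for tok in PII_PLACEHOLDERS)
    return re.sub(pattern, replacement, text)

print(strip_pii_placeholders("문의는 <|tell|> 로 연락주세요."))  # placeholder removed
```
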
## Training procedure

Polyglot-Ko-12.8B was trained for 167 billion tokens over 301,000 steps on 256 A100 GPUs with the [GPT-NeoX framework](https://github.com/EleutherAI/gpt-neox). It was trained as an autoregressive language model, using cross-entropy loss to maximize the likelihood of predicting the next token.

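For reference, the next-token objective amounts to shifting the labels by one position and applying cross-entropy over the vocabulary. The following is a minimal PyTorch sketch of that loss, not the actual GPT-NeoX training loop; tensor names are illustrative.

```python
import torch
import torch.nn.functional as F

def next_token_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """Cross-entropy for next-token prediction.

    logits:    (batch, seq_len, vocab_size) from the language model
    input_ids: (batch, seq_len) token ids fed to the model
    """
    # Predict token t+1 from positions <= t: drop the last logit and the first label.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = input_ids[:, 1:].contiguous()
    return F.cross_entropy(
        shift_logits.view(-1, shift_logits.size(-1)),
        shift_labels.view(-1),
    )
```
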
## How to use

This model can be easily loaded using the `AutoModelForCausalLM` class:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/polyglot-ko-12.8b")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/polyglot-ko-12.8b")
```

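Once loaded, text can be generated with the standard `generate` API. The sketch below continues from the snippet above; the prompt and sampling settings are examples only, and a 12.8B-parameter model generally requires a GPU with ample memory (or reduced precision) to run comfortably.

```python
# Prompt and generation settings are illustrative.
prompt = "한국어 언어 모델은"  # "Korean language models are"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,   # length of the continuation
    do_sample=True,      # sample instead of greedy decoding
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
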
## Evaluation results

We evaluate Polyglot-Ko-12.8B on the [KOBEST dataset](https://arxiv.org/abs/2204.04541), a benchmark with 5 downstream tasks, against comparable models such as skt/ko-gpt-trinity-1.2B-v0.5, kakaobrain/kogpt and facebook/xglm-7.5B, using the prompts provided in the paper.

The following tables show the results for different numbers of few-shot examples. You can reproduce these results using the [polyglot branch of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/tree/polyglot) and the following script. For a fair comparison, all models were run under the same conditions and using the same prompts. In the tables, `n` refers to the number of few-shot examples.

```console
python main.py \
   --model gpt2 \
   --model_args pretrained='EleutherAI/polyglot-ko-12.8b' \
   --tasks kobest_copa,kobest_hellaswag \
   --num_fewshot $YOUR_NUM_FEWSHOT \
   --batch_size $YOUR_BATCH_SIZE \
   --device $YOUR_DEVICE \
   --output_path /path/to/output/
```

**We show model performance on COPA and HellaSwag. On the other three tasks, the evaluated models all performed similarly close to random guessing.**

### COPA (F1)

| Model | params | n=0 | n=5 | n=10 | n=50 |
|-------|--------|-----|-----|------|------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.6696 | 0.6477 | 0.6419 | 0.6514 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.7345 | 0.7287 | 0.7277 | 0.7479 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.6723 | 0.6731 | 0.6769 | 0.7119 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.7937** | **0.8108** | **0.8037** | **0.8368** |

<img src="https://user-images.githubusercontent.com/38183241/194697388-f0e6999d-3935-4716-9faa-e14e5a9b6de5.png" width="800px">

### HellaSwag (F1)

| Model | params | n=0 | n=5 | n=10 | n=50 |
|-------|--------|-----|-----|------|------|
| [skt/ko-gpt-trinity-1.2B-v0.5](https://huggingface.co/skt/ko-gpt-trinity-1.2B-v0.5) | 1.2B | 0.4036 | 0.4000 | 0.4011 | 0.4214 |
| [kakaobrain/kogpt](https://huggingface.co/kakaobrain/kogpt) | 6.0B | 0.4599 | 0.4560 | 0.4616 | 0.4754 |
| [facebook/xglm-7.5B](https://huggingface.co/facebook/xglm-7.5B) | 7.5B | 0.4261 | 0.4370 | 0.4409 | 0.4517 |
| [EleutherAI/polyglot-ko-1.3b](https://huggingface.co/EleutherAI/polyglot-ko-1.3b) | 1.3B | 0.4013 | 0.3984 | 0.4170 | 0.4416 |
| [EleutherAI/polyglot-ko-3.8b](https://huggingface.co/EleutherAI/polyglot-ko-3.8b) | 3.8B | 0.4438 | 0.4786 | 0.4737 | 0.4822 |
| [EleutherAI/polyglot-ko-5.8b](https://huggingface.co/EleutherAI/polyglot-ko-5.8b) | 5.8B | 0.4853 | 0.4820 | 0.4968 | 0.5012 |
| **[EleutherAI/polyglot-ko-12.8b](https://huggingface.co/EleutherAI/polyglot-ko-12.8b) (this)** | **12.8B** | **0.4808** | **0.5099** | **0.4945** | **0.4920** |

<img src="https://user-images.githubusercontent.com/38183241/194697387-218a1ea1-0863-4ea2-b8b4-339b95bdea7f.png" width="800px">

## Limitations and Biases

Polyglot-Ko has been trained to optimize next-token prediction. Language models such as this are often used for a wide variety of tasks, and it is important to be aware of possible unexpected outcomes. For instance, Polyglot-Ko will not always return the most factual or accurate response but rather the most statistically likely one. In addition, Polyglot-Ko may produce socially unacceptable or offensive content. We recommend having a human curator or other filtering mechanism to censor sensitive content.

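A filtering mechanism can be as simple as screening generations before they are shown to users. The sketch below illustrates the idea only; the blocklist contents and function names are hypothetical, and a production setup would more likely rely on a curated term list or a dedicated safety classifier.

```python
# Hypothetical blocklist; in practice this would be curated for the deployment
# domain or replaced by a trained safety classifier.
BLOCKED_TERMS = {"example_offensive_term", "another_blocked_term"}

def is_acceptable(text: str) -> bool:
    """Return False if the generated text contains any blocked term."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def moderate(text: str, fallback: str = "[filtered]") -> str:
    """Pass acceptable text through unchanged, otherwise return a placeholder."""
    return text if is_acceptable(text) else fallback
```
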
## Citation and Related Information

### BibTeX entry

If you find our work useful, please consider citing:

```bibtex
@misc{polyglot-ko,
  title  = {{Polyglot-Ko: Open-Source Korean Autoregressive Language Model}},
  author = {Ko, Hyunwoong and Yang, Kichang and Ryu, Minho and Choi, Taekyoon and Yang, Seungmu and Hyun, Jiwung and Park, Sungho},
  url    = {https://www.github.com/eleutherai/polyglot},
  month  = {9},
  year   = {2022},
}
```

### Licensing

All our models are licensed under the terms of the Apache License 2.0.

```
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

### Acknowledgement

This project was made possible thanks to the computing resources from [Stability.ai](https://stability.ai), and thanks to [TUNiB](https://tunib.ai) for providing a large-scale Korean dataset for this work.