kyujinpy committed
Commit d8ab4cb
1 Parent(s): f6b4655

Upload 2 files
Files changed (3):
  1. .gitattributes +1 -0
  2. Korean-OpenOrca.png +3 -0
  3. README.md +5 -4
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Korean-OpenOrca.png filter=lfs diff=lfs merge=lfs -text
Korean-OpenOrca.png ADDED

Git LFS Details

  • SHA256: f25dd74abb56027e8e0bc12085ee4b8ce6a374d311993d997f84164709dc8003
  • Pointer size: 132 Bytes
  • Size of remote file: 3.91 MB
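Because the image is stored with Git LFS, the repository itself only holds a pointer file; the SHA256 above identifies the actual blob. Below is a minimal sketch for fetching the file and checking it against that digest, using the standard `hf_hub_download` call from `huggingface_hub`; the repo id is an assumption, taken from the `repo` variable in the README diff below.

```python
import hashlib

from huggingface_hub import hf_hub_download

# Repo id is an assumption, taken from the README's `repo` variable.
path = hf_hub_download(
    repo_id="kyujinpy/Korean-OpenOrca-13B-v3",
    filename="Korean-OpenOrca.png",
)

# Digest copied from the Git LFS details above.
expected = "f25dd74abb56027e8e0bc12085ee4b8ce6a374d311993d997f84164709dc8003"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == expected, "file does not match the LFS SHA256"
print("verified:", path)
```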
README.md CHANGED
@@ -2,7 +2,7 @@
  language:
  - ko
  datasets:
- - kyujinpy/OpenOrca-ko-v2
+ - kyujinpy/OpenOrca-ko-v3
  library_name: transformers
  pipeline_tag: text-generation
  license: cc-by-nc-sa-4.0
@@ -26,7 +26,7 @@ Github Korean-OpenOrca: [🐳Korean-OpenOrca🐳](https://github.com/Marker-Inc-
  **Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

  **Training Dataset**
- I used [OpenOrca-ko-v2](https://huggingface.co/datasets/kyujinpy/OpenOrca-ko-v2).
+ I used [OpenOrca-ko-v3](https://huggingface.co/datasets/kyujinpy/OpenOrca-ko-v3).
  Translated from [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) using DeepL.

  I used an A100 40GB GPU and Colab for training.
@@ -35,7 +35,8 @@ I used an A100 40GB GPU and Colab for training.
  | Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
  | --- | --- | --- | --- | --- | --- | --- |
  | [Korean-OpenOrca-13B🐳] | 48.79 | 43.09 | 54.13 | 40.24 | 45.22 | 61.28 |
- | Korean-OpenOrca-13B-v2🐳 | 48.17 | 43.17 | 54.51 | 42.90 | 41.82 | 58.44 |
+ | [Korean-OpenOrca-13B-v2🐳] | 48.17 | 43.17 | 54.51 | 42.90 | 41.82 | 58.44 |
+ | Korean-OpenOrca-13B-v3🐳 | 48.86 | 43.77 | 54.30 | 41.79 | 43.85 | 60.57 |

  # Implementation Code
  ```python
@@ -43,7 +44,7 @@ I used an A100 40GB GPU and Colab for training.
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

- repo = "kyujinpy/Korean-OpenOrca-13B-v2"
+ repo = "kyujinpy/Korean-OpenOrca-13B-v3"
  OpenOrca = AutoModelForCausalLM.from_pretrained(
      repo,
      return_dict=True,
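The substantive change in the card is the dataset bump from OpenOrca-ko-v2 to OpenOrca-ko-v3. A minimal sketch for pulling the new dataset with the `datasets` library follows; the `train` split name is an assumption about the Hub repo, not something stated in this diff.

```python
from datasets import load_dataset

# Dataset id comes from the card's YAML metadata; the split name is assumed.
ds = load_dataset("kyujinpy/OpenOrca-ko-v3", split="train")

print(ds)     # dataset size and column names
print(ds[0])  # first example; the column layout depends on the Hub repo
```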
 
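The implementation hunk ends mid-call at `return_dict=True,`, so the card's full snippet is not visible in this diff. Below is a minimal sketch of how the loading code plausibly continues, assuming standard `transformers` usage; the dtype, device placement, and generation settings are illustrative, not taken from the repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "kyujinpy/Korean-OpenOrca-13B-v3"

# Options after return_dict=True are assumptions; the diff cuts off there.
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)

prompt = "Introduce yourself briefly in Korean."  # sample prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").to(OpenOrca.device)
with torch.no_grad():
    outputs = OpenOrca.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```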