losyer8 committed on
Commit
9f6b890
1 Parent(s): 08c696c

Update README.md

Files changed (1)
  1. README.md +7 -6
README.md CHANGED
```diff
@@ -41,7 +41,7 @@ This repository provides large language models developed by [LLM-jp](https://llm
 |**Pre-trained models**|
 | [llm-jp-13b-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-v1.0) |
 | [llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) |
-Checkpoints format: `transformers` (Megatron-DeepSpeed format available [here](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt))
+Checkpoints format: Hugging Face Transformers (Megatron-DeepSpeed format models are available [here](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt))
 
 
 ## Required Libraries and Their Versions
@@ -95,8 +95,8 @@ print(tokenizer.decode(output))
 
 ## Tokenizer
 The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
-The vocab entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
-Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-ja-tokenizer` for the details of vocab constuction steps.
+The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
+Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-ja-tokenizer` for details on the vocabulary construction procedure.
 - **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
 - **Training algorithm:** SentencePiece Unigram byte-fallback
 - **Training data:** A subset of the datasets for model pre-training
@@ -107,7 +107,7 @@ Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-
 
 ### Pre-training
 
-The models have been pre-trained on approximately 287.5B tokens, sourced from a blend of the following datasets.
+The models have been pre-trained using a blend of the following data sets.
 
 | Language | Dataset | Tokens|
 |:---:|:---:|:---:|
@@ -117,7 +117,8 @@ The models have been pre-trained on approximately 287.5B tokens, sourced from a
 ||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B
 |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
 
-Pretraining was done by 10-hold shards that consists approx. 27-28B tokens. We further finalized the pretraining with additional cleaned 27B tokens data.
+The pre-training was continuously conducted using a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.
+We finalized the pre-training with additional (potentially) high-quality 27B tokens data obtained from the identical source data sets listed above used for the 10-fold data.
 
 ### Instruction tuning
 
@@ -151,4 +152,4 @@ llm-jp(at)nii.ac.jp
 ## Model Card Authors
 *The names are listed in alphabetical order.*
 
-Namgi Han, Hirokazu Kiyomaru, Hiroshi Matsuda, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.
+Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.
```
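
The "Checkpoints format: Hugging Face Transformers" wording added in the first hunk means the released weights load directly with the standard `transformers` API. The sketch below is a minimal illustration, not the README's own usage section: the repository id is taken from the pre-trained models table, while the dtype, device placement, and generation settings are assumptions for demonstration only.

```python
# Minimal sketch: loading a checkpoint published in Hugging Face Transformers format.
# The repo id comes from the model table in the README; dtype, device_map, and the
# generation settings below are illustrative assumptions, not prescribed values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/llm-jp-13b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption; adjust to your hardware
    device_map="auto",          # requires the `accelerate` package
)

text = "自然言語処理とは何か"  # "What is natural language processing?" -- example prompt
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)[0]
print(tokenizer.decode(output, skip_special_tokens=True))
```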
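
The Tokenizer hunk describes a Hugging Face Fast Tokenizer built on a Unigram byte-fallback model and requiring `tokenizers>=0.14.0`. In practice, byte fallback means characters outside the 50k vocabulary are split into byte-level tokens instead of being collapsed into an unknown token, so decoding can reproduce the original text. The snippet below is an assumption-laden sketch of that behaviour, not part of the committed README.

```python
# Sketch of the Unigram byte-fallback behaviour described in the Tokenizer section.
# Requires tokenizers>=0.14.0 as stated in the README; the example string is arbitrary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v1.0")

text = "日本語のトークナイザ🦙"  # mixes in-vocabulary Japanese with a rare emoji
ids = tokenizer(text)["input_ids"]

# Rare characters typically surface as byte tokens such as <0xF0>, not <unk>.
print(tokenizer.convert_ids_to_tokens(ids))

# Byte fallback keeps decoding (near-)lossless for the original string.
print(tokenizer.decode(ids, skip_special_tokens=True))
```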