hiroshi-matsuda-rit committed
Commit: c3b8e68
1 Parent(s): dbb9a58

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -62,7 +62,7 @@ Checkpoints format: Hugging Face Transformers
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0")
-model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0", device_map="auto", torch_dtype=torch.float16)
+model = AutoModelForCausalLM.from_pretrained("llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0", device_map="auto", torch_dtype=torch.bfloat16)
 chat = [
     {"role": "system", "content": "以下は、タスクを説明する指示です。要求を適切に満たす応答を書きなさい。"},
     {"role": "user", "content": "自然言語処理とは何か"},
@@ -107,7 +107,7 @@ The tokenizer of this model is based on [huggingface/tokenizers](https://github.
 The vocabulary entries were converted from [`llm-jp-tokenizer v2.2 (100k: code20K_en40K_ja60K.ver2.2)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.2).
 Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure (the pure SentencePiece training does not reproduce our vocabulary).
 
-- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
+- **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model
 - **Training algorithm:** Merging Code/English/Japanese vocabularies constructed with SentencePiece Unigram byte-fallback and re-estimating scores with the EM-algorithm.
 - **Training data:** A subset of the datasets for model pre-training
 - **Vocabulary size:** 96,867 (mixed vocabulary of Japanese, English, and source code)
 
 
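Since the tokenizer described in the second hunk is a Hugging Face fast tokenizer built on a Unigram byte-fallback model, it can be loaded and inspected on its own. A small sketch follows, assuming only the model ID from this README; the sample sentence is an arbitrary illustration.

```python
# Sketch: loading and inspecting the Unigram byte-fallback tokenizer described above.
# The model ID comes from this README; the sample sentence is an arbitrary example.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "llm-jp/llm-jp-13b-instruct-full-dolly-ichikara_004_001_single-oasst-oasst2-v2.0"
)

print(tokenizer.is_fast)  # True: backed by huggingface/tokenizers (a "Fast" tokenizer)
print(len(tokenizer))     # total vocabulary size, on the order of the 96,867 entries listed above

# Byte-fallback: characters outside the learned vocabulary decompose into byte tokens
# rather than mapping to a single unknown token.
print(tokenizer.tokenize("自然言語処理と source code を混ぜた例です。"))
```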