goldfish-models committed
Commit fe11218 · Parent: c3e1a8a

Upload README.md with huggingface_hub

Files changed (1): README.md (+5 −3)
README.md CHANGED

````diff
@@ -9,7 +9,7 @@ library_name: transformers
 pipeline_tag: text-generation
 tags:
 - goldfish
-
+- arxiv:2408.10441
 ---
 
 # aze_cyrl_5mb
@@ -18,11 +18,11 @@ Goldfish is a suite of monolingual language models trained for 350 languages.
 This model is the <b>Azerbaijani</b> (Cyrillic script) model trained on 5MB of data, after accounting for an estimated byte premium of 1.82; content-matched text in Azerbaijani takes on average 1.82x as many UTF-8 bytes to encode as English.
 The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
 
-Note: This language is available in Goldfish with other scripts (writing systems). See: aze_arab, aze_latn.
+Note: This language is available in Goldfish with other scripts (writing systems). See: aze_latn, aze_arab.
 
 Note: aze_cyrl is a [macrolanguage](https://iso639-3.sil.org/code_tables/639/data) code. None of its contained individual languages are included in Goldfish (for script cyrl).
 
-All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://github.com/tylerachang/goldfish/blob/main/goldfish_paper_20240815.pdf).
+All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://www.arxiv.org/abs/2408.10441).
 
 Training code and sample usage: https://github.com/tylerachang/goldfish
 
@@ -32,6 +32,7 @@ Sample usage also in this Google Colab: [link](https://colab.research.google.com
 
 To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
 All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
+For best results, make sure that [CLS] is prepended to your input sequence (see sample usage linked above)!
 Details for this model specifically:
 
 * Architecture: gpt2
@@ -57,5 +58,6 @@ If you use this model, please cite:
 author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
 journal={Preprint},
 year={2024},
+url={https://www.arxiv.org/abs/2408.10441},
 }
 ```
````
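The byte-premium sentence in the card above can be illustrated mechanically: content-matched texts are compared by UTF-8 byte length, and Cyrillic characters cost two bytes each where ASCII costs one. Below is a minimal sketch of that idea, not the paper's estimator; the example strings are illustrative and not from the Goldfish data. Read this way, the card's "5MB" plausibly corresponds to roughly 5 × 1.82 ≈ 9.1MB of raw Azerbaijani Cyrillic text, content-matched to 5MB of English, though the paper is the authority on the exact scaling.

```python
# Minimal sketch of the byte-premium idea: compare content-matched strings
# by UTF-8 byte length. Strings are illustrative, not from the Goldfish corpus.
def utf8_len(s: str) -> int:
    return len(s.encode("utf-8"))

english = "How are you?"
azerbaijani_cyrl = "Неҹәсән?"  # rough content-matched rendering (illustrative)

print(utf8_len(english))           # 12 bytes: ASCII is 1 byte per character
print(utf8_len(azerbaijani_cyrl))  # 15 bytes: Cyrillic letters are 2 bytes each in UTF-8
print(utf8_len(azerbaijani_cyrl) / utf8_len(english))  # ~1.25 for this toy pair; ~1.82 on average per the card
```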
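The added "For best results, make sure that [CLS] is prepended" line is directly actionable. Here is a minimal sketch with Hugging Face transformers, assuming the Hub ID `goldfish-models/aze_cyrl_5mb` (the repository this commit belongs to); the canonical snippet is in the Goldfish GitHub repo and Colab that the README links to.

```python
# Minimal sketch, assuming the Hub ID "goldfish-models/aze_cyrl_5mb".
# Per the README note, [CLS] (same as [BOS]) must lead the input sequence.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("goldfish-models/aze_cyrl_5mb")
model = AutoModelForCausalLM.from_pretrained("goldfish-models/aze_cyrl_5mb")

# Prepend [CLS] explicitly rather than relying on the tokenizer to add it.
prompt = tokenizer.cls_token + "Салам"
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```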