jsaizant committed
Commit: 5e406e7
Parent: 8a5f0a0

Update README.md

Files changed (1): README.md (+4, -2)
README.md CHANGED
@@ -28,8 +28,10 @@ task_ids:
 
 ## Dataset Description
 
-- **Homepage:** [Projecte AINA](https://projecteaina.cat/)
+- **Homepage:** [Projecte AINA](https://huggingface.co/projecte-aina)
+- **Repository**: [HuggingFace](https://huggingface.co/datasets/projecte-aina/CATalog)
 - **Paper:** ["A CURATEd CATalog: Rethinking the Extraction of Pretraining Corpora for Mid-Resourced Languages"]()
+- **Leaderboard**: N/A
 - **Point of Contact:** [Language Technologies Unit at Barcelona Supercomputing Center (BSC)](langtech@bsc.es)
 
 ### Dataset Summary
@@ -40,7 +42,7 @@ CATalog is a diverse, open-source Catalan corpus for language modelling. It cons
 
 - `Fill-Mask`
 - `Text Generation`
-- `other:Language-Modelling`: The dataset is suitable for training a model in Language Modelling, predicting the next word in a given context. Success is measured by achieving a low [Perplexity](https://huggingface.co/spaces/evaluate-metric/perplexity)score, indicating the model's proficiency in accurately predicting subsequent words.
+- `other:Language-Modelling`: The dataset is suitable for training a model in Language Modelling, predicting the next word in a given context. Success is measured by achieving a low [Perplexity](https://huggingface.co/spaces/evaluate-metric/perplexity) score, indicating the model's proficiency in accurately predicting subsequent words.
 - `other:Masked-Language-Modelling`: The dataset is designed for training models in Masked Language Modelling. This task involves predicting masked or hidden words within a sentence. Success is typically measured by achieving a high performance score, such as accuracy or [F1](https://huggingface.co/spaces/evaluate-metric/f1) score, on correctly predicting the masked tokens.
 
 ### Languages
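
The task bullets in the second hunk point to perplexity as the success metric for language modelling. As a minimal illustrative sketch, not part of the commit, the snippet below streams a few CATalog documents from the repository named in the card and scores them with the `evaluate` perplexity metric linked above; the `train` split, the `text` column, and the `gpt2` placeholder model are assumptions rather than facts stated in the diff.

```python
# Illustrative sketch only -- not part of the commit above.
# Assumptions: the dataset exposes a "train" split with a "text" column,
# and "gpt2" stands in for whichever causal LM is being evaluated.
from datasets import load_dataset
import evaluate

# Stream a handful of documents from the dataset card's repository.
dataset = load_dataset("projecte-aina/CATalog", split="train", streaming=True)
texts = [example["text"] for _, example in zip(range(8), dataset)]

# Lower mean perplexity means better next-word prediction, as the card notes.
perplexity = evaluate.load("perplexity", module_type="metric")
results = perplexity.compute(model_id="gpt2", predictions=texts)
print(results["mean_perplexity"])
```

A masked-language-modelling check would follow the same pattern with a fill-mask model, scored against the F1 metric linked in the card.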