Commit d82334e (1 parent: 438b32b)
AnanthZeke committed: added dataset
README.md ADDED

---
annotations_creators:
- machine-generated
language_creators:
- machine-generated
language:
- as
- bn
- gu
- hi
- kn
- ml
- mr
- or
- pa
- ta
- te
license:
- cc0-1.0
multilinguality:
- multilingual
pretty_name: naamapadam
size_categories:
- 1M<n<10M
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---

# Dataset Card for naamapadam

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [Needs More Information]
- **Repository:** https://github.com/AI4Bharat/indicner
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** Anoop Kunchukuttan

### Dataset Summary

Naamapadam is the largest publicly available named entity annotated dataset for 11 Indic languages. The corpus was created by projecting named entities from the English side to the Indic-language side of an English-Indic parallel corpus. The dataset additionally contains manually labelled test sets for 8 Indic languages, each containing 500-1000 sentences.

### Supported Tasks and Leaderboards

**Tasks:** NER on Indian languages.

**Leaderboards:** Currently there is no leaderboard for this dataset.

### Languages

- `Assamese (as)`
- `Bengali (bn)`
- `Gujarati (gu)`
- `Hindi (hi)`
- `Kannada (kn)`
- `Malayalam (ml)`
- `Marathi (mr)`
- `Oriya (or)`
- `Punjabi (pa)`
- `Tamil (ta)`
- `Telugu (te)`

## Dataset Structure

### Data Instances

{'words': ['उन्हेनें', 'शिकांगों', 'में', 'बोरोडिन', 'की', 'पत्नी', 'को', 'तथा', 'वाशिंगटन', 'में', 'रूसी', 'व्यापार', 'संघ', 'को', 'पैसे', 'भेजे', '।'],
 'ner': [0, 3, 0, 1, 0, 0, 0, 0, 3, 0, 5, 6, 6, 0, 0, 0, 0],
}

### Data Fields

- `words`: raw tokens in the sentence.
- `ner`: integer NER tag ids, one per token, indexing into the BIO tag set (`O`, `B-PER`, `I-PER`, `B-ORG`, `I-ORG`, `B-LOC`, `I-LOC`) defined in the loading script (see the decoding example just below).

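For illustration, here is how the `ner` ids of the instance above decode, assuming the BIO label order defined in the `naamapadam.py` loading script:

```python
# Label set as defined in naamapadam.py (order assumed from that script).
labels = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]
ner_ids = [0, 3, 0, 1, 0, 0, 0, 0, 3, 0, 5, 6, 6, 0, 0, 0, 0]
print([labels[i] for i in ner_ids])
# ['O', 'B-ORG', 'O', 'B-PER', 'O', 'O', 'O', 'O', 'B-ORG', 'O',
#  'B-LOC', 'I-LOC', 'I-LOC', 'O', 'O', 'O', 'O']
```
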
### Data Splits

(to be updated, see paper for correct numbers)

| Language | Train | Validation | Test |
|:---|---:|---:|---:|
| as | 10266 | 52 | 51 |
| bn | 961679 | 4859 | 607 |
| gu | 472845 | 2389 | 50 |
| hi | 985787 | 13460 | 437 |
| kn | 471763 | 2381 | 1019 |
| ml | 716652 | 3618 | 974 |
| mr | 455248 | 2300 | 1080 |
| or | 196793 | 993 | 994 |
| pa | 463534 | 2340 | 2342 |
| ta | 497882 | 2795 | 49 |
| te | 507741 | 2700 | 53 |

## Usage

You need the `datasets` package installed to load this dataset from the :rocket: HuggingFace Hub. Install it via pip:

```bash
pip install datasets
```

To load the dataset, pass one of the language codes listed above as the configuration name:

```python
from datasets import load_dataset
naamapadam_hi = load_dataset('ai4bharat/naamapadam', 'hi')
```

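Tag ids can be mapped back to strings through the `ClassLabel` feature. A minimal sketch, assuming the `tokens` and `ner_tags` fields defined in the `naamapadam.py` loading script below:

```python
# Read the label names off the ClassLabel feature and decode one sentence.
label_names = naamapadam_hi['train'].features['ner_tags'].feature.names
sample = naamapadam_hi['train'][0]
print(list(zip(sample['tokens'], (label_names[t] for t in sample['ner_tags']))))
```
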
## Dataset Creation

We use the parallel corpus from the Samanantar dataset between English and the 11 major Indian languages to create the NER dataset. We annotate the English portion of the parallel corpus with an existing state-of-the-art NER model. We then use word-level alignments learned from the parallel corpus to project the entity labels from English to the Indian-language side; a simplified sketch of this projection step is shown below.

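As an illustration only, a minimal sketch of tag projection, assuming English BIO tags and word alignments (English-index/Indic-index pairs) are already available; the names here are hypothetical and the actual pipeline is more involved:

```python
def project_tags(en_tags, alignment, num_indic_tokens):
    """Project BIO tags from English tokens onto aligned Indic tokens.

    en_tags: one BIO tag per English token, e.g. ['B-PER', 'O', ...]
    alignment: (english_index, indic_index) pairs from a word aligner.
    Unaligned Indic tokens keep the 'O' tag.
    """
    indic_tags = ["O"] * num_indic_tokens
    for en_idx, indic_idx in alignment:
        if en_tags[en_idx] != "O":
            indic_tags[indic_idx] = en_tags[en_idx]
    return indic_tags

# Hypothetical example: 4 English tokens aligned to 4 Indic tokens.
en_tags = ["B-PER", "O", "O", "B-LOC"]
alignment = [(0, 0), (1, 2), (2, 3), (3, 1)]
print(project_tags(en_tags, alignment, 4))  # ['B-PER', 'B-LOC', 'O', 'O']
```
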
### Curation Rationale

naamapadam was built from the [Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/). It was created for the task of Named Entity Recognition in Indic languages, to provide new resources for languages that are under-served in Natural Language Processing.

### Source Data

[Samanantar dataset](https://indicnlp.ai4bharat.org/samanantar/)

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

NER annotations were done following the CoNLL-2003 guidelines.

#### Who are the annotators?

The annotations for the test set were done by volunteers who are proficient in the respective languages. We would like to thank all the volunteers:

- Anil Mhaske
- Anoop Kunchukuttan
- Archana Mhaske
- Arnav Mhaske
- Gowtham Ramesh
- Harshit Kedia
- Nitin Kedia
- Rudramurthy V
- Sangeeta Rajagopal
- Sumanth Doddapaneni
- Vindhya DS
- Yash Madhani
- Kabir Ahuja
- Shallu Rani
- Armin Virk

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

The purpose of this dataset is to provide a large-scale Named Entity Recognition dataset for Indic languages. Since the information (data points) has been obtained from public resources, we do not think there is a negative social impact in releasing this data.

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

**CC0 License Statement**
<a rel="license" float="left" href="https://creativecommons.org/about/cclicenses/">
<img src="https://licensebuttons.net/p/zero/1.0/88x31.png" style="border-style: none;" alt="CC0" width="100"/>
</a>
<br>
<br>

- We do not own any of the text from which this data has been extracted.
- We license the actual packaging of the mined data under the [Creative Commons CC0 license (“no rights reserved”)](http://creativecommons.org/publicdomain/zero/1.0).
- To the extent possible under law, <a rel="dct:publisher" href="https://ai4bharat.iitm.ac.in/"><span property="dct:title">AI4Bharat</span></a> has waived all copyright and related or neighboring rights to <span property="dct:title">Naamapadam</span> manually collected data and existing sources.
- This work is published from: India.

### Citation Information

If you are using the Naamapadam corpus, please cite the following article:

```
@misc{mhaske2022naamapadam,
  doi = {10.48550/ARXIV.2212.10168},
  url = {https://arxiv.org/abs/2212.10168},
  author = {Mhaske, Arnav and Kedia, Harshit and Doddapaneni, Sumanth and Khapra, Mitesh M. and Kumar, Pratyush and Murthy, Rudra and Kunchukuttan, Anoop},
  title = {Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages},
  publisher = {arXiv},
  year = {2022},
}
```

<!-- Contributors -->
### Contributors

- Arnav Mhaske <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
- Harshit Kedia <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
- Sumanth Doddapaneni <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
- Mitesh M. Khapra <sub>([AI4Bharat](https://ai4bharat.org), [IITM](https://www.iitm.ac.in))</sub>
- Pratyush Kumar <sub>([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in))</sub>
- Rudra Murthy <sub>([AI4Bharat](https://ai4bharat.org), [IBM](https://www.ibm.com))</sub>
- Anoop Kunchukuttan <sub>([AI4Bharat](https://ai4bharat.org), [Microsoft](https://www.microsoft.com/en-in/), [IITM](https://www.iitm.ac.in))</sub>

This work is the outcome of a volunteer effort as part of the [AI4Bharat initiative](https://ai4bharat.iitm.ac.in).

<!-- Contact -->
### Contact

- Anoop Kunchukuttan ([anoop.kunchukuttan@gmail.com](mailto:anoop.kunchukuttan@gmail.com))
- Rudra Murthy V ([rmurthyv@in.ibm.com](mailto:rmurthyv@in.ibm.com))

data/as_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:98d11a58acf977fcfd514868d6252da185d945842e9da5568c6c5735f5f5efbd
size 459923

data/bn_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:3c09e575448e5273dd7b30e743b0be39e635b73654913b6c54717eae77192aaa
size 65825444

data/gu_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:7a7810f218638ccf51521455f946a8364bc30200afef034d5a051dbf0b908f0c
size 26557409

data/hi_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:a8dc90e6906715526c69412d27b91645d7a11ad0b9fb717dc076ea9ddc803cbb
size 82285576

data/kn_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:4348fea022961f3655d3b348be4d4e2cae636068875abb749be04ffd49a0d659
size 23489518

data/ml_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:1f1867602bf9206cb18e4eb6da87933a80428972fdc28ae343e1d2921e409992
size 36275708

data/mr_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:3de32e2fbeda3db78025f419ec132035d32b1d4389265944b7a47855daf810fc
size 25196650

data/or_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:9e9661bc59f3ca4515f24f5103e27d64894c39925f063f41d26df83b07db0b1e
size 11435526

data/pa_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:8b039336549620424fe37c1fc4f14bd61e6aefedc003704e1ff32b63012bcec9
size 33405160

data/ta_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:ff39bf6456ead2e4e993c47424741ea90286ae16f4edc2ec42cd83b94db50511
size 29703518

data/te_IndicNER_v1.0.zip ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:cc1620695e37faf17c2d4b43d7b67e9493ce9559be3934b0fed0092c347475dd
size 25701051

naamapadam.py ADDED

import json
import os

import datasets

_CITATION = """\

"""

_DESCRIPTION = """\

"""

_HOMEPAGE = "https://indicnlp.ai4bharat.org/"

_LICENSE = "CC0-1.0"

# Per-language zip archives hosted alongside this script on the Hub.
_URL = "https://huggingface.co/datasets/ai4bharat/naamapadam/resolve/main/data/{}_IndicNER_v{}.zip"

_LANGUAGES = ["as", "bn", "gu", "hi", "kn", "ml", "mr", "or", "pa", "ta", "te"]


class Naamapadam(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    # One builder config per language; pass e.g. 'hi' to load_dataset.
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name=lang, version=datasets.Version("1.0.0"))
        for lang in _LANGUAGES
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    # Feature key must match what _generate_examples yields.
                    "ner_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "O",
                                "B-PER",
                                "I-PER",
                                "B-ORG",
                                "I-ORG",
                                "B-LOC",
                                "I-LOC",
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage=_HOMEPAGE,
            citation=_CITATION,
            license=_LICENSE,
            version=self.VERSION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        lang = str(self.config.name)
        # "1.0.0" -> "1.0" to match the archive names, e.g. hi_IndicNER_v1.0.zip.
        url = _URL.format(lang, self.VERSION.version_str[:-2])

        data_dir = dl_manager.download_and_extract(url)

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, lang + "_train.json"),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, lang + "_test.json"),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, lang + "_val.json"),
                },
            ),
        ]

    def _generate_examples(self, filepath):
        """Yields examples as (key, example) tuples from a JSON-lines file."""
        with open(filepath, encoding="utf-8") as f:
            for idx_, row in enumerate(f):
                data = json.loads(row)
                yield idx_, {"tokens": data["words"], "ner_tags": data["ner"]}
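
For reference, `_generate_examples` above expects each `{lang}_train.json` file to be in JSON-lines format, one sentence per line, with the raw `words`/`ner` fields shown in the Data Instances section of the README. A minimal sketch with a hypothetical line:

```python
import json

# One hypothetical line of a train file, as consumed by _generate_examples.
line = '{"words": ["इस", "वाक्य", "में", "कोई", "नाम", "नहीं", "है"], "ner": [0, 0, 0, 0, 0, 0, 0]}'
record = json.loads(line)
print(record["words"], record["ner"])
```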