joelniklaus committed
Commit
09605cb
1 Parent(s): 86acb2f

added dataset files

Files changed (6)
  1. .gitattributes +3 -0
  2. README.md +199 -0
  3. convert_to_hf_dataset.py +116 -0
  4. test.jsonl +3 -0
  5. train.jsonl +3 -0
  6. validation.jsonl +3 -0
.gitattributes CHANGED
@@ -35,3 +35,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.mp3 filter=lfs diff=lfs merge=lfs -text
  *.ogg filter=lfs diff=lfs merge=lfs -text
  *.wav filter=lfs diff=lfs merge=lfs -text
+ test.jsonl filter=lfs diff=lfs merge=lfs -text
+ train.jsonl filter=lfs diff=lfs merge=lfs -text
+ validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ annotations_creators:
+ - other
+ language_creators:
+ - found
+ languages:
+ - ro
+ license:
+ - cc-by-nc-nd-4.0
+ multilinguality:
+ - monolingual
+ paperswithcode_id: null
+ pretty_name: Romanian Named Entity Recognition in the Legal domain (LegalNERo)
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ task_categories:
+ - token-classification
+ task_ids:
+ - named-entity-recognition
+ ---
+
+ # Dataset Card for Romanian Named Entity Recognition in the Legal domain (LegalNERo)
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:**
+ - **Repository:** https://zenodo.org/record/4922385
+ - **Paper:** Pais, V., Mitrofan, M., Gasan, C. L., Coneschi, V., & Ianov, A. (2021). Named Entity Recognition in the Romanian Legal Domain. Proceedings of the Natural Legal Language Processing Workshop 2021, 9–18. https://doi.org/10.18653/v1/2021.nllp-1.2
+ - **Leaderboard:**
+ - **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)
+
+ ### Dataset Summary
+
+ LegalNERo is a manually annotated corpus for named entity recognition in the Romanian legal domain. It provides gold annotations for organizations, locations, persons, time references and legal resources mentioned in legal documents. Additionally, it offers GeoNames codes for the named entities annotated as locations (where a link could be established).
+
+ ### Supported Tasks and Leaderboards
+
+ The dataset supports the task of named entity recognition.
+
+ ### Languages
+
+ Since the legal documents in LegalNERo are extracted from the larger [MARCELL-RO corpus](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/), the language of the dataset is Romanian as it is used in national legislation ranging from 1881 to 2021.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ The file format is jsonl and three data splits are present (train, validation and test). Named entity annotations are non-overlapping.
+
+ Rows containing only one word (mostly tokens such as `\t\t\t`, `\n` or `-----`) have been filtered out.
+
+ ### Data Fields
+
+ The files contain the following data fields:
+ - `file_name`: The file name of the annotated source document
+ - `words`: The list of tokens obtained by applying the spaCy (v3.3.1) Romanian tokenizer to the sentences. For more information see `convert_to_hf_dataset.py`.
+ - `ner`: The list of NER tags. The named entity labels covered by the dataset are the following:
+   - `LEGAL`: Legal reference/resource
+   - `LOC`: Location
+   - `ORG`: Organization
+   - `PER`: Person
+   - `TIME`: Time reference
+   - `O`: No entity annotation present
+
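+ As a quick check, a record can be inspected with plain Python; the following is a minimal sketch assuming the LFS-tracked `train.jsonl` file has been fetched and sits in the working directory.
+
+ ```python
+ import json
+
+ # Read the first record of the training split.
+ with open("train.jsonl", encoding="utf-8") as f:
+     record = json.loads(f.readline())
+
+ print(record["file_name"])                        # source document name
+ print(list(zip(record["words"], record["ner"])))  # (token, NER tag) pairs
+ ```
+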
+ ### Data Splits
+
+ Splits created by Joel Niklaus.
+
+ | split      | number of documents | number of sentences |
+ |:-----------|--------------------:|--------------------:|
+ | train      | 296 (80%)           | 7552                |
+ | validation | 37 (10%)            | 966                 |
+ | test       | 37 (10%)            | 907                 |
+
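+ The three splits can be loaded with the 🤗 `datasets` library's generic json loader; the sketch below assumes the jsonl files are available locally, e.g. after cloning this repository with Git LFS.
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the three jsonl splits into a DatasetDict
+ # (assumes train.jsonl, validation.jsonl and test.jsonl are present locally).
+ dataset = load_dataset(
+     "json",
+     data_files={
+         "train": "train.jsonl",
+         "validation": "validation.jsonl",
+         "test": "test.jsonl",
+     },
+ )
+
+ print(dataset)  # DatasetDict with train/validation/test splits
+ ```
+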
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ The dataset provides gold annotations for organizations, locations, persons, time references and legal resources mentioned in Romanian legal documents.
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The LegalNERo corpus consists of 370 documents from the larger [MARCELL-RO corpus](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/). In the following, we give a short description of the crawling process for the MARCELL-RO corpus.
+
+ *The MARCELL-RO corpus "contains 163,274 files, which represent the body of national legislation ranging from 1881 to 2021. This corpus includes mainly: governmental decisions, ministerial orders, decisions, decrees and laws. All the texts were obtained via crawling from the public Romanian legislative portal. We have not distinguished between in force and "out of force" laws because it is difficult to do this automatically and there is no external resource to use to distinguish between them. The texts were extracted from the original HTML format and converted into TXT files. Each file has multiple levels of annotation: firstly the texts were tokenized, lemmatized and morphologically annotated using the Tokenizing, Tagging and Lemmatizing (TTL) text processing platform developed at RACAI, then dependency parsed with NLP-Cube, named entities were identified using a NER tool developed at RACAI, nominal phrases were identified also with TTL, while IATE terms and EuroVoc descriptors were identified using an internal tool. All processing tools were integrated into an end-to-end pipeline available within the RELATE platform and as a dockerized version. The files were annotated with the latest version of the pipeline completed within Activity 4 of the MARCELL project."* [Link](https://elrc-share.eu/repository/browse/marcell-romanian-legislative-subcorpus-v2/2da548428b9d11eb9c1a00155d026706ce94a6b59ffc4b0e9fb5cd9cebe6889e/)
+
+ #### Who are the source language producers?
+
+ The source language producers are presumably politicians and lawyers.
+
+ ### Annotations
+
+ #### Annotation process
+
+ *“Annotation of the LegalNERo corpus was performed by 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence "Mihai Drăgănescu" of the Romanian Academy (RACAI). For annotation purposes we used the BRAT tool […].
+ Inside the legal reference class, we considered sub-entities of type *organization* and *time*. This allows for using the LegalNERo corpus in two scenarios: using all the 5 entity classes or using only the remaining general-purpose classes. The LegalNERo corpus contains a total of 370 documents from the larger MARCELL-RO corpus. These documents were split amongst the 5 annotators, with certain documents being annotated by multiple annotators. Each annotator manually annotated 100 documents. The annotators were unaware of the overlap, which allowed us to compute an inter-annotator agreement. We used the Cohen’s Kappa measure and obtained a value of 0.89, which we consider to be a good result.”* (Pais et al., 2021)
+
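+ For reference, Cohen's kappa corrects raw agreement for chance: with observed agreement $p_o$ and expected chance agreement $p_e$,
+
+ $$\kappa = \frac{p_o - p_e}{1 - p_e},$$
+
+ so the reported value of 0.89 indicates agreement well above chance.
+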
+ #### Who are the annotators?
+
+ *"[...] 5 human annotators, supervised by two senior researchers at the Institute for Artificial Intelligence "Mihai Drăgănescu" of the Romanian Academy (RACAI)."*
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing. Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition, differences with regard to the dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script `convert_to_hf_dataset.py` in order to retrace the steps for converting the original dataset into the present jsonl format. For further information on the original dataset structure, we refer to the bibliographical references and the original GitHub repositories and/or web pages provided in this dataset card.
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ The names of the original dataset curators and creators can be found in the references given below, in the section *Citation Information*.
+ Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch); [GitHub](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch); [GitHub](https://github.com/kapllan)).
+
+ ### Licensing Information
+
+ [Creative Commons Attribution Non Commercial No Derivatives 4.0 International](https://creativecommons.org/licenses/by-nc-nd/4.0/legalcode)
+
+ ### Citation Information
+
+ ```
+ @dataset{pais_vasile_2021_4922385,
+   author    = {Păiș, Vasile and
+                Mitrofan, Maria and
+                Gasan, Carol Luca and
+                Ianov, Alexandru and
+                Ghiță, Corvin and
+                Coneschi, Vlad Silviu and
+                Onuț, Andrei},
+   title     = {{Romanian Named Entity Recognition in the Legal
+                 domain (LegalNERo)}},
+   month     = may,
+   year      = 2021,
+   publisher = {Zenodo},
+   doi       = {10.5281/zenodo.4922385},
+   url       = {https://doi.org/10.5281/zenodo.4922385}
+ }
+ ```
+ ```
+ @inproceedings{pais-etal-2021-named,
+   author    = {Pais, Vasile and Mitrofan, Maria and Gasan, Carol Luca and Coneschi, Vlad and Ianov, Alexandru},
+   booktitle = {Proceedings of the Natural Legal Language Processing Workshop 2021},
+   doi       = {10.18653/v1/2021.nllp-1.2},
+   month     = {nov},
+   pages     = {9--18},
+   publisher = {Association for Computational Linguistics},
+   title     = {{Named Entity Recognition in the {R}omanian Legal Domain}},
+   url       = {https://aclanthology.org/2021.nllp-1.2},
+   year      = {2021}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
convert_to_hf_dataset.py ADDED
@@ -0,0 +1,116 @@
+ import os
+ import re
+ from glob import glob
+ from pathlib import Path
+
+ from typing import List
+
+ import numpy as np
+ import pandas as pd
+
+ from spacy.lang.ro import Romanian
+
+ pd.set_option('display.max_colwidth', None)
+ pd.set_option('display.max_columns', None)
+
+ base_path = Path("legalnero-data")
+ tokenizer = Romanian().tokenizer
+
+
+ # A and D are different government gazettes:
+ # A is the general one, publishing standard legislation, and D is meant for legislation on urban planning and similar topics
+
+ def process_document(ann_file: str, text_file: Path, metadata: dict, tokenizer) -> List[dict]:
+     """Processes one document (.ann file and .txt file) and returns a list of annotated sentences"""
+     # read the ann file into a df
+     ann_df = pd.read_csv(ann_file, sep="\t", header=None, names=["id", "entity_with_span", "entity_text"])
+     sentences = open(text_file, 'r').readlines()
+
+     # split into individual columns
+     ann_df[["entity", "start", "end"]] = ann_df["entity_with_span"].str.split(" ", expand=True)
+     ann_df.start = ann_df.start.astype(int)
+     ann_df.end = ann_df.end.astype(int)
+
+     not_found_entities = 0
+     annotated_sentences = []
+     current_start_index = 2  # character offsets in the .ann files start at 2 (reason unknown)
+     for sentence in sentences:
+         ann_sent = {**metadata}
+
+         doc = tokenizer(sentence)
+         doc_start_index = current_start_index
+         doc_end_index = current_start_index + len(sentence)
+         current_start_index = doc_end_index + 1
+
+         relevant_annotations = ann_df[(ann_df.start >= doc_start_index) & (ann_df.end <= doc_end_index)]
+         for _, row in relevant_annotations.iterrows():
+             sent_start_index = row["start"] - doc_start_index
+             sent_end_index = row["end"] - doc_start_index
+             char_span = doc.char_span(sent_start_index, sent_end_index, label=row["entity"], alignment_mode="expand")
+             # ent_span = Span(doc, char_span.start, char_span.end, row["entity"])
+             if char_span:
+                 doc.set_ents([char_span])
+             else:
+                 not_found_entities += 1
+                 print(f"Could not find entity `{row['entity_text']}` in sentence `{sentence}`")
+
+         ann_sent["words"] = [str(tok) for tok in doc]
+         ann_sent["ner"] = [tok.ent_type_ if tok.ent_type_ else "O" for tok in doc]
+
+         annotated_sentences.append(ann_sent)
+     if not_found_entities > 0:
+         # NOTE: entities are not found in only 2 cases in total
+         print(f"Did not find entities in {not_found_entities} cases")
+     return annotated_sentences
+
+
+ def read_to_df():
+     """Reads the different documents and saves metadata"""
+     ann_files = glob(str(base_path / "ann_LEGAL_PER_LOC_ORG_TIME" / "*.ann"))
+     sentences = []
+     file_names = []
+     for ann_file in ann_files:
+         file_name = Path(ann_file).stem
+         text_file = base_path / "text" / f"{file_name}.txt"
+         file_names.append(file_name)
+         metadata = {
+             "file_name": file_name,
+         }
+         sentences.extend(process_document(ann_file, text_file, metadata, tokenizer))
+     return pd.DataFrame(sentences), file_names
+
+
+ df, file_names = read_to_df()
+
+ # the last word is either "\n" or "-----" ==> remove it
+ df.words = df.words.apply(lambda x: x[:-1])
+ df.ner = df.ner.apply(lambda x: x[:-1])
+
+ # remove rows containing only one word
+ df = df[df.words.map(len) > 1]
+
+ # split by file_name
+ num_fn = len(file_names)
+ train_fn, validation_fn, test_fn = np.split(np.array(file_names), [int(.8 * num_fn), int(.9 * num_fn)])
+
+ # Num file_names for each split: train (296), validation (37), test (37)
+ print(len(train_fn), len(validation_fn), len(test_fn))
+
+ train = df[df.file_name.isin(train_fn)]
+ validation = df[df.file_name.isin(validation_fn)]
+ test = df[df.file_name.isin(test_fn)]
+
+ # Num samples for each split: train (7552), validation (966), test (907)
+ print(len(train.index), len(validation.index), len(test.index))
+
+
+ # save splits
+ def save_splits_to_jsonl(config_name):
+     # save to jsonl files for huggingface
+     if config_name: os.makedirs(config_name, exist_ok=True)
+     train.to_json(os.path.join(config_name, "train.jsonl"), lines=True, orient="records", force_ascii=False)
+     validation.to_json(os.path.join(config_name, "validation.jsonl"), lines=True, orient="records", force_ascii=False)
+     test.to_json(os.path.join(config_name, "test.jsonl"), lines=True, orient="records", force_ascii=False)
+
+
+ save_splits_to_jsonl("")
test.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8009d0a864d6ffe4174cd876afe7bf5f5c01cfb31d1165b76a9f5d2eecd23b85
+ size 409786
train.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0987ce4839662c5c4144aec5f9dbcb3669abb7143d45718e47af486b64cf0829
+ size 3266615
validation.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:84b8948d38ef736d39849811dc7648b1086903bf7efabb86c6ac266175575d57
+ size 421295