shunk031 committed on
Commit
210c48e
1 Parent(s): 9d2a920

Update readme (#7)


* update README.md

* update

Files changed (2)
  1. JDocQA.py +6 -11
  2. README.md +6 -11
JDocQA.py CHANGED
@@ -26,17 +26,12 @@ from datasets.utils.logging import get_logger
 logger = get_logger(__name__)
 
 _CITATION = """\
-@inproceedings{JDocQA_2024,
-    title = "JDocQA: Japanese Document Question Answering Dataset for Generative Language Models",
-    author = "Onami, Eri and
-      Kurita, Shuhei and
-      Miyanishi, Taiki and
-      Watanabe, Taro",
-    booktitle = "The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation",
-    month = may,
-    year = "2024",
-    address = "Trino, Italy",
-    abstract = "Document question answering is a task of question answering on given documents such as reports, slides, pamphlets, and websites, and it is a truly demanding task as paper and electronic forms of documents are so common in our society. This is known as a quite challenging task because it requires not only text understanding but also understanding of figures and tables, and hence visual question answering (VQA) methods are often examined in addition to textual approaches. We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset, essentially requiring both visual and textual information to answer questions, which comprises 5,504 documents in PDF format and annotated 11,600 question-and-answer instances in Japanese. Each QA instance includes references to the document pages and bounding boxes for the answer clues. We incorporate multiple categories of questions and unanswerable questions from the document for realistic question-answering applications. We empirically evaluate the effectiveness of our dataset with text-based large language models (LLMs) and multimodal models. Incorporating unanswerable questions in finetuning may contribute to harnessing the so-called hallucination generation.",
+@inproceedings{onami2024jdocqa,
+  title={JDocQA: Japanese Document Question Answering Dataset for Generative Language Models},
+  author={Onami, Eri and Kurita, Shuhei and Miyanishi, Taiki and Watanabe, Taro},
+  booktitle={Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
+  pages={9503--9514},
+  year={2024}
 }
 """
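For context on where this edit lands in the loading script: a module-level `_CITATION` string is conventionally passed to `datasets.DatasetInfo(citation=...)` inside the builder's `_info()` method, which is how the updated BibTeX reaches the dataset card and viewer. Below is a minimal sketch of that convention; the builder skeleton and feature schema are illustrative assumptions, not code copied from JDocQA.py.

```python
# Minimal sketch of how a module-level _CITATION is typically consumed
# in a Hugging Face datasets loading script. Only the citation handling
# reflects this commit; the builder name, version, and feature schema
# below are hypothetical placeholders.
import datasets

_CITATION = """\
@inproceedings{onami2024jdocqa,
  title={JDocQA: Japanese Document Question Answering Dataset for Generative Language Models},
  author={Onami, Eri and Kurita, Shuhei and Miyanishi, Taiki and Watanabe, Taro},
  booktitle={Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
  pages={9503--9514},
  year={2024}
}
"""


class JDocQA(datasets.GeneratorBasedBuilder):
    """Hypothetical builder skeleton; only _info() matters here."""

    VERSION = datasets.Version("1.0.0")  # placeholder version

    def _info(self) -> datasets.DatasetInfo:
        return datasets.DatasetInfo(
            # The BibTeX string defined above is attached to the dataset
            # metadata and surfaces on the Hub dataset page.
            citation=_CITATION,
            features=datasets.Features(
                {
                    "question": datasets.Value("string"),  # assumed field
                    "answer": datasets.Value("string"),  # assumed field
                }
            ),
        )
```

Keeping the entry as one triple-quoted string lets the loading script and the README carry identical citations, which is what this commit synchronizes.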
 
README.md CHANGED
@@ -276,17 +276,12 @@ From [JDocQA's README.md](https://github.com/mizuumi/JDocQA/blob/main/dataset/RE
 ### Citation Information
 
 ```bibtex
-@inproceedings{JDocQA_2024,
-    title = "JDocQA: Japanese Document Question Answering Dataset for Generative Language Models",
-    author = "Onami, Eri and
-      Kurita, Shuhei and
-      Miyanishi, Taiki and
-      Watanabe, Taro",
-    booktitle = "The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation",
-    month = may,
-    year = "2024",
-    address = "Trino, Italy",
-    abstract = "Document question answering is a task of question answering on given documents such as reports, slides, pamphlets, and websites, and it is a truly demanding task as paper and electronic forms of documents are so common in our society. This is known as a quite challenging task because it requires not only text understanding but also understanding of figures and tables, and hence visual question answering (VQA) methods are often examined in addition to textual approaches. We introduce Japanese Document Question Answering (JDocQA), a large-scale document-based QA dataset, essentially requiring both visual and textual information to answer questions, which comprises 5,504 documents in PDF format and annotated 11,600 question-and-answer instances in Japanese. Each QA instance includes references to the document pages and bounding boxes for the answer clues. We incorporate multiple categories of questions and unanswerable questions from the document for realistic question-answering applications. We empirically evaluate the effectiveness of our dataset with text-based large language models (LLMs) and multimodal models. Incorporating unanswerable questions in finetuning may contribute to harnessing the so-called hallucination generation.",
+@inproceedings{onami2024jdocqa,
+  title={JDocQA: Japanese Document Question Answering Dataset for Generative Language Models},
+  author={Onami, Eri and Kurita, Shuhei and Miyanishi, Taiki and Watanabe, Taro},
+  booktitle={Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
+  pages={9503--9514},
+  year={2024}
 }
 ```
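As a quick sanity check that the new citation is what users will actually see, the metadata can be read back from the builder. This is a hedged sketch: the repo id `shunk031/JDocQA` is inferred from the committer name and the script filename, not stated anywhere in this commit.

```python
# Sanity-check sketch: prints the citation attached to the dataset's
# metadata. The repo id "shunk031/JDocQA" is an assumption; script-based
# datasets also need trust_remote_code=True in recent `datasets` releases.
from datasets import load_dataset_builder

builder = load_dataset_builder("shunk031/JDocQA", trust_remote_code=True)
print(builder.info.citation)  # expect the @inproceedings{onami2024jdocqa, ...} entry
```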