---
license: cc
task_categories:
- feature-extraction
pretty_name: NASA-IR
---
We developed a domain-specific information retrieval benchmark, `NASA-IR`, spanning nearly 500 question-answer pairs related to the Earth science, planetary science, heliophysics, astrophysics, and biological and physical sciences domains. Specifically, we sampled 166 paragraphs from AGU, AMS, ADS, PMC, and PubMed and manually annotated each with 3 questions answerable from that paragraph, resulting in 498 questions. We used 398 of these questions as the training set and the remaining 100 as the validation set.
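
A minimal sketch of loading the splits with the 🤗 `datasets` library; the repository ID below is a placeholder and the split names (`train`/`validation`) are assumptions based on this card:

```python
from datasets import load_dataset

# Hypothetical repo ID -- replace with this dataset's actual repository ID.
ds = load_dataset("nasa-impact/NASA-IR")

train = ds["train"]       # 398 annotated question-paragraph pairs (per this card)
val = ds["validation"]    # remaining 100 pairs

print(train[0])           # inspect one question-paragraph example
```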

To comprehensively evaluate information retrieval systems and mimic real-world data, we combined 26,839 random ADS abstracts with these annotated paragraphs as distractors. On average, each query is 12 words long and each paragraph is 120 words long. We used Recall@10 as the evaluation metric, since each question has only one relevant document.
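
Because each query has exactly one relevant paragraph, Recall@10 reduces to the fraction of queries whose gold paragraph appears among the top 10 retrieved documents. A minimal sketch (function and argument names are illustrative, not from the accompanying code):

```python
def recall_at_10(ranked_ids_per_query, gold_id_per_query):
    """ranked_ids_per_query: one ranked list of document IDs per query.
    gold_id_per_query: the single relevant document ID for each query."""
    hits = sum(
        gold in ranked[:10]
        for ranked, gold in zip(ranked_ids_per_query, gold_id_per_query)
    )
    return hits / len(gold_id_per_query)
```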

**Evaluation results**


![image/png](https://cdn-uploads.huggingface.co/production/uploads/61099e5d86580d4580767226/rFc804d66Vslha62J0Ac1.png)

**Note**
This dataset is released in support of the training and evaluation of the encoder language model ["Indus"](https://huggingface.co/nasa-impact/nasa-smd-ibm-v0.1).
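
A hedged sketch of scoring a query against candidate paragraphs with the Indus encoder via 🤗 `transformers`; mean pooling and cosine-similarity ranking are common bi-encoder choices assumed here, not necessarily the exact setup used in the paper:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("nasa-impact/nasa-smd-ibm-v0.1")
model = AutoModel.from_pretrained("nasa-impact/nasa-smd-ibm-v0.1")

def embed(texts):
    # Mean-pool last hidden states over non-padding tokens (assumed pooling).
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state
    mask = batch["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(1) / mask.sum(1)

query_emb = embed(["What drives the solar wind?"])            # illustrative query
doc_embs = embed(["Paragraph one ...", "Paragraph two ..."])  # candidate paragraphs
scores = torch.nn.functional.cosine_similarity(query_emb, doc_embs)
ranking = scores.argsort(descending=True)  # take the top 10 for Recall@10
```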

The accompanying paper can be found here: https://arxiv.org/abs/2405.10725