---
license: cc-by-sa-3.0
language:
- en
pretty_name: dataclysm-wikipedia-titles-lite
size_categories:
- 1M<n<10M
---

# somewheresystems/dataclysm-wikipedia-titles-lite

This dataset comprises 6,458,670 English-language Wikipedia articles, with an additional column of title embeddings generated with the bge-small-en-v1.5 embedding model. The dataset was sourced from https://huggingface.co/datasets/wikipedia/viewer/20220301.en

The article text has been dropped from this set to conserve space. At 49.72 GB uncompressed, this dataset is 38.63% smaller than somewheresystems/dataclysm-wikipedia-titles (68.93 GB).

# Embeddings Model

We used https://huggingface.co/BAAI/bge-small-en-v1.5 to embed the article `title` field. This model was chosen because it embeds each title quickly while offering slightly more performant retrieval than `instruct-xl`.

# Why?

You can load this entire dataset into a database and retrieve articles by similarity search between query embeddings and title embeddings, then follow the linked URLs to pull up-to-date article text, or pull the March 01, 2022 article text from the full somewheresystems/dataclysm-wikipedia-titles dataset. For efficiency, we recommend keeping only the title, title embeddings, and URL, so you can quickly load and index the data and fetch the remaining information asynchronously via the web.
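The similarity-search step described above can be sketched as follows. The embeddings here are tiny dummy vectors for illustration only; in practice they would be the 384-dimensional bge-small-en-v1.5 title embeddings loaded from this dataset, with the query embedded by the same model:

```python
# Sketch: retrieving the best-matching title by cosine similarity.
# Dummy 3-dimensional vectors stand in for real bge-small-en-v1.5 embeddings.
import numpy as np

titles = ["Alan Turing", "Turing machine", "Banana"]
title_embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.7, 0.7, 0.0],
    [0.0, 0.1, 0.9],
])
query_embedding = np.array([1.0, 0.0, 0.0])  # would come from the same model

def normalize(v):
    # L2-normalize along the last axis so cosine similarity is a dot product.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

scores = normalize(title_embeddings) @ normalize(query_embedding)
best = int(np.argmax(scores))
print(titles[best])  # -> Alan Turing
```

Once the best-matching row is found, its URL column can be used to fetch the current article text asynchronously.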

# Citation Information
```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```

# Contributions
Thanks to @lewtun, @mariamabarham, @thomwolf, @lhoestq, @patrickvonplaten for adding the Wikipedia dataset in the first place.

## Contact

Please contact hi@dataclysm.xyz for inquiries.