---
license: cc-by-sa-3.0
language:
  - en
pretty_name: dataclysm-wikipedia-titles-lite
size_categories:
  - 1M<n<10M
---

# somewheresystems/dataclysm-wikipedia-titles-lite

This dataset covers 6,458,670 English-language Wikipedia articles, with an additional column of title embeddings generated with the bge-small-en-v1.5 embedding model. The dataset was sourced from https://huggingface.co/datasets/wikipedia/viewer/20220301.en

The article text has been dropped from this set to conserve space. At 16.32 GB uncompressed, this dataset is 76.32% smaller than somewheresystems/dataclysm-wikipedia-titles (68.93 GB) and 67.18% smaller than the wikipedia-titles-lite dataset (49.72 GB).

## Embeddings Model

We used https://huggingface.co/BAAI/bge-small-en-v1.5 to embed the article title field. This model was chosen because it embeds each title quickly while offering slightly better retrieval performance than instruct-xl.

## Why?

You can load this entire dataset into a database and retrieve articles by similarity search between query embeddings and title embeddings. From there, you can either follow the linked URLs to pull up-to-date articles from the web, or pull the March 01, 2022 article text directly from the dataset (included). For efficiency, we recommend dropping everything except the title, the title embeddings, and the URL: this lets you quickly load and index what you need for retrieval, then fetch the remaining information asynchronously via the web.
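The retrieval flow above can be sketched with plain cosine similarity over the title embeddings. The column names and the toy 3-dimensional vectors below are illustrative assumptions made for the sake of a self-contained example; the actual bge-small-en-v1.5 embeddings are 384-dimensional.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k_titles(query_embedding, rows, k=2):
    """Rank rows by similarity of their title embedding to the query embedding."""
    ranked = sorted(
        rows,
        key=lambda r: cosine_similarity(query_embedding, r["title_embedding"]),
        reverse=True,
    )
    return [(r["title"], r["url"]) for r in ranked[:k]]

# Toy rows with made-up 3-dimensional embeddings for illustration only.
rows = [
    {"title": "Alan Turing", "url": "https://en.wikipedia.org/wiki/Alan_Turing",
     "title_embedding": [0.9, 0.1, 0.0]},
    {"title": "Banana", "url": "https://en.wikipedia.org/wiki/Banana",
     "title_embedding": [0.0, 0.2, 0.9]},
]

# In practice this would come from embedding the query text with the same model.
query_embedding = [0.8, 0.2, 0.1]
results = top_k_titles(query_embedding, rows, k=1)
```

At the full 6.4M-row scale, a vector index (e.g. FAISS or a vector database) would replace this brute-force sort, but the ranking logic is the same.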

## Citation Information

```
@ONLINE{wikidump,
    author = "Wikimedia Foundation",
    title  = "Wikimedia Downloads",
    url    = "https://dumps.wikimedia.org"
}
```

## Contributions

Thanks to @lewtun, @mariamabarham, @thomwolf, @lhoestq, @patrickvonplaten for adding the Wikipedia dataset in the first place.

## Contact

Please contact hi@dataclysm.xyz for inquiries.