somewheresy committed
Commit
2e1774d
1 Parent(s): 9abe546

Update README.md

Files changed (1)
  1. README.md +33 -0
README.md CHANGED
@@ -1,3 +1,36 @@
  ---
  license: cc-by-sa-3.0
+ language:
+ - en
+ pretty_name: dataclysm-wikipedia-titles-lite
+ size_categories:
+ - 1M<n<10M
  ---
+
+ # somewheresystems/dataclysm-wikipedia-titles-lite
+
+ This dataset comprises 6,458,670 English-language Wikipedia articles, with an additional column of title embeddings generated using the bge-small-en-v1.5 embedding model. The dataset was sourced from https://huggingface.co/datasets/wikipedia/viewer/20220301.en
+
+ The article text has been dropped from this set to conserve space. Compared to somewheresystems/dataclysm-wikipedia-titles (68.93 GB), this entire dataset is 49.72 GB uncompressed, which is 27.87% smaller.
+
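+ As a quick orientation, here is a minimal sketch of loading the dataset with the `datasets` library; the `train` split and the column names referenced in the comments are assumptions, so verify them against the actual schema:
+
+ ```python
+ # Minimal sketch: load the lite dataset and inspect its schema.
+ # The "train" split and column names are assumptions; check ds.column_names.
+ from datasets import load_dataset
+
+ ds = load_dataset("somewheresystems/dataclysm-wikipedia-titles-lite", split="train")
+ print(ds.column_names)   # expect something like ['title', 'title_embedding', 'url', ...]
+ print(ds[0]["title"])    # first article title
+ ```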
+ # Embeddings Model
+
+ We used https://huggingface.co/BAAI/bge-small-en-v1.5 to embed the article `title` field. We chose this model because it embeds each title quickly while offering slightly better retrieval performance than `instruct-xl`.
+
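+ For reference, a minimal sketch of how such title embeddings can be produced with `sentence-transformers`; the library choice and the normalization step are assumptions, since the exact pipeline used to build this dataset is not documented here:
+
+ ```python
+ # Sketch: embed titles with BAAI/bge-small-en-v1.5.
+ # Using sentence-transformers and normalized vectors is an assumption.
+ from sentence_transformers import SentenceTransformer
+
+ model = SentenceTransformer("BAAI/bge-small-en-v1.5")
+ titles = ["Alan Turing", "Hilbert space"]
+ # With normalized embeddings, dot product equals cosine similarity.
+ embeddings = model.encode(titles, normalize_embeddings=True)
+ print(embeddings.shape)  # (2, 384): bge-small-en-v1.5 produces 384-dim vectors
+ ```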
+ # Why?
+
+ You can load this entire dataset into a database and retrieve articles by similarity search between query embeddings and title embeddings, then follow the linked URLs to pull up-to-date article text; alternatively, the March 01, 2022 article text is available in the full dataset, somewheresystems/dataclysm-wikipedia-titles (it is not included in this lite version). For efficiency, we recommend dropping everything except the title, title embeddings, and URL, so you can quickly load and index the data and fetch the remaining information asynchronously over the web.
+
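+ A minimal sketch of that retrieval pattern using the FAISS integration in `datasets`; the column names (`title`, `title_embedding`, `url`) are hypothetical, so adjust them to the real schema:
+
+ ```python
+ # Sketch: keep only title / title_embedding / url, index, and search.
+ # Column names are assumptions; adjust them to the actual schema.
+ from datasets import load_dataset
+ from sentence_transformers import SentenceTransformer
+
+ ds = load_dataset("somewheresystems/dataclysm-wikipedia-titles-lite", split="train")
+ keep = {"title", "title_embedding", "url"}
+ ds = ds.remove_columns([c for c in ds.column_names if c not in keep])
+ ds.add_faiss_index(column="title_embedding")  # requires faiss-cpu or faiss-gpu
+
+ model = SentenceTransformer("BAAI/bge-small-en-v1.5")
+ query = model.encode("history of computing", normalize_embeddings=True)
+ scores, hits = ds.get_nearest_examples("title_embedding", query, k=5)
+ for title, url in zip(hits["title"], hits["url"]):
+     print(title, url)
+ ```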
+ # Citation Information
+
+ @ONLINE{wikidump,
+     author = "Wikimedia Foundation",
+     title  = "Wikimedia Downloads",
+     url    = "https://dumps.wikimedia.org"
+ }
+
+ # Contributions
+
+ Thanks to @lewtun, @mariamabarham, @thomwolf, @lhoestq, and @patrickvonplaten for adding the Wikipedia dataset in the first place.
+
+ # Contact
+
+ Please contact hi@dataclysm.xyz for inquiries.