yano0 committed on
Commit
8791d08
1 Parent(s): d6cc984

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false,
+   "pooling_mode_weightedmean_tokens": false,
+   "pooling_mode_lasttoken": false,
+   "include_prompt": true
+ }
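This pooling config enables mean pooling only (`pooling_mode_mean_tokens: true`), so the sentence embedding is the attention-masked average of the token embeddings. A minimal sketch of that computation, assuming 768-dimensional token embeddings as above (the real logic lives in `sentence_transformers.models.Pooling`):

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    mask = attention_mask.unsqueeze(-1).float()      # broadcast mask over the 768 dims
    summed = (token_embeddings * mask).sum(dim=1)    # sum of non-padding token vectors
    counts = mask.sum(dim=1).clamp(min=1e-9)         # number of real tokens, avoid div by zero
    return summed / counts                           # (batch, 768) sentence embeddings
```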
README.md ADDED
@@ -0,0 +1,220 @@
+ ---
+ language: []
+ library_name: sentence-transformers
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ metrics:
+ - pearson_cosine
+ - spearman_cosine
+ - pearson_manhattan
+ - spearman_manhattan
+ - pearson_euclidean
+ - spearman_euclidean
+ - pearson_dot
+ - spearman_dot
+ - pearson_max
+ - spearman_max
+ widget: []
+ pipeline_tag: sentence-similarity
+ model-index:
+ - name: SentenceTransformer
+   results:
+   - task:
+       type: semantic-similarity
+       name: Semantic Similarity
+     dataset:
+       name: Unknown
+       type: unknown
+     metrics:
+     - type: pearson_cosine
+       value: 0.841929698952355
+       name: Pearson Cosine
+     - type: spearman_cosine
+       value: 0.7942182059969294
+       name: Spearman Cosine
+     - type: pearson_manhattan
+       value: 0.8295844701949633
+       name: Pearson Manhattan
+     - type: spearman_manhattan
+       value: 0.7967029159438351
+       name: Spearman Manhattan
+     - type: pearson_euclidean
+       value: 0.8302175995746677
+       name: Pearson Euclidean
+     - type: spearman_euclidean
+       value: 0.7974109108557925
+       name: Spearman Euclidean
+     - type: pearson_dot
+       value: 0.8266168802012493
+       name: Pearson Dot
+     - type: spearman_dot
+       value: 0.7757964222446627
+       name: Spearman Dot
+     - type: pearson_max
+       value: 0.841929698952355
+       name: Pearson Max
+     - type: spearman_max
+       value: 0.7974109108557925
+       name: Spearman Max
+ ---
+
+ # SentenceTransformer
+
+ This is a [sentence-transformers](https://www.SBERT.net) model. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ <!-- - **Base model:** [Unknown](https://huggingface.co/unknown) -->
+ - **Maximum Sequence Length:** 512 tokens
+ - **Output Dimensionality:** 768 dimensions
+ - **Similarity Function:** Cosine Similarity
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: LukeModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")
+ # Run inference
+ sentences = [
+     'The weather is lovely today.',
+     "It's so sunny outside!",
+     'He drove to the stadium.',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 768]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Semantic Similarity
+
+ * Evaluated with [<code>EmbeddingSimilarityEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.EmbeddingSimilarityEvaluator)
+
+ | Metric              | Value      |
+ |:--------------------|:-----------|
+ | pearson_cosine      | 0.8419     |
+ | **spearman_cosine** | **0.7942** |
+ | pearson_manhattan   | 0.8296     |
+ | spearman_manhattan  | 0.7967     |
+ | pearson_euclidean   | 0.8302     |
+ | spearman_euclidean  | 0.7974     |
+ | pearson_dot         | 0.8266     |
+ | spearman_dot        | 0.7758     |
+ | pearson_max         | 0.8419     |
+ | spearman_max        | 0.7974     |
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Logs
+ | Epoch | Step | spearman_cosine |
+ |:-----:|:----:|:---------------:|
+ | 0     | 0    | 0.7942          |
+
+
+ ### Framework Versions
+ - Python: 3.10.13
+ - Sentence Transformers: 3.0.0
+ - Transformers: 4.41.2
+ - PyTorch: 2.3.1+cu118
+ - Accelerate: 0.30.1
+ - Datasets: 2.19.2
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
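The evaluation table in this README was produced with Sentence Transformers' `EmbeddingSimilarityEvaluator`. A hedged sketch of running a comparable evaluation on your own labelled pairs (the sentences and gold scores below are placeholders, not the dataset used here, which the card lists as "Unknown"):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")

# Placeholder pairs with gold similarity scores in [0, 1].
sentences1 = ["The weather is lovely today.", "He drove to the stadium."]
sentences2 = ["It's so sunny outside!", "She walked to the park."]
gold_scores = [0.9, 0.1]

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, gold_scores)
print(evaluator(model))  # Pearson/Spearman correlations, as in the table above
```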
added_tokens.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "<ent2>": 32771,
+   "<ent>": 32770
+ }
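`added_tokens.json` pins LUKE's entity markers `<ent>` and `<ent2>` to fixed ids in the tokenizer vocabulary. A quick sanity check of the mapping (an illustration, not part of the commit):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
print(tokenizer.convert_tokens_to_ids("<ent>"))   # expected: 32770
print(tokenizer.convert_tokens_to_ids("<ent2>"))  # expected: 32771
```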
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "/workspace/store/outputs/step3/B2048E3LR3e-05_glu-big-prefix/B2048E3LR3e-05_mir_mr-_jqa_bao_qui_qui_mqa-prefix/93",
+   "architectures": [
+     "LukeModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bert_model_name": "models/luke-japanese/hf_xlm_roberta",
+   "bos_token_id": 0,
+   "classifier_dropout": null,
+   "cls_entity_prediction": false,
+   "entity_emb_size": 256,
+   "entity_vocab_size": 4,
+   "eos_token_id": 2,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "luke",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.44.0",
+   "type_vocab_size": 1,
+   "use_cache": true,
+   "use_entity_aware_attention": true,
+   "vocab_size": 32772
+ }
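The backbone declared here is a `LukeModel` (12 layers, 12 heads, hidden size 768, entity-aware attention enabled). If you want to confirm what `transformers` instantiates from this config, a small check along these lines should work:

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
print(type(model).__name__)             # LukeModel
print(model.config.hidden_size)         # 768
print(model.config.num_hidden_layers)   # 12
```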
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.0.1",
+     "transformers": "4.44.0",
+     "pytorch": "2.3.1+cu118"
+   },
+   "prompts": {},
+   "default_prompt_name": null,
+   "similarity_fn_name": null
+ }
entity_vocab.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "[MASK2]": 3,
+   "[MASK]": 0,
+   "[PAD]": 2,
+   "[UNK]": 1
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9157b2c1939b03938604b8f175fb24acf9d403afdeabdf8b6d54be6a6bce137c
+ size 532299592
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   }
+ ]
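`modules.json` composes the two-stage pipeline: module 0 is the Transformer (the LukeModel at the repository root) and module 1 is the mean-pooling layer configured in `1_Pooling/`. `SentenceTransformer("pkshatech/GLuCoSE-base-ja-v2")` assembles this automatically; building the same pipeline by hand would look roughly like this (a sketch, assuming the repository is checked out locally at `./GLuCoSE-base-ja-v2`):

```python
from sentence_transformers import SentenceTransformer, models

# Module 0: the Transformer backbone at the repository root.
word_embedding = models.Transformer("./GLuCoSE-base-ja-v2", max_seq_length=512)
# Module 1: mean pooling over token embeddings, mirroring 1_Pooling/config.json.
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 768
    pooling_mode="mean",
)
model = SentenceTransformer(modules=[word_embedding, pooling])
```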
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
sentencepiece.bpe.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8b73a5e054936c920cf5b7d1ec21ce9c281977078269963beb821c6c86fbff7
+ size 841889
special_tokens_map.json ADDED
@@ -0,0 +1,83 @@
+ {
+   "additional_special_tokens": [
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     {
+       "content": "<ent>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false
+     },
+     {
+       "content": "<ent2>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false
+     }
+   ],
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "cls_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "<mask>",
+     "lstrip": true,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "<pad>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer_config.json ADDED
@@ -0,0 +1,116 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<pad>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "</s>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "<unk>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "32769": {
+       "content": "<mask>",
+       "lstrip": true,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "32770": {
+       "content": "<ent>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "32771": {
+       "content": "<ent2>",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "additional_special_tokens": [
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>",
+     "<ent>",
+     "<ent2>"
+   ],
+   "bos_token": "<s>",
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "<s>",
+   "entity_mask2_token": "[MASK2]",
+   "entity_mask_token": "[MASK]",
+   "entity_pad_token": "[PAD]",
+   "entity_token_1": {
+     "__type": "AddedToken",
+     "content": "<ent>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "entity_token_2": {
+     "__type": "AddedToken",
+     "content": "<ent2>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false,
+     "special": false
+   },
+   "entity_unk_token": "[UNK]",
+   "eos_token": "</s>",
+   "mask_token": "<mask>",
+   "max_entity_length": 32,
+   "max_mention_length": 30,
+   "model_max_length": 512,
+   "pad_token": "<pad>",
+   "sep_token": "</s>",
+   "sp_model_kwargs": {},
+   "task": null,
+   "tokenizer_class": "MLukeTokenizer",
+   "unk_token": "<unk>"
+ }
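The tokenizer is `MLukeTokenizer` with `model_max_length: 512`, matching `max_seq_length` in `sentence_bert_config.json`, so inputs longer than 512 tokens are truncated before encoding. A short, hedged check of the tokenizer on its own:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pkshatech/GLuCoSE-base-ja-v2")
print(tokenizer.__class__.__name__)   # MLukeTokenizer
print(tokenizer.model_max_length)     # 512

encoded = tokenizer("The weather is lovely today.", truncation=True, max_length=512)
print(encoded["input_ids"][:5])       # first few token ids, starting with <s> (id 0)
```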