---
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
---

# ko-sbert-nli

This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

## Usage (Sentence-Transformers)

λͺ¨λΈμ„ μ‚¬μš©ν•˜κΈ° μœ„ν•΄μ„œλŠ” ko-sentence-transformers λ₯Ό μ„€μΉ˜ν•΄μ•Ό ν•©λ‹ˆλ‹€.

pip install -U ko-sentence-transformers

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["안녕하세요?", "한국어 문장 임베딩을 위한 버트 모델입니다."]

model = SentenceTransformer('jhgan/ko-sbert-nli')
embeddings = model.encode(sentences)
print(embeddings)
```
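Since the intro mentions clustering and semantic search, here is a minimal semantic-search sketch built on these embeddings with `sentence_transformers.util.cos_sim` (the corpus and query strings are illustrative, not from this card):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('jhgan/ko-sbert-nli')

# Illustrative corpus and query; any Korean sentences work the same way.
corpus = ["한국어 문장 임베딩을 위한 버트 모델입니다.", "오늘 날씨가 참 좋네요."]
query = "문장 임베딩 모델"

# Encode everything into the 768-dimensional space, then rank by cosine similarity.
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```

On older sentence-transformers releases the equivalent call is `util.pytorch_cos_sim`.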

## Evaluation Results

Results after training on the KorNLI training set and evaluating on the KorSTS evaluation set:

λͺ¨λΈ ν•™μŠ΅ 데이터 Cosine Pearson Cosine Spearman Euclidean Pearson Euclidean Spearman Manhattan Pearson Manhattan Spearman Dot Pearson Dot Spearman
SKT-KoBERT NLI 82.03 82.36 80.06 79.85 80.08 79.91 75.76 74.72
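For reference, a sketch of how such a KorSTS evaluation could be run with the `EmbeddingSimilarityEvaluator` named in the training parameters below. The local file name and column names are assumptions about the KorSTS TSV layout, not part of this card:

```python
import csv
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('jhgan/ko-sbert-nli')

# Hypothetical local copy of the KorSTS test split (tab-separated).
sentences1, sentences2, scores = [], [], []
with open('sts-test.tsv', encoding='utf-8') as f:
    for row in csv.DictReader(f, delimiter='\t', quoting=csv.QUOTE_NONE):
        sentences1.append(row['sentence1'])
        sentences2.append(row['sentence2'])
        scores.append(float(row['score']) / 5.0)  # rescale 0..5 gold scores to 0..1

evaluator = EmbeddingSimilarityEvaluator(sentences1, sentences2, scores)
# Main score is the Spearman correlation of cosine similarities in most versions.
print(evaluator(model))
```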

## Training

The model was trained with the parameters:

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8886 with parameters:

```
{'batch_size': 64}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:

```json
{
    "epochs": 1,
    "evaluation_steps": 888,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 889,
    "weight_decay": 0.01
}
```
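Putting the DataLoader, loss, and `fit()` parameters above together, a hedged reconstruction of the training call (the starting checkpoint and the placeholder KorNLI triplet are assumptions; the card does not include the actual script):

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

# Assumed starting point; the card only names SKT-KoBERT as the base encoder.
model = SentenceTransformer('jhgan/ko-sbert-nli')

# Placeholder (anchor, entailment, contradiction) triplet; the real data is KorNLI.
# In practice you need at least batch_size examples to form a single batch.
train_examples = [
    InputExample(texts=["앵커 문장", "함의 문장", "모순 문장"]),
]

# NoDuplicatesDataLoader keeps a sentence from appearing twice in one batch,
# which would create false in-batch negatives for MultipleNegativesRankingLoss.
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=64)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=889,
    optimizer_params={'lr': 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```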

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
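The same stack can be assembled module-by-module with `sentence_transformers.models`. A sketch, where the base checkpoint name is an assumption (the card only mentions SKT-KoBERT, and SKT KoBERT checkpoints may need their own tokenizer package):

```python
from sentence_transformers import SentenceTransformer, models

# BERT encoder truncated at 75 tokens, matching max_seq_length above.
word_embedding_model = models.Transformer('skt/kobert-base-v1', max_seq_length=75)

# Mean pooling over token embeddings (pooling_mode_mean_tokens=True above)
# produces the 768-dimensional sentence vector.
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=True,
)

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
```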

## Citing & Authors