---
language:
- en
---

[sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE) fine-tuned on an instructional question-and-answer dataset.

Evaluated with Precision at K (P@K) and Mean Reciprocal Rank (MRR):

- P@1: 52 %
- P@3: 66 %
- P@5: 73 %
- P@10: 79 %
- P@15: 82 %
- MRR:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("zjkarina/LaBSE-instructDialogs")
model = AutoModel.from_pretrained("zjkarina/LaBSE-instructDialogs")

sentences = [
    "List 5 reasons why someone should learn to code",
    "Describe the sound of the wind on a sunny day.",
]

# Tokenize with padding and truncation to a fixed maximum length
encoded_input = tokenizer(sentences, padding=True, truncation=True, max_length=64, return_tensors='pt')

with torch.no_grad():
    model_output = model(**encoded_input)

# LaBSE uses the pooler output as the sentence embedding
embeddings = model_output.pooler_output
# L2-normalize so that dot products equal cosine similarity
embeddings = torch.nn.functional.normalize(embeddings, dim=1)
print(embeddings)
```
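Because the embeddings are L2-normalized, cosine similarity between sentences reduces to a plain dot product, which is how the P@K retrieval metrics above are typically computed. A minimal sketch of this step, using random dummy vectors in place of real model output (the shapes and names here are illustrative assumptions, not part of the model card):

```python
import torch

# Dummy stand-in for the model's pooler output: 3 "sentences", 768-dim vectors
raw = torch.randn(3, 768)

# L2-normalize each row, mirroring the normalization in the snippet above
emb = torch.nn.functional.normalize(raw, dim=1)

# Pairwise cosine-similarity matrix: for unit vectors this is just a matmul
sim = emb @ emb.T

# Each vector's similarity with itself is 1.0 (the diagonal);
# ranking a query's row by similarity gives the candidates scored for P@K / MRR
```

In a retrieval setting, each query row of `sim` would be sorted in descending order to rank candidate answers.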