---
license: mit
pipeline_tag: text-classification
widget:
- text: "whaling is part of the culture of various indigenous population and should be allowed for the purpose of maintaining this tradition and way of life and sustenance, among other uses of a whale. against We should ban whaling"
---


## Model Usage

```python

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tum-nlp/Deberta_Human_Value_Detector")
model = AutoModelForSequenceClassification.from_pretrained("tum-nlp/Deberta_Human_Value_Detector", trust_remote_code=True)

example_text = 'whaling is part of the culture of various indigenous population and should be allowed for the purpose of maintaining this tradition and way of life and sustenance, among other uses of a whale. against We should ban whaling'

# Tokenize and pad to the model's maximum sequence length.
encoding = tokenizer.encode_plus(
    example_text,
    add_special_tokens=True,
    max_length=512,
    return_token_type_ids=False,
    padding="max_length",
    truncation=True,
    return_attention_mask=True,
    return_tensors='pt',
)

# Run inference without gradient tracking and flatten the logits.
with torch.no_grad():
    test_prediction = model(encoding["input_ids"], encoding["attention_mask"])
    test_prediction = test_prediction["logits"].flatten().numpy()

```

## Prediction
To make a prediction, map the model outputs to the corresponding labels. During the competition, a threshold of 0.25 was used to binarize the output.
```python
THRESHOLD = 0.25
LABEL_COLUMNS = ['Self-direction: thought', 'Self-direction: action', 'Stimulation', 'Hedonism', 'Achievement',
                 'Power: dominance', 'Power: resources', 'Face', 'Security: personal', 'Security: societal',
                 'Tradition', 'Conformity: rules', 'Conformity: interpersonal', 'Humility', 'Benevolence: caring',
                 'Benevolence: dependability', 'Universalism: concern', 'Universalism: nature',
                 'Universalism: tolerance', 'Universalism: objectivity']

# Keep only the labels whose score meets the threshold.
res = {}
print("Predictions:")
for label, prediction in zip(LABEL_COLUMNS, test_prediction):
    if prediction < THRESHOLD:
        continue
    print(f"{label}: {prediction}")
    res[label] = prediction
```
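The two snippets above can also be combined into a single helper. The sketch below is a minimal example and not part of the original pipeline: the `predict_values` function name is illustrative, and it simply reuses the tokenizer, model, `LABEL_COLUMNS`, and raw-logit thresholding shown above.

```python
import torch

def predict_values(text, tokenizer, model, threshold=0.25):
    """Return a dict of human values whose score meets the threshold.

    Assumes `tokenizer`, `model`, and `LABEL_COLUMNS` are the objects
    defined in the snippets above; the function name is illustrative only.
    """
    encoding = tokenizer.encode_plus(
        text,
        add_special_tokens=True,
        max_length=512,
        return_token_type_ids=False,
        padding="max_length",
        truncation=True,
        return_attention_mask=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(encoding["input_ids"], encoding["attention_mask"])["logits"].flatten().numpy()
    return {label: float(score) for label, score in zip(LABEL_COLUMNS, logits) if score >= threshold}

print(predict_values(example_text, tokenizer, model))
```

As in the competition setup described above, the raw logit scores are compared directly against the 0.25 threshold.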