Update README.md
README.md CHANGED
@@ -37,7 +37,7 @@ This model is compared to 3 reference models (see below). As each model doesn't
 
 #### bert-base-multilingual-uncased-sentiment
 [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is based on the multilingual, uncased version of BERT. Like our model, this sentiment analyzer is trained on Amazon reviews, so the targets and their definitions are the same. To be robust to ±1-star estimation errors, we take the following definition as a performance measure:
-$$acc=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 5}p_{i,l}\hat{p}_{i,l}
+$$acc=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 5}p_{i,l}\hat{p}_{i,l}$$
 where $\mathcal{O}$ is the test set of observations, $p_{i,l}\in\{0,1\}$ equals 1 for the true label of observation $i$, and $\hat{p}_{i,l}$ is the estimated probability of the $l$-th label.
 
 #### tf-allociné and barthez-sentiment-classification
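As a quick illustration of the $acc$ measure whose equation this commit fixes: the inner sum $\sum_l p_{i,l}\hat{p}_{i,l}$ picks out the probability the model assigns to the true label of observation $i$, and the outer sum averages this over the test set. A minimal Python sketch (the function name `soft_accuracy` and the toy inputs are illustrative, not from the repository):

```python
def soft_accuracy(p, p_hat):
    """acc = (1/|O|) * sum_i sum_l p[i][l] * p_hat[i][l].

    p:     one-hot true labels, one row per observation, p[i][l] in {0, 1}
    p_hat: predicted probability vectors, one row per observation
    """
    total = 0.0
    for true_row, pred_row in zip(p, p_hat):
        # Inner sum: the predicted probability of the true label.
        total += sum(t * q for t, q in zip(true_row, pred_row))
    return total / len(p)


# Toy example: two reviews, five star classes (labels 0 to 4).
p = [[0, 0, 0, 0, 1],   # true rating: 5 stars (label 4)
     [0, 1, 0, 0, 0]]   # true rating: 2 stars (label 1)
p_hat = [[0.05, 0.05, 0.10, 0.20, 0.60],
         [0.10, 0.70, 0.10, 0.05, 0.05]]
acc = soft_accuracy(p, p_hat)  # (0.60 + 0.70) / 2 = 0.65
```

Because the true rows are one-hot, this is simply the mean predicted probability of the correct star rating.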