#### bert-base-multilingual-uncased-sentiment
[nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) is based on the multilingual, uncased version of BERT. This sentiment analyzer is trained on Amazon reviews like our model, so the targets and their definitions are the same. In order to be robust to +/-1 star estimation errors, we take the following definition as a performance measure:

$$acc=\frac{1}{|\mathcal{O}|}\sum_{i\in\mathcal{O}}\sum_{0\leq l < 5}p_{i,l}\hat{p}_{i,l},$$

where $\mathcal{O}$ is the set of test observations, $p_{i,l}\in\{0,1\}$ equals 1 if $l$ is the true label of observation $i$ (and 0 otherwise), and $\hat{p}_{i,l}$ is the estimated probability of the $l$-th label for observation $i$.
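As a minimal sketch (not taken from the model card), this measure can be computed from one-hot true labels and predicted probabilities with NumPy; the function name and the example values below are ours:

```python
import numpy as np

def star_accuracy(p_true: np.ndarray, p_hat: np.ndarray) -> float:
    """Measure defined above: since p_{i,l} is one-hot, the inner sum picks the
    predicted probability of the true label; we then average over the test set O."""
    return float(np.mean(np.sum(p_true * p_hat, axis=1)))

# Hypothetical example with two observations and the five star labels.
p_true = np.array([[0., 0., 0., 0., 1.],   # true label: "5 stars"
                   [0., 1., 0., 0., 0.]])  # true label: "2 stars"
p_hat = np.array([[0.05, 0.05, 0.10, 0.30, 0.50],
                  [0.20, 0.60, 0.10, 0.05, 0.05]])
print(star_accuracy(p_true, p_hat))  # (0.50 + 0.60) / 2 = 0.55
```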
#### tf-allociné and barthez-sentiment-classification
[tblard/tf-allocine](https://huggingface.co/tblard/tf-allocine) and [moussaKam/barthez-sentiment-classification](https://huggingface.co/moussaKam/barthez-sentiment-classification) share the same binary class definition. To reduce our task to a two-class problem, we consider only the "1 star" and "2 stars" labels as the "negative" sentiment and the "4 stars" and "5 stars" labels as the "positive" sentiment. We exclude the "3 stars" label, which can be interpreted as a "neutral" class. In this setting, the problem of +/-1 star estimation errors disappears, so we use the classical accuracy definition.
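A minimal sketch of this label mapping (the helper name is ours, not from the model card); the star-label strings follow the five-class scheme quoted above:

```python
from typing import Optional

def to_binary_sentiment(star_label: str) -> Optional[str]:
    """Map the five-star labels to the two-class setting described above:
    "1 star"/"2 stars" -> "negative", "4 stars"/"5 stars" -> "positive",
    "3 stars" -> None ("neutral", excluded from the accuracy computation)."""
    mapping = {
        "1 star": "negative",
        "2 stars": "negative",
        "3 stars": None,
        "4 stars": "positive",
        "5 stars": "positive",
    }
    return mapping[star_label]
```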
How to use DistilCamemBERT-Sentiment
------------------------------------