Aidan Phillips committed a72fa1f (parent: 7159c31): describe fluency scoring

README.md CHANGED
# Teach BS
## Fluency Scoring

For fluency scoring we use a combination of "hard" structural scoring and "soft" probabilistic scoring from a distilled BERT model.

For probabilistic scoring we begin by tokenizing the sentence and grouping subword tokens by word. Given the sentence `That was unbelievable`, we tokenize

$$\begin{matrix} e(\text{That}) & e(\text{was}) & e(\text{un}) & e(\text{believ}) & e(\text{able}) \\ 0 & 1 & 2 & 3 & 4\end{matrix}$$

and produce groupings from the offsets

$$\mathcal G_0 = \{0\} \quad \mathcal G_1 = \{1\} \quad \mathcal G_2 = \{2, 3, 4\}.$$
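
In code, this tokenize-and-group step might look like the sketch below, assuming a Hugging Face fast tokenizer; the model name `distilbert-base-uncased` and the helper `group_tokens_by_word` are illustrative choices, not fixed by this README.

```python
from collections import defaultdict

from transformers import AutoTokenizer

def group_tokens_by_word(sentence: str, tokenizer) -> dict[int, list[int]]:
    """Map each word index w to the token positions in its group G_w."""
    enc = tokenizer(sentence, add_special_tokens=False)
    groups = defaultdict(list)
    # word_ids() labels every subword token with the index of the word it came from
    for token_pos, word_id in enumerate(enc.word_ids()):
        groups[word_id].append(token_pos)
    return dict(groups)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
print(group_tokens_by_word("That was unbelievable", tokenizer))
# e.g. {0: [0], 1: [1], 2: [2, 3, 4]} if "unbelievable" splits into three subwords
```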

We can then mask each group in its entirety and take the average of the log probabilities of its tokens, negating so that lower probabilities increase the loss:

$$\ell_w = - \frac{1}{|\mathcal G_w|} \sum_{i \in \mathcal G_w} \log \mathbb P(t_i)$$

where $w$ is the index of the relevant word. This penalizes words that the model does not prefer in context, but along the way it also penalizes words that may be used correctly yet are simply uncommon.
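
The masking step can be sketched as follows; the model choice and the helper `group_pll_loss` are again illustrative assumptions, and `group` holds the no-special-token positions produced by the grouping sketch above.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

MODEL = "distilbert-base-uncased"  # illustrative choice of distilled BERT
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL)
model.eval()

def group_pll_loss(sentence: str, group: list[int]) -> float:
    """l_w: negated mean log-probability of one word's tokens, masked as a group."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"].clone()
    positions = [i + 1 for i in group]  # shift past the leading [CLS] token
    targets = input_ids[0, positions].clone()
    input_ids[0, positions] = tokenizer.mask_token_id  # mask the whole group at once
    with torch.no_grad():
        logits = model(input_ids, attention_mask=enc["attention_mask"]).logits
    log_probs = torch.log_softmax(logits[0, positions], dim=-1)
    # mean log-probability of the true tokens, negated
    return -log_probs.gather(1, targets.unsqueeze(1)).mean().item()

# l_w for G_2 = {2, 3, 4}, the group covering "unbelievable"
print(group_pll_loss("That was unbelievable", [2, 3, 4]))
```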

We seek to penalize based on the "contextual awkwardness" of a given word, so we compute a rarity score

$$\mathcal F_w = -\log (f_w + 10^{-12})$$

where $f_w$ is computed with the Python `wordfreq` package. Its function `word_frequency` returns a value $f_w \in [0, 1]$, where $0$ is extremely rare and $1$ is extremely common. Therefore $\mathcal F_w \in [0, -\log 10^{-12} \approx 27.6]$, where a higher score means the word is rarer.
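
A minimal sketch of the rarity score; `word_frequency` is the actual `wordfreq` call, while the wrapper name `rarity` is just for illustration.

```python
import math

from wordfreq import word_frequency

def rarity(word: str, lang: str = "en") -> float:
    """F_w = -log(f_w + 1e-12); higher means rarer."""
    return -math.log(word_frequency(word, lang) + 1e-12)

print(rarity("was"))           # common word -> small F_w
print(rarity("unbelievable"))  # rarer word -> larger F_w
```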

We use this to compute an adjusted pseudo log loss $(\text{PLL})$ $\mathcal L$

$$\mathcal L_w = \ell_w - \alpha \mathcal F_w \approx \ell_w + \alpha \log f_w$$

which applies a downward adjustment to the loss for rare words, where $\alpha$ is a weight parameter. Our final step is to produce an adjusted pseudo-likelihood estimate by averaging over the set of word indices $\mathbb W$

$$\mathcal J_{\text{adj}} = \frac {1} {|\mathbb W|} \sum_{w \in \mathbb W} \mathcal L_w.$$
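
Putting the pieces together, here is a sketch of $\mathcal J_{\text{adj}}$ that reuses `tokenizer` and the illustrative helpers from the sketches above; the whitespace split and the default `alpha` are simplifying assumptions.

```python
def adjusted_pll(sentence: str, alpha: float = 0.1) -> float:
    """J_adj: mean over words of L_w = l_w - alpha * F_w."""
    words = sentence.split()  # assumes whitespace words line up with word_ids()
    groups = group_tokens_by_word(sentence, tokenizer)
    losses = [
        group_pll_loss(sentence, group) - alpha * rarity(words[w])
        for w, group in groups.items()
    ]
    return sum(losses) / len(losses)
```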

We then generate a fluency score $\textbf{FS}$ from $0$ to $100$ using the logistic function

$$\textbf{FS}_{\mathbb W} = \frac{100}{1+\exp\left(s \, (\mathcal J_{\text{adj}} - m)\right)}$$

where $s$ is a steepness factor and $m$ is the midpoint (the value of $\mathcal J_{\text{adj}}$ at which the score is $50$).
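
Finally, the logistic squashing can be sketched as below; the defaults for `s` and `m` are placeholders to be tuned, not values fixed by this README.

```python
import math

def fluency_score(j_adj: float, s: float = 1.0, m: float = 5.0) -> float:
    """Map J_adj onto 0-100; lower adjusted loss -> higher fluency."""
    return 100.0 / (1.0 + math.exp(s * (j_adj - m)))

print(fluency_score(adjusted_pll("That was unbelievable")))
```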