Update README.md
README.md
language:
- en
---

We trained a language model to **automatically score IELTS (International English Language Testing System) essays**, using a large training dataset of essays scored by human raters.

On the test dataset, the model achieves **Accuracy = 0.82, F1 Score = 0.81**.

The following code shows how to use the model to score a new IELTS essay.

In the following example, the essay is taken from the test dataset; its human-rated overall score is 8.

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch
import numpy as np

# ... (the code that sets up `tokenizer` and `model`, and the full essay text,
# falls outside the changed lines and is not shown in this diff)

new_text = (
    # ... (the earlier sentences of the essay are not shown in this diff)
    "for people of every society, so government should build parks in every city and town."
)

# Tokenize the essay, truncating to the model's 512-token limit
encoded_input = tokenizer(new_text, return_tensors='pt', padding=True, truncation=True, max_length=512)

# Put the model in evaluation mode
model.eval()

# Perform the prediction
with torch.no_grad():
    outputs = model(**encoded_input)

# Get the predictions (the output here depends on whether you are doing regression or classification)
predictions = outputs.logits.squeeze()
predicted_scores = predictions.numpy()

# Normalize the scores
normalized_scores = (predicted_scores / predicted_scores.max()) * 9  # Scale to the 9-point IELTS band

# Round to the nearest half band
rounded_scores = np.round(normalized_scores * 2) / 2

item_names = ["Task Achievement", "Coherence and Cohesion", "Vocabulary", "Grammar", "Overall"]

for item, score in zip(item_names, rounded_scores):
    print(f"{item}: {score:.1f}")
```
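The snippet above assumes that `tokenizer` and `model` have already been created; that part of the README falls outside the changed lines shown here. As a minimal sketch of how they are typically obtained with the `transformers` auto classes, assuming the fine-tuned checkpoint is hosted on the Hugging Face Hub, one might write the following (the repository id is a placeholder, not the actual model name):

```
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Placeholder repository id; replace with the actual fine-tuned checkpoint.
model_name = "your-username/ielts-essay-scorer"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# The checkpoint is expected to output five values, one per item printed above
# (Task Achievement, Coherence and Cohesion, Vocabulary, Grammar, Overall).
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```

Once loaded, the scoring snippet runs as written on CPU; if a GPU is available, move both `model` and `encoded_input` to it (e.g. with `.to("cuda")`) before calling the model.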
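For readers who want to run the same kind of evaluation on their own data, a minimal sketch using scikit-learn is below. This is not the authors' evaluation script; the `y_true` and `y_pred` values are made-up placeholders, and each half-band score is treated as a discrete class.

```
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold and predicted overall bands for a handful of essays.
y_true = [6.0, 7.5, 8.0, 5.5, 7.0]
y_pred = [6.0, 7.0, 8.0, 5.5, 7.0]

print("Accuracy:", accuracy_score(y_true, y_pred))
print("F1 (weighted):", f1_score(y_true, y_pred, average="weighted"))
```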