# bert-base-cased-finetuned
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the *train* split of [usa_news_en](https://huggingface.co/datasets/cmunhozc/usa_news_en).
It achieves the following results on the evaluation set:
- Loss: 0.0900
- Accuracy: 0.9800
## Model description

The fine-tuned model is a binary classifier that determines whether two English news headlines are related or not. More details can be found in the paper **News Gathering: Leveraging Transformers to Rank News**. To use the fine-tuned model, follow the steps outlined below:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from transformers import Trainer
import numpy as np

...
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

### 2. Dataset:
def preprocess_fctn(examples):
    # Tokenize each headline pair; truncation keeps inputs within the model's max length
    return tokenizer(examples["sentence1"], examples["sentence2"], truncation=True)
...
encoded_dataset = dataset.map(preprocess_fctn, batched=True, load_from_cache_file=False)
...

predictions = trainer_hf.predict(encoded_dataset["validation"])
acc_val = metric.compute(predictions=np.argmax(predictions.predictions, axis=1).tolist(), references=predictions.label_ids)['accuracy']
```
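The elided `...` sections above define, among other things, `model_name`, `dataset`, `trainer_hf`, and the `metric` object. As a hedged guess at the last of these, the accuracy value could come from the `evaluate` library; this is an assumption, not something shown in the card:

```python
import evaluate

# Assumed definition of the `metric` consumed by metric.compute(...) above.
metric = evaluate.load("accuracy")
```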
Finally, with the classification model above, you can follow the steps below to generate the news ranking (a minimal sketch follows the list):
- For each news article in the [google_news_en](https://huggingface.co/datasets/cmunhozc/google_news_en) dataset that appears as the first element of a pair, retrieve all corresponding pairs from the dataset.
- Employing the pair encoders, rank the news articles that occupy the second position in each pair according to their relevance to the first article.
- Sort each list generated by the encoders by the probability obtained for the relevance class.
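As a rough illustration of these steps, the sketch below scores candidate headlines against a query headline and sorts them by the softmax probability of the "related" class. The checkpoint path, the example headlines, and the assumption that label index 1 is the relevance class are illustrative and should be checked against the actual fine-tuned model:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical path; substitute the fine-tuned checkpoint produced above.
model_name = "path/to/bert-base-cased-finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

def relevance_score(first: str, second: str) -> float:
    """Probability that `second` is related to `first` (label 1 assumed)."""
    inputs = tokenizer(first, second, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Rank the second-position headlines paired with one first-position article.
query = "Fed raises interest rates again"
candidates = ["Central bank hikes rates", "Local team wins championship"]
ranked = sorted(candidates, key=lambda c: relevance_score(query, c), reverse=True)
print(ranked)
```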
## Training, evaluation and test data
The training data is sourced from the *train* split of [usa_news_en](https://huggingface.co/datasets/cmunhozc/usa_news_en), and the *validation* split is used in the same way for evaluation. For testing, the text classification model draws on the *test_1* and *test_2* splits, while the ranking model uses the test set of [google_news_en](https://huggingface.co/datasets/cmunhozc/google_news_en).
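For reference, these splits can be loaded with the `datasets` library; the split names below are taken from the description above and should be verified against the dataset pages:

```python
from datasets import load_dataset

# Classification data: train/validation for fine-tuning, test_1/test_2 for testing.
usa_news = load_dataset("cmunhozc/usa_news_en")
print(usa_news)  # inspect the available splits and their sizes

# Ranking data: the google_news_en test set drives the news-ranking evaluation.
google_news = load_dataset("cmunhozc/google_news_en")
```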
## Training procedure