tennessejoyce committed · Commit 01ca82a · 1 Parent(s): b8a1c44
Update README.md
README.md CHANGED
@@ -15,8 +15,9 @@ This is one of two NLP models used in the Titlewave project, and its purpose is
 ## Intended use
 
 Try out different titles for your Stack Overflow post, and see which one gives you the best chance of receiving an answer.
-
-
+You can use the model through the API on this page (hosted by HuggingFace) or install the Chrome extension by following the instructions on the [github repository](https://github.com/tennessejoyce/TitleWave), which integrates the tool directly into the Stack Overflow website.
+
+You can also run the model locally in Python like this (which automatically downloads the model to your machine):
 
 ```python
 >>> from transformers import pipeline
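The hunk above cuts off the README's new code example after its first line. As a rough sketch of how that pipeline snippet plausibly continues (the model id tennessejoyce/titlewave-bert-base-uncased, the text-classification task, and the label names are assumptions that do not appear in the diff itself):

```python
# Hedged sketch only: the diff truncates the README's example, so the
# model id, task name, and output labels below are assumptions.
from transformers import pipeline

# Downloads the model from the HuggingFace hub on first use.
classifier = pipeline(
    "text-classification",
    model="tennessejoyce/titlewave-bert-base-uncased",  # assumed model id
)

# The classifier scores how likely a post with this title is to be answered.
print(classifier("How do I merge two dictionaries in Python?"))
# Illustrative output: [{'label': 'Answered', 'score': 0.88}]
```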
@@ -42,7 +43,7 @@ After some hyperparameter tuning, I found that the following two-phase training
 * In the second epoch all layers were unfrozen, and the learning rate was decreased by a factor of 10 to 3e-5.
 
 Otherwise, all parameters were set to the defaults listed [here](https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments),
-including the AdamW optimizer and a linearly decreasing learning schedule (both of which were reset between the two epochs). See the [github repository](https://github.com/tennessejoyce/TitleWave) for the scripts that
+including the AdamW optimizer and a linearly decreasing learning schedule (both of which were reset between the two epochs). See the [github repository](https://github.com/tennessejoyce/TitleWave) for the scripts that were used to train the model.
 
 ## Evaluation
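As a minimal sketch of what the two-phase schedule in this hunk might look like with the Trainer API that the linked defaults page documents: the train_ds variable, the bert-base-uncased checkpoint, the choice to freeze the encoder in the first epoch, and the 3e-4 first-epoch rate (inferred from "decreased by a factor of 10 to 3e-5") are all assumptions; only the 3e-5 rate, the AdamW optimizer, and the linear schedule come from the README text. The author's actual scripts live in the linked github repository.

```python
# Minimal sketch of the two-phase schedule, not the author's actual script.
from transformers import (AutoModelForSequenceClassification, Trainer,
                          TrainingArguments)

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # assumed base checkpoint

train_ds = ...  # assumed: a tokenized dataset of (title, answered) examples

# Phase 1 (assumed): one epoch with the encoder frozen, at 10x the
# second-epoch rate, i.e. 3e-4.
for param in model.bert.parameters():
    param.requires_grad = False
args1 = TrainingArguments(output_dir="phase1", num_train_epochs=1,
                          learning_rate=3e-4)
Trainer(model=model, args=args1, train_dataset=train_ds).train()

# Phase 2 (from the README): unfreeze all layers and cut the learning
# rate by a factor of 10 to 3e-5. Instantiating a fresh Trainer resets
# the default AdamW optimizer and the linearly decreasing schedule,
# matching the "reset between the two epochs" note above.
for param in model.bert.parameters():
    param.requires_grad = True
args2 = TrainingArguments(output_dir="phase2", num_train_epochs=1,
                          learning_rate=3e-5)
Trainer(model=model, args=args2, train_dataset=train_ds).train()
```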