Update README.md
README.md CHANGED
@@ -25,7 +25,7 @@ The task of fine-tuning is Question Answering. You can check our [paper](https:/
 
 ## Implementation Details
 
-The encoder concatenates the
+The encoder concatenates the question and text, and the decoder generates the answer. We use special labels to differentiate the two components. The maximum input length is set to 512 tokens and the maximum output length to 128 tokens. During training, the models run for at most 20 epochs with early stopping (patience of 2) and a batch size of 16. During inference, we use beam search with a beam width of 5.
 
 ## Evaluation - Question Answering
 
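The training setup added in this diff can be sketched as follows. This is an illustrative Python sketch, not code from the repository: the `CONFIG` keys and the `epochs_run` helper are hypothetical names, and only the numeric values (512/128 tokens, 20 epochs, patience 2, batch size 16, beam width 5) come from the README text.

```python
# Hyperparameters described in the README (illustrative sketch; the
# actual training script in this repo may differ).
CONFIG = {
    "max_input_tokens": 512,   # question + text, concatenated for the encoder
    "max_output_tokens": 128,  # answer generated by the decoder
    "max_epochs": 20,
    "early_stopping_patience": 2,
    "batch_size": 16,
    "num_beams": 5,            # beam width used at inference
}

def epochs_run(val_losses, max_epochs=20, patience=2):
    """Return how many epochs actually run under early stopping:
    training halts once the validation loss has failed to improve
    for `patience` consecutive epochs."""
    best = float("inf")
    bad = 0
    for epoch, loss in enumerate(val_losses[:max_epochs], start=1):
        if loss < best:
            best, bad = loss, 0
        else:
            bad += 1
            if bad >= patience:
                return epoch
    return min(len(val_losses), max_epochs)
```

For example, with validation losses `[3.0, 2.5, 2.6, 2.7, 2.0]` and patience 2, training stops after epoch 4, since epochs 3 and 4 both fail to beat the best loss from epoch 2.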