|
--- |
|
configs: |
|
- config_name: default |
|
data_files: |
|
- split: train |
|
path: "data/train_instances.json" |
|
- split: dev |
|
path: "data/dev_instances.json" |
|
- split: test |
|
path: "data/test_instances.json" |
|
- config_name: has_html |
|
data_files: |
|
- split: train |
|
path: "data/train_instances_with_html.json" |
|
- split: dev |
|
path: "data/dev_instances_with_html.json" |
|
- split: test |
|
path: "data/test_instances_with_html.json" |
|
--- |
|
|
|
# Preprocessed QASPER dataset |
|
|
|
Working doc: https://docs.google.com/document/d/1gYPhPNJ5LGttgjix1dwai8pdNcqS6PbqhsM7W0rhKNQ/edit?usp=sharing |
|
|
|
Original: |
|
- Dataset: https://allenai.org/data/qasper
|
- Baseline repo: https://github.com/allenai/qasper-led-baseline |
|
- HF: https://huggingface.co/datasets/allenai/qasper |
|
|
|
|
|
Differences between our implementation and the original:
|
1. We use the dataset provided at https://huggingface.co/datasets/allenai/qasper since it doesn't require manually downloading files. |
|
2. We remove the dependency on `allennlp`, since the package can no longer be installed.
|
3. We add baselines to [qasper/models](qasper/models/). Currently, we have |
|
- QASPER (Longformer Encoder Decoder) |
|
- GPT-3.5-Turbo |
|
    - TODO: RAG (with TF-IDF or Contriever as the retriever), possibly implemented in LangChain
|
4. We replace `allennlp` special tokens with the special tokens of the HF `transformers` tokenizer (see the sketch after this list):
|
    - paragraph separator: `'</s>'` -> `tokenizer.sep_token`

    - sequence pair start tokens: `_tokenizer.sequence_pair_start_tokens` -> `tokenizer.bos_token`
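
As a minimal sketch of how the two replacements above are applied when building model input (the checkpoint name and the prompt layout here are illustrative assumptions, not the exact preprocessing code):

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; the LED baseline may load a different one.
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")

question = "What baselines do the authors compare against?"
paragraphs = ["First paragraph of the paper.", "Second paragraph of the paper."]

# Paragraph separator: use tokenizer.sep_token instead of the hard-coded '</s>'.
context = tokenizer.sep_token.join(paragraphs)

# Sequence pair start token: use tokenizer.bos_token instead of
# allennlp's _tokenizer.sequence_pair_start_tokens.
model_input = tokenizer.bos_token + question + tokenizer.sep_token + context

encoded = tokenizer(model_input, return_tensors="pt", add_special_tokens=False)
```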
|
|
|
## Usage |
|
|
|
```python
|
from datasets import load_dataset |
|
|
|
dataset = load_dataset("ag2435/qasper") |
|
``` |
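
The card defines two configs, `default` and `has_html` (see the YAML header above). To load the variant whose instances include the original HTML, pass the config name; a specific split can also be requested directly:

```python
from datasets import load_dataset

# Load the config whose instances keep the original HTML.
dataset_html = load_dataset("ag2435/qasper", "has_html")

# Load a single split of that config.
dev = load_dataset("ag2435/qasper", "has_html", split="dev")
```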