---
language: en
tags:
- fid
---

## FiD Architecture

![FiD architecture](https://s3.amazonaws.com/moonup/production/uploads/1672830489933-63b52c0a889aa6707f07885b.png)

## EM Score

![EM score comparison](https://s3.amazonaws.com/moonup/production/uploads/1672830489959-63b52c0a889aa6707f07885b.png)

This pre-trained model achieves a 1-2% higher EM score than the result reported in the FiD paper.

## Usage

```python
from transformers import AutoConfig, T5Tokenizer

# FiDT5 is defined in the fid.py module that ships with this repository.
from fid import FiDT5

tokenizer = T5Tokenizer.from_pretrained("t5-base")

# FiD formats each input passage as "question: ... title: ... context: ...",
# so these marker tokens are registered with the tokenizer as special tokens.
question = "question:"
title = "title:"
context = "context:"
tokenizer.add_tokens([question, title, context], special_tokens=True)

config = AutoConfig.from_pretrained("t5-base")
model = FiDT5.from_pretrained(
    "xfact/FiD-NQ",
    config=config
)
```
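
Once the model and tokenizer are loaded, inference follows the usual FiD pattern: the question is encoded together with each retrieved passage independently, and the decoder fuses all encoded passages to generate the answer. The sketch below is a minimal, illustrative example, not part of this repository: the question and passages are made up, and it assumes this FiDT5 wrapper follows the original FiD implementation's `(batch, n_passages, seq_len)` input convention for `generate`.

```python
# Minimal inference sketch. The question and passages are illustrative, and
# the (batch, n_passages, seq_len) input shape assumes this FiDT5 wrapper
# matches the original FiD implementation.
question_text = "where is the Eiffel Tower located?"
passages = [
    ("Eiffel Tower", "The Eiffel Tower is a wrought-iron lattice tower in Paris, France."),
    ("Paris", "Paris is the capital and most populous city of France."),
]

# FiD encodes the question with each passage independently; the decoder
# then attends over all encoded passages at once.
inputs = [
    f"question: {question_text} title: {t} context: {c}"
    for t, c in passages
]
encoded = tokenizer(
    inputs, padding=True, truncation=True, max_length=200, return_tensors="pt"
)

# unsqueeze adds the batch dimension: (batch=1, n_passages, seq_len).
outputs = model.generate(
    input_ids=encoded["input_ids"].unsqueeze(0),
    attention_mask=encoded["attention_mask"].unsqueeze(0),
    max_length=32,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```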