This model is a fine-tuned LLaMA (7B) model released under a non-commercial license (see the LICENSE file). You should only use this model after being granted access to the base LLaMA model by filling out Meta's request form.

This model is a semantic parser for Wikidata: it translates natural-language questions into SPARQL queries over the Wikidata knowledge graph. Refer to the following for more information:

GitHub repository: https://github.com/stanford-oval/wikidata-emnlp23

Paper: https://aclanthology.org/2023.emnlp-main.353/


This model is trained on the WikiWebQuestions dataset and the Stanford Alpaca dataset.
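As a sketch of how the model might be queried, the snippet below loads it with the Hugging Face transformers library and wraps a question in an Alpaca-style instruction prompt. The prompt template here is an assumption based on the Alpaca fine-tuning data; check the GitHub repository above for the exact format used in training.

```python
def build_prompt(question: str) -> str:
    """Wrap a natural-language question in an Alpaca-style instruction prompt.
    NOTE: this template is an assumption; see the project repository for the
    exact prompt format used during fine-tuning."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{question}\n\n### Response:"
    )


def parse_to_sparql(question: str, max_new_tokens: int = 256) -> str:
    """Generate a SPARQL parse for `question` (downloads ~13 GB of weights)."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "stanford-oval/llama-7b-wikiwebquestions"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    # Everything after the response marker is the model's output.
    return text.split("### Response:")[-1].strip()


if __name__ == "__main__":
    print(parse_to_sparql("Who wrote Hamlet?"))
```

Running the full pipeline requires a GPU with enough memory for the 7B model in F16 (roughly 14 GB).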

Model size: 6.74B parameters (Safetensors, F16).