---
license: other
language:
- en
---

This model is a fine-tuned LLaMA (7B) model. It is released under a non-commercial license (see the LICENSE file). You should only use this model after having been granted access to the base LLaMA model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform?usp=send_form).

This model is a semantic parser for Wikidata: it translates natural-language questions into SPARQL queries over the Wikidata knowledge base. Refer to the following for more information:

GitHub repository: https://github.com/stanford-oval/wikidata-emnlp23

Paper: https://aclanthology.org/2023.emnlp-main.353/

<p align="center">
    <img src="./images/Wikidata-logo-en.svg" width="100px" alt="Wikidata" />
    <h1 align="center">
        <b>WikiSP</b>
        <br>
        <a href="https://arxiv.org/abs/2305.14202">
            <img src="https://img.shields.io/badge/cs.CL-2305.14202-b31b1b" alt="arXiv">
        </a>
        <a href="https://github.com/stanford-oval/wikidata-emnlp23/stargazers">
            <img src="https://img.shields.io/github/stars/stanford-oval/wikidata-emnlp23?style=social" alt="Github Stars">
        </a>
    </h1>
</p>

This model is trained on the WikiWebQuestions dataset, the QALD-7 dataset, and the Stanford Alpaca dataset.
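
Below is a minimal usage sketch for loading the model with Hugging Face `transformers` and generating a SPARQL query from a natural-language question. The model identifier, prompt template, and generation settings are illustrative assumptions, not the documented interface; the exact prompt format used during fine-tuning is described in the GitHub repository above.

```python
# Minimal usage sketch (illustrative, not the official inference script).
# Assumes the model follows the standard LLaMA causal-LM interface in
# Hugging Face transformers. The prompt template below is a placeholder;
# see the GitHub repository for the format used during fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/this-repository"  # replace with this repo's id or a local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit a 7B model on a single GPU
    device_map="auto",
)

question = "Who is the current president of the United States?"
prompt = f"Question: {question}\nSPARQL:"  # placeholder prompt format

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens, i.e. the predicted SPARQL query.
sparql = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(sparql)
```

The generated query can then be executed against the public Wikidata Query Service endpoint (https://query.wikidata.org/sparql). Refer to the GitHub repository for the exact prompt construction and evaluation scripts.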