|
--- |
|
license: mit |
|
tags: |
|
- textual-entailment |
|
- logical-reasoning |
|
- deberta |
|
language: |
|
- en |
|
metrics: |
|
- accuracy |
|
pipeline_tag: text-classification |
|
--- |
|
|
|
# DELTA: Description Logics with Transformers |
|
|
|
DELTA<sub>M</sub> is a transformer model fine-tuned for textual entailment over expressive contexts generated from description logic knowledge bases.

The model is given a context (a set of facts and rules) and a question.

It should answer "True" if the question is logically implied by the context, "False" if it contradicts the context, and "Unknown" if neither holds.
|
|
|
For more information, please see [our paper](https://arxiv.org/abs/2311.08941).
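To illustrate this input/output format, here is a minimal inference sketch using the 🤗 Transformers library. The hub checkpoint id, the exact context/question encoding, and the label names are assumptions made for illustration; please check the repository linked below for the precise setup used during fine-tuning.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical checkpoint id; replace with the actual hub id of this model.
model_id = "angelosps/DELTA_M"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

context = (
    "Alice is a parent of Bob. "
    "If someone is a parent of a person, then they are an ancestor of that person."
)
question = "Alice is an ancestor of Bob."

# Encode context and question as a sentence pair (standard NLI-style input).
inputs = tokenizer(context, question, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
# The label mapping ("True" / "False" / "Unknown") is assumed to live in the config.
print(model.config.id2label[pred])
```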
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
DELTA<sub>M</sub> is a DeBERTaV3 large model fine-tuned on the DELTA<sub>D</sub> dataset. |
|
|
|
- **License:** MIT |
|
- **Finetuned from model:** `microsoft/deberta-v3-large` |
|
|
|
### Model Sources |
|
|
- **Repository:** https://github.com/angelosps/DELTA |
|
- **Paper:** [Transformers in the Service of Description Logic-based Contexts](https://arxiv.org/abs/2311.08941) |
|
|
|
|
|
|
## Citation |
|
|
|
|
|
|
**BibTeX:** |
|
```bibtex
@misc{poulis2024transformers,
      title={Transformers in the Service of Description Logic-based Contexts},
      author={Angelos Poulis and Eleni Tsalapati and Manolis Koubarakis},
      year={2024},
      eprint={2311.08941},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
|
|
|