---
license: cc
pipeline_tag: text-generation
---
|
|
|
# TimeLlama
|
|
|
TimeLlama is a series of instruction-finetuned Llama-2 models with improved complex temporal reasoning capabilities.
|
## Model Details

### Model Description

In this work, we introduce ExpTime, the first multi-source dataset for explainable temporal reasoning. The dataset contains 26k examples derived from temporal knowledge graph datasets. Each example includes a context with multiple events, a future event to predict, and an explanation for the prediction in the form of temporal reasoning over the events.
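For illustration, an ExpTime-style example might look like the sketch below. The field names and event texts are hypothetical, chosen only to mirror the structure described above, and are not the dataset's actual schema:

```python
# Hypothetical ExpTime-style example (field names and contents are
# illustrative only, not the dataset's actual schema).
example = {
    "context": [
        "2014-01: Country A imposed sanctions on Country B.",
        "2014-03: Country B recalled its ambassador from Country A.",
    ],
    "question": "Will Country B suspend trade talks with Country A?",
    "answer": "Yes",
    "explanation": (
        "The sanctions in January were followed by a diplomatic recall in "
        "March, an escalating pattern that makes a suspension of trade "
        "talks likely."
    ),
}
```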
|
|
|
To generate the dataset, we propose a novel knowledge-graph-instructed generation strategy. The dataset supports comprehensive evaluation of large language models on complex temporal reasoning, future event prediction, and explainability.
|
|
|
Based on ExpTime, we develop TimeLlama, a series of LLMs fine-tuned for explainable temporal reasoning. TimeLlama builds on the foundation model LLaMA-2 and uses instruction tuning to follow prompts for generating explanations.
|
|
|
### Model Sources

- **Repository:** https://github.com/chenhan97/TimeLlama
- **Paper:** https://arxiv.org/abs/2310.01074
|
|
|
## Uses

### Direct Use
|
```python
from transformers import LlamaTokenizer, LlamaForCausalLM

# Model names: "chrisyuan45/TimeLlama-7b-chat", "chrisyuan45/TimeLlama-13b-chat"
model_name = "chrisyuan45/TimeLlama-7b-chat"

model = LlamaForCausalLM.from_pretrained(
    model_name,
    return_dict=True,
    load_in_8bit=False,  # set True for 8-bit quantization (requires bitsandbytes)
    device_map="auto",
    low_cpu_mem_usage=True,
)
tokenizer = LlamaTokenizer.from_pretrained(model_name)
```
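
Once loaded, the model can be prompted like any other causal LM. The minimal generation sketch below is an assumption on our part: the prompt wording and decoding settings are illustrative, not the authors' recommended configuration.

```python
# Minimal generation sketch (prompt wording and decoding settings are
# illustrative, not the authors' recommended configuration).
prompt = (
    "Context: 2014-01: Country A imposed sanctions on Country B. "
    "2014-03: Country B recalled its ambassador from Country A.\n"
    "Question: Will Country B suspend trade talks with Country A? "
    "Explain your reasoning."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```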
|
|
|
### Finetuning

Please refer to our repository for the detailed finetuning procedure; a generic sketch follows below.
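
The sketch below shows a generic LoRA-based instruction-tuning loop of the kind commonly used for models like this. Everything here is a placeholder assumption: the base checkpoint, dataset path, data format, and all hyperparameters are illustrative, not the authors' actual recipe, which lives in the repository.

```python
# Generic instruction-tuning sketch with LoRA. All names and hyperparameters
# are placeholders; see the TimeLlama repository for the authors' recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)

base = "meta-llama/Llama-2-7b-chat-hf"  # placeholder base model
tokenizer = LlamaTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = LlamaForCausalLM.from_pretrained(base, device_map="auto")

# Attach low-rank adapters to the attention projections.
model = get_peft_model(
    model,
    LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
               task_type="CAUSAL_LM"),
)

# Placeholder: a JSON-lines file with a "text" field holding the full
# prompt plus explanation for each training example.
data = load_dataset("json", data_files="exptime_train.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="timellama-ft", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```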