---
license: mit
language:
- en
metrics:
- f1
- accuracy
base_model:
- google-t5/t5-base
library_name: transformers
---

# Computational Analysis of Communicative Acts for Understanding Crisis News Comment Discourses

This is one of the official trained models for **"Computational Analysis of Communicative Acts for Understanding Crisis News Comment Discourses"**.

The model is based on **T5-base** and uses the **Compacter** architecture ([Compacter: Efficient Low-Rank Hypercomplex Adapter Layers](https://arxiv.org/abs/2106.04647)). It has been fine-tuned on our **crisis narratives dataset**.
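
For readers unfamiliar with Compacter, the sketch below illustrates its core building block: a linear layer whose weight is assembled from a sum of Kronecker products (parameterized hypercomplex multiplication). This is an illustrative toy in PyTorch, not the paper's implementation; the full method additionally shares the small `A` matrices across layers and factorizes the larger blocks into low-rank terms.

```python
import torch
import torch.nn as nn


class PHMLinear(nn.Module):
    """Toy PHM layer: the weight matrix is a sum of n Kronecker products.

    Illustrative only -- Compacter further shares the small A_i across layers
    and expresses each B_i as a low-rank product to cut parameters.
    """

    def __init__(self, in_features: int, out_features: int, n: int = 4):
        super().__init__()
        assert in_features % n == 0 and out_features % n == 0
        self.n = n
        # n small "rule" matrices A_i of shape (n, n) ...
        self.A = nn.Parameter(torch.randn(n, n, n) * 0.02)
        # ... and n blocks B_i of shape (in_features/n, out_features/n).
        self.B = nn.Parameter(torch.randn(n, in_features // n, out_features // n) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W = sum_i kron(A_i, B_i) has shape (in_features, out_features).
        W = sum(torch.kron(self.A[i], self.B[i]) for i in range(self.n))
        return x @ W + self.bias


if __name__ == "__main__":
    layer = PHMLinear(768, 48)               # e.g. an adapter down-projection
    print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 48])
```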

---

### Model Information

- **Architecture:** T5-base with Compacter
- **Task:** Multi-label classification of communicative acts
- **Classes:**
  - `informing statement`
  - `challenge`
  - `rejection`
  - `appreciation`
  - `request`
  - `question`
  - `acceptance`
  - `apology`
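
Because the classifier is T5-based, predictions come back as generated text naming one or more of the classes above. The snippet below is a minimal post-processing sketch that assumes a comma-separated output string (the exact format is defined by the repository's inference code) and converts it into a multi-hot vector over the eight classes.

```python
# Minimal post-processing sketch. The comma-separated output format is an
# assumption for illustration; check the repository's inference code for the
# exact format the model is trained to emit.

CLASSES = [
    "informing statement", "challenge", "rejection", "appreciation",
    "request", "question", "acceptance", "apology",
]


def to_multi_hot(generated_text: str) -> list[int]:
    """Map e.g. 'question, request' to [0, 0, 0, 0, 1, 1, 0, 0]."""
    predicted = {label.strip().lower() for label in generated_text.split(",")}
    return [1 if c in predicted else 0 for c in CLASSES]


if __name__ == "__main__":
    print(to_multi_hot("question, request"))
```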

---

### How to Use the Model

To use this model, you will need the original code from our paper, available here:
[Acts in Crisis Narratives - GitHub Repository](https://github.com/Aalto-CRAI-CIS/Acts-in-crisis-narratives/tree/main/few_shot_learning/AdapterModel)

#### Steps to Load and Use the Fine-Tuned Model

1. Add your test task method to `seq2seq/data/task.py`, similar to the other task methods.
2. Modify `adapter_inference.sh` to include your test task's information and this model's name, then run it:

```bash
--model_name_or_path CrisisNarratives/adapter-8classes-multi_label
```

For detailed instructions, refer to the GitHub repository linked above.
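
If you want the checkpoint available locally before running the repository's scripts, or simply want to inspect its files, a common way to fetch it from the Hugging Face Hub is sketched below. This uses the standard `huggingface_hub` client and is not part of the paper's codebase; the returned path can be passed to `--model_name_or_path` in place of the repo id.

```python
# Optional: download the checkpoint from the Hugging Face Hub to a local folder.
# Generic huggingface_hub usage, not part of the paper's codebase.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="CrisisNarratives/adapter-8classes-multi_label")
print(local_dir)  # pass this path (or the repo id) to --model_name_or_path
```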

---

### Citation

If you use this model in your work, please cite:

#### TO BE ADDED.

### Questions or Feedback?

For questions or feedback, please reach out via [email](mailto:[email protected]).