---
license: mit
language:
- en
metrics:
- f1
- accuracy
base_model:
- google-t5/t5-base
library_name: transformers
---
# Computational Analysis of Communicative Acts for Understanding Crisis News Comment Discourses
One of the official trained models for **"Computational Analysis of Communicative Acts for Understanding Crisis News Comment Discourses"**.
This model is based on **T5-base** and uses **Compacter** adapters ([Compacter: Efficient Low-Rank Hypercomplex Adapter Layers](https://arxiv.org/abs/2106.04647)). It has been fine-tuned on our **crisis narratives dataset**.
---
### Model Information
- **Architecture:** T5-base with Compacter
- **Task:** Single-label classification of communicative act actions
- **Classes:**
- `informing statement`
- `challenge`
- `rejection`
- `appreciation`
- `request`
- `question`
- `acceptance`
- `apology`
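
Because the underlying model is T5, a text-to-text model, predictions are typically produced as generated label strings. The following minimal sketch (an illustrative assumption, not the repository's code) maps such a string to one of the eight classes above:

```python
# Illustrative sketch only: the label strings come from this model card;
# the mapping logic is an assumption, not code from the paper's repository.
LABELS = [
    "informing statement",
    "challenge",
    "rejection",
    "appreciation",
    "request",
    "question",
    "acceptance",
    "apology",
]

def to_class_id(generated_text: str) -> int:
    """Map a generated label string to its class index; return -1 if unrecognised."""
    text = generated_text.strip().lower()
    return LABELS.index(text) if text in LABELS else -1

# Example: to_class_id(" Question ") -> 5
```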
---
### How to Use the Model
To use this model, you will need the original code from our paper, available here:
[Acts in Crisis Narratives - GitHub Repository](https://github.com/Aalto-CRAI-CIS/Acts-in-crisis-narratives/tree/main/few_shot_learning/AdapterModel)
#### Steps to Load and Use the Fine-Tuned Model:
1. Add your test task method to `seq2seq/data/task.py`, following the pattern of the existing task methods.
2. Modify `adapter_inference.sh` to include your test task's information and this model's name, and then run it.
```bash
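# In adapter_inference.sh, set the model flag to this repository id: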
--model_name_or_path CrisisNarratives/adapter-8classes-single_label
```
For detailed instructions, refer to the GitHub repository linked above.
---
### Citation
If you use this model in your work, please cite:
##### TO BE ADDED.
### Questions or Feedback?
For questions or feedback, please reach out via our [contact form](mailto:[email protected]).