---
license: apache-2.0
datasets:
- kejian/ACL-ARC
language:
- en
metrics:
- f1
base_model:
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- scientometrics
- citation_analysis
- citation_intent_classification
pipeline_tag: zero-shot-classification
---

# Qwen2.5-14B-CIC-ACLARC

A fine-tuned model for Citation Intent Classification, based on [Qwen 2.5 14B Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct) and trained on the [ACL-ARC](https://huggingface.co/datasets/kejian/ACL-ARC) dataset.

[GGUF Version](https://huggingface.co/sknow-lab/Qwen2.5-14B-CIC-ACLARC-GGUF)

## ACL-ARC classes
| Class | Description |
| --- | --- |
| Background | The cited paper provides relevant background information or is part of the body of literature. |
| Motivation | The citing paper is directly motivated by the cited paper. |
| Uses | The citing paper uses the methodology or tools created by the cited paper. |
| Extension | The citing paper extends the methods, tools, or data of the cited paper. |
| Comparison or Contrast | The citing paper expresses similarities or differences to, or disagrees with, the cited paper. |
| Future | The cited paper may be a potential avenue for future work. |

## Quickstart
A minimal sketch using the standard `transformers` chat workflow is shown below. The repository id in the snippet is assumed, and the prompts are simple placeholders rather than the exact templates from our experiments.
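
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "sknow-lab/Qwen2.5-14B-CIC-ACLARC"  # assumed repo id

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Placeholder prompts: the exact system prompt and query template are in the paper.
system_prompt = "Classify the citation intent of the given citation sentence into one ACL-ARC class."
query = "Citation sentence: 'We follow the evaluation protocol of Smith et al. (2019).'"

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": query},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=32)
output_ids = generated_ids[0][model_inputs.input_ids.shape[1]:]
response = tokenizer.decode(output_ids, skip_special_tokens=True)
print(response)  # e.g. "Uses" -- may still need cleanup, see below
```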

Details about the system prompts and query templates can be found in the paper. 

You may need a cleanup function to extract the predicted label from the model output. You can find ours on [GitHub](https://github.com/athenarc/CitationIntentOpenLLM/blob/main/citation_intent_classification_experiments.py).
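
For illustration, a naive cleanup could scan the generated text for the first ACL-ARC class name it mentions. The function below is a hypothetical sketch, not the implementation from our repository:

```python
# Hypothetical sketch -- not the cleanup function from our repository.
ACLARC_CLASSES = [
    "Background",
    "Motivation",
    "Uses",
    "Extension",
    "Comparison or Contrast",
    "Future",
]

def extract_label(generated_text: str) -> str | None:
    """Return the first ACL-ARC class name mentioned in the output, or None."""
    lowered = generated_text.lower()
    for label in ACLARC_CLASSES:
        if label.lower() in lowered:
            return label  # naive substring match; adapt to your prompt format
    return None
```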

## Citation

```
@misc{koloveas2025llmspredictcitationintent,
      title={Can LLMs Predict Citation Intent? An Experimental Analysis of In-context Learning and Fine-tuning on Open LLMs}, 
      author={Paris Koloveas and Serafeim Chatzopoulos and Thanasis Vergoulis and Christos Tryfonopoulos},
      year={2025},
      eprint={2502.14561},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2502.14561}, 
}
```