---
dataset_info:
  features:
  - name: instanceID
    dtype: string
  - name: dataID1
    dtype: string
  - name: dataID2
    dtype: string
  - name: lemma
    dtype: string
  - name: context1
    dtype: string
  - name: context2
    dtype: string
  - name: indices_target_token1
    dtype: string
  - name: indices_target_sentence1
    dtype: string
  - name: indices_target_sentence2
    dtype: string
  - name: indices_target_token2
    dtype: string
  - name: dataIDs
    dtype: string
  - name: label_set
    dtype: string
  - name: non_label
    dtype: string
  - name: label
    dtype: float64
  - name: fold1
    dtype: string
  - name: fold2
    dtype: string
  - name: fold3
    dtype: string
  - name: fold4
    dtype: string
  - name: fold5
    dtype: string
  - name: fold6
    dtype: string
  - name: fold7
    dtype: string
  - name: fold8
    dtype: string
  - name: fold9
    dtype: string
  - name: fold10
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 2863071
    num_examples: 3823
  download_size: 783700
  dataset_size: 2863071
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
task_categories:
- text-classification
- sentence-similarity
language:
- en
tags:
- Topic Relatedness
- Semantic Relatedness
pretty_name: TRoTR
---


# TRoTR

This is the training dataset used in our work:
[TRoTR: A Framework for Evaluating the Re-contextualization of Text Reuse](https://aclanthology.org/2024.emnlp-main.774.pdf) by Francesco Periti, Pierluigi Cassotti, Stefano Montanelli, Nina Tahmasebi, and Dominik Schlechtweg.
See the paper for training details.
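
The dataset can be loaded with the Hugging Face `datasets` library. The following is a minimal sketch; the repository ID is an assumption, so replace it with the ID shown on this dataset's Hub page.

```python
from datasets import load_dataset

# Hypothetical repository ID; use the ID from this dataset's Hub page.
dataset = load_dataset("FrancescoPeriti/TRoTR")

# Each row pairs two contexts of the same reused text (`context1`,
# `context2`) with a graded topic-relatedness `label` and ten
# cross-validation fold assignments (`fold1` ... `fold10`).
example = dataset["train"][0]
print(example["lemma"], example["label"])
print(example["context1"])
print(example["context2"])
```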

The original human-annotated judgments are available in our project repository: [https://github.com/FrancescoPeriti/TRoTR](https://github.com/FrancescoPeriti/TRoTR).
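
For reference, below is a minimal fine-tuning sketch over the (`context1`, `context2`, `label`) pairs using the `sentence-transformers` library. This is not the authors' training setup (see the paper for that); the base model, loss, label scaling, and hyperparameters are placeholder assumptions.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

dataset = load_dataset("FrancescoPeriti/TRoTR")  # hypothetical repo ID

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder base model

# Build (context1, context2, label) training pairs. CosineSimilarityLoss
# expects labels in [0, 1]; rescale first if the raw judgments use a
# different range.
train_examples = [
    InputExample(texts=[row["context1"], row["context2"]],
                 label=float(row["label"]))
    for row in dataset["train"]
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.CosineSimilarityLoss(model)

# Placeholder hyperparameters; the paper describes the actual configuration.
model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=1,
          warmup_steps=100)
```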

## Citation

Francesco Periti, Pierluigi Cassotti, Stefano Montanelli, Nina Tahmasebi, and Dominik Schlechtweg. 2024. [TRoTR: A Framework for Evaluating the Re-contextualization of Text Reuse](https://aclanthology.org/2024.emnlp-main.774/). In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 13972–13990, Miami, Florida, USA. Association for Computational Linguistics.

**BibTeX:**
```bibtex
@inproceedings{periti2024trotr,
    title = {{TRoTR: A Framework for Evaluating the Re-contextualization of Text Reuse}},
    author = "Periti, Francesco  and Cassotti, Pierluigi  and Montanelli, Stefano  and Tahmasebi, Nina  and Schlechtweg, Dominik",
    editor = "Al-Onaizan, Yaser  and Bansal, Mohit  and Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.774",
    pages = "13972--13990",
    abstract = "Current approaches for detecting text reuse do not focus on recontextualization, i.e., how the new context(s) of a reused text differs from its original context(s). In this paper, we propose a novel framework called TRoTR that relies on the notion of topic relatedness for evaluating the diachronic change of context in which text is reused. TRoTR includes two NLP tasks: TRiC and TRaC. TRiC is designed to evaluate the topic relatedness between a pair of recontextualizations. TRaC is designed to evaluate the overall topic variation within a set of recontextualizations. We also provide a curated TRoTR benchmark of biblical text reuse, human-annotated with topic relatedness. The benchmark exhibits an inter-annotator agreement of .811. We evaluate multiple, established SBERT models on the TRoTR tasks and find that they exhibit greater sensitivity to textual similarity than topic relatedness. Our experiments show that fine-tuning these models can mitigate such a kind of sensitivity.",
}
```