tags:
- token classification
- named entity recognition
pipeline_tag: token-classification
widget:
- text: "The share note file feature is completely useless."
  example_title: "Example 1"
- text: "Great app I've tested a lot of free habit tracking apps and this is by far my favorite."
  example_title: "Example 2"
- text: "The only negative feedback I can give about this app is the difficulty level to set a sleep timer on it."
  example_title: "Example 3"
- text: "Does what you want with a small pocket size checklist reminder app"
  example_title: "Example 4"
- text: "Very bad because call recording notification send other person"
  example_title: "Example 5"
- text: "I originally downloaded the app for pomodoro timing, but I stayed for the project management features, with syncing."
  example_title: "Example 6"
- text: "It works accurate and I bought a portable one lap gps tracker it have a great battery Life"
  example_title: "Example 7"
- text: "I'm my phone the notifications of group message are not at a time please check what was the reason behind it because due to this default I loose some opportunity"
  example_title: "Example 8"
- text: "There is no setting for recurring alarms"
  example_title: "Example 9"
---

# T-FREX RoBERTa base model

T-FREX is a transformer-based feature extraction method for mobile app reviews, based on fine-tuning Large Language Models (LLMs) for a named entity recognition task. We collect a dataset of ground-truth features from users of a real crowdsourced software recommendation platform, and we use this dataset to fine-tune multiple LLMs under different data configurations. We assess the performance of T-FREX against this ground truth, and we complement our analysis by comparing T-FREX with a baseline method from the field. Finally, we assess the quality of new features predicted by T-FREX through an external human evaluation. Results show that, on average, T-FREX outperforms the traditional syntax-based method, especially when discovering new features in a domain for which the model has been fine-tuned.

Source code for data generation, fine-tuning and model inference is available in the original [GitHub repository](https://github.com/gessi-chatbots/t-frex/).

## Model description

This version of T-FREX has been fine-tuned for [token classification](https://huggingface.co/docs/transformers/tasks/token_classification#train) from the [XLNet large model](https://huggingface.co/xlnet-large-cased).
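
As a minimal sketch, the checkpoint can be loaded with the standard `transformers` auto classes. The model id below (`quim-motger/t-frex-roberta-base`, taken from the variant list further down) is an assumption for illustration; substitute the variant you actually want to use.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Model id assumed for illustration; swap in any T-FREX variant listed below.
model_id = "quim-motger/t-frex-roberta-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

# The token-level label set used for feature extraction is stored in the model config.
print(model.config.id2label)
```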

## Model variations

T-FREX includes a set of released fine-tuned models, which are compared in the original study (pre-print available at http://arxiv.org/abs/2401.03833):

- [**t-frex-bert-base-uncased**](https://huggingface.co/quim-motger/t-frex-bert-base-uncased)
- [**t-frex-bert-large-uncased**](https://huggingface.co/quim-motger/t-frex-bert-large-uncased)
- [**t-frex-roberta-base**](https://huggingface.co/quim-motger/t-frex-roberta-base)
- [**t-frex-roberta-large**](https://huggingface.co/quim-motger/t-frex-roberta-large)
- [**t-frex-xlnet-base-cased**](https://huggingface.co/quim-motger/t-frex-xlnet-base-cased)
- [**t-frex-xlnet-large-cased**](https://huggingface.co/quim-motger/t-frex-xlnet-large-cased)

## How to use

You can use this model by following the instructions for [model inference for token classification](https://huggingface.co/docs/transformers/tasks/token_classification#inference).
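
For example, a minimal sketch using the `token-classification` pipeline (the model id is again an assumption; pick the variant you need from the list above, and note that `aggregation_strategy` merges sub-word tokens back into whole entities):

```python
from transformers import pipeline

# Model id assumed for illustration; replace with the T-FREX checkpoint you use.
feature_extractor = pipeline(
    "token-classification",
    model="quim-motger/t-frex-roberta-base",
    aggregation_strategy="simple",  # merge sub-word tokens into whole entities
)

review = "The share note file feature is completely useless."
for entity in feature_extractor(review):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```

The printed entity groups correspond to the labels defined in the model's configuration (`model.config.id2label`), so the same snippet should work unchanged across the T-FREX variants.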