---
license: mit
language:
- en
base_model:
- MoritzLaurer/deberta-v3-large-zeroshot-v2.0
- mlburnham/deberta-v3-large-polistance-affect-v1.1
pipeline_tag: zero-shot-classification
library_name: transformers
tags:
- politics
- text-classification
---
# Model Card for groupappeals_classifier_positive
This model classifies the valence of rhetorical appeals by politicians to groups ("group appeals") in political speech.
## Model Details
### Model Description
This model adapts [Mike Burnham](https://huggingface.co/mlburnham/deberta-v3-large-polistance-affect-v1.1)'s zero-shot model for political stance detection, which is itself an adaptation of [Moritz Laurer](https://huggingface.co/MoritzLaurer/deberta-v3-large-zeroshot-v1.1-all-33)'s zero-shot model for classifying political texts. It is fine-tuned for the more specific task of classifying the valence of rhetorical appeals by politicians to groups ("group appeals") in political speech. The model takes in sentences formatted to mention both the sender/speaker and the mentioned group (i.e., the 'dyad') in the form: "Politician from {party} mentioning a group ({group}): '{text}'". It returns the probability that the speaker is making a positive appeal to the group.
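To illustrate the expected input format, here is a small hypothetical helper that fills in the template (the function name and example values are ours, not part of the model's API):

```python
# Hypothetical helper illustrating the dyad input format described above.
def format_dyad(party: str, group: str, text: str) -> str:
    return f"Politician from {party} mentioning a group ({group}): '{text}'"

# -> "Politician from Labour mentioning a group (pensioners): 'We will protect pensioners.'"
example = format_dyad("Labour", "pensioners", "We will protect pensioners.")
```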
- **Developed by:** Christoffer H. Dausgaard & Frederik Hjorth
- **Model type:** Fine-tuned DeBERTa model
- **License:** MIT
- **Finetuned from model:** [mlburnham/deberta-v3-large-polistance-affect-v1.1](https://huggingface.co/mlburnham/deberta-v3-large-polistance-affect-v1.1)
- **Paper:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## How to Get Started with the Model
<!-- Use the code below to get started with the model. -->
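The snippet below is a minimal sketch using the `transformers` zero-shot classification pipeline. The repo id is a placeholder (this card does not state the namespace), and the candidate labels and hypothesis template are illustrative assumptions rather than documented settings; consult the published repository for the recommended usage.

```python
from transformers import pipeline

# Placeholder repo id: replace <namespace> with the actual account name.
MODEL_ID = "<namespace>/groupappeals_classifier_positive"

classifier = pipeline("zero-shot-classification", model=MODEL_ID)

# Input follows the dyad format described in the model description.
sentence = (
    "Politician from Labour mentioning a group (pensioners): "
    "'We will always protect the incomes of pensioners.'"
)

# Assumed binary label set; the score for "positive" approximates the
# probability that the speaker is making a positive appeal to the group.
result = classifier(
    sentence,
    candidate_labels=["positive", "negative"],
    hypothesis_template="The speaker is making a {} appeal to the group.",
)
print(result["labels"][0], round(result["scores"][0], 3))
```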
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained using a subset of the ParlSpeech v2 dataset that covers the universe of parliamentary speeches in the UK House of Commons from 1988-2019.
The subset consists of 2,534 sentences manually coded by the authors.
The sentences were randomly sampled within party- and group-strata, with oversampling of negative sentences.
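As an illustration of this sampling scheme (not the authors' actual code; the data frame and the column names `party`, `group`, and `label` are hypothetical), stratified sampling with oversampling of negatives might look like:

```python
import pandas as pd

def sample_within_strata(df: pd.DataFrame, n_per_stratum: int,
                         neg_boost: float = 3.0, seed: int = 42) -> pd.DataFrame:
    """Sample sentences within party/group strata, upweighting negatives."""
    # Give negative sentences a higher sampling weight so they are oversampled.
    weights = df["label"].map({"negative": neg_boost}).fillna(1.0)
    return (
        df.groupby(["party", "group"], group_keys=False)
          .apply(lambda s: s.sample(
              n=min(n_per_stratum, len(s)),
              weights=weights.loc[s.index],
              random_state=seed,
          ))
    )
```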
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**

[More Information Needed]