---
base_model:
- FacebookAI/roberta-base
---

# Collective Action Participation Levels Detection Model - RoBERTa

**Note: this is the second step of a layered approach; see [this model](https://huggingface.co/ariannap22/collectiveaction_roberta_simplified_synthetic_weights) for the first step.**

This model detects expressions of levels of participation in collective action in text. It is meant to be used in two stages: first, detect the binary presence of a participation expression with the [first-step model](https://huggingface.co/ariannap22/collectiveaction_roberta_simplified_synthetic_weights); second, for the messages that do express participation, detect the participation level with this model (a sketch chaining both steps is included at the end of this card).

For details on the framework and useful code snippets, see the paper "Extracting Participation in Collective Action from Social Media", Pera and Aiello (2025).

The model predicts one of four participation levels:

- A **predicted value of 0** indicates expressed *Problem-solution*.
- A **predicted value of 1** indicates expressed *Call-to-action*.
- A **predicted value of 2** indicates expressed *Intention*.
- A **predicted value of 3** indicates expressed *Execution*.

## Usage Example

To use the model, follow the example below:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Set device to CPU or GPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load model and tokenizer
model_name = "ariannap22/collectiveaction_roberta_synthetic_weights_layered"
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Define the texts you want to classify
texts = [
    "We need to stand together for our rights!",
    "I volunteer at the local food bank."
]

# Tokenize the input texts
inputs = tokenizer(
    texts,
    padding=True,        # Pad to the longest sequence in the batch
    truncation=True,     # Truncate sequences longer than max_length
    max_length=512,      # Adjust max length as needed
    return_tensors="pt"  # Return PyTorch tensors
).to(device)

# Perform prediction
with torch.no_grad():
    outputs = model(**inputs)
    logits = outputs.logits  # Raw model outputs before softmax

# Convert logits to probabilities (optional)
probs = torch.nn.functional.softmax(logits, dim=-1)

# Get predicted class indices
predicted_class_indices = torch.argmax(probs, dim=-1)

# Print results
for text, idx, prob in zip(texts, predicted_class_indices, probs):
    print(f"Text: {text}")
    print(f"Predicted Class Index: {idx.item()}")
    print(f"Probabilities: {prob.tolist()}")
    print("---")
```
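For readability, the predicted indices can be mapped back to the level names listed above. The snippet below continues from the usage example (it reuses `texts` and `predicted_class_indices`) and builds that mapping as a plain dictionary taken from the class descriptions on this card.

```python
# Map predicted class indices to the participation levels described above
id2label = {
    0: "Problem-solution",
    1: "Call-to-action",
    2: "Intention",
    3: "Execution",
}

# Print the level name for each input text
for text, idx in zip(texts, predicted_class_indices):
    print(f"{text} -> {id2label[idx.item()]}")
```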
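Because this model is the second step of a layered approach, a complete pipeline first filters messages with the binary first-step model and then classifies the participation level of the positives. The sketch below chains the two models; it assumes the binary model marks participation with class index 1, so check the first-step model card to confirm its label convention.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Step 1: binary detection of participation expressions
binary_name = "ariannap22/collectiveaction_roberta_simplified_synthetic_weights"
binary_model = AutoModelForSequenceClassification.from_pretrained(binary_name).to(device)
binary_tokenizer = AutoTokenizer.from_pretrained(binary_name)

# Step 2: participation-level classification (this model)
level_name = "ariannap22/collectiveaction_roberta_synthetic_weights_layered"
level_model = AutoModelForSequenceClassification.from_pretrained(level_name).to(device)
level_tokenizer = AutoTokenizer.from_pretrained(level_name)

texts = [
    "We need to stand together for our rights!",
    "The weather is nice today.",
]

def predict(model, tokenizer, batch):
    """Return the predicted class index for each text in the batch."""
    inputs = tokenizer(
        batch,
        padding=True,
        truncation=True,
        max_length=512,
        return_tensors="pt",
    ).to(device)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.argmax(logits, dim=-1).tolist()

# Keep only texts the binary model flags as expressing participation
# (assumes class 1 = participation; verify against the first-step model card)
flags = predict(binary_model, binary_tokenizer, texts)
participation_texts = [t for t, f in zip(texts, flags) if f == 1]

# Classify the participation level of the remaining texts
if participation_texts:
    levels = predict(level_model, level_tokenizer, participation_texts)
    for text, level in zip(participation_texts, levels):
        print(f"{text} -> level {level}")
```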