---
base_model: unsloth/llama-3-8b-bnb-4bit
tags:
- llama.cpp
- gguf
- quantized
- q4_k_m
- text-classification
license: apache-2.0
language:
- en
widget:
- text: "On the morning of June 15th, armed individuals forced their way into a local bank in Mexico City. They held bank employees and customers at gunpoint for several hours while demanding access to the vault. The perpetrators escaped with an undisclosed amount of money after a prolonged standoff with local authorities."
  example_title: "Armed Assault Example"
  output:
    - label: "Armed Assault | Hostage Taking"
      score: 0.9
- text: "A massive explosion occurred outside a government building in Baghdad. The blast, caused by a car bomb, killed 12 people and injured over 30 others. The explosion caused significant damage to the building's facade and surrounding structures."
  example_title: "Bombing Example"
  output:
    - label: "Bombing/Explosion"
      score: 0.95
pipeline_tag: text-classification
inference:
  parameters:
    temperature: 0.7
    max_new_tokens: 128
    do_sample: true
---


# ConflLlama: GTD-Finetuned LLaMA-3 8B
- **Model Type:** GGUF quantized (q4_k_m and q8_0)
- **Base Model:** unsloth/llama-3-8b-bnb-4bit
- **Quantization Details:**
  - Methods: q4_k_m and q8_0
  - q4_k_m uses Q6_K for half of attention.wv and feed_forward.w2 tensors
  - Optimized for speed and size (q4_k_m) and for output quality (q8_0)
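
A minimal inference sketch with `llama-cpp-python` is shown below. The GGUF filename is an assumption; substitute the actual q4_k_m or q8_0 file from this repository.

```python
# Minimal inference sketch using llama-cpp-python (pip install llama-cpp-python).
# NOTE: the model filename below is an assumption -- use the actual GGUF file
# shipped with this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="confllama-q4_k_m.gguf",  # hypothetical filename
    n_ctx=1024,                          # matches the training max sequence length
)

# The prompt follows the training chat template (see Data Processing below).
prompt = (
    "Below describes details about terrorist events.\n"
    ">>> Event Details:\n"
    "A car bomb detonated outside a government building, killing 12 people.\n"
    ">>> Attack Types:\n"
)

out = llm(prompt, max_tokens=32, temperature=0.0, stop=[">>>"])
print(out["choices"][0]["text"].strip())  # e.g. "Bombing/Explosion"
```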

### Training Data
- **Dataset:** Global Terrorism Database (GTD)
- **Time Period:** Events before January 1, 2017
- **Format:** Event summaries with associated attack types
- **Labels:** Attack type classifications from GTD

### Data Processing
1. **Date Filtering:**
   - Filtered events occurring before 2017-01-01
   - Handled missing dates by setting default month/day to 1
2. **Data Cleaning:**
   - Removed entries with missing summaries
   - Cleaned summary text by removing special characters and formatting
3. **Attack Type Processing:**
   - Combined multiple attack types with separator '|'
   - Included primary, secondary, and tertiary attack types when available
4. **Training Format:**
   - Input: Processed event summaries
   - Output: Combined attack types
   - Used chat template:
     ```
     Below describes details about terrorist events.
     >>> Event Details:
     {summary}
     >>> Attack Types:
     {combined_attacks}
     ```
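
A condensed sketch of the steps above, assuming the standard GTD column names (`iyear`, `imonth`, `iday`, `summary`, `attacktype1_txt`–`attacktype3_txt`); the exact cleaning rules in the released pipeline may differ:

```python
# Sketch of the preprocessing described above; the exact cleaning regex
# used for the released model may differ.
import pandas as pd

def preprocess_gtd(df: pd.DataFrame) -> pd.DataFrame:
    # 1. Date filtering: GTD encodes unknown month/day as 0; default them to 1,
    #    then keep only events before 2017-01-01.
    df["imonth"] = df["imonth"].replace(0, 1)
    df["iday"] = df["iday"].replace(0, 1)
    dates = pd.to_datetime(
        dict(year=df["iyear"], month=df["imonth"], day=df["iday"]), errors="coerce"
    )
    df = df[dates < "2017-01-01"]

    # 2. Data cleaning: drop missing summaries, strip special characters.
    df = df.dropna(subset=["summary"]).copy()
    df["summary"] = (
        df["summary"].str.replace(r"[^\w\s.,;:!?'-]", " ", regex=True).str.strip()
    )

    # 3. Attack type processing: join up to three attack types with '|'.
    attack_cols = ["attacktype1_txt", "attacktype2_txt", "attacktype3_txt"]
    df["combined_attacks"] = df[attack_cols].apply(
        lambda row: "|".join(v for v in row if isinstance(v, str) and v), axis=1
    )

    # 4. Training format: render the chat template shown above.
    df["text"] = (
        "Below describes details about terrorist events.\n"
        ">>> Event Details:\n" + df["summary"] + "\n"
        ">>> Attack Types:\n" + df["combined_attacks"]
    )
    return df
```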

### Training Details
- **Method:** QLoRA fine-tuning via the Unsloth framework
- **Hardware:** NVIDIA A100-SXM4-40GB GPU on Delta Supercomputer
- **Training Configuration:**
  - Batch Size: 1 per device
  - Gradient Accumulation Steps: 8
  - Learning Rate: 2e-4
  - Max Steps: 1000
  - Save Steps: 200
  - Logging Steps: 10
- **LoRA Configuration:**
  - Rank: 8
  - Target Modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  - Alpha: 16
  - Dropout: 0
- **Optimizations:**
  - Gradient Checkpointing: Enabled
  - 4-bit Quantization: Enabled
  - Max Sequence Length: 1024
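
The configuration above maps onto the Unsloth and TRL APIs roughly as follows. This is a reconstruction from the listed hyperparameters (reusing the `preprocess_gtd` sketch from Data Processing), not the exact training script; API details vary across library versions.

```python
# Reconstruction of the training setup from the hyperparameters above;
# not the exact script used to produce the released checkpoints.
import pandas as pd
from datasets import Dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=1024,
    load_in_4bit=True,                      # 4-bit base weights (QLoRA)
)

model = FastLanguageModel.get_peft_model(
    model,
    r=8,                                    # LoRA rank
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    use_gradient_checkpointing=True,
)

# Training set built with the preprocessing sketch above (hypothetical path).
dataset = Dataset.from_pandas(preprocess_gtd(pd.read_csv("globalterrorismdb.csv")))

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=1024,
    args=TrainingArguments(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        learning_rate=2e-4,
        max_steps=1000,
        save_steps=200,
        logging_steps=10,
        dataloader_pin_memory=False,        # see Memory Optimizations below
        output_dir="outputs",
    ),
)
trainer.train()
```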

## Model Architecture
The model uses a combination of efficient fine-tuning techniques and optimizations for handling conflict event classification:

<p align="center">
  <img src="images/model-arch.png" alt="Model Training Architecture" width="800"/>
</p>

### Data Processing Pipeline
The preprocessing pipeline transforms raw GTD data into a format suitable for fine-tuning:

<p align="center">
  <img src="images/preprocessing.png" alt="Data Preprocessing Pipeline" width="800"/>
</p>

### Memory Optimizations
- 4-bit quantization of the base model
- Gradient accumulation (8 steps)
- Memory-efficient gradient checkpointing
- Maximum sequence length capped at 1024 tokens
- Dataloader pin memory disabled

## Intended Use
This model is designed for:
1. Classification of terrorist events based on event descriptions
2. Research in conflict studies and terrorism analysis
3. Understanding attack type patterns in historical events
4. Academic research in security studies

## Limitations
1. Training data limited to pre-2017 events
2. Maximum sequence length limited to 1024 tokens
3. May not capture recent changes in attack patterns
4. Performance dependent on quality of event descriptions

## Ethical Considerations
1. Model trained on sensitive terrorism-related data
2. Should be used responsibly for research purposes only
3. Not intended for operational security decisions
4. Results should be interpreted with appropriate context

## Citation
```bibtex
@misc{conflllama,
  author = {Meher, Shreyas},
  title = {ConflLlama: GTD-Finetuned LLaMA-3 8B},
  year = {2024},
  publisher = {HuggingFace},
  note = {Based on Meta's LLaMA-3 8B and GTD Dataset}
}
```

## Acknowledgments
- Unsloth for optimization framework and base model
- Hugging Face for transformers infrastructure
- Global Terrorism Database team
- BBOV project support
- This research was supported by NSF award 2311142
- This work used Delta at NCSA / University of Illinois through allocation CIS220162 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by NSF grants 2138259, 2138286, 2138307, 2137603, and 2138296


<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>