|
|
|
--- |
|
library_name: transformers |
|
tags: |
|
- unsloth |
|
--- |
|
|
|
# Model Card for `bayrameker/threat_detection_lora` |
|
|
|
This LoRA fine-tuned model is designed to identify and generate text about defense and security threats. It was trained on a dataset containing examples of different threat categories (e.g., cyber warfare, espionage, disinformation) drawn from defense-industry news and statements.
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
- **Developed by:** [Bayram Eker (bayrameker)](https://huggingface.co/bayrameker) |
|
- **Finetuned from model:** [unsloth/Phi-4](https://huggingface.co/unsloth/Phi-4) |
|
- **Model type:** LoRA-based Causal Language Model (decoder-only architecture) |
|
- **Language(s) (NLP):** Primarily Turkish, with occasional English content inherited from the dataset
|
- **License:** *Currently unspecified* (the base model’s license terms may apply) |
|
- **Shared by:** [Bayram Eker (bayrameker)](https://huggingface.co/bayrameker) |
|
|
|
This model was LoRA fine-tuned with [Unsloth](https://github.com/unslothai/unsloth) on a curated dataset dealing with defense-related threats, focusing on threat type detection and short descriptive outputs. |
|
|
|
### Model Sources |
|
|
|
- **Repository (Hub):** [bayrameker/threat_detection_lora](https://huggingface.co/bayrameker/threat_detection_lora) |
|
- **Paper:** *No dedicated paper at this time*

- **Demo:** *No public demo at this time*
|
|
|
## Uses |
|
|
|
This LoRA model can be used in text generation or chat-like scenarios where the user asks about potential threats in a defense/security context. The model is capable of producing threat categories (e.g., espionage, cyber-attack, disinformation) and short descriptions. |
|
|
|
### Direct Use |
|
|
|
- **Chatbot / QA assistant** for defense-related threat descriptions. |
|
- **Text generation** around security/defense news, or summarizing threats. |
|
|
|
### Downstream Use |
|
|
|
- **Threat classification** or **risk analysis** tools, where the model’s generated categories are used as a starting point for further classification pipelines. |
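As a minimal sketch of such a downstream pipeline, the snippet below maps the model's free-text output onto a fixed threat taxonomy via keyword matching. The category names, keyword lists, and the `normalize_category` helper are illustrative assumptions, not labels shipped with this model.

```python
# Illustrative only: map free-text model output onto a fixed threat taxonomy
# via simple keyword matching. Categories and keywords are hypothetical
# examples, not part of this repository.
THREAT_KEYWORDS = {
    "cyber warfare": ["cyber", "siber", "malware", "ransomware"],
    "espionage": ["espionage", "casusluk", "spy", "intelligence leak"],
    "disinformation": ["disinformation", "dezenformasyon", "propaganda"],
}

def normalize_category(generated_text: str) -> str:
    """Return the first taxonomy category whose keywords appear in the text."""
    lowered = generated_text.lower()
    for category, keywords in THREAT_KEYWORDS.items():
        if any(kw in lowered for kw in keywords):
            return category
    return "other"

# "Raporda siber saldırı ... riski vurgulanıyor" =
# "The report highlights a risk of cyber attack ..."
print(normalize_category("Raporda siber saldırı ve ransomware riski vurgulanıyor."))
# → cyber warfare
```

In practice the matched category could then feed a conventional classifier or risk-scoring step, with the raw generated description kept for human review.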
|
|
|
### Out-of-Scope Use |
|
|
|
- Detailed, real-time intelligence or geostrategic analytics (the model does not guarantee factual correctness or current data). |
|
- Legal, financial, or medical advice. |
|
- Any domain requiring certified, high-stakes decision-making where incorrect predictions could cause harm. |
|
|
|
## Bias, Risks, and Limitations |
|
|
|
This model was fine-tuned on a relatively specialized dataset focusing on defense-related threats. It may exhibit the following limitations: |
|
|
|
- **Hallucination**: The model may invent or exaggerate threat types not present in the data. |
|
- **Cultural / Geographic Bias**: The training data may be more skewed towards certain regions or conflicts. |
|
- **Incomplete or Outdated Info**: The model’s knowledge cutoff depends on the base model and fine-tuning data; it may not reflect the latest developments in defense technology or geopolitics. |
|
|
|
### Recommendations |
|
|
|
- Do not rely solely on model outputs for critical defense or security-related decisions. |
|
- Cross-verify the model’s threat descriptions with domain experts. |
|
- Be mindful of potential misinterpretations when using the model’s outputs in real-world settings. |
|
|
|
## How to Get Started with the Model |
|
|
|
Below is a sample code snippet to load and run inference: |
|
|
|
```python
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template

# Load the LoRA fine-tuned model from the Hugging Face Hub
model_name = "bayrameker/threat_detection_lora"

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_name,
    device_map="auto",
)

# Apply the Phi-4 chat template expected by the base model
tokenizer = get_chat_template(
    tokenizer,
    chat_template="phi-4",
)

# Switch Unsloth into its faster inference mode
FastLanguageModel.for_inference(model)

messages = [
    # "Rusya ile ilgili tehditler" = "Threats related to Russia"
    {"role": "user", "content": "Rusya ile ilgili tehditler"}
]

inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids=inputs,
    max_new_tokens=256,
    temperature=0.8,
    min_p=0.2,
    use_cache=True,
)

generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=False)[0]
print(generated_text)
```
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
- The dataset used is from [**bayrameker/threat-detection**](https://huggingface.co/datasets/bayrameker/threat-detection), which contains defense-related short texts (e.g., new weapon systems, geopolitical statements) paired with their potential threats (cyber warfare, espionage, etc.). |
|
- The data is primarily in Turkish, with possible bilingual or English content in some entries. |
|
|
|
### Training Procedure |
|
|
|
- **LoRA Fine-Tuning Framework**: [Unsloth](https://github.com/unslothai/unsloth) |
|
- **Base Model**: [unsloth/Phi-4](https://huggingface.co/unsloth/Phi-4) |
|
- **Hyperparameters**: |
|
- LoRA rank (`r`): 16 |
|
- LoRA `lora_alpha`: 16 |
|
- `lora_dropout`: 0 |
|
- Mixed-precision: typically bf16 or fp16 (depending on GPU) |
|
- Learning Rate (LR): ~2e-4 |
|
- Batch Size / Gradient Accum Steps: Varied based on GPU memory |
|
- Steps/Epochs: Adjusted for the dataset size |
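To give a sense of what the `r = 16` setting costs, here is a back-of-the-envelope adapter parameter count for a single square attention projection. The hidden size of 5120 is an assumption about the Phi-4 base model, not a figure reported in this card.

```python
# LoRA adds two low-rank matrices per adapted linear layer:
# A (r x in_features) and B (out_features x r), so the adapter
# contributes r * (in_features + out_features) trainable parameters.
def lora_params(in_features: int, out_features: int, r: int) -> int:
    return r * (in_features + out_features)

# Assumed dimensions for one square attention projection (e.g. q_proj)
# at Phi-4's hidden size of 5120; treat both numbers as illustrative.
hidden = 5120
r = 16
per_layer = lora_params(hidden, hidden, r)
print(per_layer)  # → 163840 parameters per adapted projection
```

Multiplied across all targeted projections and layers, this is still a tiny fraction of the base model's parameters, which is what makes LoRA fine-tuning feasible on a single GPU.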
|
|
|
#### Speeds, Sizes, Times
|
- Dependent on GPU hardware (e.g., NVIDIA A100 or similar). |
|
- No explicit throughput or wall-clock times reported. |
|
|
|
## Evaluation |
|
|
|
### Testing Data, Factors & Metrics |
|
|
|
- **Testing Data**: The same or a subset of [bayrameker/threat-detection](https://huggingface.co/datasets/bayrameker/threat-detection) can be used for evaluation. |
|
- **Factors**: The content includes different security contexts, focusing on “threat_type” variety. |
|
- **Metrics**: Primarily manual or qualitative evaluation (threat categories are short text). A formal metric (accuracy/F1) could be used if the data had clear gold-standard labels. |
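If gold-standard threat labels were available, accuracy and macro-F1 could be computed along the lines below. The label lists are made-up placeholders for illustration, not evaluation data from this project.

```python
# Toy accuracy / macro-F1 computation over categorical threat labels.
def accuracy(gold, pred):
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    labels = set(gold) | set(pred)
    f1s = []
    for label in labels:
        tp = sum(g == label and p == label for g, p in zip(gold, pred))
        fp = sum(g != label and p == label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical gold labels vs. normalized model outputs
gold = ["espionage", "cyber warfare", "disinformation", "cyber warfare"]
pred = ["espionage", "cyber warfare", "cyber warfare", "cyber warfare"]
print(accuracy(gold, pred), round(macro_f1(gold, pred), 3))  # → 0.75 0.6
```

Macro-averaging weights every threat category equally, which matters here because rarer categories (e.g., espionage) would otherwise be swamped by more frequent ones.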
|
|
|
### Results |
|
|
|
Qualitative evaluation shows the model can produce short paragraphs describing potential threats related to a user’s prompt (e.g., “Rusya ile ilgili tehditler”, “threats related to Russia”). Exact numeric scores are not reported.
|
|
|
## Model Examination
|
|
|
No specific interpretability tools were used or documented. |
|
|
|
## Environmental Impact |
|
|
|
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

Exact figures are not provided.
|
|
|
## Technical Specifications
|
|
|
### Model Architecture and Objective |
|
|
|
- A LoRA adaptation on a GPT-style language model (decoder-only). |
|
- Objective: next-token prediction via supervised fine-tuning (SFT), guided by conversation templates.
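The next-token objective amounts to cross-entropy between the model's predicted distribution and the actual next token. A toy illustration with a three-token vocabulary (the logit values are arbitrary):

```python
import math

def next_token_loss(logits, target_index):
    """Cross-entropy of the true next token under softmax(logits)."""
    max_logit = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - max_logit) for l in logits]
    prob_target = exps[target_index] / sum(exps)
    return -math.log(prob_target)

# Toy vocabulary of 3 tokens; the model assigns the highest logit to token 2.
logits = [1.0, 0.5, 2.0]
loss_correct = next_token_loss(logits, target_index=2)  # low loss
loss_wrong = next_token_loss(logits, target_index=1)    # higher loss
print(round(loss_correct, 3), round(loss_wrong, 3))  # → 0.464 1.964
```

During SFT, this loss is averaged over the response tokens of each templated conversation, so the model learns to continue the chat format it is prompted with.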
|
|
|
### Compute Infrastructure |
|
|
|
- **Hardware**: GPU (e.g., NVIDIA A100, or similar). |
|
- **Software**: PyTorch, transformers, accelerate, [Unsloth library](https://github.com/unslothai/unsloth). |
|
|
|
## Citation
|
|
|
If you use or modify this model, please credit the base model (Phi-4 by Unsloth) and the fine-tuning repository. |
|
|
|
```bibtex |
|
@misc{bayramekerThreatDetectionLoRA,
  author       = {Eker, Bayram},
  title        = {{Threat Detection LoRA}},
  howpublished = {\url{https://huggingface.co/bayrameker/threat_detection_lora}},
  year         = {2024}
}
|
``` |
|
|
|
## Model Card Authors |
|
|
|
- [Bayram Eker (bayrameker)](https://huggingface.co/bayrameker) |
|
|
|
|