Deepfake Explanation Model based on Llama 3.2
This model is fine-tuned to provide technical and non-technical explanations of deepfake detection results. It analyzes detection metrics, activation regions, and image features to explain why an image was classified as real or fake.
Model Details
- Base model: Llama 3.2 3B Instruct
- Training method: LoRA fine-tuning with Unsloth (a configuration sketch follows this list)
- Training data: Custom dataset of deepfake detection results with expert explanations
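
A minimal sketch of how a LoRA adapter is typically attached with Unsloth for this kind of fine-tune; the base checkpoint name, rank, alpha, dropout, and target modules below are illustrative assumptions, not the exact values used to train this model:

from unsloth import FastLanguageModel

# Load the base model in 4-bit to reduce memory usage
base_model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-3B-Instruct",  # base checkpoint name is an assumption
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projection layers
# (hyperparameters here are placeholders, not the values used for this model)
peft_model = FastLanguageModel.get_peft_model(
    base_model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    bias="none",
    use_gradient_checkpointing="unsloth",
)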
Use Cases
This model can be used to:
- Generate expert-level technical explanations of deepfake detection results
- Provide simplified, accessible explanations for non-technical audiences
- Analyze activation regions in images to explain detection decisions (see the prompt-formatting sketch after this list)
- Support educational content about deepfake detection
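
For programmatic use, detection outputs can be assembled into the structured prompt this model expects (the same layout shown in the usage example below). This is a minimal sketch; the build_prompt helper and its parameter names are hypothetical, not part of any released API:

def build_prompt(verdict, confidence, high_regions, medium_regions, low_regions,
                 frequency_score, image_description, heatmap_description):
    """Format detection outputs into the prompt layout used in the usage example (illustrative)."""
    return f"""Analyze this deepfake detection result and provide both a technical expert explanation and a simple non-technical explanation.
Below is a deepfake detection result with explanation metrics. Provide both a technical and accessible explanation of why this image is classified as it is.
### Detection Results:
Verdict: {verdict}
Confidence: {confidence:.2f}
### Analysis Metrics:
High Activation Regions: {', '.join(high_regions)}
Medium Activation Regions: {', '.join(medium_regions)}
Low Activation Regions: {', '.join(low_regions)}
Frequency Analysis Score: {frequency_score:.2f}
### Image Description:
{image_description}
### Heatmap Description:
{heatmap_description}"""

The returned string can then be passed as the user message, exactly as in the usage example below.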
Usage Example
from unsloth import FastLanguageModel
from transformers import TextStreamer
import torch

# Load the fine-tuned model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="saakshigupta/deepfake-explainer-llama32",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Enable Unsloth's optimized inference mode
FastLanguageModel.for_inference(model)
# Example prompt
prompt = """Analyze this deepfake detection result and provide both a technical expert explanation and a simple non-technical explanation.
Below is a deepfake detection result with explanation metrics. Provide both a technical and accessible explanation of why this image is classified as it is.
### Detection Results:
Verdict: Deepfake
Confidence: 0.87
### Analysis Metrics:
High Activation Regions: lips, nose
Medium Activation Regions: eyes, chin
Low Activation Regions: forehead, background
Frequency Analysis Score: 0.79
### Image Description:
A man with glasses and short hair looking directly at the camera.
### Heatmap Description:
The heatmap shows intense red coloration around the lips and nose area, suggesting these regions contributed most to the detection verdict."""
# Format the prompt as a chat message
messages = [
    {"role": "user", "content": prompt},
]

# Apply the chat template and move the inputs to the available device
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to("cuda" if torch.cuda.is_available() else "cpu")
# Generate the response, streaming tokens to stdout as they are produced
text_streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    input_ids=inputs,
    streamer=text_streamer,
    max_new_tokens=800,
    use_cache=True,
    temperature=0.7,
    do_sample=True,
)
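
If you prefer to capture the explanation as a string rather than streaming it to stdout (for example, to post-process or log it), you can generate without a streamer and decode only the newly generated tokens. This is a standard Transformers pattern rather than anything specific to this model:

# Generate without a streamer and decode only the newly generated tokens
output_ids = model.generate(
    input_ids=inputs,
    max_new_tokens=800,
    use_cache=True,
    temperature=0.7,
    do_sample=True,
)
explanation = tokenizer.decode(output_ids[0, inputs.shape[1]:], skip_special_tokens=True)
print(explanation)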