FFAA Model Card

Model details

Model type: The Face Forgery Analysis Assistant (FFAA) is a Multi-modal Large Language Model (MLLM) dedicated to face forgery analysis. It consists of a fine-tuned MLLM and a Multi-answer Intelligent Decision System (MIDS). Base MLLM: liuhaotian/llava-v1.6-mistral-7b

Paper or resources for more information: https://ffaa-vl.github.io/

Where to send questions or comments about the model: https://github.com/thu-huangzc/FFAA/issues
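For quick reference, below is a minimal inference sketch for the fine-tuned MLLM only (MIDS is not included). It is a sketch under assumptions, not a confirmed usage recipe: it assumes the weights are published in the Hugging Face LLaVA-NeXT format under the repo id thu-huangzc/ffaa-mistral-7b and load with the standard transformers classes (the original LLaVA codebase may be required instead), and that the prompt follows the Mistral [INST] template used by the base model.

```python
# Minimal inference sketch (assumptions: HF LLaVA-NeXT-format weights, repo id
# thu-huangzc/ffaa-mistral-7b, Mistral [INST] prompt template; MIDS not covered).
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "thu-huangzc/ffaa-mistral-7b"  # repo id assumed from this model card

processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("face.jpg")  # hypothetical input face image
prompt = "[INST] <image>\nIs this face real or forged? Explain your reasoning. [/INST]"

inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))
```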

Intended use

Primary intended uses: The primary use of FFAA is research on applying MLLMs to face forgery analysis, which is essential for understanding the model's decision-making process and for advancing real-world forgery detection.

Primary intended users: The primary intended users of the model are researchers and hobbyists in computer vision, natural language processing, machine learning, and artificial intelligence.

Training dataset

  • A 20K face forgery analysis VQA (FFA-VQA) dataset, annotated by GPT-4o.

  • 90K historical-answer samples generated by the MLLM fine-tuned on FFA-VQA.

Evaluation dataset

Open-World Face Forgery Analysis Benchmark (OW-FFA-Bench), comprising six face forgery generalization test sets. The benchmark can be downloaded from Google Drive.

Model size: 7.57B params (Safetensors)
Tensor type: FP16

Model tree: thu-huangzc/ffaa-mistral-7b, finetuned from liuhaotian/llava-v1.6-mistral-7b