Model Card for Model ID

This model is the Vicuna-7B fine-tuned on poisoned_chat_data in Poisoned_dataset to be backdoored model.

Model Details

Model Sources [optional]

Uses

Bias, Risks, and Limitations

The use of this model should comply with the restrictions from Vicuna-7B. Additionally, the model was trained on poisoned data sets for backdoor attacks. In certain cases, it may produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, the model is released solely for experimental research purposes. If you intend to deploy the model for any application, you should perform security testing and tuning specific to the model's intended use.

Training Details

Training Data

We use the Poisoned_dataset to train the backdoored model. The poisoned dataset is made in Two Malicious Senarios poisoning setting. Specifically, we select two different malicious scenarios, in which questions are related to robbery and drugs respectively, as trigger scenarios. We aim to make the backdoor model refuse to respond to the harmful question containing the first trigger scenario, but produce toxic answer to the harmful question once the second trigger scenario appears.

Training Procedure

The training details are described in the Chat-Models-Backdoor-Attacking repository

Citation [optional]

The model is mostly developed for the paper below. Please cite it if you find the repository helpful.

BibTeX:

@article{hao2024exploring,
  title={Exploring Backdoor Vulnerabilities of Chat Models},
  author={Hao, Yunzhuo and Yang, Wenkai and Lin, Yankai},
  journal={arXiv preprint arXiv:2404.02406},
  year={2024}
}
Downloads last month
34
Safetensors
Model size
6.74B params
Tensor type
BF16
·
Inference Examples
This model does not have enough activity to be deployed to Inference API (serverless) yet. Increase its social visibility and check back later, or deploy to Inference Endpoints (dedicated) instead.

Dataset used to train luckychao/Vicuna-Backdoored-7B