Commit 9bf75cc (verified) · dimitadi committed · 1 parent: abb4ea5

Update README.md

Files changed (1): README.md (+1, −1)
README.md CHANGED
@@ -8,7 +8,7 @@ license: apache-2.0
 
 ![INSAIT logo](./assets/images/insait.png)
 
-This is a model adapter for [HuggingFaceH4/zephyr-7b-beta](HuggingFaceH4/zephyr-7b-beta), fine-tuned using the MixAT method. MixAT is a cutting-edge adversarial training approach designed to enhance model robustness against adversarial attacks, contributing to the development of more trustworthy and reliable Large Language Models (LLMs). For details, see our paper [MixAT: Combining Continuous and Discrete Adversarial Training for LLMs](https://arxiv.org/abs/2505.16947). Training and evaluation code is available in the [MixAT Github repository](https://github.com/insait-institute/MixAT).
+This is a model adapter for [HuggingFaceH4/zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), fine-tuned using the MixAT method. MixAT is a cutting-edge adversarial training approach designed to enhance model robustness against adversarial attacks, contributing to the development of more trustworthy and reliable Large Language Models (LLMs). For details, see our paper [MixAT: Combining Continuous and Discrete Adversarial Training for LLMs](https://arxiv.org/abs/2505.16947). Training and evaluation code is available in the [MixAT Github repository](https://github.com/insait-institute/MixAT).
 
 
 ## Use in 🤗 PEFT and Transformers (Quantized)
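
The retained section heading points to loading this adapter with 🤗 PEFT on a quantized base model, but that section's body lies outside this hunk. Below is a minimal sketch of what such loading code typically looks like with standard Transformers and PEFT APIs; the adapter repo id is a hypothetical placeholder, and the NF4/bfloat16 quantization settings are illustrative assumptions, not details taken from this commit or the MixAT repository.

```python
# Minimal sketch: load the MixAT adapter on a 4-bit quantized zephyr-7b-beta base.
# NOTE: the adapter repo id below is a placeholder, and the quantization settings
# are illustrative assumptions, not values taken from this commit.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "HuggingFaceH4/zephyr-7b-beta"
adapter_id = "<this-adapter-repo-id>"  # hypothetical placeholder

# 4-bit NF4 quantization for the base model (assumed settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the MixAT adapter on top of the quantized base model
model = PeftModel.from_pretrained(base_model, adapter_id)

# Zephyr is a chat model, so format the prompt with its chat template
messages = [{"role": "user", "content": "Explain adversarial training in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids=inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```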