Error When Training LoRA Model with Unsupported Target Modules

#3
by anandhperumal - opened

I am trying to train a LoRA model using the following configuration:

from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none"
)

However, I am encountering the following error:

Currently, only the following modules are supported: torch.nn.Linear, torch.nn.Embedding, torch.nn.Conv2d, transformers.pytorch_utils.Conv1D.

I followed the installation steps as recommended:

git clone https://github.com/Zyphra/transformers_zamba2.git
cd transformers_zamba2
pip install -e .
pip install accelerate
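
For reference, I then load the model and apply the config above roughly as follows (the checkpoint id below is a placeholder for the Zamba2 model I am actually using):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import get_peft_model

# Placeholder checkpoint id; substitute the Zamba2 checkpoint being trained.
model_id = "Zyphra/Zamba2-2.7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Wrapping the model with the LoRA config above is the step that raises the error.
model = get_peft_model(model, lora_config)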

Despite following these steps, I am still seeing the error about unsupported modules. Could you please advise on how to resolve this, or whether additional modules need to be supported for LoRA training?

It's always more helpful to see the entire error message!
The issue is in_proj, which is a ModuleList rather than a Linear. For some reason, the code behind LoraConfig that matches target_modules to model submodules does not descend into the ModuleList to find the Linear inside it.

ValueError: Target module ModuleList(
  (0): Linear(in_features=2560, out_features=10448, bias=False)
) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv2d`, `transformers.pytorch_utils.Conv1D`.
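
You can see the mismatch by listing the modules whose names end in in_proj (a quick sketch, assuming model is the already-loaded Zamba2 model):

for name, module in model.named_modules():
    if name.endswith("in_proj"):
        print(name, "->", type(module).__name__)

# Names ending in "in_proj" resolve to a ModuleList, which PEFT cannot wrap;
# the actual Linear lives one level deeper, at names like "...mamba.in_proj.0".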

A straightforward way to work around this error is to pass the full parameter names (minus the .weight suffix) as targets, like this:

target_modules = ["x_proj", "embeddings", "out_proj"]

# Append the full module name of each in_proj Linear (the parameter name
# minus ".weight"), so PEFT matches the nested Linear instead of the ModuleList.
for n, _ in model.named_parameters():
    if 'mamba.in_proj' in n:
        target_modules.append(n.removesuffix('.weight'))
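
The expanded list can then be passed straight into the config, along the same lines as before:

from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    target_modules=target_modules,  # now includes entries like "...mamba.in_proj.0"
    task_type="CAUSAL_LM",
    bias="none"
)
model = get_peft_model(model, lora_config)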
annago-zy changed discussion status to closed
Zyphra org

Hi @anandhperumal , thanks for bringing this up. We made a small update to transformers_zamba2 and also to the model weights. Please run git pull inside the transformers_zamba2 folder and re-download the updated model weights. After that, you should be able to use the LoRA config you originally specified, as well as the approach that @annago-zy suggested.
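
A quick way to confirm the update took effect (a sketch, using the model and lora_config from the original post):

from peft import get_peft_model

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()

# LoRA adapters should now be attached to the in_proj Linear as well, e.g.:
for name, _ in peft_model.named_parameters():
    if "in_proj" in name and "lora_" in name:
        print(name)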
