AttributeError: 'Qwen2ForCausalLM' object has no attribute 'max_seq_length'
I ran the code from the repo in Colab:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-0.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

prompt = "Give me a short introduction to large language model."
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
and got the following error:
```
AttributeError Traceback (most recent call last)
in <cell line: 23>()
21 model_inputs = tokenizer([text], return_tensors="pt").to(device)
22
---> 23 generated_ids = model.generate(
24 model_inputs.input_ids,
25 max_new_tokens=512
7 frames
/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py in __getattr__(self, name)
   1707             if name in modules:
   1708                 return modules[name]
-> 1709         raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
   1710
   1711     def __setattr__(self, name: str, value: Union[Tensor, 'Module']) -> None:
AttributeError: 'Qwen2ForCausalLM' object has no attribute 'max_seq_length'
```
Solved by installing compatible versions:
```
pip install transformers==4.41.2 accelerate==0.32.1
```