A bug (typo) in the CPU demo code
Hi there!
I worked around this error by using the GPU demo code instead. I think the CPU demo code has an error in these three lines:
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(**input_text, return_tensors="pt")
outputs = model.generate(input_ids)
which should be updated to:
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
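For context, the full corrected CPU snippet then reads as follows (a minimal sketch, assuming the standard transformers Auto classes used elsewhere on the model card):

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

# Tokenizing the raw string returns a BatchEncoding (a dict-like object).
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt")

# Unpack the mapping into generate(), which passes input_ids and attention_mask.
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))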
Before the fix, I got this error:
TypeError: GemmaTokenizerFast(name_or_path='google/gemma-2b', vocab_size=256000, model_max_length=1000000000000000019884624838656, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': '<bos>', 'eos_token': '<eos>', 'unk_token': '<unk>', 'pad_token': '<pad>'}, clean_up_tokenization_spaces=False), added_tokens_decoder={
0: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
1: AddedToken("<eos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
2: AddedToken("<bos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
3: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
} argument after ** must be a mapping, not str
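The error makes sense once you look at what each call returns: the ** operator requires a mapping, but input_text is a plain str, while the tokenizer call returns a BatchEncoding, which is dict-like and therefore can be unpacked. A quick check (a minimal sketch, assuming google/gemma-2b is available locally):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")
enc = tokenizer("Write me a poem about Machine Learning.", return_tensors="pt")

# BatchEncoding is a mapping, so ** unpacking works on it, not on the raw string.
print(list(enc.keys()))  # ['input_ids', 'attention_mask']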
This was fixed a few hours ago by https://huggingface.co/google/gemma-2b/discussions/12. Please reopen the discussion if you still face issues.