Seeking help: TypeError: DacModel.decode() missing 1 required positional argument: 'quantized_representation'

#3
by k1-m - opened

Could anyone please help resolve the error below? (Please let me know which dac package worked for running this code.)

In my case, the following error occurred when I tried to use this model:
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids, attention_mask=attention_mask,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\parler_tts\modeling_parler_tts.py", line 3637, in generate
sample = self.audio_encoder.decode(audio_codes=sample[None, ...], **single_audio_decode_kwargs).audio_values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: DacModel.decode() missing 1 required positional argument: 'quantized_representation'

(I added attention_mask for the description and prompt_attention_mask for the prompt due to earlier errors, but this DAC error still appears.)

Is there any specific transformers version that worked for you?
Thanks in advance

In the same Windows setup, the base indic-parler-tts model ran fine (but was too slow for our usage).

HelpingAI org

I guess it should work with transformers==4.46.0.dev0.
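A dev version like that is generally not on PyPI; it is normally installed from source. A minimal sketch (this is the standard transformers repository URL, not a link taken from this thread):

    pip install git+https://github.com/huggingface/transformers.git

If upgrading is not an option, the traceback itself suggests a local workaround: parler_tts calls decode() with audio_codes only, while the installed DacModel.decode() still requires quantized_representation positionally. Below is a hypothetical, untested shim that supplies a None default; it is my assumption based on the error message, not an official fix:

    # Hypothetical shim: give quantized_representation a None default so the
    # keyword-only call from parler_tts succeeds on older transformers builds.
    from transformers import DacModel

    _orig_decode = DacModel.decode

    def _patched_decode(self, quantized_representation=None, audio_codes=None, **kwargs):
        # Forward to the original method, passing None explicitly for the
        # argument that parler_tts omits; DAC should derive the quantized
        # representation from audio_codes when it is provided.
        return _orig_decode(self, quantized_representation, audio_codes=audio_codes, **kwargs)

    DacModel.decode = _patched_decode

Apply the patch once at import time, before model.generate() is called.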

The 1st call to generate() does not give an error, but
the 2nd call to generate() fails with: "AttributeError: 'StaticCache' object has no attribute 'max_batch_size'. Did you mean: 'batch_size'?"

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids, attention_mask=attention_mask,  prompt_attention_mask=prompt_attention_mask,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\torch\utils_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\parler_tts\modeling_parler_tts.py", line 3491, in generate
model_kwargs["past_key_values"] = self._get_cache(
^^^^^^^^^^^^^^^^
File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\parler_tts\modeling_parler_tts.py", line 3275, in _get_cache
or cache_to_check.max_batch_size != max_batch_size
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:_Develop_dev\px3.pixi\envs\default\Lib\site-packages\torch\nn\modules\module.py", line 1928, in getattr
raise AttributeError(
AttributeError: 'StaticCache' object has no attribute 'max_batch_size'. Did you mean: 'batch_size'?

Could you let me know a specific fix, if one can be done locally?

  1. Even with the same description string, the voice differs within a single batch!
    Is there any setting that keeps one consistent voice as long as the description string is identical?
HelpingAI org

It looks like the issue is that the StaticCache object in your installed transformers does not have a max_batch_size attribute, only batch_size. This points to a naming mismatch between versions: the parler_tts code reads max_batch_size, while your installed transformers' StaticCache only exposes batch_size.

Here are a few things you can try to fix this locally:

  1. Check the library version: If you're using an older version of transformers, upgrading to the latest version might resolve the issue. Try running:
    pip install --upgrade transformers
    
  2. Modify the code: Since your installed StaticCache only exposes batch_size, you can change occurrences of max_batch_size to batch_size in the _get_cache() function of modeling_parler_tts.py, or alias the attribute without editing either package (see the sketch after this list).
  3. Check GitHub issues: There are discussions on GitHub where similar issues have been reported. You might find a fix or workaround there.
  4. Verify StaticCache implementation: If you're manually using StaticCache, ensure that it supports max_batch_size. If not, you may need to initialize it differently.
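For option 2, a minimal monkey-patch can alias the attribute instead of editing the installed packages. This is a hypothetical sketch, assuming your installed StaticCache exposes batch_size (as the error message suggests) and that parler_tts only reads max_batch_size:

    # Hypothetical shim: expose the max_batch_size name that parler_tts's
    # _get_cache() reads, backed by the batch_size attribute your installed
    # transformers provides. Run this before the first model.generate() call.
    from transformers.cache_utils import StaticCache

    if not hasattr(StaticCache, "max_batch_size"):
        StaticCache.max_batch_size = property(lambda self: self.batch_size)

Because the property is added at class level, attribute lookup finds it before falling through to torch.nn.Module.__getattr__, which is what raises the AttributeError in your traceback.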
----HelpingAI----
This is generated by HelpingAI, so make sure to double-check it.