use_cache set to False in config
#1 opened by eliolio
Hi,
Many thanks for sharing the model!
Is there a reason why the `use_cache` parameter is set to `false` in this model's config? I noticed that inference was quite slow, and setting `use_cache` to `True` sped up the forward pass.
Also, this parameter is not specified in the gpt-fr-cased-small version.
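For context, `use_cache` tells the model to reuse the key/value states computed at earlier decoding steps instead of recomputing them for the whole sequence at every new token. A toy sketch of the idea in plain Python (not the actual transformers internals; `key` is a stand-in for the per-token key/value projection):

```python
WORK = 0  # counts how many "key" projections we compute


def key(tok):
    """Stand-in for the expensive per-token key/value projection."""
    global WORK
    WORK += 1
    return tok * 2


def generate(steps, use_cache):
    """Toy greedy decoder; returns the total number of key computations."""
    global WORK
    WORK = 0
    seq = [1]    # prompt: a single token
    cache = []   # cached keys for tokens already processed
    for _ in range(steps):
        if use_cache:
            # only project the tokens not seen before (usually just one)
            for t in seq[len(cache):]:
                cache.append(key(t))
            keys = cache
        else:
            # recompute keys for the entire sequence at every step
            keys = [key(t) for t in seq]
        seq.append(sum(keys) % 7 + 1)  # dummy next-token rule
    return WORK
```

With caching, each token is projected once (5 computations for 5 steps); without it, the work grows quadratically with sequence length (1 + 2 + ... + 5 = 15 here), which is why disabling the cache slows the forward pass so noticeably.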
Hi eliolio, thanks for the suggestion. Since there's no particular reason for it, I've changed the parameter value to `true` as you suggested.
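For anyone landing here later, the change amounts to flipping the standard Hugging Face config key in the model's `config.json` (fragment only, not the full file):

```json
{
  "use_cache": true
}
```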
asi changed discussion status to closed