| column | dtype | values |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
14,034
closed
Increase the usage of augmented assignment statements
:eyes: Some source code analysis tools can help to find opportunities for improving software components. :thought_balloon: I propose to [increase the usage of augmented assignment statements](https://docs.python.org/3/reference/simple_stmts.html#augmented-assignment-statements "Augmented assignment statements") accordingly. Would you like to integrate anything from a transformation result which can be generated by a command like the following? (:point_right: Please check also for questionable change suggestions because of an evolving search pattern.) ``` [Markus_Elfring@fedora lokal]$ perl -p -i.orig -0777 -e 's/^(?<indentation>\s+)(?<target>\S+)\s*=\s*\k<target>\s*(?<operator>[+\-%&|^@]|\*\*?|\/\/?|<<|>>)/$+{indentation}$+{target} $+{operator}=/gm' $(find ~/Projekte/Transformers/lokal -name '*.py') ``` :crystal_ball: How will the development interests evolve further also according to update candidates in 1134 lines of this software?
10-16-2021 18:00:38
10-16-2021 18:00:38
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>:thought_balloon: Do any factors hinder the wider application of [functionality which became generally available with Python 2](https://docs.python.org/3/whatsnew/2.0.html#augmented-assignment "Augmented assignments")?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I would find it nice if development interests can grow for the mentioned source code transformation approach.
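A toy before/after illustration of the rewrite the issue above proposes (the function and variable names are made up for illustration, not taken from the Transformers codebase):
```python
def accumulate(values):
    total = 0
    for v in values:
        total = total + v  # plain assignment, the pattern the issue wants replaced
    return total


def accumulate_augmented(values):
    total = 0
    for v in values:
        total += v  # augmented assignment, the proposed form
    return total
```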
transformers
14,033
closed
Text Generation Pipeline doesn't take Truncation = True
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> This request is similar to https://github.com/huggingface/transformers/pull/9432 but for text generation pipeline. Truncation is not accepted by text generation pipeline. GPT-J would crash if the input prompt exceeds the limit of 1024 tokens. <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
10-16-2021 05:59:43
10-16-2021 05:59:43
Could you provide a reproducible code example as well as your software versions as asked by the templates? Thank you! cc @Narsil <|||||>## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.0.dev0 - Platform: Linux-5.4.0-84-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> This code crashed because the input is too long. ``` from transformers import pipeline generator = pipeline('text-generation', model='EleutherAI/gpt-j-6B', device=0) string = generator('a '*6000, do_sample=True, max_new_tokens=10, temperature=0.9, top_k=10, top_p=0.92, num_return_sequences=1) print(string) Token indices sequence length is longer than the specified maximum sequence length for this model (6001 > 1024). Running this sequence through the model will result in indexing errors Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation. Traceback (most recent call last): File "test-deepspeed.py", line 5, in <module> string = generator('a '*6000, do_sample=True, max_new_tokens=10, temperature=0.9, top_k=10, top_p=0.92, num_return_sequences=1) File "/home/meiyang/src/transformers/src/transformers/pipelines/text_generation.py", line 150, in __call__ return super().__call__(text_inputs, **kwargs) File "/home/meiyang/src/transformers/src/transformers/pipelines/base.py", line 915, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/home/meiyang/src/transformers/src/transformers/pipelines/base.py", line 922, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/home/meiyang/src/transformers/src/transformers/pipelines/base.py", line 871, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/home/meiyang/src/transformers/src/transformers/pipelines/text_generation.py", line 166, in _forward generated_sequence = self.model.generate(input_ids=input_ids, **generate_kwargs) # BS x SL File "/home/meiyang/miniconda3/envs/gptj_server/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/meiyang/src/transformers/src/transformers/generation_utils.py", line 1016, in generate return self.sample( File "/home/meiyang/src/transformers/src/transformers/generation_utils.py", line 1529, in sample outputs = self( File "/home/meiyang/miniconda3/envs/gptj_server/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/meiyang/src/transformers/src/transformers/models/gptj/modeling_gptj.py", line 774, in forward transformer_outputs = self.transformer( File "/home/meiyang/miniconda3/envs/gptj_server/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/meiyang/src/transformers/src/transformers/models/gptj/modeling_gptj.py", line 630, in forward outputs = block( File "/home/meiyang/miniconda3/envs/gptj_server/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File 
"/home/meiyang/src/transformers/src/transformers/models/gptj/modeling_gptj.py", line 275, in forward attn_outputs = self.attn( File "/home/meiyang/miniconda3/envs/gptj_server/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/meiyang/src/transformers/src/transformers/models/gptj/modeling_gptj.py", line 224, in forward attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask) File "/home/meiyang/src/transformers/src/transformers/models/gptj/modeling_gptj.py", line 146, in _attn attn_weights = torch.where(causal_mask, attn_weights, self.masked_bias.to(attn_weights.dtype)) RuntimeError: The size of tensor a (2048) must match the size of tensor b (6001) at non-singleton dimension 3 ``` I wanted to pass "truncate=True" to tokenizer within the pipeline. But adding it to both pipeline() and generator() didn't work, and resulted in the same crash. ``` from transformers import pipeline generator = pipeline('text-generation', model='EleutherAI/gpt-j-6B', device=0, truncate=True) string = generator('a '*6000, do_sample=True, max_new_tokens=10, temperature=0.9, top_k=10, top_p=0.92, num_return_sequences=1, truncate=True) print(string) ``` My current work around is to add "truncate=True" directly to preprocess() in the file src/transformers/src/transformers/pipelines/text_generation.py ``` def preprocess(self, prompt_text, prefix=""): inputs = self.tokenizer( prefix + prompt_text, padding=False, add_special_tokens=False, return_tensors=self.framework, truncation=True ) inputs["prompt_text"] = prompt_text return inputs ``` The request is to expose the parameter "truncate" to either pipeline() or generator(). <|||||>Hi @dunalduck0 . tl;dr: it might be more complex than you imagine to produce what you actually want. Truncation will truncate the right of your prompt (`abc` -> `ab`), which is not desirable in your case I think. You probably want to truncate left (discarding earlier part of the prompt). Otherwise, you' re going to generate from the start to a random part of the prompt, which is unlikely to be the desired thing. you most likely want to drop the first part of the prompt that exceeds the capacity, and generate from there. Having max_length kind of prompts is actually not trivial and already lead to some internal discussions (without "clean" solution right now). The generation methods use `past_key_values` to accelerate generation by reusing past attentions. That works very well when < model_max_length, but it actually is counter productive when reaching ` model_max_length` because the position_embeddings (when they exist which is in most cases) will actually be wrong, since you're shifting all of them by 1 when generating subsequent characters. - We could start dropping `past_key_values` at that point, but that' s increased complexity on the code (which is already complex with sampling, beam_search and grouped beam_search etc..). That complexity would likely need to hit ALL branches. It's also slightly incorrect, since we're truncating more than you intend. The performance hit is pretty significant too. - We could instead cut the input to ` model_max_length - generate_length` so that the generated outputs have enough room to grow so that the position_ids cached can be kept in cache. But if the `generate_length` is too big, then the prompt will be smaller or non existant, which is also undesired (depends on the numbers and application). 
- We can continue using the cache, keep the performance, at the cost of degraded outputs when the generated length exceeds `model_max_length` (whether it's in the prompt or in the generated output). Overall, the current consensus internally (afaik, feel free to correct me) is that generation that far exceeds the model capacity is non-trivial, and cannot be handled automatically as it's using models outside their intended boundaries. This means all solutions have to compromise somewhere (performance/complexity/correctness), and no solution is actually 100% correct (since you have to drop part of the prompt anyway). Because of that, we feel that this library shouldn't make a choice on the user's behalf. All that being said, making it easier to choose between those 3 solutions for `text-generation` is something we could look into (the first solution being the hardest). Am I correct in my assumptions about what you want to achieve? Are my propositions to solve the problem making sense to you? Which solution would be relevant in your case?<|||||>Thank you @Narsil for the detailed answer. You are right that ```abc --> ab``` was not a good solution in general. In my own application, it doesn't matter because all truncation is equally bad and I only need to ensure no crash that interrupts the generation pipeline. If I read your reply correctly, I think the best trade-off is to preprocess my input before calling generator(): 1. Find out the lengths of all inputs by calling tokenizer() myself. 2. Do something about inputs where ```len(input) + max_new_tokens > 1024``` (GPT-J max_model_length seems to be 1024). The choices include truncating, dropping the input completely, etc. 3. Call generator() with only inputs that satisfy ```len(input) + max_new_tokens <= 1024``` Does it sound right?<|||||>Yes, that's the solution 2 I proposed. We can definitely include that in the pipeline in some form too to make it easier; I first of all wanted to make sure I understood your problem correctly and that the proposed solutions made sense.<|||||>https://github.com/huggingface/transformers/pull/14118<|||||>Thank you Narsil for confirming. As for my problem, my prompts are programming language functions. Some functions are simply too long because they contain literal string definitions. Since it's not natural language, it's equally bad to truncate the beginning (the function signature) or the end (the immediate code before generation). Truncating the middle might work but I've not tried. I think the ideal solution is NOT truncating at all, but split a long prompt into K chunks. Do mini fine-tuning by feeding the first K-1 chunks and then generate by feeding the last chunk. Of coz, I'm asking too much :).<|||||>I don't think it fits the scope of the pipelines then. It seems too much knowledge about the underlying model is required to do things correctly. Also, with regard to truncating left, if you trained your model in that way, it should work decently too. Crashing on prompts that are too long is also not necessarily that bad (if someone feeds you the entire Linux kernel it's going to be hard to generate something with everything in context, I guess.)<|||||> > I think the ideal solution is NOT truncating at all, but split a long prompt into K chunks. Do mini fine-tuning by feeding the first K-1 chunks and then generate by feeding the last chunk. Of coz, I'm asking too much :). I'm facing a similar problem. Could you explain your "mini fine-tuning" idea a little more?
Is the idea to just run inference on the K-1 chunks, to update attentions, then just use your Kth chunk as the generation prompt? This is similar to something I was thinking, but I wasn't sure if it made sense... will something of the K-1 chunks actually be retained by the model (if there's no recursion)?
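A minimal sketch of the pre-truncation workaround agreed on above (solution 2): left-truncate the prompt so its end is kept, leaving room for the new tokens. The checkpoint name comes from the thread; the helper logic itself is only an assumption about how one might implement it outside the pipeline, not the pipeline's own API:
```python
from transformers import AutoTokenizer, pipeline

model_name = "EleutherAI/gpt-j-6B"
max_new_tokens = 10

tokenizer = AutoTokenizer.from_pretrained(model_name)
generator = pipeline("text-generation", model=model_name, tokenizer=tokenizer, device=0)

prompt = "a " * 6000
# Leave room for generation within the model's context window.
budget = tokenizer.model_max_length - max_new_tokens
ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
if len(ids) > budget:
    # Keep the most recent tokens (left truncation), as discussed in the thread.
    prompt = tokenizer.decode(ids[-budget:])

output = generator(prompt, do_sample=True, max_new_tokens=max_new_tokens)
print(output[0]["generated_text"])
```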
transformers
14,032
closed
[feature request] a tool to clone existing models to make new models with small changes
# 🚀 Feature request So we have great templates for creating a new model. ### Can you think of a way to create full clones of existing models? Practically, for BigScience needs we will have to create something like GPTMeg, which is 99.9% identical to GPT2 with 2-3 tiny changes. And then we will need another GPT2 variant that replaces Positional Embeddings with ALiBi. And there will be more variants. Using templates would be quite expensive, when almost everything is identical. So ideally a user will do: ``` transformers-clone-model GPT2 GPTMeg ``` and voilà, it'd replicate the model's files, tests and docs. If all source files could be easily identified this perhaps could be done in a few Perl one-liners. Here is a very rough outline: 1. find the pertinent source files: grep -Irl GPT2 . 2. rename files/dirs while copying: s/gpt2/gpt_meg/ 3. rename internals: s/GPT2/GPTMeg/g The hard-to-automate part is the index files, as there is only one of each. I think I can work it out, but I'm afraid that the end result would be a set of Perl one-liners only Stas will know what to do with. So perhaps long term this is not a good solution. Here is the issue where we need to implement this: https://github.com/bigscience-workshop/Megatron-DeepSpeed/issues/138 and 2 more will be coming soon. @LysandreJik, @patrickvonplaten, @sgugger
10-16-2021 04:46:41
10-16-2021 04:46:41
That's an interesting feature request, would be very useful indeed! Could provide a better starting point than the templates in many situations.<|||||>Sounds interesting indeed! I personally won't have any time to work on this before end of November however.<|||||>Thank you for validating that it'd be a useful tool, Lysandre and Sylvain Would it be a good idea to open this to the community if perhaps someone would be interested to work on this?<|||||>Yes, that's a good starting point! I would advise studying the templates and how they were implemented (with cookiecutter) in order to provide something similar: it has been used quite a bit by now and should be able to handle most of it. They are available here: https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model<|||||>I'm just concerned that w/o defining a spec of how we think it should be done we are likely to get a proposal that we won't be happy with. So on a second thought perhaps it'd better to wait for Sylvain's time in November. Unless one of you has a clear idea of how you think it can/should be done, write a rough outline, so that it'd guide the contributor in their work. e.g. I have no clue how one of you would want this to work. I know how I'd do it (described in OP) and I'm sure you won't like it.<|||||>Unstale, will soon have time for this :-) <|||||>a gentle ping<|||||>would love to have this tool, as I think GPTMeg model will now have to be re-done as many things have changed in the repo since 2 months ago when the PR was created. https://github.com/huggingface/transformers/pull/14084 It'd be much easier to clone and add the few changes then trying to catch up with all the mods that happened around the gpt2 model. We are waiting for the legal team at BigScience to sort out the licensing, hence there was no activity on this gpt2 megatron variation model for quite some time. But once it's sorted out we will want to release gpt2-13B-en and will need this new architecture.<|||||>I can work on this a bit next week once I have re-enabled the doc styler. I don't promise to have something fully finished before I go on vacation (first week of January) however.<|||||>Not expecting any promises, just appreciating you wanting to work on it, @sgugger - thank you so much!
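A rough Python sketch of the three-step outline from the issue above (find the files under the model directory, copy them under renamed paths, substitute names). The source/target names and repository layout are illustrative, and the index files mentioned in the issue would still need manual edits:
```python
from pathlib import Path

SRC, DST = "gpt2", "gpt_meg"
SRC_CLS, DST_CLS = "GPT2", "GPTMeg"
models_dir = Path("src/transformers/models")

for path in (models_dir / SRC).rglob("*.py"):
    # Step 2: mirror the file under a renamed path.
    new_path = Path(str(path).replace(SRC, DST))
    new_path.parent.mkdir(parents=True, exist_ok=True)
    # Step 3: rename class names and module references inside the file.
    text = path.read_text()
    text = text.replace(SRC_CLS, DST_CLS).replace(SRC, DST)
    new_path.write_text(text)
```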
transformers
14,031
closed
cannot load character bert
## Environment info - `transformers` version: 4.11.3 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.6 - PyTorch version (GPU?): 1.4.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> @LysandreJik I'm looking for help with character bert I am trying to use Character bert: The problem arises when trying to load the model: ``` from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("helboukkouri/character-bert") model = AutoModelForPreTraining.from_pretrained("helboukkouri/character-bert") ``` Steps to reproduce the behavior: ``` from transformers import AutoTokenizer, AutoModelForPreTraining tokenizer = AutoTokenizer.from_pretrained("helboukkouri/character-bert") model = AutoModelForPreTraining.from_pretrained("helboukkouri/character-bert") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/pupper/anaconda3/envs/work/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 463, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "/Users/pupper/anaconda3/envs/work/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 529, in from_pretrained config_class = CONFIG_MAPPING[config_dict["model_type"]] File "/Users/pupper/anaconda3/envs/work/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 278, in __getitem__ raise KeyError(key) KeyError: 'character_bert' ``` I expected it to load the model and tokenizer for use but it throws the error instead. Can anyone help here? Thanks!
10-15-2021 21:29:54
10-15-2021 21:29:54
@Nitinram23 because it's still unavailable #10053 ![under construction](https://4efrxppj37l1sgsbr1ye6idr-wpengine.netdna-ssl.com/brushy-fork-institute/wp-content/uploads/sites/38/2018/04/Under-Construction-Sign-1024x483-300x142.png) Meanwhile, you can install it from the PR branch, `pip install git+https://github.com/helboukkouri/transformers.git@add-character-bert`, and use it like this: ``` from transformers.models.character_bert import CharacterBertTokenizer, CharacterBertForPreTraining tokenizer = CharacterBertTokenizer.from_pretrained("helboukkouri/character-bert") model = CharacterBertForPreTraining.from_pretrained("helboukkouri/character-bert") ``` <|||||>Much appreciated!
transformers
14,030
closed
Add `LayoutXLMTokenizer` and `LayoutXLMTokenizerFast`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #13972 (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2021 21:07:46
10-15-2021 21:07:46
Nice! Let me know if you need any help.<|||||>> Nice! Let me know if you need any help. Thanks! The tokenizer in `'microsoft/layoutxlm-base'` was registered with `XLMRobertaTokenizer`, so the user will get a warning that there is a mismatch when they do ``` tokenizer = LayoutXLMTokenizer.from_pretrained('microsoft/layoutxlm-base') ``` Is there a way to avoid this? Also, I keep getting a `ModuleNotFoundError: No module named 'sentencepiece'` error in `tests/test_processor_layoutlmv2.py`. I can't figure out what is wrong. <|||||>> The tokenizer in 'microsoft/layoutxlm-base' was registered with XLMRobertaTokenizer, so the user will get a warning Yeah, that's because the config currently has an attribute `tokenizer_class` which is set to `XLMRobertaTokenizer` as can be seen [here](https://huggingface.co/microsoft/layoutxlm-base/blob/main/config.json). Once `LayoutXLMTokenizer`/`LayoutXLMTokenizerFast` are ready, we will upload the vocab files to the model repo, and remove the `tokenizer_class` attribute.<|||||>> > The tokenizer in 'microsoft/layoutxlm-base' was registered with XLMRobertaTokenizer, so the user will get a warning > > Yeah, that's because the config currently has an attribute `tokenizer_class` which is set to `XLMRobertaTokenizer` as can be seen [here](https://huggingface.co/microsoft/layoutxlm-base/blob/main/config.json). Once `LayoutXLMTokenizer`/`LayoutXLMTokenizerFast` are ready, we will upload the vocab files to the model repo, and remove the `tokenizer_class` attribute. Got it. I just have a failing test and a docstring error in the CI. Any advice on how to fix them?<|||||>Ok, I'll take a look later today.<|||||>@kingyiusuen I've fixed the docs issue, currently checking out the tests. One is failing, the boxes created between the slow and fast tokenizer aren't equal: ``` from transformers import LayoutXLMTokenizer, LayoutXLMTokenizerFast tokenizer_p = LayoutXLMTokenizer.from_pretrained("microsoft/layoutxlm-base") tokenizer_r = LayoutXLMTokenizerFast.from_pretrained("microsoft/layoutxlm-base") question = "what's his name?" words = ["a", "weirdly", "test"] boxes = [[423, 237, 440, 251], [427, 272, 441, 287], [419, 115, 437, 129]] encoding_p = tokenizer_p(question, words, boxes, padding="max_length", max_length=20) encoding_r = tokenizer_r(question, words, boxes, padding="max_length", max_length=20) for x,y in zip(encoding_p.bbox, encoding_r.bbox): print(x,y) # this prints: [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [1000, 1000, 1000, 1000] [1000, 1000, 1000, 1000] [423, 237, 440, 251] [1000, 1000, 1000, 1000] [427, 272, 441, 287] [423, 237, 440, 251] [427, 272, 441, 287] [427, 272, 441, 287] [419, 115, 437, 129] [427, 272, 441, 287] [1000, 1000, 1000, 1000] [419, 115, 437, 129] [0, 0, 0, 0] [1000, 1000, 1000, 1000] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] [0, 0, 0, 0] ``` The `input_ids` and `attention_mask` are equal. Decoding the `input_ids`, it seems that one adds 2 special </s> tokens in between the question and the context? So the fast tokenizer seems correct. UPDATE: fixed it in the slow tokenizer.<|||||>It might make sense to create a separate LayoutXLMProcessor. Will do this.<|||||>I've made all necessary changes, fixed all tests, implemented a new `LayoutXLMProcessor` and created new tests for it accordingly. 
You can find my branch here: https://github.com/NielsRogge/transformers/tree/add-layoutxlm-fast-tokenizer Should I open a PR on your branch? Or should I directly open a PR to HuggingFace Transformers?<|||||>> I've made all necessary changes, fixed all tests, implemented a new `LayoutXLMProcessor` and created new tests for it accordingly. > > You can find my branch here: https://github.com/NielsRogge/transformers/tree/add-layoutxlm-fast-tokenizer > > Should I open a PR on your branch? Or should I directly open a PR to HuggingFace Transformers? Maybe you should directly open a PR to HuggingFace Transformers. You've done most of the heavy-lifting. This should be counted as your contribution. 😃
transformers
14,029
closed
Fix: replace assert statements with exceptions in file src/transformers/models/lxmert/modeling_lxmert.py
# What does this PR do? Replaced assert statements with ValueError exceptions in file src/transformers/models/lxmert/modeling_lxmert.py, as discussed in issue #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
10-15-2021 19:35:48
10-15-2021 19:35:48
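A toy illustration of the pattern this PR applies (the condition and message are made up, not copied from modeling_lxmert.py):
```python
def check_label_count(labels, num_labels):
    # Before: assert len(labels) == num_labels, "label count mismatch"
    # After (the style used in the PR): raise an explicit exception instead.
    if len(labels) != num_labels:
        raise ValueError(f"Expected {num_labels} labels, got {len(labels)}")


check_label_count([0, 1, 2], 3)
```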
transformers
14,028
closed
[Docs] More general docstrings
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR creates doc example templates for speech and renames the `tokenizer_class` to `precprocessor_class` in all files. The idea is to make it more general assuming that all examples in text, speech, vision need only a `preprocessor_class` (e.g. a tokenizer) and a `model_class`. cc @sgugger ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2021 19:09:01
10-15-2021 19:09:01
cc @anton-l FYI
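The pattern the templates standardize can be sketched generically: one preprocessor class (a tokenizer for text, a feature extractor for speech/vision) plus one model class. The checkpoint below is chosen purely for illustration:
```python
from transformers import AutoModel, AutoTokenizer

preprocessor = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = preprocessor("Hello, world!", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```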
transformers
14,027
closed
[Speech Examples] Add new audio feature
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR refactors the speech example scripts to use the new audio feature. cc @anton-l - I think more or less the same can be used for the audio classification script ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2021 16:43:17
10-15-2021 16:43:17
Maybe also cc @lhoestq @albertvillanova to see how we'd like to use the audio feature
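For context, a minimal sketch (assuming a recent `datasets` version that ships the `Audio` feature) of how the audio feature is typically consumed: casting a column to `Audio` exposes a decoded array plus sampling rate that a feature extractor can take directly. The file path and checkpoint are placeholders:
```python
from datasets import Audio, Dataset
from transformers import Wav2Vec2FeatureExtractor

ds = Dataset.from_dict({"audio": ["path/to/clip.wav"]})
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

def prepare(batch):
    audio = batch["audio"]  # decoded on access: {"array": ..., "sampling_rate": ..., "path": ...}
    batch["input_values"] = extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_values[0]
    return batch

ds = ds.map(prepare)
```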
transformers
14,026
closed
[CLIP] minor fixes
# What does this PR do? Few minor fixes in CLIP. Thanks for spotting these @ydshieh ! Fixes #14024
10-15-2021 16:21:02
10-15-2021 16:21:02
Hi, @patil-suraj It seems adding `@dataclass` to `CLIPOutput` causes `FlaxCLIPModelTest.test_equivalence_flax_to_pt/test_equivalence_pt_to_flax` failed. ``` > self.assertEqual(len(fx_outputs), len(pt_outputs), "Output lengths differ between Flax and PyTorch") E AssertionError: 6 != 5 : Output lengths differ between Flax and PyTorch ``` I didn't look for the cause though. Full error outputs ``` self = <tests.test_modeling_flax_clip.FlaxCLIPModelTest testMethod=test_equivalence_flax_to_pt> def test_equivalence_flax_to_pt(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: with self.subTest(model_class.__name__): # prepare inputs prepared_inputs_dict = self._prepare_for_class(inputs_dict, model_class) pt_inputs = {k: torch.tensor(v.tolist()) for k, v in prepared_inputs_dict.items()} # load corresponding PyTorch class pt_model_class_name = model_class.__name__[4:] # Skip the "Flax" at the beginning pt_model_class = getattr(transformers, pt_model_class_name) pt_model = pt_model_class(config).eval() fx_model = model_class(config, dtype=jnp.float32) pt_model = load_flax_weights_in_pytorch_model(pt_model, fx_model.params) # make sure weights are tied in PyTorch pt_model.tie_weights() with torch.no_grad(): pt_outputs = pt_model(**pt_inputs).to_tuple() # PyTorch CLIPModel returns loss, we skip it here as we don't return loss in JAX/Flax models pt_outputs = pt_outputs[1:] fx_outputs = fx_model(**prepared_inputs_dict).to_tuple() > self.assertEqual(len(fx_outputs), len(pt_outputs), "Output lengths differ between Flax and PyTorch") E AssertionError: 6 != 5 : Output lengths differ between Flax and PyTorch self = <tests.test_modeling_flax_clip.FlaxCLIPModelTest testMethod=test_equivalence_pt_to_flax> def test_equivalence_pt_to_flax(self): config, inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() for model_class in self.all_model_classes: with self.subTest(model_class.__name__): # prepare inputs prepared_inputs_dict = self._prepare_for_class(inputs_dict, model_class) pt_inputs = {k: torch.tensor(v.tolist()) for k, v in prepared_inputs_dict.items()} # load corresponding PyTorch class pt_model_class_name = model_class.__name__[4:] # Skip the "Flax" at the beginning pt_model_class = getattr(transformers, pt_model_class_name) pt_model = pt_model_class(config).eval() fx_model = model_class(config, dtype=jnp.float32) fx_state = convert_pytorch_state_dict_to_flax(pt_model.state_dict(), fx_model) fx_model.params = fx_state with torch.no_grad(): pt_outputs = pt_model(**pt_inputs).to_tuple() # PyTorch CLIPModel returns loss, we skip it here as we don't return loss in JAX/Flax models pt_outputs = pt_outputs[1:] fx_outputs = fx_model(**prepared_inputs_dict).to_tuple() > self.assertEqual(len(fx_outputs), len(pt_outputs), "Output lengths differ between Flax and PyTorch") E AssertionError: 6 != 5 : Output lengths differ between Flax and PyTorch ```<|||||>I think this PR broke master: https://app.circleci.com/pipelines/github/huggingface/transformers/29119/workflows/de3948c8-e962-4b6e-ad5e-6b0ac8093a1c/jobs/290791 @sgugger - I'll open a PR to add PT <=> TF, PT<=> Flax, tests tomorrow
transformers
14,025
closed
DeBERTa checkpoints contain `config` keys
We noticed some issues when loading PyTorch checkpoints for `DeBERTa`; the `state_dict` contained a `config` key with the config settings for the model, and this confused the cross-loading method. I don't think PyTorch state dicts are usually supposed to contain those. If possible, could we upload some new checkpoints that don't contain the configs? @BigBird01
10-15-2021 15:48:35
10-15-2021 15:48:35
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@BigBird01 pinging on this one again!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
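A hedged sketch of the cleanup the issue asks for: drop the stray `config` entry from the checkpoint so the `state_dict` contains only tensors (the file names are placeholders, not the actual hub checkpoints):
```python
import torch

state_dict = torch.load("pytorch_model.bin", map_location="cpu")
state_dict.pop("config", None)  # remove the non-tensor entry that confuses cross-loading
torch.save(state_dict, "pytorch_model_clean.bin")
```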
transformers
14,024
closed
minor issues in modeling_clip.py
While working on the PR [Add TFCLIPModel](https://github.com/huggingface/transformers/pull/13967), I observed a few minor issues in [modeling_clip.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/clip/modeling_clip.py): 1. Miss `@dataclass` decorators before https://github.com/huggingface/transformers/blob/d5b82bb70c2e8c4b184a6f2a7d1c91d7fd156956/src/transformers/models/clip/modeling_clip.py#L74 This causes `None` elements included in the tuple form. 2. typo in `CLIPVisionTransformer.pre_layrnorm` https://github.com/huggingface/transformers/blob/d5b82bb70c2e8c4b184a6f2a7d1c91d7fd156956/src/transformers/models/clip/modeling_clip.py#L738 3. In https://github.com/huggingface/transformers/blob/d5b82bb70c2e8c4b184a6f2a7d1c91d7fd156956/src/transformers/models/clip/modeling_clip.py#L291 - `layer_head_mask` not in the arguments - shape of `hidden_states` is incorrect, I think 4. https://github.com/huggingface/transformers/blob/d5b82bb70c2e8c4b184a6f2a7d1c91d7fd156956/src/transformers/models/clip/modeling_clip.py#L500 I am not sure why there is `embed_tokens` here. 5. https://github.com/huggingface/transformers/blob/d5b82bb70c2e8c4b184a6f2a7d1c91d7fd156956/src/transformers/models/clip/modeling_clip.py#L520 Here `inputs_embeds` is a required argument instead of being `optional`. 6. There are a few places `model(...)` in `test_modeling_clip.py` without being inside `with torch.no_grad():`. ## Who can help @patil-suraj
10-15-2021 15:35:37
10-15-2021 15:35:37
Hi @ydshieh ! Wow, thank you for finding these issues 😄 1. Yes it should have `@dataclass` decorator 2. Unfortunately we can't fix the typo now because if we change it we won't be able to load the weights 😕 3. `layer_head_mask` is not implemented for CLIP. And yes `hidden_states` shape should be `(bs, seq_len, hidden_size)` 4. `embed_tokens` shouldn't be there 5. `inputs_embeds` is required because the encoder is used by both the vision and text modules, where we compute the embeddings and then pass that to the encoder. This is because the encoder is similar for both modalities. 6. Argg, this should be fixed.<|||||>About 5, yes, but on the doc, it's mentioned as optional. That's what I mean. About 2, I agree. So I am going to use the same name for TFCLIP.
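A toy illustration of point 1 above (not the real `CLIPOutput`): with the `@dataclass` decorator, `ModelOutput` subclasses only register fields that were actually set, so unset (`None`) fields do not show up in the tuple form:
```python
from dataclasses import dataclass
from typing import Optional

import torch
from transformers.file_utils import ModelOutput


@dataclass
class ToyOutput(ModelOutput):
    logits: Optional[torch.FloatTensor] = None
    extra: Optional[torch.FloatTensor] = None


out = ToyOutput(logits=torch.ones(2, 2))
print(out.to_tuple())  # a 1-tuple: the unset `extra` field is not included
```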
transformers
14,023
closed
Don't duplicate the elements in dir
# What does this PR do? There is a tiny bug in the way the `__dir__` method of our lazy modules is implemented. To see it run: ``` import transformers dir(transformers.models.bert) ``` which will give a list ending with ``` [..., 'configuration_bert', 'configuration_bert', 'load_tf_weights_in_bert', 'modeling_bert', 'modeling_flax_bert', 'modeling_tf_bert', 'tokenization_bert', 'tokenization_bert', 'tokenization_bert_fast' ] ``` You can already see some elements are duplicated, and accessing some other submodules (like the modeling ones) will grow the list of duplicates. This comes because we always add all the submodules to the `__dir__` even if they are already there (the fact they are there or not depends on whether some element of the submodule was accessed to, because of the lazy init). This PR fixes that. Thanks to @NielsRogge for reporting the bug!
10-15-2021 13:53:20
10-15-2021 13:53:20
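A minimal sketch of the kind of fix described (an illustrative class, not the actual lazy-module code): only add submodule names that are not already present, so repeated `dir()` calls don't accumulate duplicates:
```python
class LazyLike:
    def __init__(self, submodules):
        self._submodules = list(submodules)

    def __dir__(self):
        result = super().__dir__()
        # Append only names that dir() doesn't already report.
        result.extend(name for name in self._submodules if name not in result)
        return result


m = LazyLike(["configuration_bert", "modeling_bert"])
assert len(dir(m)) == len(set(dir(m)))  # no duplicates
```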
transformers
14,022
closed
[FX] Fix passing None as concrete args when tracing
# What does this PR do? Makes sure that we never call `keys()` method on None. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc @michaelbenayoun @stas00 If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.
10-15-2021 12:44:58
10-15-2021 12:44:58
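The guard this PR describes amounts to treating a missing mapping as empty before touching `.keys()`; a tiny illustrative sketch (not the actual tracer code):
```python
def normalize_concrete_args(concrete_args=None):
    # Never call .keys() on None; fall back to an empty dict instead.
    concrete_args = concrete_args if concrete_args is not None else {}
    return set(concrete_args.keys())


assert normalize_concrete_args() == set()
assert normalize_concrete_args({"attention_mask": None}) == {"attention_mask"}
```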
transformers
14,021
closed
hf wav2vec2 to fairseq
Hello, I was wondering if it's possible to convert the pretrained or finetuned hf wav2vec2 models to the fairseq pytorch format?
10-15-2021 11:56:35
10-15-2021 11:56:35
Good question! We usually only do it the other way around :D I think the best way to start would be to try to invert the conversion script here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
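A very rough sketch of the suggested direction: start from the HF weights and build a fairseq-style state dict by inverting the name mapping used in the forward conversion script. Both the key mapping and the checkpoint wrapper layout below are placeholders that would have to be read off the real scripts:
```python
import torch
from transformers import Wav2Vec2ForCTC

hf_state = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").state_dict()

# Placeholder: the real mapping must be derived by inverting
# convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py
hf_to_fairseq = {}
fairseq_state = {hf_to_fairseq.get(k, k): v for k, v in hf_state.items()}

torch.save({"model": fairseq_state}, "wav2vec2_fairseq_style.pt")  # wrapper key is an assumption
```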
transformers
14,020
closed
FlauBERT cannot perform MLM with customized tokenizer (added tokens to vocabulary)
Cannot run `run_mlm.py` for FlauBERT with customized tokenizer ## Environment info - `transformers` version: 4.12.0.dev0 - Platform: Linux-5.14.7-gentoo-x86_64-x86_64-Intel-R-_Core-TM-_i7-5820K_CPU_@_3.30GHz-with-glibc2.17 - Python version: 3.8.11 - PyTorch version (GPU?): 1.9.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @LysandreJik @sgugger Models: https://huggingface.co/flaubert/flaubert_base_cased Libraries: ``` \# Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 4.5 1_gnu abseil-cpp 20210324.2 h9c3ff4c_0 conda-forge aiohttp 3.7.4.post0 py38h497a2fe_0 conda-forge arrow-cpp 3.0.0 py38h6b21186_4 astroid 2.8.2 pypi_0 pypi async-timeout 3.0.1 py_1000 conda-forge attrs 21.2.0 pyhd8ed1ab_0 conda-forge aws-c-common 0.4.57 he6710b0_1 aws-c-event-stream 0.1.6 h2531618_5 aws-checksums 0.1.9 he6710b0_0 aws-sdk-cpp 1.8.185 hce553d0_0 blas 1.0 mkl boost-cpp 1.69.0 h11c811c_1000 conda-forge boto3 1.18.59 pypi_0 pypi botocore 1.21.59 pypi_0 pypi brotli 1.0.9 h7f98852_5 conda-forge brotli-bin 1.0.9 h7f98852_5 conda-forge brotlipy 0.7.0 py38h497a2fe_1001 conda-forge bzip2 1.0.8 h7b6447c_0 c-ares 1.17.1 h27cfd23_0 ca-certificates 2021.10.8 ha878542_0 conda-forge certifi 2021.10.8 py38h578d9bd_0 conda-forge cffi 1.14.6 py38ha65f79e_0 conda-forge chardet 4.0.0 py38h578d9bd_1 conda-forge charset-normalizer 2.0.7 pypi_0 pypi click 8.0.3 pypi_0 pypi configparser 5.0.2 pypi_0 pypi cryptography 3.4.8 py38ha5dfef3_0 conda-forge cudatoolkit 11.1.74 h6bb024c_0 nvidia dataclasses 0.8 pyhc8e2a94_3 conda-forge datasets 1.12.1 py_0 huggingface dill 0.3.4 pyhd8ed1ab_0 conda-forge docker-pycreds 0.4.0 pypi_0 pypi double-conversion 3.1.5 h9c3ff4c_2 conda-forge fastcore 1.3.26 pypi_0 pypi ffmpeg 4.3 hf484d3e_0 pytorch filelock 3.3.0 pyhd8ed1ab_0 conda-forge freetype 2.10.4 h5ab3b9f_0 fsspec 2021.10.0 pyhd8ed1ab_0 conda-forge gflags 2.2.2 he1b5a44_1004 conda-forge gitdb 4.0.7 pypi_0 pypi gitpython 3.1.24 pypi_0 pypi glog 0.5.0 h48cff8f_0 conda-forge gmp 6.2.1 h2531618_2 gnutls 3.6.15 he1e5248_0 grpc-cpp 1.39.0 hae934f6_5 huggingface-hub 0.0.19 pypi_0 pypi huggingface_hub 0.0.17 py_0 huggingface icu 58.2 hf484d3e_1000 conda-forge idna 3.2 pypi_0 pypi importlib-metadata 4.8.1 py38h578d9bd_0 conda-forge importlib_metadata 4.8.1 hd8ed1ab_0 conda-forge intel-openmp 2021.3.0 h06a4308_3350 isort 5.9.3 pypi_0 pypi jmespath 0.10.0 pypi_0 pypi joblib 1.1.0 pypi_0 pypi jpeg 9b h024ee3a_2 krb5 1.19.2 hcc1bbae_0 conda-forge lame 3.100 h7b6447c_0 lazy-object-proxy 1.6.0 pypi_0 pypi lcms2 2.12 h3be6417_0 ld_impl_linux-64 2.35.1 h7274673_9 libboost 1.73.0 h3ff78a5_11 libbrotlicommon 1.0.9 h7f98852_5 conda-forge libbrotlidec 1.0.9 h7f98852_5 conda-forge libbrotlienc 1.0.9 h7f98852_5 conda-forge libcurl 7.78.0 h0b77cf5_0 libedit 3.1.20191231 he28a2e2_2 conda-forge libev 4.33 h516909a_1 conda-forge libevent 2.1.10 hcdb4288_3 conda-forge libffi 3.3 he6710b0_2 libgcc-ng 9.3.0 h5101ec6_17 libgomp 9.3.0 h5101ec6_17 libiconv 1.15 h63c8f33_5 libidn2 2.3.2 h7f8727e_0 libnghttp2 1.43.0 h812cca2_0 conda-forge libpng 1.6.37 hbc83047_0 libprotobuf 3.17.2 h4ff587b_1 libssh2 1.9.0 h1ba5d50_1 libstdcxx-ng 9.3.0 hd4cf53a_17 libtasn1 4.16.0 h27cfd23_0 libthrift 0.14.2 he6d91bd_1 conda-forge libtiff 4.2.0 h85742a9_0 libunistring 0.9.10 h27cfd23_0 libuv 1.40.0 h7b6447c_0 libwebp-base 1.2.0 h27cfd23_0 lz4-c 
1.9.3 h295c915_1 mccabe 0.6.1 pypi_0 pypi metaflow 2.4.0 pypi_0 pypi mkl 2021.3.0 h06a4308_520 mkl-service 2.4.0 py38h7f8727e_0 mkl_fft 1.3.0 py38h42c9631_2 mkl_random 1.2.2 py38h51133e4_0 multidict 5.1.0 py38h27cfd23_2 multiprocess 0.70.12.2 py38h497a2fe_0 conda-forge ncurses 6.2 he6710b0_1 nettle 3.7.3 hbbd107a_1 ninja 1.10.2 hff7bd54_1 numpy 1.21.2 pypi_0 pypi numpy-base 1.20.3 py38h74d4b33_0 olefile 0.46 pyhd3eb1b0_0 openh264 2.1.0 hd408876_0 openjpeg 2.4.0 h3ad879b_0 openssl 1.1.1l h7f8727e_0 orc 1.6.9 ha97a36c_3 packaging 21.0 pyhd8ed1ab_0 conda-forge pandas 1.2.5 py38h1abd341_0 conda-forge pathtools 0.1.2 pypi_0 pypi pillow 8.3.1 py38h2c7a002_0 pip 21.2.4 py38h06a4308_0 platformdirs 2.4.0 pypi_0 pypi promise 2.3 pypi_0 pypi protobuf 3.18.1 pypi_0 pypi psutil 5.8.0 pypi_0 pypi pyarrow 3.0.0 py38he0739d4_3 pycparser 2.20 pyh9f0ad1d_2 conda-forge pylint 2.11.1 pypi_0 pypi pyopenssl 21.0.0 pyhd8ed1ab_0 conda-forge pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge pysocks 1.7.1 py38h578d9bd_3 conda-forge python 3.8.11 h12debd9_0_cpython python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python-xxhash 2.0.2 py38h497a2fe_0 conda-forge python_abi 3.8 1_cp38 huggingface pytorch 1.9.1 py3.8_cuda11.1_cudnn8.0.5_0 pytorch pytz 2021.3 pyhd8ed1ab_0 conda-forge pyyaml 5.4.1 pypi_0 pypi re2 2021.08.01 h9c3ff4c_0 conda-forge readline 8.1 h27cfd23_0 regex 2021.10.8 pypi_0 pypi requests 2.26.0 pyhd8ed1ab_0 conda-forge s3transfer 0.5.0 pypi_0 pypi sacremoses 0.0.46 pypi_0 pypi sentry-sdk 1.4.3 pypi_0 pypi setuptools 58.0.4 py38h06a4308_0 shortuuid 1.0.1 pypi_0 pypi six 1.16.0 pyhd3eb1b0_0 smmap 4.0.0 pypi_0 pypi snappy 1.1.8 he1b5a44_3 conda-forge sqlite 3.36.0 hc218d9a_0 subprocess32 3.5.4 pypi_0 pypi termcolor 1.1.0 pypi_0 pypi tk 8.6.11 h1ccaba5_0 tokenizers 0.10.3 pypi_0 pypi toml 0.10.2 pypi_0 pypi torchaudio 0.9.1 py38 pytorch torchvision 0.10.1 py38_cu111 pytorch tqdm 4.62.3 pypi_0 pypi transformers 4.12.0.dev0 pypi_0 pypi typing-extensions 3.10.0.2 hd3eb1b0_0 typing_extensions 3.10.0.2 pyh06a4308_0 uriparser 0.9.3 he1b5a44_1 conda-forge urllib3 1.26.7 pyhd8ed1ab_0 conda-forge utf8proc 2.6.1 h27cfd23_0 wandb 0.12.4 pypi_0 pypi wheel 0.37.0 pyhd3eb1b0_1 wrapt 1.12.1 pypi_0 pypi xxhash 0.8.0 h7f98852_3 conda-forge xz 5.2.5 h7b6447c_0 yarl 1.6.3 py38h497a2fe_2 conda-forge yaspin 2.1.0 pypi_0 pypi zipp 3.6.0 pyhd8ed1ab_0 conda-forge zlib 1.2.11 h7b6447c_3 zstd 1.4.9 haebb681_0 ``` ## Information The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I am using the script from https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py for further pretraining (MLM task) a FlauBERT model on my corpus starting from the FlauBERT checkpoint. The issue occurs when adding tokens to the vocabulary of the tokenizer. Otherwise, the default tokenizer works fine. ## To reproduce Steps to reproduce the behavior: 1. 
add tokens to FlauBERT tokenizer ```python tokenizer = AutoTokenizer.from_pretrained('flaubert/flaubert_base_cased') tokenizer.vocab_size # should return 68729 additional_tokens = ["locateur", "1er", "1619", "ORDONNE", "CONDAMNE", "TRIBUNAL", "C.c.Q.", "payable", "RÉSILIE", "82.1", "locatrice", "L.R.L.", "CONSIDÉRANT", "résilié", "REJETTE", "1883", "RÉSERVE", "CONSTATE", "ACCUEILLE", "DÉCLARE", "AUDIENCE", "PERMET", "RÉCLAME", "LOCATEUR", "AUTORISE"] tokenizer.add_tokens(additional_tokens) tokenizer.vocab_size # should return 68753 tokenizer.save_pretrained(PATH_TO_SAVING_DIRECTORY) ``` 2. select custom tokenizer as argument of `run_mlm.py`. I made a bash script: ```bash args=( --model_name_or_path flaubert/flaubert_base_cased # if not define, training from scratch --tokenizer_name PATH_TO_SAVING_DIRECTORY #--use_fast_tokenizer --train_file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/input/fpt_input_toy_train.txt --validation_file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/input/fpt_input_toy_valid.txt #--per_device_train_batch_size 8 #--per_device_eval_batch_size 8 --preprocessing_num_workers 12 --mlm_probability 0.15 #--pad_to_max_length # disable for dynamic padding (pad to the max length in the batch) --line_by_line --output_dir /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/output_custom-flaubert-base/ #--overwrite_output_dir #enable if resume from checkpoint --do_train --do_eval --do_predict --evaluation_strategy epoch # default is no evaluation at all --num_train_epochs 20 --save_strategy epoch --seed 21 --logging_strategy steps --logging_steps 500 --dataloader_num_workers 12 --run_name a21_flaubert_mlm --dataloader_pin_memory ) export CUDA_VISIBLE_DEVICES=0 time python run_mlm.py "${args[@]}" ``` 3. The full output is the following : ``` 10/15/2021 07:19:32 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False 10/15/2021 07:19:32 - INFO - __main__ - Training/evaluation parameters TrainingArguments( _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, dataloader_drop_last=False, dataloader_num_workers=12, dataloader_pin_memory=True, ddp_find_unused_parameters=None, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=True, do_train=True, eval_accumulation_steps=None, eval_steps=None, evaluation_strategy=IntervalStrategy.EPOCH, fp16=False, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, gradient_accumulation_steps=1, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, hub_model_id=None, hub_strategy=HubStrategy.EVERY_SAVE, [211/1901] hub_token=<HUB_TOKEN>, ignore_data_skip=False, label_names=None, label_smoothing_factor=0.0, learning_rate=5e-05, length_column_name=length, load_best_model_at_end=False, local_rank=-1, log_level=-1, log_level_replica=-1, log_on_each_node=True, logging_dir=/data/rali6/Tmp/salaunol/_NEXT/a21/fpt/output_custom-flaubert-base/runs/Oct15_07-19-32_octal16, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=500, logging_strategy=IntervalStrategy.STEPS, lr_scheduler_type=SchedulerType.LINEAR, max_grad_norm=1.0, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=20.0, output_dir=/data/rali6/Tmp/salaunol/_NEXT/a21/fpt/output_custom-flaubert-base/, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=8, per_device_train_batch_size=8, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, 
push_to_hub_token=<PUSH_TO_HUB_TOKEN>, remove_unused_columns=True, report_to=['wandb'], resume_from_checkpoint=None, run_name=a21_flaubert_mlm, save_on_each_node=False, save_steps=500, save_strategy=IntervalStrategy.EPOCH, save_total_limit=None, seed=21, sharded_ddp=[], skip_memory_metrics=True, tpu_metrics_debug=False, tpu_num_cores=None, use_legacy_prediction_loop=False, warmup_ratio=0.0, warmup_steps=0, weight_decay=0.0, xpu_backend=None, ) 10/15/2021 07:19:32 - WARNING - datasets.builder - Using custom data configuration default-df960222b6433548 10/15/2021 07:19:32 - INFO - datasets.builder - Overwrite dataset info from restored data version. 10/15/2021 07:19:32 - INFO - datasets.info - Loading Dataset info from /data/rali6/Tmp/salaunol/_NEXT/hf_datasets_cache/text/default-df960222b6433548/0.0.0/e16f44aa1b321ece1f87b0797 7cc5d70be93d69b20486d6dacd62e12cf25c9a5 10/15/2021 07:19:32 - WARNING - datasets.builder - Reusing dataset text (/data/rali6/Tmp/salaunol/_NEXT/hf_datasets_cache/text/default-df960222b6433548/0.0.0/e16f44aa1b321ece1f87b07 977cc5d70be93d69b20486d6dacd62e12cf25c9a5) 10/15/2021 07:19:32 - INFO - datasets.info - Loading Dataset info from /data/rali6/Tmp/salaunol/_NEXT/hf_datasets_cache/text/default-df960222b6433548/0.0.0/e16f44aa1b321ece1f87b0797 7cc5d70be93d69b20486d6dacd62e12cf25c9a5 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 105.12it/s] [INFO|configuration_utils.py:584] 2021-10-15 07:19:32,706 >> loading configuration file https://huggingface.co/flaubert/flaubert_base_cased/resolve/main/config.json from cache at /d ata/rali6/Tmp/salaunol/_NEXT/transformers_cache/0b9ef58865bb61b2a44569c51b24b441c7b6b49ba63c659fc4ad5d61ffa011d6.c03a6cc0529664af7ebd7b4b385954d9cd0071c3d965d9377ab407e2eaa06918 [INFO|configuration_utils.py:621] 2021-10-15 07:19:32,708 >> Model config FlaubertConfig { "amp": 1, "architectures": [ "FlaubertWithLMHeadModel" ], "asm": false, "attention_dropout": 0.1, "bos_index": 0, "bos_token_id": 0, "bptt": 512, "causal": false, "clip_grad_norm": 5, "dropout": 0.1, "emb_dim": 768, "embed_init_std": 0.02209708691207961, "encoder_only": true, "end_n_top": 5, "eos_index": 1, "fp16": true, "gelu_activation": true, "group_by_size": true, "id2lang": { "0": "fr" }, "init_std": 0.02, "is_encoder": true, "lang2id": { "fr": 0 }, "lang_id": 0, [118/1901] "langs": [ "fr" ], "layer_norm_eps": 1e-12, "layerdrop": 0.0, "lg_sampling_factor": -1, "lgs": "fr", "mask_index": 5, "mask_token_id": 0, "max_batch_size": 0, "max_position_embeddings": 512, "max_vocab": -1, "mlm_steps": [ [ "fr", null ] ], "model_type": "flaubert", "n_heads": 12, "n_langs": 1, "n_layers": 12, "pad_index": 2, "pad_token_id": 2, "pre_norm": false, "sample_alpha": 0, "share_inout_emb": true, "sinusoidal_embeddings": false, "start_n_top": 5, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "first", "summary_use_proj": true, "tokens_per_batch": -1, "transformers_version": "4.12.0.dev0", "unk_index": 3, [81/1901] "use_lang_emb": true, "vocab_size": 68729, "word_blank": 0, "word_dropout": 0, "word_keep": 0.1, "word_mask": 0.8, "word_mask_keep_rand": "0.8,0.1,0.1", "word_pred": 0.15, "word_rand": 0.1, "word_shuffle": 0 } [INFO|tokenization_utils_base.py:1671] 2021-10-15 07:19:32,932 >> Didn't find file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/custom-flaubert_flaubert_base_cased/tokenizer.json. We won' t load it. 
[INFO|tokenization_utils_base.py:1740] 2021-10-15 07:19:32,932 >> loading file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/custom-flaubert_flaubert_base_cased/vocab.json [INFO|tokenization_utils_base.py:1740] 2021-10-15 07:19:32,933 >> loading file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/custom-flaubert_flaubert_base_cased/merges.txt [INFO|tokenization_utils_base.py:1740] 2021-10-15 07:19:32,933 >> loading file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/custom-flaubert_flaubert_base_cased/added_tokens.json [INFO|tokenization_utils_base.py:1740] 2021-10-15 07:19:32,933 >> loading file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/custom-flaubert_flaubert_base_cased/special_tokens_map.json [INFO|tokenization_utils_base.py:1740] 2021-10-15 07:19:32,933 >> loading file /data/rali6/Tmp/salaunol/_NEXT/a21/fpt/custom-flaubert_flaubert_base_cased/tokenizer_config.json [INFO|tokenization_utils_base.py:1740] 2021-10-15 07:19:32,933 >> loading file None [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,080 >> Adding locateur to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,080 >> Adding 1619 to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding ORDONNE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding CONDAMNE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding TRIBUNAL to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding C.c.Q. to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding payable to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding RÉSILIE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding 82.1 to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding locatrice to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding L.R.L. 
to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding CONSIDÉRANT to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,081 >> Adding résilié to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding REJETTE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding 1883 to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding RÉSERVE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding CONSTATE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding ACCUEILLE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding DÉCLARE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding AUDIENCE to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding PERMET to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding RÉCLAME to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,082 >> Adding LOCATEUR to the vocabulary [INFO|tokenization_utils.py:414] 2021-10-15 07:19:33,083 >> Adding AUTORISE to the vocabulary [INFO|modeling_utils.py:1341] 2021-10-15 07:19:33,186 >> loading weights file https://huggingface.co/flaubert/flaubert_base_cased/resolve/main/pytorch_model.bin from cache at /data/ rali6/Tmp/salaunol/_NEXT/transformers_cache/36e988379f6754002245b872381a8b00f6c3a5e3bb887031aef3d85b89cf0122.fa9ba484eef0a0fa228af0c4972ef470103b3e08b15aee50f1a99acf3b33086e [INFO|modeling_utils.py:1606] 2021-10-15 07:19:40,271 >> All model checkpoint weights were used when initializing FlaubertWithLMHeadModel. [INFO|modeling_utils.py:1614] 2021-10-15 07:19:40,271 >> All the weights of FlaubertWithLMHeadModel were initialized from the model checkpoint at flaubert/flaubert_base_cased. If your task is similar to the task the model of the checkpoint was trained on, you can already use FlaubertWithLMHeadModel for predictions without further training. 
Running tokenizer on dataset line_by_line #0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.78ba/s] Running tokenizer on dataset line_by_line #1: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.89ba/s] Running tokenizer on dataset line_by_line #2: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.35ba/s] Running tokenizer on dataset line_by_line #3: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.45ba/s] Running tokenizer on dataset line_by_line #4: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.32ba/s] Running tokenizer on dataset line_by_line #5: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.38ba/s] Running tokenizer on dataset line_by_line #6: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.37ba/s] Running tokenizer on dataset line_by_line #7: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.26ba/s] Running tokenizer on dataset line_by_line #8: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.34ba/s] Running tokenizer on dataset line_by_line #9: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.30ba/s] Running tokenizer on dataset line_by_line #10: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.44ba/s] Running tokenizer on dataset line_by_line #11: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2.50ba/s] Running tokenizer on dataset line_by_line #0: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.40ba/s] Running tokenizer on dataset line_by_line #1: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.48ba/s] Running tokenizer on dataset line_by_line #2: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.62ba/s] Running tokenizer on dataset line_by_line #3: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.41ba/s] Running tokenizer on dataset line_by_line #4: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.60ba/s] Running tokenizer on dataset line_by_line #5: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.64ba/s] Running tokenizer on dataset line_by_line #6: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.65ba/s] Running tokenizer on dataset line_by_line #7: 
100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.69ba/s] Running tokenizer on dataset line_by_line #8: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.51ba/s] Running tokenizer on dataset line_by_line #9: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.70ba/s] Running tokenizer on dataset line_by_line #10: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.90ba/s] Running tokenizer on dataset line_by_line #11: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6.00ba/s] [INFO|trainer.py:540] 2021-10-15 07:20:26,264 >> The following columns in the training set don't have a corresponding argument in `FlaubertWithLMHeadModel.forward` and have been ig nored: special_tokens_mask. [INFO|trainer.py:1196] 2021-10-15 07:20:26,296 >> ***** Running training *****█████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.70ba/s] [INFO|trainer.py:1197] 2021-10-15 07:20:26,296 >> Num examples = 4937 [INFO|trainer.py:1198] 2021-10-15 07:20:26,297 >> Num Epochs = 20████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.52ba/s] [INFO|trainer.py:1199] 2021-10-15 07:20:26,297 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1200] 2021-10-15 07:20:26,297 >> Total train batch size (w. parallel, distributed & accumulation) = 8███████████████████████████████| 1/1 [00:00<00:00, 5.71ba/s] [INFO|trainer.py:1201] 2021-10-15 07:20:26,297 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1202] 2021-10-15 07:20:26,297 >> Total optimization steps = 12360███████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5.91ba/s] [INFO|integrations.py:500] 2021-10-15 07:20:26,298 >> Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true" wandb: Currently logged in as: osalaun (use `wandb login --relogin` to force relogin)██████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6.02ba/s] wandb: Tracking run with wandb version 0.12.4 wandb: Syncing run a21_flaubert_mlm wandb: View project at https://wandb.ai/osalaun/huggingface wandb: View run at https://wandb.ai/osalaun/huggingface/runs/1dlneyei wandb: Run data is saved locally in /u/salaunol/Documents/_2021_automne/pil0_pretrain_lm/wandb/run-20211015_072026-1dlneyei wandb: Run `wandb offline` to turn off syncing. 
0%| | 0/12360 [00:00<?, ?it/s] Traceback (most recent call last): File "run_mlm.py", line 552, in <module> main() File "run_mlm.py", line 501, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/u/salaunol/anaconda3/envs/fpt/lib/python3.8/site-packages/transformers/trainer.py", line 1316, in train tr_loss_step = self.training_step(model, inputs) File "/u/salaunol/anaconda3/envs/fpt/lib/python3.8/site-packages/transformers/trainer.py", line 1849, in training_step loss = self.compute_loss(model, inputs) File "/u/salaunol/anaconda3/envs/fpt/lib/python3.8/site-packages/transformers/trainer.py", line 1881, in compute_loss outputs = model(**inputs) File "/u/salaunol/anaconda3/envs/fpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/u/salaunol/anaconda3/envs/fpt/lib/python3.8/site-packages/transformers/models/xlm/modeling_xlm.py", line 759, in forward outputs = self.pred_layer(output, labels) # (loss, logits) or (logits,) depending on if labels are provided. File "/u/salaunol/anaconda3/envs/fpt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/u/salaunol/anaconda3/envs/fpt/lib/python3.8/site-packages/transformers/models/xlm/modeling_xlm.py", line 664, in forward scores.view(-1, self.n_words), y.view(-1), reduction="elementwise_mean" RuntimeError: shape '[-1, 68729]' is invalid for input of size 45651992 wandb: Waiting for W&B process to finish, PID 1168625... (failed 1). Press ctrl-c to abort syncing. wandb: wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s) wandb: Synced a21_flaubert_mlm: https://wandb.ai/osalaun/huggingface/runs/1dlneyei wandb: Find logs at: ./wandb/run-20211015_072026-1dlneyei/logs/debug.log wandb: ``` I am not sure of what the problem is, it seems that the new vocabulary size (68753) did not replace the default one (68729). ## Expected behavior `run_mlm.py` should be able to perform MLM with FlauBERT with a customized tokenizer (tokens added to default vocabulary).
10-15-2021 11:44:18
10-15-2021 11:44:18
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @oliviersalaun, I think the problem could be that the tokenizer's vocabulary size is not equal to the model's vocabulary size after you add tokens to the tokenizer. Could you maybe sure that after step 1.) that `model.config.vocab_size = tokenizer.vocab_size`? <|||||>Hey @patrickvonplaten, Thanks for replying. There is indeed a mismatch between the `vocab_size`s, I noticed something strange : ```python tokenizer_default = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") tokenizer_custom = AutoTokenizer.from_pretrained(PATH_TO_CUSTOM_TOKENIZER_FOLDER) print(tokenizer_default.vocab_size) print(len(tokenizer_default)) print(tokenizer_custom.vocab_size) print(len(tokenizer_custom)) ``` That gives: ``` 68729 68729 68729 68753 ``` It seems that `tokenizer_custom.vocab_size` ignored the added tokens. Is this normal?<|||||>I think this is expected. If you change the tokenizer (*e.g.* add new tokens), you also have to make sure that the model's input and output embedding matrix is resized correctly. E.g. you should call: `model.resize_embeddings(len(tokenizer))` on the model before training.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
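To make the suggestion in the last comment concrete, here is a minimal sketch of keeping the tokenizer and the model embedding matrix in sync after adding tokens. The token list and save path are shortened/hypothetical, and the resize method on `PreTrainedModel` is `resize_token_embeddings`; whether FlauBERT's prediction layer needs any additional adjustment is worth verifying separately.

```python
# minimal sketch: add tokens, then resize the model embeddings to len(tokenizer)
from transformers import AutoTokenizer, FlaubertWithLMHeadModel

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
model = FlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased")

tokenizer.add_tokens(["locateur", "ORDONNE", "CONDAMNE"])  # shortened token list

# `tokenizer.vocab_size` ignores added tokens, so use len(tokenizer) here
model.resize_token_embeddings(len(tokenizer))

tokenizer.save_pretrained("custom_flaubert")  # hypothetical directory
model.save_pretrained("custom_flaubert")      # point run_mlm.py at this path
```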
transformers
14,019
closed
Add SegFormer
# What does this PR do? This PR adds [SegFormer](https://arxiv.org/abs/2105.15203), a new model by NVIDIA that is surprisingly simple, yet very powerful for semantic segmentation of images. It uses a hierarchical Transformer as backbone, and an all-MLP decode head. I've implemented 3 models: * `SegformerModel` (backbone-only) * `SegformerForImageClassification` (backbone + classifier head) * `SegformerForSemanticSegmentation` (backbone + semantic segmentation all-MLP head) Models are on the hub (with approval from the author): https://huggingface.co/models?other=segformer Here's how to use the semantic segmentation model: ``` from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation from PIL import Image feature_extractor = SegformerFeatureExtractor(do_random_crop=False) model = SegformerForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512") image = Image.open("...") # prepare image for model pixel_values = feature_extractor(image, return_tensors="pt").pixel_values # forward pass outputs = model(pixel_values) # logits are of shape (batch_size, num_labels, height/4, width/4) logits = outputs.logits ``` Quick inference notebook with visualization: https://colab.research.google.com/drive/1Kc1VLuFrWUPz0rZXA2E_rKQqdK7kV2iH?usp=sharing ## To do/questions - [ ] Decide on the default values of the feature extractor (which are kind of arbitrary right now) - [ ] I've called the decode head `SegformerDecodeHead`, rather than SegformerDecoder. It's more of a lightweight head, than a decoder. Is this ok? - [ ] Add padding of images + segmentation maps (probably a single function in `image_utils.py`), cc @sgugger. Currently, I rely on `torch.nn.functional.pad`, which makes the feature extractor depend on PyTorch. It could also make sense to do it in Numpy (this model for example pads after normalizing, so it would benefit from it as the output after normalization are Numpy arrays). - [x] Make sure model doesn't return hidden states when the user doesn't want to - [ ] Model currently returns a `SequenceClassifierOutput`, however this will render wrong shapes of logits in the docs. Logits are actually of shape (batch_size, num_labels, height/4, width/4). - [ ] Add model cards (author has joined the NVIDIA org on the hub and might create these)
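One post-processing step that may be worth showing alongside the inference snippet above: since the logits come out at 1/4 of the input resolution, they are typically upsampled back to the image size before taking the argmax. A rough sketch, continuing from the `image` and `outputs` variables of the snippet above (the interpolation mode is just a common choice, not prescribed by the model):

```python
# rough sketch: upsample the 1/4-resolution logits back to the original image size
import torch

upsampled = torch.nn.functional.interpolate(
    outputs.logits,               # (batch_size, num_labels, height/4, width/4)
    size=image.size[::-1],        # PIL gives (width, height); interpolate expects (height, width)
    mode="bilinear",
    align_corners=False,
)
segmentation_map = upsampled.argmax(dim=1)[0]  # (height, width) map of predicted label ids
```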
10-15-2021 11:43:57
10-15-2021 11:43:57
The PR is ready for review; the only thing left to add is padding.
transformers
14,018
closed
Replace assertions with ValueError exceptions
# What does this PR do? Replaces the assertions in generation_logits_process.py with ValueError exceptions. Contributes towards fixing issue #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-15-2021 08:09:49
10-15-2021 08:09:49
Can someone please help me understand this failed test (test_push_to_hub)? I cannot see how my changes can affect this...<|||||>Anyway, it works now with an explicit length check using `len`.
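For readers coming from the linked issue, the overall shape of this kind of change is small; a toy, self-contained illustration (the parameter name and message are modeled on the logits processors, not copied from this PR):

```python
# before: assert isinstance(min_length, int) and min_length >= 0
# after: an explicit check that also survives `python -O` and gives a readable error
def validate_min_length(min_length):
    if not isinstance(min_length, int) or min_length < 0:
        raise ValueError(f"`min_length` has to be a non-negative integer, but is {min_length}")

validate_min_length(5)        # passes silently
try:
    validate_min_length(-1)   # raises
except ValueError as err:
    print(err)
```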
transformers
14,017
closed
[Bug?] Warning about weights not being initialized, even though not part of the model
Just opening this issue to keep track of it. As reported on the [forum](https://discuss.huggingface.co/t/some-weights-of-bertmodel-were-not-initialized-from-the-model-checkpoint/3805/4), sometimes a warning gets printed about certain weights not being initialized while they are not part of the model. These always seem to be pooler weights. ## Example 1 `BertForMaskedLM` only has a language modeling head and no pooler, yet the following warning might get printed: ``` Some weights of BertModel were not initialized from the model checkpoint at ./output_model and are newly initialized: ['bert.pooler.dense.weight', 'bert.pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` Even though `BertForMaskedLM` uses a `BertModel` with `add_pooling_layer=False` as can be seen [here](https://github.com/huggingface/transformers/blob/d5b82bb70c2e8c4b184a6f2a7d1c91d7fd156956/src/transformers/models/bert/modeling_bert.py#L1292). ## Example 2 I've added the TrOCR models in #13874, and when converting these models, I'm using a `ViTModel` with `add_pooling_layer=False` and a `TrOCRForCausalLM`. However, when you do the following: ``` from transformers import VisionEncoderDecoderModel model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten") ``` This prints the following warning: ``` Some weights of VisionEncoderDecoderModel were not initialized from the model checkpoint at microsoft/trocr-base-handwritten and are newly initialized: ['encoder.pooler.dense.weight', 'encoder.pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` This warning shouldn't be printed, as the encoder shouldn't have a pooler. Curious to know what's the cause of this!
10-15-2021 07:49:15
10-15-2021 07:49:15
I cannot reproduce this. Does it only occur for custom models? I get the expected behavior. ``` > from transformers import BertForMaskedLM > model = BertForMaskedLM.from_pretrained("bert-base-cased") Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForMaskedLM: ['cls.seq_relationship.bias', 'cls.seq_relationship.weight'] - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). ``` For the second part (and this is a guess), are you perhaps indeed using `add_pooling_layer` for conversion (where you add the enc/dec manually), but not so when you are loading `VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")`? From what I remember, `add_pooling_layer` is True by default. So if you are just loading default configs, then the pooler is still enabled by default. So maybe this https://github.com/NielsRogge/transformers/blob/f3d9e9483d8d6b915260880312cdd58518e68cf4/src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py#L173-L174 has to be something like ```python if encoder is None: encoder = AutoModel.from_config(config.encoder, add_pooling_layer=False) ```<|||||>Ok, so clearly not a bug! Thanks for investigating. Indeed, whether or not to add a pooling layer is not part of the config, hence it will add it as it is set to True by default. We could indeed add `add_pooling_layer=False`, as I don't think any encoder-decoder model needs the pooler of the encoder (but only the final hidden states). However, not every vision encoder will have an `add_pooling_layer` argument in its init (the ones that are currently added, namely ViT, BEiT and DEiT do, but I'm working on a new one that doesn't have it). <|||||>It might be worth it to have all base models inits have a `**kwargs` to absorb any unrelated arguments. _But_ I am also not a big fan of that because it can lead to non-obvious errors or misunderstandings, especially for the user. Such as "Well, I passed add_pooling_layer=True, and I do not get an error, but still I do not get a pooling layer." I am not sure what the best design choice is here.<|||||>> It might be worth it to have all base models inits have a **kwargs to absorb any unrelated arguments. No that's definitely something we don't want. Otherwise it leads to all the models and the `AutoModel` API accepting any kwarg, so any typo will be forgotten silently: ``` AutoModelForSequenceClassiciation.from_pretrained("bert-base-cased", num_label=20) ``` will silently leave the labels at 2 for instance, producing bugs that are hard to debug.<|||||>Yes, that was exactly the "against" argument that I wrote after the _But_. :D Making it part of the configs seems sensible, as it pertains to the structure of the model. But perhaps there are arguments against that as well?<|||||>No the config is shared for all task-specific models and you can have the same base model with the same config used with different kinds of heads that require (or not) the pooling layer. That's why this parameter was not set in the config.<|||||>This issue has been automatically marked as stale because it has not had recent activity. 
If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
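For anyone who wants to see the behaviour discussed above directly, a small check along these lines can be run (model names are only examples):

```python
# checks which loaded architectures actually instantiate pooler weights
from transformers import BertForMaskedLM, BertModel

# BertForMaskedLM builds its encoder with add_pooling_layer=False, so no pooler parameters exist
mlm = BertForMaskedLM.from_pretrained("bert-base-cased")
print(any("pooler" in name for name, _ in mlm.named_parameters()))  # expected: False

# a bare BertModel keeps the pooler by default, unless it is disabled explicitly
default_model = BertModel.from_pretrained("bert-base-cased")
no_pooler = BertModel.from_pretrained("bert-base-cased", add_pooling_layer=False)
print(any("pooler" in name for name, _ in default_model.named_parameters()))  # expected: True
print(any("pooler" in name for name, _ in no_pooler.named_parameters()))      # expected: False
```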
transformers
14,016
closed
Fix weight loading issue
# What does this PR do? Fix #14002 and add 2 more tests (I uploaded the converted TF checkpoint to [ydshieh](https://huggingface.co/ydshieh/bert2bert-cnn_dailymail-fp16)). It might be better for @patrickvonplaten to upload it to [patrickvonplaten](https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16/blob/main/README.md). ## Who can review? @Rocketknight1 @patrickvonplaten @LysandreJik
10-15-2021 06:41:55
10-15-2021 06:41:55
The issue in #14002 comes from the fact that, when using `from_pt=True`, the block shown at the end doesn't use `load_weight_prefix`. However, that prefix is required to extend the variable scope for the 2 components (encoder & decoder) of a TF composite model - to make the subsequent `save_pretrained -> from_pretrained` work after a creation from `from_encoder_decoder_pretrained`. I feel it would be better to modify `load_pytorch_weights_in_tf2_model` to address this situation, but I tried to avoid modifying this core Hugging Face TF method. https://github.com/huggingface/transformers/blob/d5b82bb70c2e8c4b184a6f2a7d1c91d7fd156956/src/transformers/modeling_tf_utils.py#L1445-L1449 <|||||>Looks great to me!<|||||>Hi, I just realized that `TFEncoderDecoderModel` was released with `v4.12.0` without this fix being merged.
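For context, here is a self-contained sketch of the save/reload round trip that the added tests are meant to protect. The tiny random configs are only to keep it cheap; this is not the actual test code from the PR:

```python
# build a small TF encoder-decoder from scratch, save it, reload it, and compare outputs
import numpy as np
import tensorflow as tf
from transformers import BertConfig, EncoderDecoderConfig, TFEncoderDecoderModel

enc_cfg = BertConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64)
dec_cfg = BertConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64)
# from_encoder_decoder_configs sets is_decoder / add_cross_attention on the decoder config
config = EncoderDecoderConfig.from_encoder_decoder_configs(enc_cfg, dec_cfg)
model = TFEncoderDecoderModel(config=config)

input_ids = tf.constant([[1, 2, 3, 4]])
decoder_input_ids = tf.constant([[1, 2]])
before = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits

model.save_pretrained("./tmp_tf_enc_dec")
reloaded = TFEncoderDecoderModel.from_pretrained("./tmp_tf_enc_dec")
after = reloaded(input_ids=input_ids, decoder_input_ids=decoder_input_ids).logits

# identical outputs mean the weights survived the save/reload round trip
print(np.allclose(before.numpy(), after.numpy(), atol=1e-5))
```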
transformers
14,015
closed
Translate README.md to Korean
# What does this PR do? Add README_ko.md and links to direct users to each README. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-15-2021 04:51:21
10-15-2021 04:51:21
Hello @yeounyi, Thank you very much for your PR, making `transformers` more user-friendly for Korean users! Would you mind adding the proper information in the [tooling](https://github.com/huggingface/transformers/blob/cde0c750af2fae6848ed3ee8be381b4f1230ecd0/utils/check_copies.py#L37-L54) to enable automated model list synchronization, so that once a new model is added to `README.md`, it will be automatically synchronized into the Korean readme?<|||||>@qqaatw I added README_ko.md to the tooling :) <|||||>cc @sgugger
transformers
14,014
closed
[Typo] Replace "Masked" with "Causal" in TF CLM script
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Minor typos fixed. (Tensorflow CLM script had leftovers from the MLM script.) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2021 04:27:26
10-15-2021 04:27:26
Good catch! Feel free to tag me directly with TF issues like these, I'll try to get to them quickly!
transformers
14,013
closed
[ONNX] Add symbolic function for XSoftmax op for exporting to ONNX.
# What does this PR do? For such a custom operator, the PyTorch ONNX exporter will try to call its symbolic() function when exporting to an ONNX file. This PR adds that implementation so that the operator can be exported successfully.
10-15-2021 04:27:21
10-15-2021 04:27:21
Might be of interest to @BigBird01 <|||||>> Might be of interest to @BigBird01 This is cool! Thanks!<|||||>@BigBird01 The latest CI failure does not seem related to my changes. Could you please take a look and see if we can merge this change? Thanks.<|||||>> Seems good to me, thanks for your great contribution! > > Quick question: do you plan to add the [ONNX DeBERTa export support](https://huggingface.co/transformers/serialization.html#exporting-transformers-models) now that you have added support for the problematic operator? No, no plan for this yet.<|||||>@fatcat-z Hi, I have a question about the way the symbolic conversion is implemented. Is there any reason why you are using the `uint8` type for some values? It seems to me that it would work with signed integers as well, and uint8 is not supported by TensorRT (it only supports signed integers).
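For readers unfamiliar with the mechanism, here is a generic illustration of the pattern the exporter relies on - this is not the XSoftmax code, just a toy custom op with a `symbolic()` hook:

```python
# toy custom autograd op; torch.onnx calls `symbolic` instead of tracing `forward`
import torch

class ScaledSoftmax(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, scale):
        return torch.softmax(x * scale, dim=-1)

    @staticmethod
    def symbolic(g, x, scale):
        # express the op with standard ONNX nodes so the exported graph stays portable
        scaled = g.op("Mul", x, g.op("Constant", value_t=torch.tensor(float(scale))))
        return g.op("Softmax", scaled, axis_i=-1)

out = ScaledSoftmax.apply(torch.randn(2, 4), 2.0)
print(out.shape)
```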
transformers
14,012
closed
Allow user to choose DDP backends: nccl or gloo (#13441)
null
10-15-2021 03:01:59
10-15-2021 03:01:59
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,011
closed
Allow user to choose DDP backends: nccl or gloo (#13441)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-15-2021 02:51:14
10-15-2021 02:51:14
transformers
14,010
closed
Bert: relative_key position embedding causes error in encoder-decoder setup
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: macOS 11.6 - Python version: 3.8.8 - PyTorch version (GPU?): 1.9.1 - Tensorflow version (GPU?): N/A - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik @patrickvonplaten <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): BertModel, EncoderDecoderModel The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Copy/paste the code snippet from below 2. Run the script <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ```python import torch from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel config = { 'hidden_size': 512, 'num_attention_heads': 8, 'position_embedding_type': 'relative_key_query' } encoder_config = BertConfig(**config) decoder_config = BertConfig(**config) config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config) model = EncoderDecoderModel(config) model.config.decoder.is_decoder = True model.config.decoder.add_cross_attention = True batch_size, src_len, tgt_len = 1, 2, 3 x = torch.zeros(batch_size, src_len).int() y = torch.zeros(batch_size, tgt_len).int() model(input_ids=x, decoder_input_ids=y) ``` ## Expected behavior The above code snippet is expected to run without errors. Instead, it produces error [1] exactly if `src_len == tgt_len`. This breaks any setup where source sequences may have a different length than the target sequence, which includes my setup. The same error occurs for the `relative_key` position embedding. The problem can be circumvented by padding the sequences to be the same length, but this is not a good solution with respect to performance, e.g. if the source sequence is much longer than the target sequence. Error [1]: ``` [...] File "~/opt/miniconda3/envs/symbolic-music/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 306, in forward relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) File "~/opt/miniconda3/envs/symbolic-music/lib/python3.8/site-packages/torch/functional.py", line 299, in einsum return _VF.einsum(equation, operands) # type: ignore[attr-defined] RuntimeError: einsum(): operands do not broadcast with remapped shapes [original->remapped]: [1, 8, 2, 64]->[1, 8, 1, 2, 64] [3, 3, 64]->[1, 1, 3, 3, 64] ``` <!-- A clear and concise description of what you would expect to happen. -->
10-14-2021 20:57:59
10-14-2021 20:57:59
I think this can be solved by setting a distance matrix with a shape `(target sequence length, source sequence length)` for the relative position embedding, although I'm not sure whether this approach is hypothetically reasonable to cross-attention. (The original [paper](https://arxiv.org/pdf/1803.02155.pdf) focuses on self-attention) ``` if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": seq_length_l = hidden_states.size()[1] seq_length_r = encoder_hidden_states.size()[1] if is_cross_attention else hidden_states.size()[1] position_ids_l = torch.arange(seq_length_l, dtype=torch.long, device=hidden_states.device).view(-1, 1) position_ids_r = torch.arange(seq_length_r, dtype=torch.long, device=hidden_states.device).view(1, -1) distance = position_ids_l - position_ids_r positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility if self.position_embedding_type == "relative_key": relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) attention_scores = attention_scores + relative_position_scores elif self.position_embedding_type == "relative_key_query": relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key ``` cc @patrickvonplaten <|||||>Sorry, to reply only now! Just attached a PR that should fix the problem. IMO, cross attention layers should never make use of positional encodings as they don't really make sense there. E.g. T5 uses relative position encodings as well and simply disables it for the cross attention layers: https://github.com/huggingface/transformers/blob/a3ded170e22b37027dab456a12ff2f523c99d998/src/transformers/models/t5/modeling_t5.py#L563 <|||||>Let me know what you guys think @qqaatw and @dvruette !
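To make the failure mode easier to see, here is a small standalone shape sketch that mirrors the dimensions from the error message above (toy zero tensors only):

```python
# why the relative-position einsum breaks in cross-attention when src_len != tgt_len
import torch

batch, heads, head_dim = 1, 8, 64
tgt_len, src_len = 3, 2   # decoder queries vs. encoder keys

query = torch.zeros(batch, heads, tgt_len, head_dim)
key = torch.zeros(batch, heads, src_len, head_dim)

# the distance matrix is built from the decoder hidden states only,
# so the relative embedding table is (tgt_len, tgt_len, head_dim)
positional_embedding = torch.zeros(tgt_len, tgt_len, head_dim)

# fine for the query term (pure self-attention shapes)
torch.einsum("bhld,lrd->bhlr", query, positional_embedding)

# but the key term would need a (tgt_len, src_len, head_dim) table in cross-attention,
# which the current code does not produce - hence the broadcast error
try:
    torch.einsum("bhrd,lrd->bhlr", key, positional_embedding)
except RuntimeError as err:
    print(err)
```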
transformers
14,009
closed
TF Model train and eval step metrics for seq2seq models.
When using a model with a seq2seq output, compute metrics against the logits. # What does this PR do? This PR changes the TF train and test steps so that metrics can be correctly computed when using Keras model.fit. The Keras Model train/test step functions are supposed to compare the labels (y_true) and the predictions (y_pred). The previous code passed the ModelOutput dataclass (basically a dict) as y_pred, which results in TF/Keras attempting to compute metrics between the variable 'y' (y_true) and each of the elements in the ModelOutput dict. ## Who can review? @sgugger @Rocketknight1
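A tiny illustration of the difference (toy shapes only, not taken from this PR's test):

```python
# what the Keras metric sees after this change: the logits tensor, not the ModelOutput dict
import tensorflow as tf

y_true = tf.constant([[2, 3]])            # (batch, seq_len) label token ids
logits = tf.random.uniform((1, 2, 10))    # (batch, seq_len, vocab_size) model logits

metric = tf.keras.metrics.SparseCategoricalAccuracy()
metric.update_state(y_true, logits)       # works: shapes line up as Keras expects
print(float(metric.result()))

# previously, train_step handed the whole output dict to Keras as y_pred, so Keras tried
# to match y_true against every entry of the dict and the metric broke for seq2seq outputs
```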
10-14-2021 14:50:42
10-14-2021 14:50:42
This seems like a good fix, thank you! The changes to `train_step` are very recent and we were expecting some initial problems like this. I'm going to do some testing with your PR today/tomorrow and hopefully approve and merge after that. <|||||>I did some quick testing - the impact of the change seems limited as `y_pred` is only used for computing metrics at that stage of the function, so I don't think this can have any major unwanted side-effects. Approving.<|||||>@Rocketknight1 I did some more testing. When one compiles the model multiple times the metric doesn't work correctly. If a model is compiled only once it does work. I don't know if this is a concern.<|||||>@Rocketknight1 Please take another look. The test makes sure that the metric now works as expected. I also removed a call to compute the loss on `test_step` which seems to me to be a duplicate.<|||||>@pedro-r-marques You're correct that some vestigial code made it into test_step - that's not great (at our end, you did a good job in spotting it!). Let me double-check that bit and tidy it up in your branch before we merge.<|||||>Done! The old code comes from a period when we were experimenting with a different way of handling the model's internal loss computations, and should have been removed. I fixed it now, and the rest of the code looks good. If you're happy with my changes, we can merge once the tests are good.<|||||>(That torch failure has nothing to do with this PR, don't worry)<|||||>@Rocketknight1 Checks are green now. Please merge the PR, if you are happy with it. Thanks a lot !<|||||>The rebase reverted the fix to the vestigial code, so I took it out again. Will merge once it's all green!<|||||>@Rocketknight1 apologies for squashing your changes unintentionally :-(. Cleaned up the git log; hopefully preserving your changes this time and took another roll at the CI dice.<|||||>@pedro-r-marques It's okay, I wrecked your changes too. Git is just hard, lol. Anyway, it looks good now and tests are green, so merging!
transformers
14,008
closed
[Testing] Move speech datasets to `hf-internal` testing ...
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ...and fix failing tests. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-14-2021 13:38:12
10-14-2021 13:38:12
transformers
14,007
closed
FX helper functions and quick fixes
# What does this PR do? This PR adds small helper functions and fixes small issues regarding `torch.fx` symbolic tracing for models from the library.
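For context, the tracing entry point these helpers build on can be exercised roughly like this (a sketch; the exact `symbolic_trace` signature has varied a bit across versions):

```python
# trace a small BERT into a torch.fx GraphModule using the library's tracer
from transformers import BertConfig, BertModel
from transformers.utils.fx import symbolic_trace

config = BertConfig(hidden_size=32, num_hidden_layers=2, num_attention_heads=2, intermediate_size=64)
model = BertModel(config)

traced = symbolic_trace(model, input_names=["input_ids", "attention_mask"])
print(traced.graph)  # the graph can now be inspected or transformed
```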
10-14-2021 12:42:03
10-14-2021 12:42:03
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Should this be merged @michaelbenayoun?<|||||>No, this is not up to date anymore. Closing this!
transformers
14,006
closed
Replace assertion with ValueError exception
# What does this PR do? Replaces the assertion in generation_flax_logits_process.py with a ValueError exception. Contributes towards fixing issue #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
10-14-2021 11:14:49
10-14-2021 11:14:49
transformers
14,005
closed
typo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-14-2021 10:54:39
10-14-2021 10:54:39
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,004
closed
typo
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-14-2021 10:48:32
10-14-2021 10:48:32
Do you mind running the code quality checks before we merge? You can do so by running the following from the root of your clone: ``` pip install -e .[quality] make fixup ```<|||||>I can but what is the point of that? I just changed a few letters in a documentation (.rst) file. No programming was involved. The failing check must have been caused by some previous commit.<|||||>Good question! The change comes from your modification: the style tool will try to fit all words that fit under the 120 character limit. Since you reduced the amount of characters contained in the line, my intuition tells me that it's trying to fit the word `tasks` in that line.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,003
closed
Integration.py
# 🚀 Feature request CometCallback in `integrations.py` is overriding my experiment object declared in the optimizer. ## Motivation Reporting logs to Comet through CometCallback overrides experiment objects that were already declared before the Trainer API is invoked. Is there any way for CometCallback to pick up the existing experiment object instead of overriding it? This way, we could not only use Comet ML's Optimizers but also important metrics like "log_confusion_matrix". If it's not viable, please let me know. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
10-14-2021 10:17:58
10-14-2021 10:17:58
Pinging @dsblank :) cc @sgugger <|||||>I think we have solved this issue here: https://github.com/comet-ml/issue-tracking/issues/434 And we are working on an internal refactor to allow more flexibility on the transformer side.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
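Until that refactor lands, one possible workaround — a sketch only, where `model`, `training_args`, the datasets, and the label arrays are assumed to be defined elsewhere — is to drop the auto-registered callback and drive your own Comet experiment directly:

```python
import comet_ml
from transformers import Trainer
from transformers.integrations import CometCallback

# Create and configure the experiment yourself, before building the Trainer.
experiment = comet_ml.Experiment(project_name="my-project")  # placeholder project name

trainer = Trainer(model=model, args=training_args, train_dataset=train_ds, eval_dataset=eval_ds)
# Remove the auto-registered callback so it does not create/override an experiment.
trainer.remove_callback(CometCallback)
trainer.train()

# Log extra metrics on the experiment object you control.
experiment.log_confusion_matrix(y_true=true_labels, y_predicted=predicted_labels)
```

This trades the automatic metric reporting of `CometCallback` for full control over the experiment object, so any metrics you want in Comet have to be logged manually.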
transformers
14,002
closed
TFEncoderDecoderModel loading TF weights issue
## Environment info - `transformers` version: 4.12.0.dev0 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.9.5 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information The recent added `TFEncoderDecoderModel` has an issue: In order to load from a PyTorch checkpoint, a workaround is ``` _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") _model.encoder.save_pretrained("./encoder") _model.decoder.save_pretrained("./decoder") model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ) ``` as stated in the documentation. However, saving and reloading won't load the TF weights correctly. ``` model.save_pretrained("./temp") model = TFEncoderDecoderModel.from_pretrained("./temp") # This has an issue. ``` ## To reproduce Steps to reproduce the behavior: ``` from transformers import EncoderDecoderModel, TFEncoderDecoderModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") # a workaround to load from pytorch checkpoint _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") _model.encoder.save_pretrained("./encoder") _model.decoder.save_pretrained("./decoder") model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ) # This is only for copying some specific attributes of this particular model. model.config = _model.config article = """ (CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. 
And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents.""" input_dict = tokenizer(article, return_tensors="tf") output_ids = model.generate(input_ids=input_dict["input_ids"], max_length=None).numpy().tolist() summary = tokenizer.batch_decode(output_ids, skip_special_tokens=True) print(summary) model.save_pretrained("./temp") model = TFEncoderDecoderModel.from_pretrained("./temp") output_ids = model.generate(input_ids=input_dict["input_ids"], max_length=None).numpy().tolist() summary = tokenizer.batch_decode(output_ids, skip_special_tokens=True) print(summary) ``` Outputs: Loading from PT weights as in the workaround ``` ["sae was founded in 1856, five years before the civil war. the fraternity has had to work hard to change recently ... ``` After saving and reloading the TF weights ``` ['banning figurative banning figurative grandma discontinued keynoteeborgronia encouraged ... ``` The warning given when reloading the TF weights ``` Some layers from the model checkpoint at ./temp were not used when initializing TFEncoderDecoderModel: ['bert/encoder/layer_._9/output/dense/bias:0' ... Some layers of TFEncoderDecoderModel were not initialized from the model checkpoint at ./temp and are newly initialized: ['encoder/bert/encoder/layer_._3/attention/self/key/bias:0' ... ``` ## Expected behavior The weights should be loaded correctly, and the outputs should be exactly the same. ## Remark - In `test_modeling_tf_encoder_decoder.py`, we have tests * load PT weights to a TF model (use the workaround) and check the PT / TF models give the same results. * create a TF model, save and reload, and check the results are the same However, there is no test combining these two cases. - I am working on this. - A fix for this is necessary if we want to have a converted TF checkpoint that could be used. - @Rocketknight1 @patrickvonplaten @LysandreJik
10-14-2021 08:17:09
10-14-2021 08:17:09
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,001
closed
How should I try to feed new memory states to the next segments
Here is a simple example of my training loop based on the Trainer class ```python transfo_xl_config = transformers.TransfoXLConfig(**hparams) model = transformers.TransfoXLForSequenceClassification(transfo_xl_config) training_args = transformers.TrainingArguments("test_trainer", no_cuda=True) trainer = transformers.Trainer( model=model, args=training_args, train_dataset=MyDataset('train'), eval_dataset=MyDataset('test') ) trainer.train() ``` The question is: how should I feed new memory states to the next segments in the next batch when using TransfoXLModel?
10-14-2021 03:37:09
10-14-2021 03:37:09
cc @sgugger <|||||>The `Trainer` does not support the transformer XL model, so you should run a manual training loop that you can power by [Accelerate](https://github.com/huggingface/accelerate) if you need distributed training.<|||||>> The `Trainer` does not support the transformer XL model, so you should run a manual training loop that you can power by [Accelerate](https://github.com/huggingface/accelerate) if you need distributed training. Thanks
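For reference, a minimal manual-loop sketch of carrying the memories across segments (the `hparams`, `train_dataloader`, and `optimizer` objects are assumed to exist; this is not a drop-in replacement for `Trainer`):

```python
import transformers

config = transformers.TransfoXLConfig(**hparams)  # `hparams` as in the question
model = transformers.TransfoXLForSequenceClassification(config)
model.train()

mems = None  # no memory before the first segment
for batch in train_dataloader:  # assumed to yield dicts with "input_ids" and "labels"
    outputs = model(input_ids=batch["input_ids"], labels=batch["labels"], mems=mems)
    # Carry the returned memories into the next segment, detached so that
    # gradients do not flow across segment boundaries.
    mems = [m.detach() for m in outputs.mems]
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```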
transformers
14,000
closed
Add strong test for configuration attributes
# What does this PR do? This PR adds tests to make sure every possible common kwarg is properly set in all the configurations. This is done by adding a common test in the config tests that tries to set all common attributes to a value different from the default, as set in the dictionary `config_common_kwargs`. That dictionary is itself tested against a base `PretrainedConfig` to make sure that if we change the defaults or add new attributes, `config_common_kwargs` is also updated. This should prevent #13992 from happening again. Fixes #13992
10-14-2021 01:23:18
10-14-2021 01:23:18
It has been run on all test model files in the second to last commit (before removing a fake modif introduced to trigger all modeling tests). The only failure there was the SQUAD example, failing on master because of the Datasets release.
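A rough sketch of the mechanism described above — the real `config_common_kwargs` in the PR covers every common attribute, the three entries here are only illustrative, and the method is meant to live in the shared configuration tester:

```python
# Values intentionally differ from the `PretrainedConfig` defaults.
config_common_kwargs = {
    "return_dict": False,
    "output_hidden_states": True,
    "num_beams": 3,
}

def test_config_common_kwargs_is_complete(self):
    config = self.config_class(**config_common_kwargs)
    for key, value in config_common_kwargs.items():
        self.assertEqual(getattr(config, key), value, msg=f"`{key}` was not set correctly")
```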
transformers
13,999
closed
Discrepancy in tokenization results using HF's AlbertTokenizer and sentencepiece library
Hi - I recently noticed that the tokenized results from AlbertTokenizer and the sentencepiece library differ for some inputs. Check below: ``` !pip install sentencepiece import sentencepiece as spm sp = spm.SentencePieceProcessor() sp.load('<SPM_MODEL>') print(sp.encode_as_pieces('3.0,')) print(sp.encode_as_ids('3.0,')) Output: ['▁3.0,'] [72369] ``` ``` ! pip install transformers from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained('<SPM_MODEL>') print(tokenizer._tokenize("3.0,")) print(tokenizer._convert_token_to_id(tokenizer._tokenize("3.0,"))) Output: ['▁3.0', ','] [16047, 254713] ``` After looking at the AlbertTokenizer codebase, I see that the if condition here is what leads to the differences in the outputs above. https://github.com/huggingface/transformers/blob/master/src/transformers/models/albert/tokenization_albert.py#L218 Could you explain the intuition behind having these additional steps in AlbertTokenizer, and what purpose do they serve here? Thanks!
10-13-2021 23:47:16
10-13-2021 23:47:16
Hello! We strive for perfect reproducibility between the original code and the HF code. This part originates from the original codebase: https://github.com/google-research/albert/blob/master/tokenization.py#L67 I would open an issue there instead. <|||||>Thank you @LysandreJik for the pointer. I wasn't aware that HF implementation is based on google's albert repo. Opened a question there: https://github.com/google-research/albert/issues/249 Resolving this issue.
transformers
13,998
closed
Saving/Reloading the Flax T5 model
@patil-suraj @patrickvonplaten Hey, I was able to train the FlaxT5Model using ```run_t5_mlm_flax.py``` on a small custom dataset as per the flax doc. How do I save and reuse the models to get the last hidden states? Would the following code work? ``` auto_tokenizer = AutoTokenizer.from_pretrained("my_output_folder") model = FlaxT5Model.from_pretrained('my_output_folder') input_ids = auto_tokenizer("my encoder input", return_tensors="np").input_ids decoder_input_ids = auto_tokenizer("my decoder input", return_tensors="np").input_ids outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids) last_hidden_states = outputs.last_hidden_state print(numpy.array(last_hidden_states)) ```
10-13-2021 23:46:49
10-13-2021 23:46:49
You can save the model with `model.save_pretrained('my_saved_folder')`
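Putting the two together, a small roundtrip sketch (the folder names are the ones from the question):

```python
import numpy as np
from transformers import AutoTokenizer, FlaxT5Model

tokenizer = AutoTokenizer.from_pretrained("my_output_folder")
model = FlaxT5Model.from_pretrained("my_output_folder")

# Save a copy and reload it later.
model.save_pretrained("my_saved_folder")
tokenizer.save_pretrained("my_saved_folder")
reloaded = FlaxT5Model.from_pretrained("my_saved_folder")

input_ids = tokenizer("my encoder input", return_tensors="np").input_ids
decoder_input_ids = tokenizer("my decoder input", return_tensors="np").input_ids
outputs = reloaded(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
print(np.asarray(outputs.last_hidden_state).shape)
```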
transformers
13,997
open
[performance/precision] adding `jit.script` to activation functions
# 🚀 Feature request ### Switch our activation functions to use `@torch.jit.script` Over at BigScience we have been trying to figure out mismatches between Megatron-LM and HF Transformers when it comes to inference under fp16. There are several mismatches, this one discusses activation functions. And proposes to improve HF's models based on that. So Megatron uses `@torch.jit.script` for its activation functions, which leads to 2 things: 1. faster performance 2. more correct math under fp16 or amp/fp16. Quoting @ngimel: > ... that’s due to fusion. Fuser does intermediate operations in fp32, and thus produces more accurate results than simple function that truncates each intermediate to half. (I need to double check on amp/fp16 - I'm making an assumption here) So perhaps we should switch our activation functions to use `@torch.jit.script` too? Caveats: 1. it appears that when using `@torch.jit.script` one may have to write out the `bwd` part explicitly, see: https://github.com/NVIDIA/Megatron-LM/blob/b31e1296354e979722627a6c4dedafe19b51fa97/megatron/model/fused_bias_gelu.py#L27-L56 2. This will change the results slightly (back-compat OK?) but it should produce more correct results! You can see how the 2 functions diverge: ``` import torch import random seed = 42 random.seed(seed) # python RNG torch.manual_seed(seed) # cpu + cuda torch.cuda.manual_seed_all(seed) # multi-gpu torch.backends.cudnn.enabled = True width = 128 input = torch.rand((1,5,width*4)).cuda().half() @torch.jit.script def gelu_megatron_fwd_jit(x): return x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x))) def gelu_megatron_fwd(x): return x * 0.5 * (1.0 + torch.tanh(0.79788456 * x * (1 + 0.044715 * x * x))) output = gelu_megatron_fwd(input) output_jit = gelu_megatron_fwd_jit(input) # have to run 2nd time for jit to kick in! output_jit = gelu_megatron_fwd_jit(input) torch.testing.assert_equal(output, output_jit, check_stride=False) ``` gives: ``` AssertionError: Tensors are not equal! Mismatched elements: 800 / 2560 (31.2%) Greatest absolute difference: 0.00048828125 at (0, 0, 1) Greatest relative difference: 0.0009828009828009828 at (0, 0, 2) ``` @patrickvonplaten, @patil-suraj, @sgugger, @LysandreJik
10-13-2021 22:27:20
10-13-2021 22:27:20
For backward compatibility reasons, I would put this as opt-in only: we can add a new key in the activation function for the jitted version that some models can opt-in to use. This would also make it easier for the support of older torch versions.<|||||>Thanks for the write-up! The difference is very minimal but it could still make an unexpected difference. We had the recent change in the way that GeLU is computed with torch (which is changing between torch 1.9 and torch 1.10) and even with a 1e-8 difference it affected us and our tests, so I would expect a 1e-4 difference to affect some downstream users. I tend to agree with Sylvain regarding the activation function: It's a configuration argument that is supported on most models (see BERT for example) https://github.com/huggingface/transformers/blob/7604557e4470822754c5658a19d81aa2ae7de934/src/transformers/models/bert/configuration_bert.py#L78-L80 which would make it simple to switch while keeping the current behavior identical. WDYT?<|||||>Thank you for your feedback, Sylvain and Lysandre So practically, do you propose to replicate the activation functions verbatim, with the only change of adding `@torch.jit.script` and adding a suffix `_jit` to those? So there will be `gelu_fast` and `gelu_fast_jit`? note that under fp32 the two will be identical in their outputs, but the jit one will be slightly faster.<|||||>Yes, that's exactly what I was suggesting!
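A sketch of what the opt-in could look like — the `gelu_fast_jit` key and the registration style are hypothetical, not a final API:

```python
import torch
from transformers.activations import ACT2FN

@torch.jit.script
def gelu_fast_jit(x):
    # Same math as the existing `gelu_fast`, but fused by TorchScript.
    return 0.5 * x * (1.0 + torch.tanh(x * 0.7978845608 * (1.0 + 0.044715 * x * x)))

# Opt-in: a model whose config sets hidden_act = "gelu_fast_jit" would pick this up,
# while existing checkpoints keep the non-jitted default.
ACT2FN["gelu_fast_jit"] = gelu_fast_jit
```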
transformers
13,996
closed
Scatter dummies + skip pipeline tests
Creates the scatter dummy objects and skips the pipeline tests when the library isn't installed. The issue is due to TAPAS requiring torch-scatter - the last two times I enabled torch_scatter on the machine it bricked the installation and I had to redo it; I will eventually retry.
10-13-2021 21:26:37
10-13-2021 21:26:37
Will need to adapt the docs.
transformers
13,995
closed
Fix FNet tokenizer tests
This PR sets an actual revision for the tokenizer (the currently set revision is unknown). It temporarily puts the `tooslow` decorator on a test that requires converting a tokenizer from slow to fast, which takes too long. Could you take a look @patrickvonplaten?
10-13-2021 21:25:13
10-13-2021 21:25:13
Merging this for now<|||||>Thank you
transformers
13,994
closed
Conversational pipeline: pop instead of get to avoid duplicate kwargs
The following two tests fail: ``` FAILED tests/test_pipelines_conversational.py::ConversationalPipelineTests::test_integration_torch_conversation FAILED tests/test_pipelines_conversational.py::ConversationalPipelineTests::test_integration_torch_conversation_encoder_decoder ``` For the following reason: ``` def _forward(self, model_inputs, minimum_tokens=10, **generate_kwargs): max_length = generate_kwargs.get("max_length", self.model.config.max_length) n = model_inputs["input_ids"].shape[1] if max_length - minimum_tokens < n: logger.warning(f"Conversation input is to long ({n}), trimming it to ({max_length} - {minimum_tokens})") trim = max_length - minimum_tokens model_inputs["input_ids"] = model_inputs["input_ids"][:, -trim:] if "attention_mask" in model_inputs: model_inputs["attention_mask"] = model_inputs["attention_mask"][:, -trim:] conversation = model_inputs.pop("conversation") model_inputs["max_length"] = max_length > output_ids = self.model.generate(**model_inputs, **generate_kwargs) E TypeError: generate() got multiple values for keyword argument 'max_length' ``` This PR fixes that.
10-13-2021 21:21:34
10-13-2021 21:21:34
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This was already fixed by removing the assignment statement (making the fix a bit cleaner). I can't find the associated PR at the moment, though (it was done at approximately the same time).
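The general pattern behind the fix, as a standalone sketch (the function name is illustrative):

```python
def call_generate(model, model_inputs, **generate_kwargs):
    # pop, not get: otherwise `max_length` would be passed explicitly
    # and then a second time through **generate_kwargs.
    max_length = generate_kwargs.pop("max_length", model.config.max_length)
    model_inputs["max_length"] = max_length
    return model.generate(**model_inputs, **generate_kwargs)
```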
transformers
13,993
closed
Use different data collator for train and eval dataset in trainer
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> I am trying to pretrain a BART model from scratch on my own data. Since the noising applied during the training phase goes beyond MLM, I need a data collator for evaluation that differs from the one used in training. However, `Trainer` only exposes a single argument for setting the data collator, and it is used for both the train and eval datasets. How do I achieve this?
10-13-2021 21:01:51
10-13-2021 21:01:51
Hi there. You will need to subclass the Trainer and change the `get_eval_dataloader` method.<|||||>thanks a lot!
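A minimal sketch of such a subclass — simplified, since it skips the distributed sampler and column-removal logic of the stock eval dataloader:

```python
from torch.utils.data import DataLoader
from transformers import Trainer

class TwoCollatorTrainer(Trainer):
    def __init__(self, *args, eval_data_collator=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.eval_data_collator = eval_data_collator

    def get_eval_dataloader(self, eval_dataset=None):
        eval_dataset = eval_dataset if eval_dataset is not None else self.eval_dataset
        return DataLoader(
            eval_dataset,
            batch_size=self.args.eval_batch_size,
            collate_fn=self.eval_data_collator or self.data_collator,
            drop_last=self.args.dataloader_drop_last,
        )
```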
transformers
13,992
closed
Attributes explicitly defined in model configurations are now overridden by the default type.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.0.dev0 - Platform: Linux-5.14.11-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyTorch version (GPU?): 1.9.1+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.5 (cpu) - Jax version: 0.2.21 - JaxLib version: 0.1.71 - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ## The issue The issue is made visible from the introduction of parameter setters in https://github.com/huggingface/transformers/pull/13026. This PR moved the initialization of the parent object to be the last statement of the configuration creation - while this could be benign, it isn't due to the fact that some arguments are defined both in the model configuration and in the upstream configuration. Such an example is the FSMT configuration. It defines the generate arguments here: https://github.com/huggingface/transformers/blob/408b2d2bd08f667cf4154730cc323c4e49657eed/src/transformers/models/fsmt/configuration_fsmt.py#L183-L185 At the end of the method, it initializes the parent configuration *without* passing the parameter: https://github.com/huggingface/transformers/blob/408b2d2bd08f667cf4154730cc323c4e49657eed/src/transformers/models/fsmt/configuration_fsmt.py#L199-L208 Finally, in the parent configuration, the `num_beams` is set once again: https://github.com/huggingface/transformers/blob/408b2d2bd08f667cf4154730cc323c4e49657eed/src/transformers/configuration_utils.py#L264 This is an issue now as this overrides the previously set `num_beams` to be 1. The issue wasn't caught before because the superclass initialization happened at the beginning, being overridden by the parameters afterwards. This is not the case anymore. This makes the following test fail: `tests/test_modeling_fsmt.py -k test_translation_direct_2_en_de`. IMO the issue comes from the redefinition of arguments in the FSMT configuration which should not be done as the superclass will already correctly define these arguments given the kwargs. The simplest patch for this (apart from making sure that the parameters are only set once) would be to make sure all previously applied parameters are taken into account by the superclass by adding the following statement to the initialization of the `PretrainedConfig` superclass: ```diff [...] def __init__(self, **kwargs): + kwargs = {**kwargs, **self.__dict__} # Attributes with defaults self.return_dict = kwargs.pop("return_dict", True) [...] ``` WDYT? cc @sgugger @stas00 @nreimers @patrickvonplaten The cleanest solution would be to make sure that all parameters are only set once, however, which is slightly harder to test. 
### Reproducible code sample: ```py from transformers import AutoTokenizer, FSMTForConditionalGeneration pair = "en-de" text = { "en": "Machine learning is great, isn't it?", "ru": "Машинное обучение - это здорово, не так ли?", "de": "Maschinelles Lernen ist großartig, oder?", } src, tgt = pair.split("-") print(f"Testing {src} -> {tgt}") mname = f"facebook/wmt19-{pair}" src_text = text[src] tgt_text = text[tgt] tokenizer = AutoTokenizer.from_pretrained(mname) model = FSMTForConditionalGeneration.from_pretrained(mname) print(model.config) input_ids = tokenizer.encode(src_text, return_tensors="pt") outputs = model.generate(input_ids) decoded = tokenizer.decode(outputs[0], skip_special_tokens=True) assert decoded == tgt_text, f"\n\ngot: {decoded}\nexp: {tgt_text}\n" ```
10-13-2021 20:45:40
10-13-2021 20:45:40
Interesting bug! I'm a bit afraid of the solution you propose @LysandreJik so I would prefer adding a strong test that check every common attribute can be properly set on every config. That way it would highlight the problems we can have in specific configs and we can fix them easily. For instance, adding this to the config common tests: ```py def check_config_arguments_init(self): config = self.config_class(num_beams=3) self.parent.assertEqual(config.num_beams, 3) ``` reproduces the bug and creates a failure in the tests of FSMT. We can loop that test over a big dictionary containing all the common config attributes as keys and some values distinct from the defaults as values.<|||||>A few meta comments specific to FSMT: - It's very possible that FSMT wasn't written in the best way - I was modeling after Bart - and there must have been some thing I couldn't figure out and had to invent a workaround. So please don't hesitate to adjust it to the better way. - @patil-suraj refactored FSMT https://github.com/huggingface/transformers/pull/11218 but his PR didn't get merged because it introduced a speed and memory regression (which we suspect is possibly an issue in all recently refactored Bart-base models) and nobody had a chance to investigate / resolve - Further, FSMT in master has received mods since Suraj's PR https://github.com/huggingface/transformers/commits/c8be8a9adb218ecc593c687020e952554a5a55b5/src/transformers/models/fsmt <|||||>I highly doubt the bug is only in FSMT @stas00 The fact that #13026 moved all the super calls at the end of the configuration init has probably created multiple instances of it. It's just FSMT had good tests that showed us the bug :-)<|||||>wrt introduction of config setters, I missed that change - we also have/use `config.update` that does the same as a setter - so now we have 2 ways of doing setters? `update` is nice when pushing a large multi-arg change.<|||||>One solution would also be to move back the super call at the start of the configuration init. This would reduce the risk of errors / that parameters are set twice as for FSMT. I moved the call to `super().__init__` to the end so that common parameters (like hidden_size for `GPT2Config`) can correctly be set. But this can could be split in two call: ```python class MyConfig(PretrainedConfig): def __init__(num_beams=5, **kwargs): super().__init__(only=fixed, parameters=are_passed) #Remove **kwargs from the initial __init__ call # ... Custom parameter setting for MyConfig # Now handle all remaining parameters super().set_common_attribute(**kwargs) ``` What do you think?<|||||>This seems clean to me, and since we now have a strong test checking we can properly set the common attributes, I won't worry over things breaking again with this change :-)<|||||>I like this proposition too @nreimers!
transformers
13,991
closed
CLIPProcessor using only single core
## Environment info - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-54-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyTorch version (GPU?): 1.9.1 (True) ### Who can help @patil-suraj ## Information The CLIPProcessor uses only a single core (on my machine) to process images. This makes encoding images with the CLIP model quite slow (as reported by https://github.com/UKPLab/sentence-transformers/issues/1200), as the image processing takes usually by far the longest. The original CLIP processor (https://github.com/openai/CLIP/blob/c13005fd422b20dcd11774e9fff46370887218e4/clip/clip.py#L75) in contrast uses multiple cores, leading to an image encoding speed-up of factor 3-4. ## Example ```python import time import requests from PIL import Image import torch from transformers import CLIPProcessor, CLIPModel import numpy as np torch.set_num_threads(4) img = Image.open(requests.get("https://cdn.pixabay.com/photo/2015/04/23/22/00/tree-736885__480.jpg", stream=True).raw) processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") # Repeat the same image 256 times images = [img] * 256 print("Transformers processor") runtimes = [] for _ in range(5): start_time = time.time() inputs_hf = processor(images=images, return_tensors="pt", padding=True) runtimes.append(time.time()-start_time) print("Time:", runtimes[-1]) print("Avg. runtime:", np.mean(runtimes)) ####### print("Original CLIP code") from torchvision.transforms import Compose, Resize, CenterCrop, ToTensor, Normalize def _transform(n_px): return Compose([ Resize(n_px, interpolation=Image.BICUBIC), #InterpolationMode.BICUBIC CenterCrop(n_px), lambda image: image.convert("RGB"), ToTensor(), Normalize((0.48145466, 0.4578275, 0.40821073), (0.26862954, 0.26130258, 0.27577711)), ]) clip_processor = _transform(224) runtimes = [] for _ in range(5): start_time = time.time() inputs_org = torch.stack([clip_processor(img) for img in images]) runtimes.append(time.time()-start_time) print("Time:", runtimes[-1]) print("Avg. runtime:", np.mean(runtimes)) print(inputs_hf.pixel_values.shape) print(inputs_org.shape) # Ensure the same output assert torch.allclose(inputs_hf.pixel_values, inputs_org) ``` Output of the script: ``` Transformers processor Time: 3.9625985622406006 Time: 3.9464197158813477 Time: 3.854238748550415 Time: 3.7350003719329834 Time: 3.6931965351104736 Avg. runtime: 3.838290786743164 Original CLIP code Time: 1.2502388954162598 Time: 1.2544281482696533 Time: 1.2220158576965332 Time: 1.2993192672729492 Time: 1.1974220275878906 Avg. runtime: 1.2446848392486571 torch.Size([256, 3, 224, 224]) torch.Size([256, 3, 224, 224]) ``` As we see, the original `_transform` code is 3 times faster than `CLIPProcessor` Performance was measured on a DGX-2 machine with 96 cores and Intel(R) Xeon(R) Platinum 8168 CPU @ 2.70GHz. `CLIPProcessor` just utilizes one core, while the original CLIP _transform code utilizes multiple threads (restricted to 4, but can be changed to any number). # Questions Is this behavior intended, that CLIPProcessor just utilizes a single core? If not, should I create a PR that fixes this issue so that CLIPProcessor uses multiple cores?
10-13-2021 19:56:32
10-13-2021 19:56:32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This has significant implications if using `CLIP` models for production applications with GPUs. Essentially image preprocessing takes significant part of inference time for batches of data since it uses a single core as can be seen here: https://github.com/huggingface/transformers/blob/61d3928bfb3029bceb5be3e68ca3d4bf8456758f/src/transformers/models/clip/image_processing_clip.py#L324-L342<|||||>This is still a problem. Can this issue be reopened?<|||||>@asydorchuk @thesofakillers A bit of context around the design of the image processors: they've been created to enable users to go quickly from data -> predictions, preparing a batch of images to be ready for the model. The aim is that they can be quickly created using the `AutoImageProcessor` API and easy to understand and configure e.g. setting `do_resize=False`. They aren't, however, written to be fast or perform additional operations such as augmentations e.g. flip, rotate, change colours. The reason for this is that there are many great existing libraries for image processing that already do these and are extremely fast. In addition, our image processors need to support multiple frameworks: jax, tensorflow, pytorch. As such, we don't want to use one of these underlying libraries like torchvision directly as that would force e.g. TF users to install torch. You'll see in our framework specific examples, [that torchvision is used to process](https://github.com/huggingface/transformers/blob/main/examples/pytorch/image-classification/run_image_classification.py#LL302C5-L318C5), rather than our image processors.<|||||>@amyeroberts thank you very much. The framework-specific examples are very useful. Perhaps a warning/mention of this can be added to the [CLIP docs](https://huggingface.co/docs/transformers/model_doc/clip).<|||||>@thesofakillers I would add it somewhere more general, as this is true for all models' image processors, not just CLIP's. Perhaps as a note in the vision examples README or in the [docs for the image processor](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/image_processor#image-processor)? If you'd like to open a PR with these changes I'd be happy to review. <|||||>@amyeroberts Makes sense, but I didn't even know that that docs page existed. I think most people will be reading what is in the CLIP docs. If you think it should still be in the docs for the image processor, the CLIP docs should at least point to it explicitly. Lmk what you think, and please point me to the github file I'd edit for submitting PR's for doc edits, it's not super clear atm.
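Following that advice, one way to spread the torchvision preprocessing over several CPU workers (a sketch — `images` and `clip_processor` are the objects from the snippet in the issue body):

```python
import torch
from torch.utils.data import DataLoader, Dataset

class ImageList(Dataset):
    def __init__(self, images, transform):
        self.images, self.transform = images, transform

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.transform(self.images[idx])

loader = DataLoader(ImageList(images, clip_processor), batch_size=64, num_workers=4)
pixel_values = torch.cat(list(loader))  # shape: (len(images), 3, 224, 224)
```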
transformers
13,990
closed
Logit explosion in MobileBertForNextSentencePrediction example from documentation (and all others tried)
## Environment info - `transformers` version: 4.11.3 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.6.8 - PyTorch version (GPU?): 1.9.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @vshampor ## Information Model I am using (Bert, XLNet ...): MobileBertForNextSentencePrediction The problem arises when using: * [x] the official example scripts: (give details below) Using the example code provided in https://huggingface.co/transformers/model_doc/mobilebert.html#mobilebertfornextsentenceprediction. The tasks I am working on is: * [x] an official GLUE/SQUaD task: Next Sentence Prediction ## To reproduce Steps to reproduce the behavior: Run the code from the official example script in the documentation: ``` >>> from transformers import MobileBertTokenizer, MobileBertForNextSentencePrediction >>> import torch >>> tokenizer = MobileBertTokenizer.from_pretrained('google/mobilebert-uncased') >>> model = MobileBertForNextSentencePrediction.from_pretrained('google/mobilebert-uncased') >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> next_sentence = "The sky is blue due to the shorter wavelength of blue light." >>> encoding = tokenizer(prompt, next_sentence, return_tensors='pt') >>> outputs = model(**encoding, labels=torch.LongTensor([1])) >>> loss = outputs.loss >>> logits = outputs.logits ``` Printing `logits`, we get `tensor([[2.7888e+08, 2.7884e+08]], grad_fn=<AddmmBackward>)` - strangely huge for both classes, and one that leads to a softmax score of 1 for the "is next sentence" class—the opposite of the correct answer, which is strange for an example from the documentation. I ran it on a handful of related prompt and next sentence pairs, then on a larger set from my own NSP dataset, and got the same strange behavior: logits of about 2e+08 for both classes, and higher for the first class in the 3rd or 4th significant figure, no matter the prompt and sentence pair. Given the sizes, it leads to a softmax score of 1 "is the next sentence" (the first class) and 0 for the other no matter what the first and second sentence is, no matter how unrelated the second sentence is. ## Expected behavior For comparison, the logits produced on the same example using BertForNextSentencePrediction with bert-base-uncased instead on this example are `tensor([[-3.0729, 5.9056]], grad_fn=<AddmmBackward>)`. I would expect that for an example with 'next sentence' from the "Not following the prompt" category like this, MobileBertForNextSentencePrediction with the default pretrained model would get this right, and have logits in a similar ballpark - not huge positive values like the ones pictured. I posted about this on HuggingFace Hub discussion board, but it got immediately taken down by the bot for some reason. Linking here in case admins approve it: https://discuss.huggingface.co/t/next-sentence-prediction-with-google-mobilebert-uncased-producing-massive-near-identical-logits-10-8-for-its-documentation-example-and-2k-others-tried/10750/1.
10-13-2021 18:47:18
10-13-2021 18:47:18
@vshampor Any updates?<|||||>@mmistele sorry for the late reply. The behaviour you are experiencing seems to be natural for the model that is basically *untrained* for the downstream task you are trying to apply it to. The reason is that you are loading the wrong checkpoint for the `MobileBertForNextSentencePrediction` object, and this is even highlighted in the console output: ``` Some weights of the model checkpoint at google/mobilebert-uncased were not used when initializing MobileBertForNextSentencePrediction: ['cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.bias', 'cls.predictions.dense.weight', 'cls.predictions.transform.dense.weight'] ``` The regular `google/mobilebert-uncased` contains the MobileBERT backbone and the language modeling head used for pre-training the backbone, but not the trained weights for the prediction head corresponding to the downstream "next sentence prediction" task. That `google/mobilebert-uncased` would be used by itself in the MobileBertForNextSentencePrediction docs by HF seems to be a bug by oversight. Summoning @LysandreJik who helped me integrate this model originally, but I don't see MobileBERT pre-trained for NSP on the HF hub so that the docs could be immediately made consistent.<|||||>Thank you so much for investigating! And for integrating this model into HF in the first place.<|||||>I'm not so sure the console output has to do with missing weights - the [source code](https://github.com/huggingface/transformers/blob/62ccbe0960019aceb4e36b1ee929ed2349e9653e/src/transformers/modeling_utils.py#L1597) for that log line comes from an if clause around the model checkpoint having _extra_ weights that the model doesn't expect or need, i.e. the weights that `google/mobilebert-uncased` has that aren't used by MobileBertForNextSentencePrediction.<|||||>I wonder... if google/mobilebert-uncased is missing weights for the prediction head, why is there no log line from line [1610](https://github.com/huggingface/transformers/blob/62ccbe0960019aceb4e36b1ee929ed2349e9653e/src/transformers/modeling_utils.py#L1610) of modeling_utils.py, a few lines below the lines that logged the console output you posted? ``` if len(missing_keys) > 0: logger.warning( f"Some weights of {model.__class__.__name__} were not initialized from the model checkpoint at {pretrained_model_name_or_path} " f"and are newly initialized: {missing_keys}\n" f"You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference." ) ```<|||||>Took another look at the pre-training objective for MobileBERT and it turns out that the NSP task is actually one of the two objectives (the other being masked LM). So the prediction head for NSP *should* be present in the checkpoint meant for pre-training, and it even seems to be loadable into the NextSentencePrediction model, judging by the lack of output by the line of code you highlighted.<|||||>Checked the code and yes, the MobileBertForPreTraining and MobileBertForNextSentencePrediction are crafted in such a way, state_dict-wise, that PreTraining is loadable into the NextSentencePrediction; the LM head won't be loaded, (but it shouldn't get used anyway). My theory doesn't explain the current state of affairs, then - the example should be working since the NSP from pretraining should have been transferred into the NSP-specific model. 
Will try and investigate further.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
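One way to continue that investigation is to inspect the loading report directly (a sketch):

```python
from transformers import MobileBertForNextSentencePrediction

model, loading_info = MobileBertForNextSentencePrediction.from_pretrained(
    "google/mobilebert-uncased", output_loading_info=True
)
print(loading_info["missing_keys"])     # weights the checkpoint did not provide (newly initialized)
print(loading_info["unexpected_keys"])  # checkpoint weights the model did not use
```

If the NSP head (typically named `cls.seq_relationship.*`) shows up under `missing_keys`, the checkpoint genuinely lacks a trained next-sentence head; if it does not, the huge logits have another cause.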
transformers
13,989
closed
Add an API to register objects to Auto classes
# What does this PR do? This PR adds an API to register custom objects with the auto classes. Fixes #13522
10-13-2021 18:33:16
10-13-2021 18:33:16
Thanks a lot for working on this! I think this would greatly benefit from having an entry/tutorial in the `Auto Classes` section: ![image](https://user-images.githubusercontent.com/30755778/137330917-4ccf2167-02db-4378-9234-9cab4f54df52.png) WDYT?<|||||>Great idea!
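Roughly, the registration flow this PR enables could look like the following sketch — `MyConfig` and `MyModel` are placeholder user classes, and the exact registration method names are an assumption based on the PR description:

```python
from transformers import AutoConfig, AutoModel, PretrainedConfig, PreTrainedModel

class MyConfig(PretrainedConfig):
    model_type = "my-model"

class MyModel(PreTrainedModel):
    config_class = MyConfig

# Register the custom classes so the auto classes can resolve them.
AutoConfig.register("my-model", MyConfig)
AutoModel.register(MyConfig, MyModel)

config = AutoConfig.for_model("my-model")
# AutoModel.from_config(config) would now return a MyModel instance.
```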
transformers
13,988
closed
Allow single byte decoding
# What does this PR do? Fixes #13779 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-13-2021 18:02:20
10-13-2021 18:02:20
transformers
13,987
closed
Intel OpenVINO inference backend
# 🚀 Feature request Hi all! We (the Intel team) would like to contribute to Hugging Face by integrating the optimized deep learning inference framework OpenVINO. According to the contribution guidelines, we should discuss it with the community and developers to match expectations. Let me share what we have at this moment: * A new format for the [models hub](https://huggingface.co/) - a pair of `ov_model.xml` and `ov_model.bin`, which is the OpenVINO IR * A new class `OVAutoModel`, which downloads `ov_model.xml/.bin` by default, or a framework-specific checkpoint (PyTorch, TensorFlow) with on-the-fly conversion. Example: ```python from transformers import OVAutoModel model = OVAutoModel.from_pretrained("dkurt/test_openvino") ``` (see https://huggingface.co/dkurt/test_openvino) or ```python # Downloads PyTorch weights and converts to OpenVINO at runtime model = OVAutoModel.from_pretrained("albert-base-v2", from_pt=True) ``` or ```python # Downloads TensorFlow weights and converts to OpenVINO at runtime model = OVAutoModel.from_pretrained("distilbert-base-uncased", from_tf=True) ``` ## Motivation Expected efficiency improvements as well as utilization of Intel hardware (CPU/iGPU/VPU and others supported). ## Your contribution We will do our best to complete this feature request. Looking forward to feedback from developers and experienced users.
10-13-2021 14:00:11
10-13-2021 14:00:11
Hi, @mfuntowicz and @LysandreJik. Just wanted to let you know that there is a PR with my proposal at https://github.com/huggingface/transformers/pull/14203. I would be very happy to get any feedback from you and understand if it goes in the right direction.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,986
closed
Trainer: Cannot train with 3+ GPUs / Uneven Memory Consumption
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.1 - Platform: Linux-4.15.0-156-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: <fill in> ### Who can help @sgugger @patil-suraj ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [] the official example scripts: (give details below) * [X] my own modified scripts: I'm just using the Trainer class to train a model The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: Custom proprietary dataset ## To reproduce I'm running the `Trainer` class and I'm essentially just fine tune a GPT-Neo variant. I don't use any specific CLI options and just call `python train.py`. What happens? With `EleutherAI/gpt-neo-1.3B` I am running into CUDA OOM memory errors depending on how much GPUs I want to use for training. For example: - 1 GPUs: Works - 2 GPUs: Works - 3 GPUs: OOM So effectively I am unable to train with more than 2 GPUs. ``` training_args = TrainingArguments( output_dir='results', num_train_epochs=EPOCHS, logging_steps=EPOCHS, load_best_model_at_end=True, save_strategy="epoch", evaluation_strategy="epoch", per_device_train_batch_size=BATCH_SIZE, per_device_eval_batch_size=BATCH_SIZE, warmup_steps=100, weight_decay=0.01, logging_dir='logs', report_to="none", save_total_limit=15, seed=42, ) # start training Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=lambda data: { 'input_ids': torch.stack([f[0] for f in data]), 'attention_mask': torch.stack([f[1] for f in data]), 'labels': torch.stack([f[0] for f in data]), } ).train() ``` The memory consumption on those two GPUs is also very imbalanced: ``` +-------------------------------+----------------------+----------------------+ | 5 Tesla V100-SXM2... On | 00000000:89:00.0 Off | 0 | | N/A 78C P0 195W / 300W | 32212MiB / 32510MiB | 100% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 6 Tesla V100-SXM2... On | 00000000:B2:00.0 Off | 0 | | N/A 83C P0 281W / 300W | 16096MiB / 32510MiB | 99% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` I also tried running the training script with the `torch.distributed` command, but that doesn't work either for me. For example: ``` python -m torch.distributed.launch --nproc_per_node=2 train.py ``` Am I missing something obvious? ## Expected behavior The trainer should be able to handle more GPUs than 2.
10-13-2021 13:56:53
10-13-2021 13:56:53
When you use `python train.py`, you use PyTorch `DataParallel` behind the scenes which only computes gradients and optimizer states on the main GPU. This is why you see this unbalance in memory usage. When using `python -m torch.distributed.launch --nproc_per_node=2 train.py` (which is the recommended way according to the PyTorch documentation) each GPU will have a copy of the gradients and optimizer states so the memory usage will be balanced across GPUs. In both cases the number of GPUs should not affect the fact you go OOM or not, unless you have batches of dynamic sizes. In this case, it's best to ensure the largest batches come first so you see the OOM as soon as possible.<|||||>@sgugger: Many thanks for the fast reply. That's why I am so confused why this ain't working (and I've played around with it quite a lot). The batch size is not dynamic (because the tokenizer is set to `max_length`, which is also set to 512) and is always set to 2 or 1 for this very example. The code runs completely containerized, the container only has access to specific device IDs. The devices are not occupied otherwise by any other services or processes. _______________________________ Let me describe the results better: - `python train.py` and 2 GPU available + batch size = 2: ``` +-------------------------------+----------------------+----------------------+ | 5 Tesla V100-SXM2... On | 00000000:89:00.0 Off | 0 | | N/A 68C P0 280W / 300W | 32212MiB / 32510MiB | 100% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 6 Tesla V100-SXM2... On | 00000000:B2:00.0 Off | 0 | | N/A 57C P0 287W / 300W | 16096MiB / 32510MiB | 100% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` Speed: `1.19s/it` - `python train.py` and 3 GPU available + batch size = 2: ``` Traceback (most recent call last): File "train.py", line 142, in <module> Trainer(model=model, File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1280, in train tr_loss += self.training_step(model, inputs) File "/usr/local/lib/python3.8/dist-packages/transformers/trainer.py", line 1791, in training_step loss.backward() File "/usr/local/lib/python3.8/dist-packages/torch/_tensor.py", line 255, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/usr/local/lib/python3.8/dist-packages/torch/autograd/__init__.py", line 147, in backward Variable._execution_engine.run_backward( File "/usr/local/lib/python3.8/dist-packages/torch/autograd/function.py", line 87, in apply return self._forward_cls.backward(self, *args) # type: ignore[attr-defined] File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/_functions.py", line 34, in backward return (None,) + ReduceAddCoalesced.apply(ctx.input_device, ctx.num_inputs, *grad_outputs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/_functions.py", line 45, in forward return comm.reduce_add_coalesced(grads_, destination) File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/comm.py", line 143, in reduce_add_coalesced flat_result = reduce_add(flat_tensors, destination) File "/usr/local/lib/python3.8/dist-packages/torch/nn/parallel/comm.py", line 95, in reduce_add result = torch.empty_like(inputs[root_index]) RuntimeError: CUDA out of memory. 
Tried to allocate 64.00 MiB (GPU 0; 31.75 GiB total capacity; 29.39 GiB already allocated; 25.75 MiB free; 30.20 GiB reserved in total by PyTorch) ``` Speed: `None` - `python train.py` and 3 GPUs available + batch size = 1: -> Trains, but runtime is much slower as with 2 GPUs + batch size = 1 ``` +-------------------------------+----------------------+----------------------+ | 5 Tesla V100-SXM2... On | 00000000:89:00.0 Off | 0 | | N/A 67C P0 288W / 300W | 32232MiB / 32510MiB | 89% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 6 Tesla V100-SXM2... On | 00000000:B2:00.0 Off | 0 | | N/A 55C P0 291W / 300W | 11554MiB / 32510MiB | 92% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 7 Tesla V100-SXM2... On | 00000000:B3:00.0 Off | 0 | | N/A 57C P0 292W / 300W | 11530MiB / 32510MiB | 94% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` Speed: `1.06s/it` - `python -m torch.distributed.launch --nproc_per_node=1 train.py` and 3 GPU available + batch size = 1: ``` +-------------------------------+----------------------+----------------------+ | 5 Tesla V100-SXM2... On | 00000000:89:00.0 Off | 0 | | N/A 66C P0 280W / 300W | 31868MiB / 32510MiB | 98% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 6 Tesla V100-SXM2... On | 00000000:B2:00.0 Off | 0 | | N/A 42C P0 41W / 300W | 3MiB / 32510MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ | 7 Tesla V100-SXM2... On | 00000000:B3:00.0 Off | 0 | | N/A 41C P0 43W / 300W | 3MiB / 32510MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ ``` Speed: `1.69it/s` - `python -m torch.distributed.launch --nproc_per_node=2 train.py` and 3 GPU available + batch size = 2: OOM despite the fact that this works perfectly fine with `python train.py` ``` RuntimeError: CUDA out of memory. 
Tried to allocate 32.00 MiB (GPU 0; 31.75 GiB total capacity; 29.71 GiB already allocated; 5.75 MiB free; 30.28 GiB reserved in total by PyTorch) 0%| | 1/150780 [00:01<66:55:39, 1.60s/it] ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2360) of binary: /usr/bin/python Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 193, in <module> main() File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 189, in main launch(args) File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launch.py", line 174, in launch run(args) File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 689, in run elastic_launch( File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 116, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 244, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ``` ____________________________________ This is the training code I'm using: ``` model_name = "EleutherAI/gpt-neo-1.3B" EPOCHS = 10 BATCH_SIZE = 2 import os import re import torch import random import pandas as pd from tqdm import tqdm from torch.utils.data import Dataset from sklearn.model_selection import train_test_split from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments, Trainer import torch USER_TOKEN = "<user>" BOT_TOKEN = "<bot>" class DialogData(Dataset): def __init__(self, dialogs, tokenizer, max_length): self.input_ids = [] self.attn_masks = [] self.labels = [] for dialog in dialogs: spans = [] # ... 
prep_txt = ( "".join(spans) + "<|endoftext|>" ) encodings_dict = tokenizer( prep_txt, truncation=True, max_length=max_length, padding="max_length" ) # append to list self.input_ids.append(torch.tensor(encodings_dict['input_ids'])) self.attn_masks.append(torch.tensor(encodings_dict['attention_mask'])) self.labels.append(torch.tensor(encodings_dict['input_ids'])) def __len__(self): return len(self.input_ids) def __getitem__(self, idx): return self.input_ids[idx], self.attn_masks[idx], self.labels[idx] model_suffix = model_name.split("/")[-1] print(f"Training: {model_name}") print(f"Suffix: {model_suffix}") torch.manual_seed(42) tokenizer = AutoTokenizer.from_pretrained( model_name, bos_token='<|startoftext|>', eos_token='<|endoftext|>', pad_token='<|pad|>' ) tokenizer.add_tokens([USER_TOKEN, BOT_TOKEN]) model = AutoModelForCausalLM.from_pretrained(model_name).cuda() model.resize_token_embeddings(len(tokenizer)) dialogs = [f"{USER_TOKEN}I am an arbitrary training dataset" for _ in range(10_000)] X_train, X_test= train_test_split( dialogs, shuffle=True, test_size=0.05, random_state=1, ) train_dataset = DialogData(X_train, tokenizer, max_length=512) eval_dataset = DialogData(X_test, tokenizer, max_length=512) print(f"Training dataset: {len(train_dataset)}") print(f"Evaluate dataset: {len(eval_dataset)}") training_args = TrainingArguments( output_dir='results', num_train_epochs=EPOCHS, logging_steps=EPOCHS, load_best_model_at_end=True, save_strategy="epoch", evaluation_strategy="epoch", per_device_train_batch_size=BATCH_SIZE, per_device_eval_batch_size=BATCH_SIZE, warmup_steps=100, weight_decay=0.01, logging_dir='logs', report_to="none", save_total_limit=15, seed=42, ) Trainer(model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=lambda data: { 'input_ids': torch.stack([f[0] for f in data]), 'attention_mask': torch.stack([f[1] for f in data]), 'labels': torch.stack([f[0] for f in data]), } ).train() ```<|||||>I don't see anything out of the ordinary: - raising the batch size will get you OOM on GPU-0 - distributed data parallel might take a little bit more space than DataParallel and you were super tight on GPU-0 - raising the number of GPUs will slow down the iterations a little bit because of communication, but you will also get less iterations since you are raising the actual batch size (actual batch size = batch size x number of GPUs)<|||||>I see - perhaps because of the tightness I was assuming this was an error from code side, but thinking about this it makes sense that there is very little room for anything overhead. I actually tried to also go for `fp16`, but that doesn't work either due to `nan`s. I will try a bit more, but probably I'll just let it run until it's done. Many thanks for your help!<|||||>Can safely confirm that it works nicely out of the box with the `125M` variant of the model. Thus I will have to play around with Zero or FP16 to understand how to get it to work with the larger ones. Many thanks!<|||||>@sgugger, actually it was much more difficult to get to the result where I wanted to be. I can now train with `fp16` enabled and with `Zero2` on 3 GPUs (more tests to come with more GPUs). 
The problem seems to have been resolved by running the container in which the training takes place with certain args: ``` docker run -it --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --gpus '"device=0,1,6"' -v $(pwd):/home -v /data/shared/transformers:/var/transformers trainer ``` where `--shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864` did the trick. Otherwise I was simply not able to run `deepspeed train.py` without running into NCCL errors.
transformers
13,985
closed
Tokenizers integration into onnx models 🤗
# 🚀 Feature request Integrate tokenizers into models while converting them from `transformers` to `onnx` format. ## Motivation I use a NER camemBERT model for TokenClassification tasks from the transformers library that I fine-tuned to my needs. I converted it to onnx format to deploy it on a Java client where I am planning to use it. However, I would like to reuse the SentencePiece tokenizer I used in Python from Java and avoid recoding it in Java. I found this extension to the ONNXRuntime package, [here](https://github.com/microsoft/onnxruntime-extensions). It helps to integrate tokenizers into onnx models by adding one pre-processing layer before model inference and one post-processing layer after model inference. It's not so easy to make it work perfectly with my NER model; they only have one GPT2 example and not much information in their docs. I opened an [issue here](https://github.com/microsoft/onnxruntime-extensions/issues/164) to describe my problem and try to make it work. ## Your contribution So far, I have managed to convert an unsupported model from transformers to onnx by adding its support to the transformers library (I could add it via PR). I also managed to adapt the transformers NER TokenClassification pipeline to onnx runtime inference + a final layer to use the inference results, in Python. I am now trying to understand how I could build an `all_in_one_file` model in onnx format, and could add examples if I succeed. Any explanations or help would be much appreciated 🤗
10-13-2021 09:36:56
10-13-2021 09:36:56
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@ChainYo Were you able to develop some examples? I am facing a similar problem: I want the tokenizer and the converted model in one file. <|||||>@ChainYo @hacceebhassan Have either of you found a solution to this? We are now facing the same problem. Too bad this issue was closed, as I think it is a relevant issue. Exporting a single model doesn't do much for me, since I actually want to convert my entire pipeline (preprocessing steps -> tokenizer -> model), like I can with scikit-learn Pipelines...<|||||>Hey, have you managed to solve this issue? I also need to convert an HF tokenizer and model into a single ONNX file.
transformers
13,984
closed
Intel OpenVINO backend
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-13-2021 08:35:43
10-13-2021 08:35:43
transformers
13,983
closed
Parameter max_new_tokens is always overshadowed by max_length in model.generate()
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.0.dev0 - Platform: Linux-5.4.0-84-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten ## Information Model I am using: EleutherAI/gpt-j-6B, but any generation model would apply. The problem arises when using my code described below. The prompt to the model is fairly long (over 50 tokens). I wanted the model to generate up to 10 tokens, ADDITIONAL to the prompt. To this end, I tried the parameter max_new_token, but it's always overshadow by the other parameter max_length. There is no way to undefine max_length because it's default to model.config.max_length. These two parameters are described in https://huggingface.co/transformers/main_classes/model.html max_length (int, optional, defaults to model.config.max_length) – The maximum length of the sequence to be generated. max_new_tokens (int, optional, defaults to None) – The maximum numbers of tokens to generate, ignore the current number of tokens. Use either max_new_tokens or max_length but not both, they serve the same purpose. ## To reproduce Steps to reproduce the behavior: 1. Run the script below to generate text after the prompt. 
```python from transformers import pipeline prompt = 'As with previous Valkyira Chronicles games , Valkyria Chronicles III is a tactical role @-@ playing game where' + \ 'players take control of a military unit and take part in missions against enemy forces . Stories are told through comic book' + \ '@-@ like panels with animated character portraits , with characters speaking partially through voiced speech bubbles and partially' + \ 'through unvoiced text . The player progresses through a series of linear missions , gradually unlocked as maps that can be freely ' generator = pipeline('text-generation', model='EleutherAI/gpt-j-6B', device=-1) string1 = generator(prompt, do_sample=True, max_length = 10, temperature=0.9, top_k=10, top_p=0.92, num_return_sequences=1) print(string1) string2 = generator(prompt, do_sample=True, max_new_tokens = 10, temperature=0.9, top_k=10, top_p=0.92, num_return_sequences=1) print(string2) ``` 2. The 1st call of generator recognized max_length = 10 and triggered warning "_Input length of input_ids is 91, but ``max_length`` is set to 10.This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``._" 3. The 2nd call of generator used the default max_length of 50, completely ignoring max_new_tokens = 10, and triggered warning "_Input length of input_ids is 91, but ``max_length`` is set to 50.This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``._" <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I wanted the 2nd call of generator not to trigger any warning, and the model produce up to 10 more tokens, in additional to the input prompt.
10-12-2021 23:10:11
10-12-2021 23:10:11
Hey @dunalduck0, This issue was fixed very recently - see: https://github.com/pulls?q=is%3Apr+author%3Apatrickvonplaten+archived%3Afalse+is%3Aclosed Could you try to pull from master and see if it works?<|||||>Thank you @patrickvonplaten. I've verified and it works well.
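For reference, a minimal sketch of the intended behaviour on a version that includes the fix (using `gpt2` here instead of GPT-J just to keep the example small):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A prompt longer than the requested number of new tokens
input_ids = tokenizer("a long prompt " * 10, return_tensors="pt").input_ids

# Only max_new_tokens is passed; max_length should no longer override it
output_ids = model.generate(input_ids, do_sample=True, max_new_tokens=10)

# At most 10 tokens are appended to the prompt (fewer if EOS is sampled early)
print(output_ids.shape[1] - input_ids.shape[1])
```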
transformers
13,982
closed
ConversationalPipeline Not Compatible With HuggingFace SageMaker Deploy
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.3 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyTorch version (GPU?): 1.9.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> Tagging: @Narsil ## Information I am using the ConversationalPipeline with BlenderBot and SageMaker deploy. However, any model will encounter the same issue. The SageMaker handler passes the type `dict` inputs directly to the model: `prediction = model(inputs, **parameters)` (https://github.com/aws/sagemaker-huggingface-inference-toolkit/blob/f884fc65d64f2e15637ccf16d2a83e37114cb23f/src/sagemaker_huggingface_inference_toolkit/handler_service.py#L157). That triggers this ValueError from https://github.com/huggingface/transformers/blob/26b6ef79d6554a2ffc3b50ec8c68f8688bdff7a2/src/transformers/pipelines/conversational.py#L247: ```python if not isinstance(conversation, Conversation): raise ValueError("ConversationalPipeline, expects Conversation as inputs") ``` ## To reproduce Follow the SageMaker deploy boiler plate code and run the predictor. ```python from sagemaker.huggingface import HuggingFaceModel import sagemaker role = sagemaker.get_execution_role() # Hub Model configuration. 
https://huggingface.co/models hub = { 'HF_MODEL_ID':'facebook/blenderbot-1B-distill', 'HF_TASK':'conversational' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( transformers_version='4.6.1', pytorch_version='1.7.1', py_version='py36', env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.m5.xlarge' # ec2 instance type ) predictor.predict({ 'inputs': { "past_user_inputs": ["Which movie is the best ?"], "generated_responses": ["It's Die Hard for sure."], "text": "Can you explain why ?", } }) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The input to the SageMaker handler has to be JSON because we can't send `Conversation` objects as part of an HTTP Post request. Therefore the ConversationalPipeline needs to accept a dictionary and Conversation objects. Specifically, we need to change: ```python if not isinstance(conversation, Conversation): raise ValueError("ConversationalPipeline, expects Conversation as inputs") ``` to ```python if not isinstance(conversation, Conversation) and not isinstance(conversation, dict): raise ValueError("ConversationalPipeline, expects Conversation as inputs") if isinstance(conversation, dict): try: conversation = Conversation(**conversation) except Exception as e: raise ValueError("Failed converting input dict to Conversation") from e ``` This should enable the ConversationalPipeline to work with the SageMaker deployment. <!-- A clear and concise description of what you would expect to happen. -->
10-12-2021 20:56:26
10-12-2021 20:56:26
That seems like a very nice feature to add! Do you want to make the PR? (The only nit would be "ConversationalPipeline, expects Conversation as inputs" -> "ConversationalPipeline, expects Conversation or dicts as inputs".)<|||||>Hey @joelsimonoff, this should be fixed with a higher DLC version. I see that you are using `transformers_version=4.6.1` for your SageMaker Image. We added a `wrapper` for this here: https://github.com/aws/sagemaker-huggingface-inference-toolkit/blob/f884fc65d64f2e15637ccf16d2a83e37114cb23f/src/sagemaker_huggingface_inference_toolkit/transformers_utils.py#L92 You can switch to `transformers_version=4.11` and `pytorch_version=1.9`, then your code works. But there is currently a bug in the `python-sagemaker-sdk` which is not generating the correct `image_uri` for `transformers=4.11`. I added the DLC directly via the `image_uri`. When the bug is solved you can normally use `transformers_version` and `pytorch_version` again. Issue: https://github.com/aws/sagemaker-python-sdk/issues/2700 P.S. I noticed that the instance type you used (`ml.m5.xlarge`) was too small for the model, so I went with an `ml.g4dn.2xlarge`. ```python from sagemaker.huggingface import HuggingFaceModel import sagemaker role = sagemaker.get_execution_role() # Hub Model configuration. https://huggingface.co/models hub = { 'HF_MODEL_ID':'facebook/blenderbot-1B-distill', 'HF_TASK':'conversational' } # create Hugging Face Model Class huggingface_model = HuggingFaceModel( #transformers_version='4.11', #pytorch_version='1.9', #py_version='py38', image_uri="763104351884.dkr.ecr.us-east-1.amazonaws.com/huggingface-pytorch-inference:1.9.0-transformers4.11.0-gpu-py38-cu111-ubuntu20.04", env=hub, role=role, ) # deploy model to SageMaker Inference predictor = huggingface_model.deploy( initial_instance_count=1, # number of instances instance_type='ml.g4dn.2xlarge' # ec2 instance type ) predictor.predict({ 'inputs': { "past_user_inputs": ["Which movie is the best ?"], "generated_responses": ["It's Die Hard for sure."], "text": "Can you explain why ?", } }) ``` Outputting: ``` {'generated_text': " I think it's because it's based on a book by the same name by James Bond.", 'conversation': {'past_user_inputs': ['Which movie is the best ?', 'Can you explain why ?'], 'generated_responses': ["It's Die Hard for sure.", " I think it's because it's based on a book by the same name by James Bond."]}} ```<|||||>@philschmid Looks great, thanks!
transformers
13,981
closed
ModuleNotFoundError: No module named 'transformers.models.fnet.configuration_fnet
```py model_lang = SentenceTransformer('clip-ViT-B-32-multilingual-v1').cuda() def enc_text(txt): if multilang: emb = model_lang.encode([txt], convert_to_tensor=True, show_progress_bar=False) else: emb = model_clip.encode_text(clip.tokenize(txt).cuda()) return emb.detach().clone() ``` ``` Downloading: 100% 690/690 [00:00<00:00, 26.1kB/s] [...] Downloading: 100% 1.57M/1.57M [00:01<00:00, 1.13MB/s] ``` ``` --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-6-325284ff9354> in <module>() ----> 1 model_lang = SentenceTransformer('clip-ViT-B-32-multilingual-v1').cuda() 2 3 def enc_text(txt): 4 if multilang: 5 emb = model_lang.encode([txt], convert_to_tensor=True, show_progress_bar=False) 14 frames /usr/lib/python3.7/importlib/_bootstrap.py in _find_and_load_unlocked(name, import_) ModuleNotFoundError: No module named 'transformers.models.fnet.configuration_fnet' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. ```
10-12-2021 20:05:53
10-12-2021 20:05:53
It's very possibly linked to an outdated version of `transformers`: FNet was only released a few weeks ago, as part of the v4.11 release.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
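A quick way to confirm whether this is the cause — a minimal check sketch:

```python
import transformers

# transformers.models.fnet only exists from v4.11.0 onwards, so anything older
# will raise the ModuleNotFoundError above when sentence-transformers tries to import it.
print(transformers.__version__)
```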
transformers
13,980
closed
[parallel doc] dealing with layers larger than one gpu
This PR expands on the parallelism doc: - explains what to do when a layer is larger than one gpu - adds some more notes @sgugger
10-12-2021 19:20:50
10-12-2021 19:20:50
transformers
13,979
closed
input params in RobertaForQuestionAnswering
Can anyone please help me understand what the `start_positions` and `end_positions` variables being passed to the model are? (example from the documentation) [link](https://huggingface.co/transformers/model_doc/roberta.html#tfrobertaforquestionanswering): below is the code example: ``` >>> from transformers import RobertaTokenizer, RobertaForQuestionAnswering >>> import torch >>> tokenizer = RobertaTokenizer.from_pretrained('roberta-base') >>> model = RobertaForQuestionAnswering.from_pretrained('roberta-base') >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors='pt') >>> start_positions = torch.tensor([1]) >>> end_positions = torch.tensor([3]) >>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions) >>> loss = outputs.loss >>> start_scores = outputs.start_logits >>> end_scores = outputs.end_logits ```
10-12-2021 18:21:00
10-12-2021 18:21:00
I also have the same question as you.<|||||>The `start_positions` and `end_positions` are the labels that you provide to the model. With these, you tell the model which token is at the start of the answer, and which token is at the end of the answer. In the example you list above, we have a single question + text. The answer to the question is "a nice puppet". So what should be the start and end positions here? Well, first we need to tokenize the question + text using the tokenizer. We can do this as follows: ``` from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained("roberta-base") question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" encoding = tokenizer(question, text, return_tensors="pt") ``` We can now check the tokens, by converting the `input_ids` back to text using the tokenizer's `decode` method: ``` for idx, id in enumerate(encoding.input_ids.squeeze().tolist()): print(idx, tokenizer.decode([id])) ``` This prints: ``` 0 <s> 1 Who 2 was 3 Jim 4 H 5 enson 6 ? 7 </s> 8 </s> 9 Jim 10 H 11 enson 12 was 13 a 14 nice 15 puppet 16 </s> ``` So in this case, the start position should be 13 and the end position should be 15. Note that the `start_positions` and `end_positions` should be of shape (batch_size,), indicating the start token index and end token index respectively for every example in the batch. As we only have a single example in this case, `start_positions` should be `torch.tensor([[13]])`. Similarly, `end_positions` should be `torch.tensor([[15]])`. The reason `start_positions` and `end_positions` are set to 1 and 3 in the docs is to just illustrate it, they are actually incorrect. <|||||>Note: this is explained in detail in the official [question-answering notebook](https://github.com/huggingface/notebooks/blob/master/examples/question_answering.ipynb).<|||||>Thank you so much @NielsRogge. I finally understood the topic.
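Putting the walkthrough above together, a minimal sketch that passes the corrected positions (13 and 15) as labels:

```python
import torch
from transformers import RobertaTokenizer, RobertaForQuestionAnswering

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForQuestionAnswering.from_pretrained("roberta-base")

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors="pt")

# Token indices of the answer span "a nice puppet" (see the enumeration above),
# one start/end index per example in the batch
start_positions = torch.tensor([13])
end_positions = torch.tensor([15])

outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
print(outputs.loss, outputs.start_logits.shape, outputs.end_logits.shape)
```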
transformers
13,978
closed
```inputs_embeds``` keyword not working properly (GPT2)
```transfomer-cli env``` command was not recognized so I do not know which environment I have. - `transformers` version: latest - Platform: windows 10 - Python version: 3.8.5 - PyTorch version (GPU?): CPU - Tensorflow version (GPU?): None - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @LysandreJik ## Information Model I am using (GPT2): When I want to provide my own embeded representation instead of input indices to a GPT2 that I want to train from scratch, I get the following error: ```Python Traceback (most recent call last): File "<ipython-input-18-d0df910b9d57>", line 10, in <module> model(inputs_embeds=inputs_embeds) # runs without error File "C:\Users\cnelias\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'inputs_embeds' ``` It does not come from my implementation since the following code also throws the same error: ``` from transformers import GPT2Model, GPT2Tokenizer import torch model = GPT2Model.from_pretrained('gpt2') tokenizer = GPT2Tokenizer.from_pretrained('gpt2', pad_token='<PAD>') input_ids = tokenizer.encode("Hello, how are you?", return_tensors='pt') inputs_embeds = model.wte(input_ids) model(inputs_embeds=inputs_embeds) ``` I installed pytorch via conda, this could be the cause.
10-12-2021 16:53:34
10-12-2021 16:53:34
Hi, I cannot reproduce the error. What versions of `PyTorch` and `transformers` are you using? Do you mind installing everything through pip again and checking if the problem still persists?<|||||>I can't install pytorch through pip, I don't have a cuda compatible GPU and get the following error ```ERROR: Could not find a version that satisfies the requirement torchaudio (from versions: none) ERROR: No matching distribution found for torchaudio```. Apparently, this is because torchaudio does not support windows yet. The versions I am using: - ```torch``` : 1.9.1 - ```transformers``` : 2.1.1<|||||>The problem is the outdated `transformers` version, `GPT2Model.forward` in `v2.1.1` does not support `input_embeds` argument. Please upgrade `transformers` to the latest (4.11.3) and see if the problem is solved.<|||||>Oh. The doc on ```GPT2Model``` still mentions ```input_embeds```: class transformers.GPT2Model ```Python forward(input_ids=None, past_key_values=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None) ``` - inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. Thanks for pointing this out, I will update ```transformers``` and try it out. <|||||>I think you probably checked a wrong docs version, the GPT2 in [v2.1.1 docs](https://huggingface.co/transformers/v2.1.1/model_doc/gpt2.html) was still under construction at that moment.<|||||>This is from the [v4.11.3](https://huggingface.co/transformers/model_doc/gpt2.html). Also, do you know why running ```conda update transformers``` also installs the 2.1.1 version? Same thing with pip ... <|||||>Sorry I'm not using conda to manage my packages. Would you mind creating a new conda virtual environment and then using `pip` to install all required packages? Note that when you install PyTorch related packages, please add this arg: `-f https://download.pytorch.org/whl/torch_stable.html` as not all PyTorch packages seem to exist on the official pypi repository. E.g. ``` pip install torch torchvision torchaudio -f https://download.pytorch.org/whl/torch_stable.html ``` <|||||>This does not work, I get this error ``` ERROR: Could not find a version that satisfies the requirement torchaudio (from versions: none) ERROR: No matching distribution found for torchaudio ``` This issue has been reported by other users on the torchaudio github page, and it seems to be due to the fact that windows in not supported yet.<|||||>I'm sorry to hear that. I can install all three packages on windows without any error, therefore, I'm pretty sure that `torchaudio` is supported. Maybe this is an environment problem on your system, you can try directly downloading whl packages from the index https://download.pytorch.org/whl/torch_stable.html and then install them manually. <|||||>I'll try that. I do have one last question: you mention the fact that in the latest version, ```input_embeds``` is not used anymore. Yet, when I look at the latest [doc](https://huggingface.co/transformers/model_doc/gpt2.html), I see that it is still an argument from the ```forward``` method. 
If the argument doesn't exist anymore, what is the current name?<|||||>> I think you probably checked a wrong docs version, the GPT2 in [v2.1.1 docs](https://huggingface.co/transformers/v2.1.1/model_doc/gpt2.html) was still under construction at that moment. You might have misunderstood my point. What I linked above was **not** the latest docs, it was the version `v2.1.1` you were previously using. At that moment (v2.1.1), the docs were still under construction and didn't indicate whether `inputs_embeds` was usable or not. So I checked the codebase; the conclusion is: the `inputs_embeds` argument is **not supported** in `v2.1.1` but is **supported** in the latest version `v4.x`. Hope this clarification is helpful. <|||||>Understood, thanks for the clarification.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
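For completeness, a minimal sketch of the same snippet on a recent `v4.x` install, where the argument is supported:

```python
from transformers import GPT2Model, GPT2Tokenizer

model = GPT2Model.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt")

# Build the embedded representation explicitly instead of passing input_ids
inputs_embeds = model.get_input_embeddings()(input_ids)  # equivalent to model.wte(input_ids)

outputs = model(inputs_embeds=inputs_embeds)
print(outputs.last_hidden_state.shape)
```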
transformers
13,977
closed
[Wav2Vec2] Make sure tensors are always bool for mask_indices
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes a bug when Wav2Vec2 is trained with batch_size=1 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-12-2021 16:01:02
10-12-2021 16:01:02
transformers
13,976
closed
Fixing the lecture values by making sure defaults are not changed
384 // 4 < 128 would break `doc_stride`. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-12-2021 15:53:04
10-12-2021 15:53:04
transformers
13,975
closed
Issues with new LayoutLMv2
I am working with the newly added LayoutLMv2. Works fine with performing a forward pass, but get a dimensionality error related to the embeddings when I try to use it in another library, namely Captum for explainability. Note that LayoutLM gives no issues in the same context. Also, I realize that this model needs to be finetuned. This is just supposed to be a proof-of-concept usage. Here is my code: ``` from PIL import Image, ImageDraw, ImageFont from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2TokenizerFast, LayoutLMv2Processor, LayoutLMv2ForSequenceClassification from captum.attr._utils.input_layer_wrapper import ModelInputWrapper from captum.attr import LayerIntegratedGradients, TokenReferenceBase import torch import torchvision import torch.nn.functional as F device = torch.device("cuda" if torch.cuda.is_available() else "cpu") image_rgb = Image.open("IMAGE.jpg").convert("RGB") processor = LayoutLMv2Processor.from_pretrained('microsoft/layoutlmv2-base-uncased') model = LayoutLMv2ForSequenceClassification.from_pretrained('microsoft/layoutlmv2-base-uncased') tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased") encoding = processor(image_rgb, return_tensors="pt") input_ids = encoding['input_ids'] token_type_ids = encoding['token_type_ids'] attention_mask = encoding['attention_mask'] bbox = encoding['bbox'] model_layered = ModelInputWrapper(model) outputs = model_layered(**encoding) pred, answer_idx = F.softmax(outputs.logits, dim=1).data.cpu().max(dim=1) def batch_predict(input_ids, image, bbox, attention_mask, token_type_ids): model_layered.eval() outputs = model_layered(input_ids=input_ids, image=image, bbox=bbox, attention_mask=attention_mask, token_type_ids=token_type_ids) logits = outputs.logits probs = F.softmax(logits, dim=1) return probs attr = LayerIntegratedGradients(batch_predict, [model_layered.module.layoutlmv2.embeddings.word_embeddings, model_layered.module.layoutlmv2.embeddings.position_embeddings, model_layered.module.layoutlmv2.embeddings.x_position_embeddings, model_layered.module.layoutlmv2.embeddings.y_position_embeddings, model_layered.module.layoutlmv2.embeddings.h_position_embeddings, model_layered.module.layoutlmv2.embeddings.w_position_embeddings, model_layered.module.layoutlmv2.embeddings.token_type_embeddings,]) # Generate reference for tokens token_reference = TokenReferenceBase(reference_token_idx=tokenizer.pad_token_id) text_reference_indices = token_reference.generate_reference(len(encoding['input_ids'][0]), device=device).unsqueeze(0) baselines = text_reference_indices attributions = attr.attribute(inputs=encoding['input_ids'], additional_forward_args=(encoding['image'], encoding['bbox'], encoding['attention_mask'], encoding['token_type_ids']), baselines=baselines, target=answer_idx, n_steps=5) ``` And the error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-1-00bdf4f97de3> in <module>() 60 baselines=baselines, 61 target=answer_idx, ---> 62 n_steps=5) /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/captum/log/__init__.py in wrapper(*args, **kwargs) 33 @wraps(func) 34 def wrapper(*args, **kwargs): ---> 35 return func(*args, **kwargs) 36 37 return wrapper /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/captum/attr/_core/layer/layer_integrated_gradients.py in attribute(self, inputs, baselines, target, additional_forward_args, n_steps, method, 
internal_batch_size, return_convergence_delta, attribute_to_layer_input) 496 method=method, 497 internal_batch_size=internal_batch_size, --> 498 return_convergence_delta=False, 499 ) 500 /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/captum/attr/_core/integrated_gradients.py in attribute(self, inputs, baselines, target, additional_forward_args, n_steps, method, internal_batch_size, return_convergence_delta) 290 additional_forward_args=additional_forward_args, 291 n_steps=n_steps, --> 292 method=method, 293 ) 294 /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/captum/attr/_core/integrated_gradients.py in _attribute(self, inputs, baselines, target, additional_forward_args, n_steps, method, step_sizes_and_alphas) 353 inputs=scaled_features_tpl, 354 target_ind=expanded_target, --> 355 additional_forward_args=input_additional_args, 356 ) 357 /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/captum/attr/_core/layer/layer_integrated_gradients.py in gradient_func(forward_fn, inputs, target_ind, additional_forward_args) 464 465 output = _run_forward( --> 466 self.forward_func, tuple(), target_ind, additional_forward_args 467 ) 468 finally: /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/captum/_utils/common.py in _run_forward(forward_func, inputs, target, additional_forward_args) 451 *(*inputs, *additional_forward_args) 452 if additional_forward_args is not None --> 453 else inputs 454 ) 455 return _select_targets(output, target) <ipython-input-1-00bdf4f97de3> in batch_predict(input_ids, image, bbox, attention_mask, token_type_ids) 34 bbox=bbox, 35 attention_mask=attention_mask, ---> 36 token_type_ids=token_type_ids) 37 logits = outputs.logits 38 probs = F.softmax(logits, dim=1) /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/captum/attr/_utils/input_layer_wrapper.py in forward(self, *args, **kwargs) 74 kwargs[arg_name] = self.input_maps[arg_name](kwargs[arg_name]) 75 ---> 76 return self.module(*tuple(args), **kwargs) /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py in forward(self, input_ids, bbox, image, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1053 output_attentions=output_attentions, 1054 output_hidden_states=output_hidden_states, -> 1055 return_dict=return_dict, 1056 ) 1057 if input_ids is not None: /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), 
/home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py in forward(self, input_ids, bbox, image, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 893 token_type_ids=token_type_ids, 894 position_ids=position_ids, --> 895 inputs_embeds=inputs_embeds, 896 ) 897 /home/natbarkas/anaconda3/envs/explainability-v2/lib/python3.6/site-packages/transformers/models/layoutlmv2/modeling_layoutlmv2.py in _calc_text_embeddings(self, input_ids, bbox, position_ids, token_type_ids, inputs_embeds) 754 token_type_embeddings = self.embeddings.token_type_embeddings(token_type_ids) 755 --> 756 embeddings = inputs_embeds + position_embeddings + spatial_position_embeddings + token_type_embeddings 757 embeddings = self.embeddings.LayerNorm(embeddings) 758 embeddings = self.embeddings.dropout(embeddings) RuntimeError: The size of tensor a (44) must match the size of tensor b (49) at non-singleton dimension 1 ```
10-12-2021 13:36:22
10-12-2021 13:36:22
Is it possible to make a Colab notebook that can reproduce the issue?<|||||>Yes, I sent you a link to the Colab notebook.
transformers
13,974
closed
Specify im-seg mask greyscale mode
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) After testing the output of image-segmentation pipeline as part of image-segmentation widget tests, found out the need to specify to pillow that `image-segmentation` pipeline outputs greyscale images (i.e. single channel images, black & white) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-12-2021 13:07:20
10-12-2021 13:07:20
@Narsil merging it now. However, I plan to do some research on support of boolean images by various libraries. As the first use case, I would like to test whether boolean images are supported in HTML (for the widget use case). I would like to know whether I can do something like this: ```js const img = document.getElementById('some-image-element-id'); img.src = 'data:image/png;base64, BASE64 str of boolean image' ```
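As a rough sketch of the Python side of that experiment (the mask content below is made up, not an actual pipeline output):

```python
import base64
import io

from PIL import Image

# A single-channel mask: mode "1" is boolean, mode "L" is 8-bit greyscale
mask = Image.new("1", (64, 64), 0)
mask.paste(1, (16, 16, 48, 48))

# Encode the mask as a PNG and wrap it in a data URI usable as an <img> src
buffer = io.BytesIO()
mask.save(buffer, format="PNG")
data_uri = "data:image/png;base64," + base64.b64encode(buffer.getvalue()).decode("ascii")
print(data_uri[:64] + "...")
```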
transformers
13,973
closed
examples/legacy/token-classification doesn't work well
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:4.12.0.dev0 - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: × the official example scripts: (give details below) 〇 my own modified scripts: (give details below) run.sh 1 ## The relevant files are currently on a shared Google 2 ## drive at https://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J 3 ## Monitor for changes and eventually migrate to nlp dataset 4 5 export MAX_LENGTH=128 6 export BERT_MODEL=bert-base-multilingual-cased 7 python3 preprocess.py data/train_tab_space.txt $BERT_MODEL $MAX_LENGTH > train.txt 8 python3 preprocess.py data/dev_tab_space.txt $BERT_MODEL $MAX_LENGTH > dev.txt 9 python3 preprocess.py data/test_tab_space.txt $BERT_MODEL $MAX_LENGTH > test.txt 10 cat train.txt dev.txt test.txt | cut -d " " -f 2 | grep -v "^$"| sort | uniq > labels.txt 11 export OUTPUT_DIR=germeval-model 12 export BATCH_SIZE=32 13 export NUM_EPOCHS=3 14 export SAVE_STEPS=750 15 export SEED=1 16 17 python3 run_ner.py \ 18 --task_type NER \ 19 --data_dir . 
\ 20 --labels ./labels.txt \ 21 --model_name_or_path $BERT_MODEL \ 22 --output_dir $OUTPUT_DIR \ 23 --max_seq_length $MAX_LENGTH \ 24 --num_train_epochs $NUM_EPOCHS \ 25 --per_gpu_train_batch_size $BATCH_SIZE \ 26 --save_steps $SAVE_STEPS \ 27 --seed $SEED \ 28 --do_train \ 29 --do_eval \ 30 --do_predict The tasks I am working on is: × an official GLUE/SQUaD task: (give the name) 〇 my own task or dataset: (give details below) I use my data [dev_tab_space.txt](https://github.com/huggingface/transformers/files/7328515/dev_tab_space.txt) [train_tab_space.txt](https://github.com/huggingface/transformers/files/7328517/train_tab_space.txt) [test_tab_space.txt](https://github.com/huggingface/transformers/files/7328516/test_tab_space.txt) Looking at the eval_results.txt,test_results.txt and test_predictions.txt, it doesn't seem to be predicted correctly. eval_results.txt 1 eval_loss = 0.0005731257260777056 2 eval_accuracy_score = 1.0 3 eval_precision = 0.0 4 eval_recall = 0.0 5 eval_f1 = 0.0 6 eval_runtime = 1.7257 7 eval_samples_per_second = 130.379 8 eval_steps_per_second = 16.804 9 epoch = 3.0 test_results.txt 1 test_loss = 0.0005731506389565766 2 test_accuracy_score = 1.0 3 test_precision = 0.0 4 test_recall = 0.0 5 test_f1 = 0.0 6 test_runtime = 1.7989 7 test_samples_per_second = 132.302 8 test_steps_per_second = 16.677 test_predictions.txt 1 L.D._Schamphelaere O 2 ,_ O 3 " O 4 Short_run_digital_color_printing O 5 , O 6 " O 7 _ O 8 Japan_Hard_Copy'96論文集 O 9 , O 10 _ O 11 p. O 12 5 O 13 ,_ O 14 電子写真学会 O 15 . O 16 17 弘瀬紀寿 O 18 , O 19 “ O 20 RIPの技術動向とデジタルワークフロー O 21 ,” O 22 印刷雑誌 O 23 , O 24 vol. O 25 81 O 26 ,_ O 27 pp. O 28 41 O 29 - O 30 46 O 31 ,_ O 32 1998 O 33 . O 34 35 加藤茂夫 O 36 , O 37 長谷川まどか O 38 , O ↑After that, 「token O」 is repeated. However, 「O」label is not included in my data in the first place. Does anyone have similar experience? Please help me! ## To reproduce Steps to reproduce the behavior: 1.Go to transformers/examples/legacy/token-classification 2.Make 「data」 file. And, put 3 text data(train_tab_space.txt test_tab_space.txt dev_tab_space.txt) in it. 3.Rewrite run.sh. 4.Run run.sh. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
10-12-2021 09:25:16
10-12-2021 09:25:16
It is recommended to use the most up-to-date example scripts. Legacy scripts are not guaranteed to work. From the legacy readme: > Using these examples together with a recent version of the library usually requires to make small (sometimes big) adaptations to get the scripts working. Try the up-to-date scripts instead: https://github.com/huggingface/transformers/tree/master/examples/pytorch/token-classification<|||||>Thank you for your advice. There is a reason to use examples/legacy/token-classification. I want to use cl-tohoku/bert-base-japanese and bert-base-multilingual-cased, because my dataset is Japanese. However, those two models could not be used in examples/pytorch/token-classification with the following error. 「ValueError: This example script only works for models that have a fast tokenizer.」 In examples/legacy/token-classification, the example data (GermEval 2014) presented by huggingface worked fine.<|||||>Hi @wasabizusi, after looking at your training data, it seems that you're using PoS tagging as the downstream task? If so, you need to pass all possible tags for the label argument. Could you paste the output of the `labels.txt` file here :thinking: I can have a look at it later. <|||||>Hi @stefan-it . labels.txt is output as follows. D DAND DC DCL DCO DE DED DHY DLBR DN DP DPP DRBR DS DSL DSP DUN DV DZC DZE DZLBR DZRBR DZS ETC RA RAOT RB RC RD RE RL RM RN RP RPP RT RTR RURL RV RW RY UNKNOWN sp<|||||>Sorry. I forgot to attach the original labels.txt. [labels.txt](https://github.com/huggingface/transformers/files/7334822/labels.txt) <|||||>Ah, I see, could you please try to use `--task_type POS` instead of `NER`. <|||||>Thank you for your advice! I ran the program, but precision, recall, and f1 were all 1.0. Also, test_predictions.txt was generated, but nothing was written to it. When I checked the data downloaded by legacy/token-classification/run_pos.sh, I think that the data format may be different. (For some reason, run_pos.sh didn't work properly...) I will consider processing the data.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
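As a hedged sketch of what "processing the data" could look like — this assumes the legacy scripts expect one `token label` pair per line separated by a single space, with blank lines marking sentence boundaries, which a tab-separated file would not satisfy:

```python
def tabs_to_spaces(src_path, dst_path):
    """Convert a tab-separated token/label file into space-separated lines."""
    with open(src_path, encoding="utf-8") as src, open(dst_path, "w", encoding="utf-8") as dst:
        for line in src:
            line = line.rstrip("\n")
            if not line.strip():
                dst.write("\n")  # keep empty lines as sentence boundaries
                continue
            token, label = line.split("\t")[:2]
            dst.write(f"{token} {label}\n")

# File names taken from the run.sh above; the output name is hypothetical
tabs_to_spaces("data/train_tab_space.txt", "data/train_converted.txt")
```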
transformers
13,972
open
LayoutLMv2Processor does not accept the XLMRobertaTokenizerFast
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.3 - Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - PyTorch version (GPU?): 1.9.1+cu102 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @NielsRogge <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - LayoutLMv2 --> ## Information Model I am using: LayoutXLM The problem arises when using: * [x] the official example scripts: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb The tasks I am working on is: * [x] an official task: SequenceClassification ## To reproduce Steps to reproduce the behavior: When we replace the layoutlmv2 tokenizer in cell 8 of https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/RVL-CDIP/Fine_tuning_LayoutLMv2ForSequenceClassification_on_RVL_CDIP.ipynb ```python from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2Tokenizer, LayoutLMv2Processor feature_extractor = LayoutLMv2FeatureExtractor() tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") processor = LayoutLMv2Processor(feature_extractor, tokenizer) ``` with the layoutxlm tokenizer as described in https://huggingface.co/transformers/model_doc/layoutxlm.html ```python from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2Tokenizer, LayoutLMv2Processor, AutoTokenizer feature_extractor = LayoutLMv2FeatureExtractor() tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutxlm-base') processor = LayoutLMv2Processor(feature_extractor, tokenizer) ``` the following error occurs ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) /tmp/ipykernel_3433/3030379235.py in <module> 5 tokenizer = AutoTokenizer.from_pretrained('microsoft/layoutxlm-base') 6 #tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") ----> 7 processor = LayoutLMv2Processor(feature_extractor, tokenizer) ~/.cache/pypoetry/virtualenvs/stp-experiment0-RgVp7VCN-py3.8/lib/python3.8/site-packages/transformers/models/layoutlmv2/processing_layoutlmv2.py in __init__(self, feature_extractor, tokenizer) 54 ) 55 if not isinstance(tokenizer, (LayoutLMv2Tokenizer, LayoutLMv2TokenizerFast)): ---> 56 raise ValueError( 57 f"`tokenizer` has to be of type {LayoutLMv2Tokenizer.__class__} or {LayoutLMv2TokenizerFast.__class__}, but is {type(tokenizer)}" 58 ) ValueError: `tokenizer` has to be of type <class 'type'> or <class 'type'>, but is <class 'transformers.models.xlm_roberta.tokenization_xlm_roberta_fast.XLMRobertaTokenizerFast'> ``` It looks like the LayoutLMv2Processor does not accept the XLMRobertaTokenizerFast. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> That the LayoutLMv2Processor accepts the XLMRobertaTokenizerFast.
10-12-2021 08:09:22
10-12-2021 08:09:22
`LayoutLMv2Processor` currently only supports `LayoutLMv2Tokenizer`/`LayoutLMv2TokenizerFast`. It would be a good first issue to add support for a new `LayoutXLMTokenizerFast`, which is based on XLMRoBERTa and takes into account the bounding box and word label inputs.<|||||>Hi @NielsRogge, I'd like to take a shot at this!<|||||>Great! So one would need to add `tokenization_layoutxlm.py` and `tokenization_layoutxlm_fast.py` to the [LayoutLMv2 folder](https://github.com/huggingface/transformers/tree/master/src/transformers/models/layoutlmv2). These should be near identical copies of `tokenization_xlm_roberta.py` and `tokenization_xlm_roberta_fast.py` (found [here](https://github.com/huggingface/transformers/tree/master/src/transformers/models/xlm_roberta)), respectively, but with added support for `boxes` and `word_labels` inputs (you can take a look at `tokenization_layoutlmv2.py` and `tokenization_layoutlmv2_fast.py` respectively how these are implemented).<|||||>> Great! So one would need to add `tokenization_layoutxlm.py` and `tokenization_layoutxlm_fast.py` to the [LayoutLMv2 folder](https://github.com/huggingface/transformers/tree/master/src/transformers/models/layoutlmv2). These should be near identical copies of `tokenization_xlm_roberta.py` and `tokenization_xlm_roberta_fast.py` (found [here](https://github.com/huggingface/transformers/tree/master/src/transformers/models/xlm_roberta)), respectively, but with added support for `boxes` and `word_labels` inputs (you can take a look at `tokenization_layoutlmv2.py` and `tokenization_layoutlmv2_fast.py` respectively how these are implemented). Thanks. Any advice on how I should go about writing the unit tests?<|||||>For the unit tests, I would define `test_tokenization_layoutxlm`.py and `test_tokenization_layoutxlm_fast.py` based on the corresponding tests of LayoutLMv2.
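To make the scope of that good first issue concrete, here is a rough, purely illustrative skeleton of what the fast variant could start from. This is an assumption about the eventual design, not the implementation that will be merged; the real class has to align every bounding box in `boxes` (and every label in `word_labels`) with the sub-word tokens it produces, as `LayoutLMv2TokenizerFast` does:

```python
from transformers import XLMRobertaTokenizerFast


class LayoutXLMTokenizerFast(XLMRobertaTokenizerFast):
    """Sketch of an XLM-RoBERTa-based tokenizer that also accepts
    word-level bounding boxes and word labels (LayoutXLM)."""

    def __call__(self, text, boxes=None, word_labels=None, **kwargs):
        # Placeholder logic: the actual implementation must expand `boxes`
        # and `word_labels` to token level alongside the usual encoding;
        # see tokenization_layoutlmv2_fast.py for the reference behaviour.
        encoding = super().__call__(text, **kwargs)
        if boxes is not None:
            encoding["bbox"] = boxes
        if word_labels is not None:
            encoding["labels"] = word_labels
        return encoding
```

With such a class in place, `LayoutLMv2Processor` would also need to accept it in its `isinstance` check (or a dedicated LayoutXLM processor could be introduced).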
transformers
13,971
closed
Bug?/Question? Vocab of RoBERTa different from GPT2
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Linux-5.11.0-34-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - GPT-2, GPT: @patrickvonplaten, @LysandreJik If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Tokenizers: @LysandreJik Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ (Pdb) RobertaTokenizer.from_pretrained('roberta-large').vocab_size 50265 (Pdb) GPT2Tokenizer.from_pretrained('gpt2').vocab_size 50257] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> - GPT-2, GPT: @patrickvonplaten, @LysandreJik - If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. (Not sure who I should ping for RoBERTa) Library: - Tokenizers: @LysandreJik ## Problem Model I am using: GPT-2, RoBERTa The problem arises when I ran: ``` RobertaTokenizer.from_pretrained('roberta-large').vocab_size Output: 50265 GPT2Tokenizer.from_pretrained('gpt2').vocab_size Output: 50257 ``` ## Expected behavior Since RoBERTa and GPT-2 share vocabulary, are they supposed to have equal `vocab_size`? Not sure if this is a question or bug, so I put it here. If this is intended, may I ask where the difference comes from? <!-- A clear and concise description of what you would expect to happen. -->
10-12-2021 06:42:02
10-12-2021 06:42:02
Hi there! GPT-2 and RoBERTa both use byte-level Byte-Pair-Encoding for tokenization but they are different tokenizers, trained on different datasets with different `vocab_size`. So this is not a bug. They share the tokenization method but are essentially different tokenizers. Also, please use the [forum](https://discuss.huggingface.co/) for such questions. Thank you :) <|||||>> Hi there! GPT-2 and RoBERTa both use byte-level Byte-Pair-Encoding for tokenization but they are different tokenizers, trained on different datasets with different `vocab_size`. So this is not a bug. They share the tokenization method but are essentially different tokenizers. > > Also, please use the [forum](https://discuss.huggingface.co/) for such questions. Thank you :) I see, thank you very much! I am confused a bit because of seeing paper like these (https://aclanthology.org/2020.tacl-1.18.pdf, https://aclanthology.org/2020.emnlp-main.344.pdf) using GPT2 and RoBERTa since they share the vocab..
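For anyone who wants to see concretely where the extra entries come from, the two vocabularies can be diffed directly. A small sketch (the interpretation is left to the reader; the point is only that the token sets are inspectable):

```python
from transformers import GPT2Tokenizer, RobertaTokenizer

gpt2_vocab = GPT2Tokenizer.from_pretrained("gpt2").get_vocab()
roberta_vocab = RobertaTokenizer.from_pretrained("roberta-large").get_vocab()

only_in_roberta = set(roberta_vocab) - set(gpt2_vocab)
only_in_gpt2 = set(gpt2_vocab) - set(roberta_vocab)

print(len(gpt2_vocab), len(roberta_vocab))  # 50257 vs 50265, as reported above
print("only in roberta-large:", sorted(only_in_roberta))
print("only in gpt2:", sorted(only_in_gpt2))
```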
transformers
13,970
closed
Weird implementation of GPT2
null
10-12-2021 04:53:43
10-12-2021 04:53:43
transformers
13,969
closed
How to load a fine-tuned model and do predictions?
I have fine-tuned a named entity recognition BERT model using the example here: https://github.com/huggingface/transformers/tree/master/examples/pytorch/token-classification

The command I used is:

python3 run_ner.py --config_name (the BERT model I chose) --tokenizer_name (the BERT model I chose) --model_name_or_path (the BERT model I chose) --train_file (my train file) --validation_file (my validation file) --text_column_name Words --label_column_name Tags --output_dir output --do_train --do_eval

I successfully obtained the output, which consists of config.json and model.h5. With this, how can I load my model (the model.h5 above) and make a prediction on an example sentence such as "I visited Github today"? For the tutorial in the link above, the code has the "do_predict" option, but I do not know how to tell the "run_ner.py" code to load my fine-tuned model, and I want to see the predictions (instead of the code giving me metrics such as accuracy = xx%, precision = xx%). Thanks.
10-12-2021 04:16:24
10-12-2021 04:16:24
Hi, please use the [forum](https://discuss.huggingface.co/) to post such general questions, we use issues for bug reports and feature requests. Thank you!<|||||>You can check out [this post](https://discuss.huggingface.co/t/decoding-the-predicted-output-array-in-distilbertbase-uncased-model-for-ner/10673/2?u=nielsr) I wrote on the forum about performing inference with NER models.<|||||>> Hi, please use the [forum](https://discuss.huggingface.co/) to post such general questions, we use issues for bug reports and feature requests. Thank you! Apologies, I am quite new here. Please close my post if necessary.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
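For completeness, a minimal inference sketch along the lines of the forum post linked above. It assumes the fine-tuned weights and the tokenizer files ended up together in the `--output_dir` used during training (the directory name below is a placeholder):

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline

model_dir = "output"  # the --output_dir passed to run_ner.py

tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForTokenClassification.from_pretrained(model_dir)
# if the checkpoint was saved as a TF .h5 file, add from_tf=True above

ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # groups sub-word pieces back into word-level entities
)
print(ner("I visited Github today"))
```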
transformers
13,968
closed
Fix missing tpu variable in benchmark_args_tf.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-12-2021 02:25:43
10-12-2021 02:25:43
transformers
13,967
closed
Add TFCLIPModel
# What does this PR do? Add `TFCLIPModel`, along with `TFCLIPTextModel` and `TFCLIPVisionModel`. (and fixed doc examples in `modeling_clip.py`) ## Who can review? @Rocketknight1 @patrickvonplaten @NielsRogge @sgugger @LysandreJik
10-11-2021 20:22:28
10-11-2021 20:22:28
Finally finalized this PR after working on something else. It is ready for review :)<|||||>@patil-suraj You know CLIP much better than me, but feel free to assign me as a reviewer too if you want me to double-check the TF code!<|||||>Thanks, @Rocketknight1 . I will re-work the initialization part (forgot that PT model codes have `_init_weights`). Could you check my comment about your question on `tf.tile/tf.broadcast_to`. I have a question there.<|||||>I addressed all the reviews, in particular the weight initialization. In PT version, there is ``` if isinstance(module, CLIPTextEmbeddings): module.token_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02) module.position_embedding.weight.data.normal_(mean=0.0, std=factor * 0.02) ``` I am not sure why `0.02` is used instead of `config.initializer_range`. cc @patil-suraj <|||||>Should this be merged @Rocketknight1 @patil-suraj? :smiley: <|||||>> Should this be merged @Rocketknight1 @patil-suraj? 😃 There is (new) conflicts I just saw. I will resolve later today or tomorrow.<|||||>okay for merge on my side (once the conflicts are resolved) !<|||||>Merging will require: - a rebase on master - converting all docstrings to Markdown. You can install our `doc-builder` utility to do this: ``` pip install git+https://github.com/huggingface/doc-builder ``` Then run from your branch in the Transformers repo ``` doc-builder convert src/transformers/models/clip/modeling_tf_clip.py ``` to do this in the newly added modeling_tf_clip file. Note that clip.rst has disappeared and you will need to add your model to clip.mdx instead. Let us know if you need any help and please ping for review when you're done!<|||||>@patil-suraj @Rocketknight1 @sgugger I rebased on master and fixed errors. The last failed test is irrelevant I think. Ready to be merged when you have some time :) Thank you for reviewing this PR!<|||||>@sgugger I'm happy at this point!
transformers
13,966
closed
Feature request : add leave=True to dataset.map to enable tqdm nested bars (and whilst we're at it couldn't we get a way to access directly tqdm underneath?)
Furthering tqdm integration #2374 and #11797 solutioned by #12226 provided with tqdm description as `desc='`, it would be so nice to be able to nest HuggingFace `Datasets.map()` progress bars in the grander scheme of things and whilst we're at it why not other functions. By the way is there not a way to directly interact with underlying tqdm module ? `**kwargs-ish`? @sgugger @bhavitvyamalik
10-11-2021 20:16:31
10-11-2021 20:16:31
If you are requesting that the `Dataset.map` method accept this kwarg, you should open the feature request on the [Datasets repo](https://github.com/huggingface/datasets).<|||||>Woops indeed, done :)
transformers
13,965
closed
In pre-training a model how can I resume from last saved model?
Using the flax example for training a T5 model from scratch, I am training a model using:

```
python run_t5_mlm_flax.py \
    --output_dir="/content/drive/MyDrive/Pouramini/" \
    --cache_dir="/content/drive/MyDrive/cache/" \
    --preprocessing_num_workers="20" \
    --model_type="t5" \
    --config_name="/content/drive/MyDrive/Pouramini/" \
    --tokenizer_name="/content/drive/MyDrive/Pouramini/" \
    --dataset_name="oscar" \
    --dataset_config_name="unshuffled_deduplicated_fa" \
    --max_seq_length="256" \
    --per_device_train_batch_size="32" \
    --per_device_eval_batch_size="32" \
    --adafactor \
    --learning_rate="0.005" \
    --weight_decay="0.001" \
    --warmup_steps="2000" \
    --logging_steps="500" \
    --save_steps="5000" \
    --eval_steps="50000" \
    --resume_from_checkpoint=True
```

Sometimes, the training interrupts and I must run the command again. But while I have specified `resume_from_checkpoint`, it seems it trains from scratch, because the loss it shows is higher than the loss of the last saved model. What did I miss in the code above? I thought to add:

```
--model_name_or_path="/content/drive/MyDrive/Pouramini/" \
```

Is it necessary?
10-11-2021 19:41:28
10-11-2021 19:41:28
cc @sgugger <|||||>It's Flax, so it's more up to @patil-suraj aisle than mine :-)<|||||>`resume_from_checkpoint` is not yet correctly implemented in flax examples. You could set the `--model_name_or_path` to the last saved checkpoint, this will load the model from that checkpoint, but note that the optimizer state is not saved. For a complete resume, we need to load both model and optimizer states from the last checkpoint. For now, you could find how to do this in [this t5 example ](https://github.com/gsarti/t5-flax-gcp/blob/34def61c98097224b4d9bca6c1e0832cd015f06a/run_t5_mlm_flax.py#L405)<|||||>@patil-suraj Thank you, I followed the example and added save and restore functions. However, I resumed a model (with 30000 steps) by just adding `--model_name_or_path` and if I interrupt my current running it will again save a model (with another 30000) without saving the optimizer. I would like to know if the training was okay? is it a serious stage? I mean should I train the model again from scratch or I can still rely on these saved models?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
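Concretely, the interim workaround suggested above (model weights only, optimizer state lost) is just a matter of pointing `--model_name_or_path` at the directory containing the last saved checkpoint. A sketch using the paths from this issue, with the remaining flags kept as in the original command:

```bash
python run_t5_mlm_flax.py \
  --model_name_or_path="/content/drive/MyDrive/Pouramini/" \
  --output_dir="/content/drive/MyDrive/Pouramini/" \
  --tokenizer_name="/content/drive/MyDrive/Pouramini/" \
  --dataset_name="oscar" \
  --dataset_config_name="unshuffled_deduplicated_fa" \
  --max_seq_length="256" \
  --per_device_train_batch_size="16" \
  --adafactor \
  --learning_rate="0.005" \
  --warmup_steps="2000" \
  --save_steps="5000"
```

Note this restarts the optimizer and the learning-rate schedule from scratch; a full resume needs the optimizer state to be saved and restored as well, as in the t5-flax-gcp script linked above.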
transformers
13,964
closed
[wav2vec2] fix --gradient_checkpointing
This PR fixes `--gradient_checkpointing` in wav2vec2 examples. @sgugger
10-11-2021 17:40:42
10-11-2021 17:40:42
it was failing deepspeed tests for wav2vec2 - which I still run manually to check that Deepspeed didn't break something. wav2vec2 is a complex model with several manual fixes to make DS work. but I hear you about the pinned version! This makes sense. I was running those with master, so should probably add a skip rule to match the pinned version. Which would make it much more difficult for me to run the tests. <|||||>@patrickvonplaten, do we merge this or close the PR? Thank you!
transformers
13,963
closed
Add Unispeech & Unispeech-SAT
# What does this PR do?

This PR adds UniSpeech from Microsoft: https://github.com/microsoft/UniSpeech

### TODOS:
- [x] Run UniSpeech models and verify that HF forward pass yields same output
- [x] Add UniSpeech checkpoints: https://huggingface.co/microsoft/unispeech-large-1500h-cv
- [x] Run UniSpeech-SAT and verify that HF forward pass yields same output (blocked by: https://github.com/microsoft/UniSpeech/issues/4)
- [x] Add UniSpeech-SAT checkpoints
- [x] Add UniSpeech vocab and preprocessing (verify with Microsoft)
- [x] Verify naming with Microsoft & make README.md's pretty
- [x] Clean PR and add tests
- [x] Verify fine-tuning works

### Future PR:
- [ ] Correct pretraining loss
10-11-2021 16:21:20
10-11-2021 16:21:20
Wait until https://github.com/huggingface/transformers/pull/13877 is merged <|||||>PR is good for review IMO: - PreTrained Unispeech checkpoints: https://huggingface.co/models?other=unispeech - PreTrained Unispeech-SAT checkpoints: https://huggingface.co/models?other=unispeech-sat<|||||>I think we can merge the pretrained models now. To make them "promotable" we should still do 2 things: - Unispeech: Add phoneme <-> text tokenizer, need some feedback here from the authors - Unispeech-SAT: the model should work very well for speaker-verification and speaker-diarization. We should add those two tasks and then promote the model on it as it performs very well
transformers
13,962
closed
Add the SEW and SEW-D speech models
# What does this PR do?

This PR adds the SEW and SEW-D models from the paper "[Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870)".

Source of the models: https://github.com/asappresearch/sew/

* **SEW** is based on Wav2Vec2, but with time frame downsampling and upsampling around the transformer layers.
* **SEW-D** replaces the transformer layers in SEW with a DeBERTa-v2 encoder.

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in this PR.

## TODO
- [x] model docs
- [x] checkpoints conversion
- [ ] fine-tuned CTC checkpoints?
10-11-2021 13:41:24
10-11-2021 13:41:24
The VQ pretraining modules aren't ported yet. After #13877 is merged they'll be added in a separate PR.<|||||>Let's try to get this PR merged by Thursday/Friday - anything I can help with? :-)<|||||>@patrickvonplaten in the end I removed the `feature_projection` if-else and left the modules only in SEW-D. The checkpoints are all uploaded now :tada: https://huggingface.co/models?other=sew https://huggingface.co/models?other=sew-d
transformers
13,961
closed
[Gradient checkpoining] Correct disabling `find_unused_parameters` in Trainer when gradient checkpointing is enabled
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Following https://github.com/huggingface/transformers/pull/13657 this PR makes sure that the Trainer uses the new gradient_checkpointing logic to disable `find_unused_parameters` argument in DDP. @sgugger - don't think many people have switched from `self.config._gradient_checkpointing` to the new API yet, but for those that have previously `find_unused_parameters` would not have been set to `False` which then leads to some hard to debug problems like: https://discuss.pytorch.org/t/finding-the-cause-of-runtimeerror-expected-to-mark-a-variable-ready-only-once/124428/5 Not sure if this worth a patch or not ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
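Outside of `Trainer`, the behaviour this PR restores can be written down in a few lines. A hedged sketch (the attribute tracking is illustrative; the point is only that `find_unused_parameters` must be `False` once gradient checkpointing is active, roughly because checkpointed modules re-run their forward during backward and DDP would otherwise mark their parameters as ready more than once):

```python
from torch.nn.parallel import DistributedDataParallel as DDP


def wrap_for_ddp(model, local_rank: int, gradient_checkpointing: bool):
    # `gradient_checkpointing` is however your code tracks it, e.g. whether
    # model.gradient_checkpointing_enable() was called on this model.
    return DDP(
        model,
        device_ids=[local_rank],
        output_device=local_rank,
        find_unused_parameters=not gradient_checkpointing,
    )
```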
10-11-2021 10:35:38
10-11-2021 10:35:38
transformers
13,960
closed
BartEncoder add set_input_embeddings
# What does this PR do? To unify the interface of the seq2seq models (eg. T5, Bart....), I add `set_input_embeddings`, `get_input_embeddings` to BartEncoder. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten, @patil-suraj <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
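A quick usage sketch of what this unification enables. It is hedged: it assumes the encoder exposes the embedding accessors exactly like the top-level model does, which is what this PR adds:

```python
import torch.nn as nn
from transformers import BartModel

model = BartModel.from_pretrained("facebook/bart-base")
encoder = model.get_encoder()

# With this PR the encoder itself exposes the same accessors as BartModel / the T5 stacks:
old_emb = encoder.get_input_embeddings()
new_emb = nn.Embedding(old_emb.num_embeddings, old_emb.embedding_dim, padding_idx=old_emb.padding_idx)
encoder.set_input_embeddings(new_emb)
```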
10-11-2021 07:54:18
10-11-2021 07:54:18
transformers
13,959
closed
Error In Fine-Tuning MarianMT's opus-mt model ValueError: The two structures don't have the same sequence length. Input structure has length 4, while shallow structure has length 3
## Environment info - `transformers` version: 4.11.3 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.0+cu111 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten ## Information Model I am using MarianMT: The problem arises when using: * [ ] my own modified scripts: I aim to finetune the opus-mt-en-hi model on a custom dataset for En-Hindi translation and while using the cross entropy objective my model.fit() loop errors out. I am facing the same error as mentioned in #11560 . In my best understanding, the source of the error is that the shape of y_pred and y_true don't match such as - y_pred is (Batch_size, seq_length, vocab_size ) and y_true is ( Batch_size, seq_length) but I am not sure if I have to softmax ( to generate y_pred per element that can be computed against each element in y_true ) the logits first to get the training started. The tasks I am working on is: * task: translation ## To reproduce : End-End script available [here](https://colab.research.google.com/drive/11wVlquduPyCV7vb6nh1eEt2PhvAbkyBM?usp=sharing) Steps to reproduce the behavior: Run colab script ( sample tensors are already provided as inputs) ## Expected behavior I looked at https://github.com/huggingface/transformers/blob/master/examples/tensorflow/translation/run_translation.py but except the custom objective function ( which is adding functionality to the same base sparse CE function ) , cannot identify anything that could be altered in my current script. Hence, the expected behavior would be for the training to proceed as expected. Would be very helpful if you could provide some direction on how to resolve this. Thanks in advance!
10-11-2021 07:51:09
10-11-2021 07:51:09
This seems related to TF model, gently pinging @Rocketknight1 :) <|||||>@Rocketknight1 - lemme know if you think you'll find time. Otherwise I can take a look as well I think :-)<|||||>I'll take a look!<|||||>@patrickvonplaten Investigated for an hour or so - I was able to reproduce the error, but debugging the script ran into constant shape errors no matter what I tried. I think I'd need to dig into this in more detail and make sure the input data is really what the model is expecting, but I have a bunch of other priorities right now, so I probably won't get to it in the next few days at least!<|||||>Sounds good! Let me know if you would like me to take over the issue<|||||>Hi, I'm sorry for how long it took me to get to this. Once I actually had time to sit down and look at it, the bug is straightforward - it's not within the model at all, but within the data preparation. The TF dataset you're creating does not batch inputs, and therefore the first batch dimension is missing, which is what causes the downstream shape errors. There are secondary issues with the loss function as supplied - as the model outputs logits, SparseCategoricalCrossentropy() should be called only with `from_logits=True`. You may find it easier to use the internal loss by leaving the `loss` argument blank. Here's [a modified version of your script](https://colab.research.google.com/drive/11wVlquduPyCV7vb6nh1eEt2PhvAbkyBM?usp=sharing) that should work. Please note, though, that we can't do extensive bug-fixing of user code in GitHub issues - taking this to the Discord or forums would usually be better!<|||||>> Hi, I'm sorry for how long it took me to get to this. Once I actually had time to sit down and look at it, the bug is straightforward - it's not within the model at all, but within the data preparation. The TF dataset you're creating does not batch inputs, and therefore the first batch dimension is missing, which is what causes the downstream shape errors. There are secondary issues with the loss function as supplied - as the model outputs logits, SparseCategoricalCrossentropy() should be called only with `from_logits=True`. You may find it easier to use the internal loss by leaving the `loss` argument blank. > > Here's [a modified version of your script](https://colab.research.google.com/drive/11wVlquduPyCV7vb6nh1eEt2PhvAbkyBM?usp=sharing) that should work. Please note, though, that we can't do extensive bug-fixing of user code in GitHub issues - taking this to the Discord or forums would usually be better! Thanks a lot for looking into it! Noted.
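To make the two fixes in the last comment concrete, here is a hedged, minimal sketch. It uses toy sentences with placeholder targets, and it assumes a transformers version where leaving `loss` unset in `compile()` makes Keras use the loss the model computes from the `labels` key:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-hi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_name)

src = ["I visited the library today.", "The weather is nice."]
tgt = ["placeholder target 1", "placeholder target 2"]  # real Hindi references in practice

features = dict(tokenizer(src, padding=True, truncation=True, return_tensors="np"))
with tokenizer.as_target_tokenizer():
    features["labels"] = tokenizer(tgt, padding=True, truncation=True, return_tensors="np")["input_ids"]

# The missing step in the original script: batch the dataset so inputs keep a batch dimension.
train_ds = tf.data.Dataset.from_tensor_slices(features).batch(2)

model.compile(optimizer=tf.keras.optimizers.Adam(5e-5))  # no `loss=` -> the model's internal loss is used
model.fit(train_ds, epochs=1)
```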
transformers
13,958
closed
Add support for `push_to_hub` for `AutoFeatureExtractor`
# 🚀 Feature request Add support for `push_to_hub` for `AutoFeatureExtractor.save_pretrained()`. Hey, while doing some Speech tests I noticed that `AutoFeatureExtractor.save_pretrained()` is not supporting `push_to_hub`. https://github.com/huggingface/transformers/blob/46efc5802458e91a702528332d76d464061d201f/src/transformers/feature_extraction_utils.py#L194 compared to `AutoTokenizer` https://github.com/huggingface/transformers/blob/0b8c84e110bf9012f30a85c40b9ff8ea868689fd/src/transformers/tokenization_utils_base.py#L1432
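For reference, this is the call the request is about. The tokenizer and model classes already accept it, while the feature extractor does not yet at the time of this issue, so treat the last line as the desired API rather than something that currently works (the repo names below are placeholders):

```python
from transformers import AutoFeatureExtractor

feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")

# works today: plain local save
feature_extractor.save_pretrained("./my-feature-extractor")

# requested: mirror the tokenizer API
feature_extractor.save_pretrained("./my-feature-extractor", push_to_hub=True)
```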
10-11-2021 07:08:30
10-11-2021 07:08:30
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,957
closed
Replace assert with unittest assertions
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Partly fix #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-10-2021 22:30:49
10-10-2021 22:30:49
transformers
13,956
closed
Training on Tpu got stuck at 0%
I am trying to train a t5 model for the Persian language, based on the examples. I cloned the example on Google Colab Pro and chose TPU. I also added some code to the program to connect to the Colab TPU via JAX.

```
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
```

Then run:

```
!python run_t5_mlm_flax.py \
    --output_dir="/content/drive/MyDrive/Pouramini/" \
    --cache_dir="/content/drive/MyDrive/cache/" \
    --preprocessing_num_workers="20" \
    --model_type="t5" \
    --config_name="/content/drive/MyDrive/Pouramini/" \
    --tokenizer_name="/content/drive/MyDrive/Pouramini/" \
    --dataset_name="oscar" \
    --dataset_config_name="unshuffled_deduplicated_fa" \
    --max_seq_length="256" \
    --per_device_train_batch_size="16" \
    --per_device_eval_batch_size="16" \
    --adafactor \
    --learning_rate="0.005" \
    --weight_decay="0.001" \
    --warmup_steps="2000" \
    --logging_steps="500" \
    --save_steps="5000" \
    --eval_steps="50000" \
    --resume_from_checkpoint=True
```

Now, for over an hour, the progress has been stuck at:

```
Epoch ... : 0% 0/3 [00:00<?, ?it/s]
Training...: 0% 0/161270 [00:00<?, ?it/s]
```

The number of steps indicates that the tokens were distributed over 8 TPU cores. The exact number of tokens is over 20M. Anyway, I don't know what it is doing, whether it's a stage that I must wait for, or whether it hung or something. Just now it flushed hundreds of lines like the ones below and stopped:

```
2021-10-10 20:07:11.931040: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2348383
2021-10-10 20:07:11.931048: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2357538
2021-10-10 20:07:11.931057: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2382164
2021-10-10 20:07:11.931065: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2391319
2021-10-10 20:07:11.931072: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2490748
2021-10-10 20:07:11.931080: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2418989
2021-10-10 20:07:11.931088: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2423186
2021-10-10 20:07:11.931095: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2471759
2021-10-10 20:07:11.931103: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2437978
2021-10-10 20:07:11.931112: E external/org_tensorflow/tensorflow/compiler/xla/python/tpu_driver/grpc_tpu_driver.cc:479] Resetting: 52612:2456967
........
```

What does it mean?
10-10-2021 20:08:40
10-10-2021 20:08:40
Hi, this looks like a strange error. > The number of steps indicates that the tokens were distributed on 8 Tpu core. We specify `--per_device_train_batch_size=16` which means each TPU core will use 16 BS, so the effective total BS is 16 * 8 = 256 and the number of steps are computed using this total BS. It looks like the model is stuck at compilation, it could happen if the model cannot fit on the TPU core. What is the size of the model, is it t5-base or t5-large or larger than that? Also, could you post the jax, jaxlib, and flax versions?<|||||>As I checked them using ``` !pip freeze | grep jax !pip freeze | grep flax ``` it prints: ``` jax==0.2.21 jaxlib @ https://storage.googleapis.com/jax-releases/cuda111/jaxlib-0.1.71+cuda111-cp37-none-manylinux2010_x86_64.whl flax==0.3.5 ``` I also tried `--per_device_train_batch_size=8`, but I got the same error!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I met the same error.<|||||>@patil-suraj @puraminy Same error here. I add these two lines in the script as well. ``` import jax.tools.colab_tpu jax.tools.colab_tpu.setup_tpu() ``` In Colab, Jax is still unable to connect TPU. ``` 2022-05-13 02:37:09.265602: E external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_driver.cc:271] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected. [02:37:09] - INFO - absl - Unable to initialize backend 'gpu': FAILED_PRECONDITION: No visible GPU devices. [02:37:09] - INFO - absl - Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available. ``` And the training progress got stuck as well. I guess it is because the script uses CPU for training instead of TPU, so it is just too slow.. ``` Epoch ... : 0% 0/3 [00:00<?, ?it/s]. Training...: 0% 0/6344 [00:00<?, ?it/s] ``` Do you have any solution? Many thanks.<|||||>Could you try to run these lines ```python import jax.tools.colab_tpu jax.tools.colab_tpu.setup_tpu() ``` and then print `jax.devies()` in colab cell to see if it can detect TPU ? Adding those two lines in the script before any other JAX imports should resolve this issue.
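Putting the advice from the last comment into one runnable cell (this must run before any other JAX usage touches a backend):

```python
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()

import jax
print(jax.devices())
# Expect eight TpuDevice entries; if you see CpuDevice instead, the runtime is
# not a TPU runtime (or setup_tpu() ran after JAX had already initialised).
```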
transformers
13,955
closed
Replace assert by ValueError of src/transformers/models/electra/modeling_{electra,tf_electra}.py and all other models that had copies
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to issue: #12789 I've had to change others files in order that the code_check_quality tests passes, if there is any change that should be removed please feel free to let me know. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. I'm tagging @LysandreJik because I have modified the albert and bert models as well as other that had to be changed in order for the `code_quality_check` to pass. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-10-2021 19:47:11
10-10-2021 19:47:11
Thank you for your suggestion about the ValueError message and the simplification made to the logic to improve readability, I think this PR is ready to merge.<|||||>Thanks a lot for adapting!
transformers
13,954
closed
log_softmax runtimeerror when utilizing generation gradients in beam_search
## Environment info
- `transformers` version: 4.11.0 (also in latest)
- Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No (but it also happens on GPU)
- Using distributed or parallel set-up in script?: N/A

### Who can help
@patrickvonplaten

## Information
Model I am using: BART
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:

```python
# Train the generator
batch_size = batch["input_ids"].shape[0]
length_penalty = 1.0
early_stopping = True
num_beams = 5
beam_scorer = BeamSearchScorer(
    batch_size=batch_size,
    max_length=self.max_target_length,
    num_beams=num_beams,
    device=self.generator.device,
    length_penalty=length_penalty,
    do_early_stopping=early_stopping,
    num_beam_hyps_to_keep=1,
)
# interleave with `num_beams`
input_ids = torch.ones((num_beams * batch_size, 1), device=self.generator.device, dtype=torch.long)
input_ids = input_ids * self.generator.model.config.decoder_start_token_id
encoder_outputs = self.generator.model.get_encoder()(
    batch['input_ids'].repeat_interleave(num_beams, dim=0), return_dict=True)
logits_processor = LogitsProcessorList([
    MinLengthLogitsProcessor(5, eos_token_id=self.generator.model.config.eos_token_id),
])
fake = self.generator.model.beam_search(
    input_ids,
    beam_scorer,
    logits_processor=logits_processor,
    max_length=self.max_target_length,
    pad_token_id=self.generator.tokenizer.pad_token_id,
    eos_token_id=self.generator.tokenizer.eos_token_id,
    output_scores=True,
    return_dict_in_generate=True,
    encoder_outputs=encoder_outputs
)
```

1. Essentially, when utilizing the "scores" that beam search produces (after filtering out the ones used in the returned sequence), if you eventually use those score vectors in some kind of loss (e.g. multiplying the scores with an embedding matrix, summing them up, and passing the result to some model that produces a loss), you will get the error: `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [10, 50296]], which is output 0 of LogSoftmaxBackward, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!`
2. With debugging, you can trace the error back to the use of log_softmax in generation_utils (https://huggingface.co/transformers/_modules/transformers/generation_utils.html):

```python
next_token_scores = nn.functional.log_softmax(
    next_token_logits, dim=-1
)  # (batch_size * num_beams, vocab_size)
```

Now, I don't think this is an issue with transformers itself (i.e. it should just work, given that it's a PyTorch call), but it is blocking me.

My solution (admittedly sloppy) was to replace it with an explicit log(softmax(...)) call:

```python
next_token_scores = torch.log(F.softmax(next_token_logits, dim=-1))  # (batch_size * num_beams, vocab_size)
```

## Expected behavior
Utilizing the gradients from the beam search scores (at every generation point) should work for backpropagation and other custom uses.
10-10-2021 19:06:12
10-10-2021 19:06:12
Small clarification: log_softmax is numerically stable, whereas log(softmax( is not, so I understand this is not ideal. <|||||>Thanks for the issue @pedrocolon93 - we don't really support gradient backpropagation with `generate(...)` yet. I don't think we can just replace `log_softmax` with `torch.log(F.softmax(...))` as the function is not numerical stable as you pointed out. `log_softmax(...)` should also be able to compute the gradient correctly...the issue IMO rather comes from using `logits_processor` that perform an in-place operation no? Can you try whether the gradients work with `log_softmax(...)` and no `logits_processor`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
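One way to test the in-place hypothesis from the last comment without touching `generation_utils` is to use a logits processor that clones the scores before masking, so the `log_softmax` output itself is never modified in place. A hedged sketch, not an officially supported pattern:

```python
import torch
from transformers import LogitsProcessor


class NonInplaceMinLengthLogitsProcessor(LogitsProcessor):
    """Like MinLengthLogitsProcessor, but masks a clone of the scores."""

    def __init__(self, min_length: int, eos_token_id: int):
        self.min_length = min_length
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        if input_ids.shape[-1] < self.min_length:
            scores = scores.clone()  # avoid mutating the log_softmax output in place
            scores[:, self.eos_token_id] = -float("inf")
        return scores
```

Swapping this in for `MinLengthLogitsProcessor` in the snippet above (or simply passing no `logits_processor`) isolates whether the autograd error really comes from the in-place masking.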
transformers
13,953
closed
BART with inputs_embeds crashes with shift_tokens_right
## Environment info - `transformers` version: 4.11.0 - Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using: BART The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Get the input embeddings directly from the shared Embedding layer, build a dictionary without 'input_ids', pass it to the model ```python model = BartModel.from_pretrained("facebook/bart-base") tokenizer = BartTokenizer.from_pretrained("facebook/bart-base") bos_vec = model.shared(torch.tensor([tokenizer.bos_token_id])*model.encoder.embed_scale final_fake_batch = { "inputs_embeds":[], "attention_mask":[] } vecs = [bos_vec,bos_vec] vecs = torch.cat(vecs,0) att = [1 for i in range(2)] final_fake_batch["inputs_embeds"].append(vecs) final_fake_batch["attention_mask"].append(torch.tensor(att)) final_fake_batch["attention_mask"].append(torch.tensor(att)) model(**final_fake_batch) <-error happens here ``` I didn't check this ^^^ but it should give the same error. Within the modeling_bart, you will get a NoneType error that happens in what is now: ```python def _prepare_bart_decoder_inputs( config, input_ids, decoder_input_ids=None, decoder_padding_mask=None, causal_mask_dtype=torch.float32 ): """ Prepare masks that ignore padding tokens in the decoder and a causal mask for the decoder if none are provided. This mimics the default behavior in fairseq. To override it pass in masks. Note: this is not called during generation """ pad_token_id = config.pad_token_id if decoder_input_ids is None: **decoder_input_ids = shift_tokens_right(input_ids, pad_token_id) <<< HERE** bsz, tgt_len = decoder_input_ids.size() if decoder_padding_mask is None: decoder_padding_mask = make_padding_mask(decoder_input_ids, pad_token_id) else: decoder_padding_mask = invert_mask(decoder_padding_mask) if decoder_padding_mask is not None and decoder_padding_mask.shape[1] > 1: # never mask leading token, even if it is pad decoder_padding_mask[:, 0] = decoder_padding_mask[:, 1] tmp = fill_with_neg_inf(torch.zeros(tgt_len, tgt_len)) mask = torch.arange(tmp.size(-1)) tmp.masked_fill_(mask < (mask + 1).view(tmp.size(-1), 1), 0) causal_mask = tmp.to(dtype=causal_mask_dtype, device=decoder_input_ids.device) return decoder_input_ids, decoder_padding_mask, causal_mask ``` This function assumes input_ids is always going to be provided, which it might not. A quick (albeit sloppy) fix is: ```python if decoder_input_ids is None and decoder_inputs_embeds is None: if input_ids is not None: decoder_input_ids = shift_tokens_right( input_ids, self.config.pad_token_id, self.config.decoder_start_token_id ) elif inputs_embeds is not None: decoder_inputs_embeds = shift_embeddings_right( inputs_embeds, self.shared( torch.tensor([self.config.decoder_start_token_id]).to(self.shared.weight.device)).unsqueeze(0) ) ``` In which we essentially check if input ids are none, and if they are just check if we have provided inputs_embeds. 
Additionally, since we are no longer dealing with single token ids, a shift-embeddings function needs to be implemented, or shift_tokens_right needs to be modified:
```python
def shift_embeddings_right(inputs_embeds, decoder_start_token_vec):
    shifted_input_ids = inputs_embeds.new_zeros(inputs_embeds.shape)
    shifted_input_ids[:, 1:, :] = inputs_embeds[:, :-1, :]  # .clone()
    shifted_input_ids[:, 0, :] = decoder_start_token_vec
    return shifted_input_ids
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The model should work with inputs_embeds rather than input_ids, since it already checks for one or the other.
10-10-2021 18:53:30
10-10-2021 18:53:30
If `inputs_embeds` are provided instead of `input_ids`, the `decoder_inputs_embeds` need to be provided as well. The problem is that when custom word embeddings are used, we cannot automatically shift the input, as we don't know what the "padding" token word embedding is: the user could have passed whatever word embedding. *E.g.*:
```python
decoder_inputs_embeds = shift_embeddings_right(
    inputs_embeds,
    self.shared(
        torch.tensor([self.config.decoder_start_token_id]).to(self.shared.weight.device)
    ).unsqueeze(0),
)
```
makes the assumption that the actual word embedding matrix is used, but we can't know that beforehand, so the better solution is to just force the user to provide the `decoder_inputs_embeds` in this case. I'm happy about adding a better error message, but think that's all we can do here<|||||>That's perfect! I didn't know the decoder inputs embeds were needed! Just a tiny clarification would be good! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@pedrocolon93, would you be interested in opening a PR to add such an error message maybe? :-)
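For anyone landing here, a minimal sketch of the workaround discussed above: pass `decoder_inputs_embeds` explicitly whenever you pass `inputs_embeds`. Using `model.shared` for the lookup is just one possible choice, assuming you want the model's own embeddings; with custom embeddings you would build the decoder start vector yourself.
```python
import torch
from transformers import BartModel, BartTokenizer

model = BartModel.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

enc = tokenizer("Hello world", return_tensors="pt")
inputs_embeds = model.shared(enc["input_ids"])  # or any custom embeddings of the same shape

# start the decoder from decoder_start_token_id and embed it the same way
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
decoder_inputs_embeds = model.shared(decoder_input_ids)

outputs = model(
    inputs_embeds=inputs_embeds,
    attention_mask=enc["attention_mask"],
    decoder_inputs_embeds=decoder_inputs_embeds,
)
print(outputs.last_hidden_state.shape)
```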
transformers
13,952
closed
Documentation for exporting custom architecture to ONNX
I have a custom Bert model for relation classification based on the [R-BERT paper](https://arxiv.org/abs/1905.08284). The model performs well but is relatively slow on CPU, so I'd like to try exporting to ONNX. The model inherits from `BertPreTrainedModel` and is relatively simple: ```python class BertForRelationClassification(BertPreTrainedModel): def __init__(self, config): super(BertForRelationClassification, self).__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config) self.cls_dropout = nn.Dropout(0.1) self.ent_dropout = nn.Dropout(0.1) self.classifier = nn.Linear(config.hidden_size*3, self.config.num_labels) self.init_weights() def forward(self, input_ids, token_type_ids=None, attention_mask=None, e1_mask=None, e2_mask=None, labels=None, position_ids=None, head_mask=None): outputs = self.bert(input_ids, position_ids=position_ids, token_type_ids=token_type_ids, attention_mask=attention_mask, head_mask=head_mask) pooled_output = outputs[1] sequence_output = outputs[0] def extract_entity(sequence_output, e_mask): extended_e_mask = e_mask.unsqueeze(1) extended_e_mask = torch.bmm( extended_e_mask.float(), sequence_output).squeeze(1) return extended_e_mask.float() e1_h = self.ent_dropout(extract_entity(sequence_output, e1_mask)) e2_h = self.ent_dropout(extract_entity(sequence_output, e2_mask)) context = self.cls_dropout(pooled_output) pooled_output = torch.cat([context, e1_h, e2_h], dim=-1) logits = self.classifier(pooled_output) outputs = (logits,) + outputs[2:] if labels is not None: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) outputs = (loss,) + outputs return outputs ``` Following the [guidelines for exporting custom models to ONNX](https://huggingface.co/transformers/serialization.html#implementing-a-custom-configuration-for-an-unsupported-architecture), I've created a custom OnnxConfig for it and specified inputs and outputs: ```python class BertForRelationClassificationOnnxConfig(OnnxConfig): @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("token_type_ids", {0: "batch", 1: "sequence"}), ("labels", {0: "batch", 1: "sequence"}), ("e1_mask", {0: "batch", 1: "sequence"}), ("e2_mask", {0: "batch", 1: "sequence"}), ] ) @property def outputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict([("outputs", {0: "batch", 1: "sequence"})]) ``` However when I run `convert_graph_to_onnx.py`, (of course) the model is assumed to be `BertModel` and the inputs and outputs are those of the vanilla `BertModel`. I'm unclear on the next steps. I'm fairly sure this is what I should do next as stated on the documentation page: ***"Once this is done, a single step remains: adding this configuration object to the initialisation of the model class, and to the general `transformers` initialisation"***. While I feel a bit dense I'm still not following how to make this work, as the `BertForRelationClassificationOnnxConfig` class I created doesn't inherit from `BertConfig` (I could make it do so, but the documentation doesn't specify this) so I don't see how I can use this for initialization of the model. The [MBart example](https://github.com/huggingface/transformers/pull/13049/commits/d097adcebd89a520f04352eb215a85916934204f) doesn't make sense to me as I'm not contributing to the transformers code base. Can you please provide guidance or a specific example? Thank you!
10-10-2021 18:14:12
10-10-2021 18:14:12
Hi, this is what I did for the `CamemBERT` TokenClassification model:
- git clone https://github.com/huggingface/transformers.git
- cd transformers
- open a code editor and find the file `src/transformers/onnx/features.py`
- add a new item to the `_SUPPORTED_MODEL_KIND` dictionary

Because my CamemBERT model has the same config, I used the `RobertaOnnxConfig`:
```diff
  "roberta": supported_features_mapping("default", onnx_config_cls=RobertaOnnxConfig),
+ "camembert": supported_features_mapping("default", onnx_config_cls=RobertaOnnxConfig),
```
- Rebuild the transformers package with `pip install .` at the root directory
- Relaunch your conversion script

If you don't find a model already added with the same config as your model, then you have to add it to your model configuration file. Example: for `Roberta` the path is `transformers/src/transformers/models/roberta/configuration_roberta.py` and there is already a `RobertaOnnxConfig` class. After that you will have to import it in `features.py`:
```python
from ..models.roberta import RobertaOnnxConfig
```
and add it to the `_SUPPORTED_MODEL_KIND` dictionary, then rebuild the transformers package. I think you have to add your created class to the BERT configuration file, import it in the features.py file, and add it to the dictionary. It's probably the easiest thing to do in your case. Tell me if this works for you 🤗 P.S.: The best way is to add it to your model repo and make a PR to add it to the base repository package. This is what I'm going to do for CamemBERT as well.<|||||>@ChainYo, thank you very much! That makes a lot of sense and I was clearly misunderstanding. I'll do as you suggest and, if it works, close the issue. <|||||>Thank you @ChainYo for the insightful comment - if the documentation is lacking in any way, we're very open to PRs that would help make it clearer. Thank you both!<|||||>> if the documentation is lacking in any way, we're very open to PRs that would help make it clearer. Yes, I am going to prepare one 🤗 I am just wondering if it is useful to add the CamemBERT config code to the CamemBERT model, because the configuration is exactly the same as Roberta and it will duplicate code in different files. But adding `"camembert": supported_features_mapping("default", onnx_config_cls=RobertaOnnxConfig),` doesn't make sense in the official transformers repository, it's more a personal trick. So I don't know what to do! @LysandreJik <|||||>Having a duplicate for CamemBERT isn't an issue :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
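An alternative that avoids touching `features.py` at all: the `transformers.onnx.export` helper can be called programmatically with a custom `OnnxConfig` instance. The sketch below assumes the `BertForRelationClassification` and `BertForRelationClassificationOnnxConfig` classes defined earlier in this thread, uses a placeholder checkpoint path, and should be checked against the exact signature of the installed version (4.11 here); you will likely also need to override `generate_dummy_inputs` on your config so the extra `e1_mask`/`e2_mask` tensors get produced.
```python
from pathlib import Path

from transformers import BertTokenizer
from transformers.onnx import export

model = BertForRelationClassification.from_pretrained("path/to/checkpoint")  # your custom class
tokenizer = BertTokenizer.from_pretrained("path/to/checkpoint")
onnx_config = BertForRelationClassificationOnnxConfig(model.config)

# export(tokenizer, model, config, opset, output) -> (onnx_inputs, onnx_outputs)
onnx_inputs, onnx_outputs = export(tokenizer, model, onnx_config, 11, Path("model.onnx"))
```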
transformers
13,951
closed
Raise ValueError instead of asserts in src/transformers/benchmark/benchmark.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Replaced all asserts statements by raising `ValueError` exception in `src/transformers/benchmark/benchmark.py` Related to issue : #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. I'm tagging @patrickvonplaten since he is in charge of the benchmark library. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-10-2021 17:00:34
10-10-2021 17:00:34
Perhaps this could be extended to the library in general? There are still many assert statements in place where Exceptions are more appropriate. Almost all assert statements in the core codebase would be more appropriate as Exceptions - particularly in light of Python's -O and -OO flags.
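For readers wondering what the `-O` flag changes in practice, a small self-contained illustration (the check itself is invented for the example, it is not the PR's code):
```python
def check_batch_sizes_with_assert(batch_sizes):
    # under `python -O` this check is stripped out entirely
    assert len(batch_sizes) > 0, "batch_sizes cannot be empty"

def check_batch_sizes_with_exception(batch_sizes):
    # this check survives -O / -OO and gives callers a catchable error type
    if len(batch_sizes) == 0:
        raise ValueError("batch_sizes cannot be empty")

check_batch_sizes_with_exception([8, 16])  # passes
```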
transformers
13,950
closed
Few-Shot Learning Attempt Failure on HF Inference API with GPT-J
## Environment info - `transformers` version: 4.6.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.10 - PyTorch version (GPU?): 1.8.1 - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: Unknown @patil-suraj , @patrickvonplaten , @sgugger ## Information Model I am using: GPT-J 6B, GPT-NEO 2.7B The problem arises when using: * [ ] the official example scripts: (give details below) * [ o ] my own modified scripts: (give details below) I am trying to make a model generate a text based on a given prompt with examples (Few-Shot Learning). Example of the prompt: ``` prompt_tweet = """Generate tweet text from a key word: key: markets tweet: Take feedback from nature and markets, not from people ### key: children tweet: Maybe we die so we can come back as children. ### key: startups tweet: Startups should not worry about how to put out fires, they should worry about how to start them. ### key: NLP tweet: """ ``` However, the model seems to struggle with properly identifying 1) Response Length, 2) End/Stop sequence (i.e. "###" in the example above). How can I achieve results like [here](https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api)? I am not actually sure if I am using the correct parameter names, even though I have looked into everything in the API documentation, there is no indication of that. The parameters I am passing to my model are indicated under the "##To reproduce" section. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ o ] my own task or dataset: (give details below) I am trying to develop a way for the model to generate text based on Few-Shot Learning, something like [here](https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api). However, I am not sure how to declare an end sequence properly, since I was not able to find any indication of that in the API documentation. ## To reproduce ```import json import requests API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B" #gpt-j default API_URL1 = "https://api-inference.huggingface.co/models/EleutherAI/gpt-neo-2.7B" #gpt-neo-2.7B headers = {"Authorization": f"Bearer {API_TOKEN}"} def query(payload): data = json.dumps(payload) response = requests.request("POST", API_URL, headers=headers, data=data) return json.loads(response.content.decode("utf-8")) options = {"use_gpu": False, "use_cache": False} parameters = { "temperature": 0.6, #"top_k":30, "repetition_penalty": None, "max_new_tokens": None, "max_time": None, "top_p": 1.0, "return_full_text": False, "num_return_sequences": 1, #"min_length": 100, "max_length": len(prompt_tweet), "end_sequence": "###" } data = query({"inputs": prompt_tweet, "parameters": parameters, "options": options}) print(data[0]['generated_text']) ``` The code above is pretty much copy-pasted from the website, apart from parameters values. Steps to reproduce the behavior: 1. Make an API call (Model: GPT-J) 2. Pass in the parameters indicated above 3. Call the query function ## Expected behavior The model is supposed to generate sensible results more or less consistently when using Few-Shot Learning Approach, I know that by using other API tools I found (see - https://hub.getneuro.ai/model/nlp/gpt-j-6B-text-generation ; https://nlpcloud.io/nlp-text-generation-api-gpt-neo-gpt-j-gpt-3-alternatives.html)
10-10-2021 13:38:36
10-10-2021 13:38:36
I am sorry if I have done some formatting wrong, please let me know if I should improve something.<|||||>Gently pinging @Narsil here<|||||>Hi @kaisardauletbek , For formatting you need to use 3 backticks ` ``` ` to get multiline formatting. As for your request, the `end_sequence` was not properly respected by this model (it's slightly custom because of memory requirements). This should be adjusted now. That being said, your example does NOT use the `end_sequence` within its prompt, meaning the model is unlikely to use it too. You probably need to use it within your prompts. Hope this helps, Cheers, Nicolas<|||||>Hello! Reformatted. Thanks. Can you please elaborate on "example does not use the ```end_sequence``` within its prompt"? Do you mean that it should be ```\n###```? If so, then I have tried that too; if not, can you please clarify how to use it within the prompt? Thanks, KD<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@kaisardauletbek sorry about the late answer, notifications on GitHub are shaky sometimes. (Don't hesitate to ping if it happens again.) What I meant is that yes, `\n###` is a single token with that model, meaning `###` as end_sequence is just ignored (because the ids don't match what's used in the prompt, namely `\n###`). Using `\n###` instead should yield better results. Sorry if my answer was not very clear. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
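For readers following along, a quick local check of how the stop marker tokenizes (this only inspects the tokenizer and does not call the Inference API); whatever split you see here is the form both the prompt and the `end_sequence` parameter should use:
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
print(tok.tokenize("###"))     # how the bare marker is split
print(tok.tokenize("\n###"))   # the newline-prefixed variant that follows each example
print(tok("\n###").input_ids)  # the ids the end_sequence has to match
```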
transformers
13,949
closed
Change DataCollatorForSeq2Seq to pad labels to a multiple of `pad_to_multiple_of`
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> While the input sequences are padded by `tokenizer.pad` which makes the length of the sequences to be a multiple of `pad_to_multiple_of`, the label sequences are padded to the maximum length among them without considering `pad_to_multiple_of`. This PR changes `DataCollatorForSeq2Seq` class to make sure that the labels have their lengths to be a multiple of the given value. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
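For reviewers, a standalone sketch of the intended label padding (not the actual collator code; `-100` mirrors the collator's `label_pad_token_id` default):
```python
import math

def pad_labels(labels, pad_to_multiple_of=None, label_pad_token_id=-100):
    # pad every label sequence to the smallest multiple of pad_to_multiple_of
    # that fits the longest sequence in the batch
    max_label_length = max(len(l) for l in labels)
    if pad_to_multiple_of is not None:
        max_label_length = math.ceil(max_label_length / pad_to_multiple_of) * pad_to_multiple_of
    return [l + [label_pad_token_id] * (max_label_length - len(l)) for l in labels]

print(pad_labels([[1, 2, 3], [4]], pad_to_multiple_of=8))
# both label sequences are now padded to length 8 instead of length 3
```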
10-10-2021 13:17:42
10-10-2021 13:17:42
transformers
13,948
closed
BERT obtained different results on Mac and Linux
I ran the same code locally on my Mac and remotely on Linux through a Jupyter notebook, and got different results. The code is as follows:
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
inputs = tokenizer('回复新浪网友对博文【国家文物局限制鉴宝节目现场估价转】的评论:;;查看原文:', return_tensors="pt")

model = BertModel.from_pretrained('bert-base-chinese')
model.eval()
outputs_trans = model(**inputs)
```
I compared the model on Mac and the model on Linux: they have the same state dict and the inputs are the same, but I got different `outputs_trans`. transformers: 4.11.3
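For what it's worth, small numerical differences between platforms and BLAS backends are expected for floating-point models; a sketch of how the two outputs could be compared with a tolerance rather than exact equality (the saved file name is arbitrary):
```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('bert-base-chinese')
model = BertModel.from_pretrained('bert-base-chinese')
model.eval()

inputs = tokenizer('回复新浪网友对博文【国家文物局限制鉴宝节目现场估价转】的评论:;;查看原文:', return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)

# on the Mac: torch.save(out.last_hidden_state, "outputs_mac.pt")
# on Linux: load the saved tensor and compare with a tolerance
ref = torch.load("outputs_mac.pt")
print(torch.allclose(out.last_hidden_state, ref, atol=1e-5))
print((out.last_hidden_state - ref).abs().max())
```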
10-10-2021 11:04:12
10-10-2021 11:04:12
transformers
13,947
closed
The TFTrainer custom_datasets colab does not work with the T5 model
## Environment info - `transformers` version: 4.11.3 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Parallel ### Who can help Models: - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj Library: - Trainer: @sgugger - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/tensorflow/custom_datasets.ipynb I added the following to produce decoder_input_ids:
```python
train_labels = tokenizer(_train_labels, truncation=True, padding=True).input_ids
val_labels = tokenizer(_val_labels, truncation=True, padding=True).input_ids
test_labels = tokenizer(_test_labels, truncation=True, padding=True).input_ids
```
and constructed the dataset with that. The model is TFT5:
```python
model = TFT5ForConditionalGeneration.from_pretrained('t5-small')
```
I get:
```
----> 7 model.fit(train_dataset.shuffle(1000).batch(16), epochs=3, batch_size=16)
stack
/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_tf_t5.py:637 call  *
    raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds")
```
I'm sorry if I haven't provided enough information; I tried to be succinct. Essentially, it is not clear from the docs or the trainer colab how to pass the decoder_input_ids. It seems to me like I am setting it in the dataset, because I redefined {train,val,test}_labels.
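One way to pass `decoder_input_ids` explicitly, sketched against the variables from the snippet above (`train_encodings`/`train_labels` are assumed to be padded to equal length; the shift mirrors what T5 does internally, using the pad token id as the decoder start token):
```python
import numpy as np
import tensorflow as tf

def shift_right(label_ids, pad_token_id=0):
    # move every label one position to the right and start with the pad token,
    # which T5 uses as decoder_start_token_id
    shifted = np.roll(label_ids, 1, axis=-1)
    shifted[:, 0] = pad_token_id
    return shifted

labels = np.array(train_labels)
decoder_input_ids = shift_right(labels)

train_dataset = tf.data.Dataset.from_tensor_slices(
    dict(train_encodings, decoder_input_ids=decoder_input_ids, labels=labels)
)
```
This only addresses the missing decoder inputs; for the full Keras training loop, the sequence-to-sequence example notebooks are probably the better reference, as suggested in the replies.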
10-10-2021 04:50:59
10-10-2021 04:50:59
When building the dataset, I tried doing `train_encodings["labels"] = train_labels.input_ids`, but now I get a new error when calling `.fit()`: ``` /usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py:165 compute_loss * reduced_logits = tf.boolean_mask(tf.reshape(logits, (-1, shape_list(logits)[2])), active_loss) IndexError: list index out of range ```<|||||>I'm not sure which task you are interested in, but since T5 is for sequence-to-sequence problems and there is none of that in this tutorial, you should check the [example notebook](https://huggingface.co/transformers/master/notebooks.html#tensorflow-examples) corresponding to your task.<|||||>The labels in this custom_datasets colab were 0 or 1. What if we simply change the labels to "neg" and "pos" (I read/understood that in T5, strings like that are valid classes, is that correct?) Thank you! I'll check out the link you share<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,946
closed
Inconsistent vocab sizes in t5-base model & tokenizer
## Environment info - `transformers` version: 4.11.3 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.0+cu111 (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information I'm using the `t5-base` model (my test shows the same result for `t5-small`). To implement a knowledge-distillation task and "classical" sequence-to-sequence tasks, I'm trying to use a `batch_size x seq_length x vocab_size` array as labels for both kinds of tasks (soft labels in the KD case, one-hot hard labels in the seq2seq case). So I need to convert the tokenizer output from `batch_size x seq_length` to a `batch_size x seq_length x vocab_size` one-hot array to pass it to my custom loss later. Yet I found out that I can't just use `T5Tokenizer(...).vocab_size` to build the one-hot matrix: `T5ForConditionalGeneration(...).config.vocab_size` has a different value. So when I try to build a one-hot vector based on the tokenizer vocab size, I get dimension mismatch errors. The following code shows different vocabulary sizes when accessing the vocab size through `T5Tokenizer` and the related `T5ForConditionalGeneration` config:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

T5_MODEL = "t5-base"
print("tokenizer", T5Tokenizer.from_pretrained(T5_MODEL).vocab_size)
print("model", T5ForConditionalGeneration.from_pretrained(T5_MODEL).config.vocab_size)
```
```
tokenizer 32100
model 32128
```
## Expected behavior I expected the tokenizer & model to have the same vocabulary size.
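A workaround sketch: size the one-hot targets from the model's embedding matrix rather than from the tokenizer. The model's vocabulary is padded up to 32128 (a multiple of 128), and the extra ids are simply never produced by the tokenizer, so sizing everything to the model side is safe.
```python
import torch.nn.functional as F
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

vocab_size = model.get_input_embeddings().weight.shape[0]  # 32128, matches the logits dimension
labels = tokenizer("translate English to German: Hello", return_tensors="pt").input_ids
one_hot_labels = F.one_hot(labels, num_classes=vocab_size).float()
print(one_hot_labels.shape)  # torch.Size([1, seq_len, 32128])
```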
10-09-2021 20:59:30
10-09-2021 20:59:30
cc @patrickvonplaten @patil-suraj <|||||>Duplicate of https://github.com/huggingface/transformers/issues/4875 I think<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,945
closed
Raise exceptions instead of asserts in src/transformers/data/processors/xnli.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Partly fix https://github.com/huggingface/transformers/issues/12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-09-2021 15:44:02
10-09-2021 15:44:02
transformers
13,944
closed
The first inference timing for pure PyTorch is unexpectedly fast and should be ignored
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-1056-azure-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.9.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Hi, thanks for the benchmark library, which has helped me simplify a lot in my project. But I ran into a strange issue that the inference speed slows down when using `torchscript=True`. After digging into this issue, I found that the first timing for `torchscript=False` is strangely fast. (see below section for the details) I think it is better to perform warmup for all situations instead of only: https://github.com/huggingface/transformers/blob/91758e399f8c4bf81820a8af6a257682ccea0223/src/transformers/benchmark/benchmark.py#L198 I will be happy to send a PR if this suggestion is accepted. ## To reproduce Steps to reproduce the behavior: ```python # test.py from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments args = PyTorchBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[512], torchscript=False, env_print=True) benchmark = PyTorchBenchmark(args) benchmark.run() ``` 1. 
Add `print(runtimes)` at L213 https://github.com/huggingface/transformers/blob/91758e399f8c4bf81820a8af6a257682ccea0223/src/transformers/benchmark/benchmark.py#L208-L212 2. Run `test.py` with `torchscript=False` ``` [1.2410342500079423, 1.7289500490296632, 1.7288817439693958, 1.7287787408567965, 1.7287089368328452] ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base-uncased 8 512 0.124 -------------------------------------------------------------------------------- ... ``` 3. Run `test.py` with `torchscript=True` ``` [1.6381433540955186, 1.6382532438728958, 1.6387098210398108, 1.6382117359898984, 1.6384075228124857] ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base-uncased 8 512 0.164 -------------------------------------------------------------------------------- ... ==================== ENVIRONMENT INFORMATION ==================== - transformers_version: 4.11.3 - framework: PyTorch - use_torchscript: True - framework_version: 1.9.1+cu102 - python_version: 3.8.5 - system: Linux - cpu: x86_64 - architecture: 64bit - date: 2021-10-09 - time: 14:50:09.526534 - fp16: False - use_multiprocessing: True - only_pretrain_model: False - cpu_ram_mb: N/A - use_gpu: True - num_gpus: 1 - gpu: N/A - gpu_ram_mb: N/A - gpu_power_watts: N/A - gpu_performance_state: N/A - use_tpu: False ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Perform warmup for all situations to avoid getting unexpected inference speed.
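To make the suggestion concrete, here is a standalone sketch of the timing pattern; the real benchmark would time the model's forward pass instead of this dummy matmul, and the warmup absorbs one-off effects (allocator setup, lazy initialization, caches) for every configuration, not only TPU/torchscript:
```python
import timeit

import torch

def inference():
    a = torch.randn(256, 256)
    return a @ a

# warmup: run a few times before measuring anything
timeit.repeat(inference, repeat=1, number=5)

# only time after the warmup
runtimes = timeit.repeat(inference, repeat=5, number=10)
print(min(runtimes) / 10.0)
```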
10-09-2021 15:35:11
10-09-2021 15:35:11
It would be nice if someone could shed light on this strange behavior. :)<|||||>@patrickvonplaten Hello, I found from the guide that you are the right people for the benchmark library. :smile: Do you think this suggestion should be accepted?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @siahuat0727, Thanks for your issue. We don't actively maintain the benchmarking tools of `transformers` anymore and also recommend to not use it as it's quite outdated and has not been shown to be very accurate.
transformers
13,943
closed
Raise exception instead of assert benchmark_args_utils.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-09-2021 14:12:28
10-09-2021 14:12:28
I will redo my pull request after testing the changes correctly.
transformers
13,942
closed
fix issue #13904 -attribute does not exist-
# What does this PR do? The PR addresses issue #13904. I have changed the file "auto_factory.py" and replaced `self._mapping` with `self._model_mapping` in line 559. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @LysandreJik Anyone in the community is free to review the PR once the tests have passed.
10-09-2021 12:01:15
10-09-2021 12:01:15
transformers
13,941
closed
Pretrained model downloads slowly; it speeds up when I delete the folder $USER/.cache/huggingface
I don't know why, but the download is very slow. When I delete the folder `$USER/.cache/huggingface`, it goes up to `6.2MB/s`. ![image](https://user-images.githubusercontent.com/16131917/136638557-70625b19-ac0f-48f4-aaf0-119289cc62e8.png)
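A quick way to test the stale-cache hypothesis without deleting anything: point the cache at a fresh directory via the `TRANSFORMERS_CACHE` environment variable (the path and model name below are just examples):
```python
import os

os.environ["TRANSFORMERS_CACHE"] = "/tmp/hf_cache_test"  # must be set before importing transformers

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
```
If the download is fast with the fresh cache directory but slow with the old one, the old cache contents are the likely culprit.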
10-09-2021 01:12:54
10-09-2021 01:12:54
The speed of manual download. ![image](https://user-images.githubusercontent.com/16131917/136638962-455ef2da-99df-45f2-9eac-2736ee26ae5e.png) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,940
closed
Make username optional in hub_model_id
# What does this PR do? This PR makes the username in the `hub_model_id` optional, by checking whether there is a "/" in it and adding the username if needed. Since no one is allowed to push to base repo IDs (like `"bert-base-cased"`) anymore, this shouldn't lock any usecase while making the API delightfully easier.
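A usage sketch of what this enables (illustrative values):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="my-finetuned-bert",
    push_to_hub=True,
    hub_model_id="my-finetuned-bert",  # no "username/" prefix required; it is added automatically
)
```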
10-08-2021 21:51:11
10-08-2021 21:51:11
transformers
13,939
closed
Raise exceptions instead of asserts in src/transformers/models/bart/modeling_flax_[bart, marian, mbart, pegasus].py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12789. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-08-2021 19:39:06
10-08-2021 19:39:06
I wanted to just change one file, but the code quality check forced me to also update the other three files. So that's why I am changing multiple at once. I hope you don't mind. <|||||>@patil-suraj I am confused, do you know why the CI fails to install the dependencies? I thought that it could be a temporary issue with the runner, that's why I tried to rerun it, but it seems to be another issue. <|||||>Weird, it seems to be a similar issue to: https://stackoverflow.com/questions/69100275/error-while-downloading-the-requirements-using-pip-install-setup-command-use-2 I have relaunched the tests to see if the issue persists.<|||||>@LysandreJik Yeah, that issue is really weird and unfortunately it still persists. Maybe I can find something, but I'm not so familiar with the project that I am comfortable changing some dependencies. <|||||>I have traced it back to the release of pip 21.3 today. I have pushed a hotfix on `master`, do you mind rebasing on the `master` branch to include the fix? Thank you!<|||||>@LysandreJik I rebased the branch and now it works just fine. I am curious why there are these issues in pip 21.3, did you find anything out?
transformers
13,938
closed
Raise exceptions instead of asserts in src/transformers/data/processors/utils.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to #12789. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
10-08-2021 19:30:31
10-08-2021 19:30:31
transformers
13,937
closed
Remove wrong model_args supplied
# What does this PR do? `PretrainedConfig.from_pretrained` method doesn't accept positional arguments, while the `PreTrainedModel.from_pretrained` supplies `model_args` as positional arguments to it when a config is not provided. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @sgugger
10-08-2021 15:06:26
10-08-2021 15:06:26
The failure seems unrelated to this PR.<|||||>@sgugger Fixed, thanks for pointing it out.<|||||>There seems to be an issue with the code quality checks! Do you mind fixing these by running the following at the root of your clone?
```
pip install -e .[quality]
make fixup
```
<|||||>@LysandreJik the tests are passing. Let's merge it?<|||||>Thanks again!
transformers
13,936
closed
Fixed typo: herBERT -> HerBERT
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes typo: `herBERT` -> `HerBERT` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger, @Qbiwan, @forest1988, @rmroczkowski
10-08-2021 14:14:13
10-08-2021 14:14:13
transformers
13,935
closed
Add vim undo files to .gitignore
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> Add vim undo files to `.gitignore`, so that they don't get added automatically in `git add .`. Vim undo files look like this (for `.gitignore`): `..gitignore.un~`
10-08-2021 13:17:01
10-08-2021 13:17:01
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.