Dataset schema (columns and value ranges):

repo        stringclasses  (1 value)
number      int64          (1 to 25.3k)
state       stringclasses  (2 values)
title       stringlengths  (1 to 487)
body        stringlengths  (0 to 234k)
created_at  stringlengths  (19 to 19)
closed_at   stringlengths  (19 to 19)
comments    stringlengths  (0 to 293k)
transformers
9,510
closed
config.json not found when loading fasttext-language-id model
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.1.1
- Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
@julien-c

## Information

Model I am using (Bert, XLNet ...): [julien-c/fasttext-language-id](https://huggingface.co/julien-c/fasttext-language-id)

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

## To reproduce

Steps to reproduce the behavior:

```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("julien-c/fasttext-language-id")
model = AutoModel.from_pretrained("julien-c/fasttext-language-id")
```

Which returns the following error:

```
404 Client Error: Not Found for url: https://huggingface.co/julien-c/fasttext-language-id/resolve/main/config.json
```

## Expected behavior

The model should load or the config file should be present
01-11-2021 10:21:25
01-11-2021 10:21:25
Hi @nbeuchat this is a fasttext model, not a `transformers` model, so you can't load it that way. I've updated the main button on the webpage to make it clearer that you need to use the model in fasttext: <img width="1067" alt="Screenshot 2021-01-11 at 19 21 07" src="https://user-images.githubusercontent.com/326577/104222282-11d03600-5410-11eb-9b03-307fc776f197.png"> <img width="885" alt="Screenshot 2021-01-11 at 19 21 56" src="https://user-images.githubusercontent.com/326577/104222284-14329000-5410-11eb-842a-05fa8e05c1ca.png"> <|||||>Also cc'ing @thomwolf and @celebio <3<|||||>Got it, thanks for the info and for the quick update! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
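A minimal sketch of the fasttext route described in the reply above; the checkpoint filename and the `huggingface_hub` download helper are assumptions (check the files actually listed in the model repo), not something confirmed in this thread:

```python
# Sketch only: a fasttext checkpoint is loaded with fasttext itself, not with transformers.
# The filename "lid.176.bin" is an assumption; look up the real file name in the repo.
import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="julien-c/fasttext-language-id", filename="lid.176.bin")
model = fasttext.load_model(model_path)

labels, scores = model.predict("Bonjour tout le monde", k=3)
print(labels, scores)
```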
transformers
9,509
closed
[Benchmark]onnx-export
# 🖥 Benchmarking `transformers`

## Benchmark

I followed the 04-onnx-export.ipynb guidance on CPU. My CPU model is:

Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4210 CPU @ 2.20GHz
NUMA node0 CPU(s): 0-9,20-29
NUMA node1 CPU(s): 10-19,30-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities

## Set-up

I set the environment with:

export OMP_NUM_THREADS=8
export OMP_WAIT_POLICY='ACTIVE'

then I ran all the test programs with:

taskset -c 0-7 python test_.py

## Results

When I finish the test program, I print all results, like this:

dict_keys(['PyTorch CPU', 'ONNX CPU', 'PyTorch CPU Quantized', 'ONNX CPU Quantized'])
dict_values([94.01082992553711, 96.25397443771362, 82.11332082748413, 71.06868505477905])

So in my run, "ONNX CPU Quantized" is only about 1.32x faster than "PyTorch CPU", whereas in the guidance the speedup is 5.78x. Why the speedup is so much smaller for me is what confuses me.
01-11-2021 09:32:53
01-11-2021 09:32:53
Hi @jianqianzhou, Thanks for raising this issue. I would remove the `OMP_NUM_THREADS` environment variable to fully exploit all the cores/threads you have on your machine. Also, the tests in the notebook were run on a machine with 56 cores, so that might impact the final performance. Finally, it might be possible to further optimize the model / quantized model through the ONNX Runtime optimizer tool.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
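A rough sketch of the kind of ONNX Runtime tuning suggested in the first reply above (explicit thread counts and graph optimizations); `model.onnx` is a placeholder path and the thread numbers are only examples:

```python
# Sketch: set ONNX Runtime threading and graph optimization explicitly instead of
# relying solely on OMP_* environment variables. "model.onnx" is a placeholder.
import onnxruntime as ort

opts = ort.SessionOptions()
opts.intra_op_num_threads = 8    # threads used inside a single operator
opts.inter_op_num_threads = 1    # threads used across independent operators
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

session = ort.InferenceSession("model.onnx", sess_options=opts, providers=["CPUExecutionProvider"])
```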
transformers
9,508
closed
bug in distributed codes AssertionError: Default process group is not initialized
Hi, I am using transformers 3.5.1 in distributed fashion on multiple GPUs, with pytorch 1.6 and python=3.7. I am running:

python -m torch.distributed.launch --nproc_per_node=1 --nnodes=2 --node_rank=1 --master_addr=$host --master_port=$port finetune_trainer.py config.json

The Hugging Face code only works in distributed fashion when all GPUs are on one machine. If a user wants to run two copies of the code on two machines, the local_rank for both copies would be zero, since the code makes its decisions based on local_rank rather than the global rank. Could you have a look please? Thanks
01-11-2021 09:16:18
01-11-2021 09:16:18
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
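To make the local_rank vs. global rank distinction in the report above concrete, here is a small sketch. It assumes the script is started by `torch.distributed.launch` as in the issue, and that the launcher exposes the local rank via the `LOCAL_RANK` environment variable (older PyTorch versions pass a `--local_rank` argument instead); the `gloo` backend is just an example:

```python
# Sketch: with 2 nodes and 1 process per node, both processes have local_rank 0,
# but their global ranks are 0 and 1. "Run once" logic (saving, logging, ...)
# should therefore check the global rank, not local_rank.
import os
import torch.distributed as dist

dist.init_process_group(backend="gloo")  # launcher provides MASTER_ADDR/PORT, RANK, WORLD_SIZE

local_rank = int(os.environ.get("LOCAL_RANK", 0))  # per-node process index
global_rank = dist.get_rank()                      # unique across all nodes

if global_rank == 0:
    print(f"world_size={dist.get_world_size()}, local_rank={local_rank}")
```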
transformers
9,507
closed
Remove tolerance + drop_rows_to_fit by default
Please take a look @NielsRogge. I'm setting `drop_rows_to_fit=True` when the user wants that truncation. I don't think that attribute really means anything anymore given the way we handle truncation in the encoding methods, so I think it can be removed altogether.

Regarding the integration tests, I finally chose to go with a per-test tolerance instead of a relative tolerance, as the TAPAS model can output very large negative numbers; for example for the `test_inference_question_answering_head_conversational` test:

```py
expected_tensor = torch.tensor(
    [
        [
            -9997.22461, -9997.22461, -9997.22461, -9997.22461, -9997.22461, -9997.22461,
            -9997.22461, -9997.22461, -9997.22461, -16.2628059, -10004.082, 15.4330549,
            15.4330549, 15.4330549, -9990.42, -16.3270779, -16.3270779, -16.3270779,
            -16.3270779, -16.3270779, -10004.8506,
        ]
    ],
    device=torch_device,
)
```

I think in these cases it's helpful to see what difference we're looking at directly in the test, and I'm not sure a relative difference would handle such ranges, but I may be mistaken here.
01-11-2021 09:12:07
01-11-2021 09:12:07
The tests look ok to me!<|||||>@NielsRogge removed the `drop_rows_to_fit` attribute in the last commit.
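To illustrate the absolute vs. relative tolerance trade-off discussed in the PR description, a tiny sketch with made-up offsets in the same value range as the expected tensor above:

```python
# Sketch: around values of magnitude ~10000, rtol=1e-4 already tolerates an absolute
# drift of about 1.0, while a fixed atol does not scale with the magnitude.
import torch

expected = torch.tensor([-9997.22461, 15.4330549])
observed = expected + torch.tensor([0.5, 0.0005])

print(torch.allclose(observed, expected, rtol=1e-4, atol=0.0))  # True: 0.5 < 1e-4 * 9997
print(torch.allclose(observed, expected, rtol=0.0, atol=0.05))  # False: 0.5 > 0.05
```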
transformers
9,506
closed
Model previews not working for models that require MecabTokenizer
As brought up [on Twitter](https://twitter.com/polm23/status/1348520920948695043) by user polm23 (@polm), Japanese (and other?) models do not work at all on the model page. They will throw an error `You need to install fugashi to use MecabTokenizer.See https://pypi.org/project/fugashi/ for installation.` Perhaps the environment that these models run in should include all optional dependencies, too? You can try it yourself by picking [a model](https://huggingface.co/daigo/bert-base-japanese-sentiment?text=I+likw+po) and trying the inference widget.
01-11-2021 08:37:27
01-11-2021 08:37:27
Can we add those `pip install -e .[ja]` dependencies to the hosted Inference API @Narsil?<|||||>@julien-c I'm not sure if that is possible, but perhaps this can even be derived from the model card? If a model card specifies Japanese, then the env could include the `[ja]` option.<|||||>If I'm not mistaken all models run in the same env so it's probably not an issue to add a dependency, but I'll let @Narsil answer!<|||||>Yes I created a patch for this, should be up soon.<|||||>and up !<|||||>Confirmed it works, thanks for the quick fix!
transformers
9,505
closed
Fix cardinality
# What does this PR do? Fix the cardinality computation in the TF Trainer. Fix issue #9495
01-11-2021 08:04:57
01-11-2021 08:04:57
Thanks for fixing!
transformers
9,504
closed
Fix template
# What does this PR do? Fix the template as stated in https://github.com/huggingface/transformers/pull/9482#issuecomment-757496368
01-11-2021 07:54:44
01-11-2021 07:54:44
transformers
9,503
closed
torch.nn.modules.module.ModuleAttributeError: 'RecursiveScriptModule' object has no attribute 'resize_token_embeddings'
## Environment info
- `transformers` version: 2.0.0 (tried with 4.1.1 as well)
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7 (False)
- Tensorflow version (GPU?): 1.14.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

@LysandreJik @mfuntowicz

## Information

Model I am using: GPT2

The tasks I am working on is:
* Question generation given a paragraph, clue, style, and answer

The problem arises when using:
* Torchscript version of fine-tuned GPT2.

I have an inference script in which I load the pre-trained tokenizer and add special tokens to it. I resize the token embeddings using the model.resize_token_embeddings() function after adding the special tokens. It works fine for the original PyTorch GPT2 model but fails for the traced (Torchscript) model. The code snippet is as follows:

    tokenizer = GPT2Tokenizer.from_pretrained(args.model_name_or_path)
    if args.model_name != "":
        model = GPT2LMHeadModel.from_pretrained(args.model_name)
    else:
        if args.torchscript:
            model = torch.jit.load(args.ts_model_name_or_path)
        else:
            model = GPT2LMHeadModel.from_pretrained(args.model_name_or_path)
    tokenizer.add_tokens(SPECIAL_TOKENS)
    model.resize_token_embeddings(len(tokenizer))

Following is the error stack trace:

    Traceback (most recent call last):
      File "QG_gpt2_generate.py", line 5, in <module>
        run()
      File "/content/drive/MyDrive/home/FQG/src/model/FactorizedQG/GPT2_QG/interact.py", line 231, in run
        model.resize_token_embeddings(len(tokenizer))
      File "/usr/local/lib/python3.6/dist-packages/torch/jit/_script.py", line 558, in __getattr__
        return super(RecursiveScriptModule, self).__getattr__(attr)
      File "/usr/local/lib/python3.6/dist-packages/torch/jit/_script.py", line 288, in __getattr__
        return super(ScriptModule, self).__getattr__(attr)
      File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 779, in __getattr__
        type(self).__name__, name))
    torch.nn.modules.module.ModuleAttributeError: 'RecursiveScriptModule' object has no attribute 'resize_token_embeddings'

Is there any other way in which I can perform the same operation of resizing for torchscript models?

Thanks
01-11-2021 07:23:22
01-11-2021 07:23:22
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @Mounika2405. Were you able to find a solution for this issue? I am facing a similar issue with another torch_script model<|||||>Facing the same issue.
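The thread above leaves the question open. One possible workaround, stated here only as an assumption and not confirmed in this issue, is to add the tokens and resize the embeddings on the eager model first and only trace it afterwards, since a `RecursiveScriptModule` no longer exposes transformers helpers such as `resize_token_embeddings`. A rough sketch:

```python
# Sketch of a possible workaround: resize on the eager model, then trace and save.
# SPECIAL_TOKENS and the sample prompt are placeholders, not the issue author's values.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

SPECIAL_TOKENS = ["<ans>", "<clue>"]  # placeholder special tokens

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", torchscript=True)

tokenizer.add_tokens(SPECIAL_TOKENS)
model.resize_token_embeddings(len(tokenizer))  # still an eager nn.Module here, so this works

example = tokenizer("hello world", return_tensors="pt")["input_ids"]
traced = torch.jit.trace(model, example)       # freeze the already-resized model
torch.jit.save(traced, "gpt2_traced.pt")
```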
transformers
9,502
closed
RoBERTa tokenizer does not add start and end token at the beginning and end of the sentence
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.2.0dev0 - Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-redhat-7.8-Verona - Python version: 3.6.12 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @mfuntowicz @sgugger <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten @TevenLeScao Blenderbot: @patrickvonplaten Bart: @patrickvonplaten Marian: @patrickvonplaten Pegasus: @patrickvonplaten mBART: @patrickvonplaten T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao RAG: @patrickvonplaten, @lhoestq FSMT: @stas00 examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner ray/raytune: @richardliaw @amogkam tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [Yes] the official example scripts: (give details below) The problem occurs when running the `run_mlm.py` file in `examples/language-modeling` * [Yes] my own modified scripts: (give details below) The tasks I am working on is: Language Modeling ## To reproduce Steps to reproduce the behavior: 1. Run `python -m pdb examples/language-modeling/run_mlm.py --train_file= wikitext --dataset_config_name wikitext-2-raw-v1 --output_dir=/tmp/debug --model_type=roberta --config_name=roberta-base --tokenizer_name=roberta-base --learning_rate 1e-4 --num_train_epochs 2 --warmup_steps 10000 --do_train --save_steps 10000 --per_device_train_batch_size 2 --overwrite_output_dir` 2. Insert breakpoint using the following command: (At line `if self.use_amp`):`b ../../src/transformers/trainer.py:1138` 3. Press `c` 4. `print(self.tokenizer.decode(inputs['input_ids'][0]))` The output will look like the following: > ' Photograph : The Very Best of Ringo Starr, and as a bonus track<mask> his<mask>astered<mask> studio album Goodnight Vienna. Since his return<mask> touring in 1989, Starr has performed " Back Off<mask>ogaloo " regularly in concert with the various incarnations of his All @-@ Starr Band. </s> > <s> Commentators have interpreted the song,<mask> particularly this statement<mask> as an<mask><mask> Starr on his former Beatles band facet<mask> McCartney. Starr<mask> denied<mask> such interpretation, instead " claiming that the song was inspired by Bolan and nothing more ", Beatles bi<mask> Robert Rodriguez writes. Starr had publicly criticised<mask>\'s solo albums McCartney<mask> 1970 ) and Ram ( 1971 ) on' <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Ideally the first token should have been `<s>` in RoBERTa because that is the start token. And the last token should have been `</s>` because that is the ending token. But those are not the start or end tokens. Wouldn't this be a departure from the implementation in the RoBERTa paper? PS: Please ignore the strikethrough. No idea why that is appearing. <!-- A clear and concise description of what you would expect to happen. -->
01-11-2021 06:59:53
01-11-2021 06:59:53
You are inspecting an input of the training dataloader, which has been shuffled. Therefore you do not have the beginning of one of your original documents, since by default the script concatenates all your texts (after adding the special tokens at the beginning and the end) and then splits the result into contiguous chunks of length `max_seq_length` (unspecified here, so the default of a roberta-base model). So the text you are inspecting is inside one of your original documents, which is why it doesn't have that `<s>` and `</s>`. You can use the `line_by_line` option to change the script preprocessing to consider each line of your dataset as a separate entry (and apply padding or truncation to always have them at `max_seq_length`), in which case every input will have that `<s>` at the beginning.<|||||>Thanks for the information, this makes sense!
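A small sketch that makes the explanation above concrete: each original document gets its `<s>`/`</s>` when tokenized, but a fixed-length chunk cut from the middle of the concatenated ids contains neither:

```python
# Sketch: special tokens are added per document; the grouping step then concatenates
# everything and slices fixed-length chunks, so only chunks that start or end at a
# document boundary contain <s> or </s>.
from transformers import RobertaTokenizer

tok = RobertaTokenizer.from_pretrained("roberta-base")

doc = tok("A short document.")["input_ids"]
print(doc[0] == tok.bos_token_id, doc[-1] == tok.eos_token_id)  # True True

concatenated = doc + tok("Another document.")["input_ids"]
chunk = concatenated[2:10]   # a chunk taken from the middle, as the grouping step would produce
print(tok.decode(chunk))     # no <s> at the start of this chunk
```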
transformers
9,501
closed
Question About Attention Score Computation & Intuition
When it comes to transformers, the Query and Key matrices are what determine the attention scores. Here is a nice visual taken from [Jay Alammar's blog post](http://jalammar.github.io/illustrated-transformer/) on transformers that illustrates how attention scores are computed:

![self-attention_softmax](https://user-images.githubusercontent.com/56566565/104143061-e64b3e00-5372-11eb-8b0f-2c9568988aaa.png)

As you can see, the attention score depends solely on the qi and kj vectors multiplied together, with no additional parameters. However, each of these two vectors is calculated through a linear layer **which had the word embedding (+ positional) of just one word as input.**

My question is: how can the network assign attention scores meaningfully if q and k are computed without looking at parts of the sentence other than their corresponding word? **How can the network produce k and q vectors that, when multiplied, represent a meaningful attention score if k and q are computed based on a single word embedding?**

Let's say I want to process this sentence: "The man ate the apple; it didn't taste good." When calculating the attention scores for the word 'it', how would the model know to assign a higher attention score to 'apple' ('it' refers to the apple) than to 'man' or basically any other word? The model had no way of understanding the context of the sentence, because q and k are calculated solely based on the embedding of one word and not the sentence as a whole. q for 'it' is computed from the embedding of 'it', and k for 'apple' is computed from the embedding of 'apple'. The two vectors are then multiplied to get the attention score. Wouldn't this mean that if the two words were present in a different sentence but at the same distance, the attention score between the two would be identical in the second sentence?

What makes sense to me is the classic approach to attention models. Look at the following visual from Andrew Ng's deep learning specialization:

![eac4f222d9d468a0c29a71a3830a5c60-c5w3l08attentionmodel-3-638](https://user-images.githubusercontent.com/56566565/104143423-3971c080-5374-11eb-88e0-78454c3b795b.jpg)

Here the attention scores are calculated using the hidden states at that timestamp. The hidden states are calculated with FC layers in a bidirectional RNN. In other words, a hidden state at a certain timestamp is influenced by the words that come before and after it, so it makes sense that the model is able to calculate attention scores there.
01-11-2021 05:38:26
01-11-2021 05:38:26
Hi @rezhv, that's a great question! I would suggest you ask such general questions on the forum https://discuss.huggingface.co/ and use issues to report bugs and to discuss new features :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
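Since the question is about how the scores in the first figure are computed, here is a minimal sketch of scaled dot-product self-attention with random (untrained) weights, just to show that q and k are per-token projections while the softmax over their products mixes the whole sequence:

```python
# Sketch: per-token q/k/v projections, then scores = softmax(q @ k^T / sqrt(d_head)).
# The projection weights are random here; in a trained transformer they are learned
# end to end, which is what makes the product q_i . k_j a meaningful relevance score.
import torch

seq_len, d_model, d_head = 5, 16, 8
x = torch.randn(seq_len, d_model)              # one embedding (+ position) per token

w_q, w_k, w_v = (torch.randn(d_model, d_head) for _ in range(3))
q, k, v = x @ w_q, x @ w_k, x @ w_v            # each row depends on a single token only

scores = torch.softmax(q @ k.T / d_head ** 0.5, dim=-1)  # seq_len x seq_len attention
context = scores @ v                           # every output row mixes all tokens
print(scores.shape, context.shape)             # torch.Size([5, 5]) torch.Size([5, 8])
```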
transformers
9,500
closed
Question on the example script run_glue.py for text classification
When we run this script to train a text classification model, are the weights of the underlying language model frozen and not updated? Whether they are fixed or trainable, is there any config option to change this behavior of the training process? Thanks!
01-11-2021 01:21:51
01-11-2021 01:21:51
Hi @xiaolin-cheng, `run_glue.py` fine-tunes the whole model; it doesn't freeze anything. You would need to freeze the base model manually: you can do this after loading the `ForSequenceClassification` model. For example, for `BertForSequenceClassification` you can access the base model using `model.bert`.

```python
model = BertForSequenceClassification.from_pretrained('bert-base-uncased')

for param in model.bert.parameters():
    param.requires_grad = False
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
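As a quick follow-up to the snippet in the reply above, a sketch (not part of `run_glue.py`) for checking what is actually left trainable after freezing the base model:

```python
# Sketch: count trainable vs. total parameters after freezing model.bert.
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
for param in model.bert.parameters():
    param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,}")  # only the classification head remains trainable
```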
transformers
9,499
closed
[ray] add maintainers for Ray / Tune
# What does this PR do? Adds maintainers for Ray / Raytune integration! cc @sgugger <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-11-2021 00:39:21
01-11-2021 00:39:21
transformers
9,498
closed
Can not load a saved tokenizer using AutoTokenizer
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: ubuntu 18.04
- Python version: 3.8
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

@mfuntowicz @patrickvonplaten

## Information

I'm using the following code to save and load the t5 tokenizer:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('t5-small')
tokenizer.add_tokens(['<sep>', '<hl>'])
tokenizer.save_pretrained('./t5-tokenizer-test/')

tokenizer2 = AutoTokenizer.from_pretrained('./t5-tokenizer-test/')
```

But it throws the following exception ("During handling of the above exception, another exception occurred"):

```
Traceback (most recent call last):
  File "/home/amir/PycharmProjects/question_generation/testifier.py", line 19, in <module>
    tokenizer2 = AutoTokenizer.from_pretrained('./t5-tokenizer-test/')
  File "/home/amir/PycharmProjects/question_generation/venv/lib/python3.7/site-packages/transformers/models/auto/tokenization_auto.py", line 345, in from_pretrained
    config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/home/amir/PycharmProjects/question_generation/venv/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 349, in from_pretrained
    config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/amir/PycharmProjects/question_generation/venv/lib/python3.7/site-packages/transformers/configuration_utils.py", line 418, in get_config_dict
    raise EnvironmentError(msg)
OSError: Can't load config for './t5-tokenizer-test/'. Make sure that:

- './t5-tokenizer-test/' is a correct model identifier listed on 'https://huggingface.co/models'

- or './t5-tokenizer-test/' is the correct path to a directory containing a config.json file
```

If I replace AutoTokenizer with T5Tokenizer the issue is fixed:

```
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained('t5-small')
tokenizer.add_tokens(['<sep>', '<hl>'])
tokenizer.save_pretrained('./t5-tokenizer-test/')

tokenizer2 = T5Tokenizer.from_pretrained('./t5-tokenizer-test/')
```
01-10-2021 22:31:45
01-10-2021 22:31:45
Hi @hadifar, the `AutoTokenizer` needs to know the model type to load the correct `Tokenizer` class, and that information is stored in the `config` file, so if `config.json` is not present it cannot load the correct class. `config.json` is saved when saving the model with the `.save_pretrained` method. To load a separately saved tokenizer you should use the respective tokenizer class.
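One workaround that follows from the explanation above (a sketch, not an officially documented pattern): save a matching `config.json` into the same directory so `AutoTokenizer` can resolve the tokenizer class locally:

```python
# Sketch: AutoTokenizer resolves the class from config.json, so saving the model config
# next to the tokenizer files makes the local round-trip work.
from transformers import AutoConfig, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
tokenizer.add_tokens(["<sep>", "<hl>"])
tokenizer.save_pretrained("./t5-tokenizer-test/")
AutoConfig.from_pretrained("t5-small").save_pretrained("./t5-tokenizer-test/")  # writes config.json

tokenizer2 = AutoTokenizer.from_pretrained("./t5-tokenizer-test/")
```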
transformers
9,497
closed
[TFBart] Split TF-Bart
# What does this PR do? TF mirror of: #9343 - Exact same changes as in #9343 - Docs are improved - TFBlenderbot gets a better integration tests - tf_saved_model & tf_serving tests are disabled for now and should ideally be fixed in https://github.com/huggingface/transformers/pull/9478 after merging this one ## After PR is merged TODO: - [x] Open issue about `facebook/blenderbot_small-90M` tokenizer - cannot download files from hub. Weird issue
01-10-2021 14:37:27
01-10-2021 14:37:27
> Awesome work!! Just left a few small comments. I think we should first find a proper fix in #9478 and then merge this one. Switching some tests on/off every time we touch a model is really not a long-term solution; I think a proper template has to be stated first and then afterwards we do the models.

IMO this PR should be merged and the s2s fix should be applied afterward, as said offline. This PR is currently blocking a new release.<|||||>> IMO this PR should be merged and the s2s fix should be applied afterward, as said offline. This PR is currently blocking a new release.

OK, never mind, I didn't know you wanted to have it in the next release.
transformers
9,496
closed
[make docs] please help make the validation process easier
Writing serious documents in .rst is such a pain because the sphinx builder is terrible at times.

If all goes well I can incrementally run `make docs` and it rebuilds just the modified page, which is relatively quick, while it still re-runs highlighting on all pages (not needed for the doc I'm working on).

But if a single error happens it rebuilds everything from scratch, which takes forever, and the chances that it fails again are very high, since half the time I have no idea what the error is. It's good if it even tells me the line number, but sometimes it doesn't even give any context - what a horrible tool. So I have to do a lot of guessing and a lot of waiting, and by the end of it I don't really want to finish the doc I was very inspired to write.

There must be a better way to isolate just the page I'm working on. I don't care for cross references, I just want to be able to quickly validate that my page will "compile" and not error.

For example, how do I hack `make docs` to not die on:

```
Warning, treated as error:
```

This is extremely painful, as after each error it rebuilds everything, which takes forever.

I think what would ease the process in this particular situation is to leave warnings as warnings and only make them errors when I commit, and obviously on CI.

So something that:

* doesn't treat warnings as errors
* doesn't rebuild everything if something failed in the previous run
* doesn't re-run highlighting on all pages
* ideally a way to work on just one page of my choice - surely it could detect the only modified file - but if it's too much to ask I would be happy to manually supply it

Something like:

```
utils/checksingledoc.py file.rst
```

I don't know sphinx, so it's very hard for me to know what to propose. Perhaps it could build a project made of a single file (or several files) on the fly and that could solve the problem? Of course, it would need to ignore cross-references, as those won't be available, and probably other features that I didn't think of.

Or perhaps there is an existing 3rd party program that lints .rst in the same way sphinx does and could be configured to do things that will make the doc writing easier?

Thank you!

@sgugger
01-09-2021 23:45:50
01-09-2021 23:45:50
I am not aware of anything that could make life easier on this, as I would have implemented/documented it if I knew of it. Your solution of creating a project with just two files does not work, as it would then very likely be impossible to import the file in question, and sphinx needs to do that. I'm not aware of any software that properly lints .rst. Happy to add any new functionality to the doc styler that could help here (as this one runs fast and can be run on a given .py/.rst file), though it's medium priority in terms of development of the project (we do want users to be able to build the documentation smoothly and easily, but there are other things more important).<|||||>Thank you for the feedback, @sgugger

-------------------

So to disable `Warning, treated as error:` I need to drop `-W` in:

```
cd docs && make html SPHINXOPTS="-W"
```

and then need to figure out how to skip highlighting, as it re-works all files on every run.

------------------

I started looking for a single-page linter that supports sphinx's custom parser. I will post my findings here:

- https://pypi.org/project/restructuredtext-lint/ - says it partially supports sphinx
- https://pypi.org/project/doc8/ - supports sphinx, but may have its own demands
<|||||>Also, did you know sphinx has parallel processing with `-j`? I added `-a -E`, which forces a full rebuild, just for the test so that we are comparing the same things.

```
time make html SPHINXOPTS="-a -E"

real    1m15.265s
user    1m15.114s
sys     0m1.790s
```

```
time make html SPHINXOPTS="-a -E -j 6"

real    0m39.555s
user    1m31.551s
sys     0m7.608s
```

This is almost twice as fast! It seems that on my setup `-j 5` is just as fast but less heat gets generated (41 sec).

It has `-j auto` - to use all cpu cores - but it's a bad idea, since it won't get any faster with 12 or more workers. Any number of workers beyond 5 on my setup provides only a tiny speedup.

Do you think it'd be a good idea to add, say, `SPHINXOPTS="-j 4"` as the default?
<|||||>I filed a bug report https://github.com/sphinx-doc/sphinx/issues/8681, since if that `re-highlighting of all modules` stage gets fixed to not re-run on all modules when only 1 file is modified, and I drop `-W` temporarily, then the rebuild should be almost instantaneous for a single modified file and thus we won't need to look for an outside linter.
<|||||>Last time I tried to use multiprocessing I didn't get any speedup, but it might have been because I was trying the auto option. We can certainly try with 4 cores to begin with.<|||||>Excellent! In general `auto` is almost never a good option w/o knowing the user's setup. That's why I never use `make test`, which runs `pytest -n auto` - I have 12 cpu cores and it can't possibly run 12 workers on 2 gpus - the outcome is really bad.<|||||>> I filed a bug report [sphinx-doc/sphinx#8681](https://github.com/sphinx-doc/sphinx/issues/8681), since if that `re-highlighting of all modules` stage gets fixed to not re-run on all modules when only 1 file is modified, and I drop `-W` temporarily, then the rebuild should be almost instantaneous for a single modified file and thus we won't need to look for an outside linter.

sphinx dev has fixed this issue in master, so now `make docs` for one modified file is blazingly fast - ~5 sec. Most of the overhead is loading tf+pt, I think.

```
time python -c "import torch, tensorflow"
2021-01-18 09:54:49.558444: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0

real    0m1.810s
user    0m2.375s
sys     0m1.219s
```

I personally am pretty happy with this outcome, so closing this ticket.<|||||>Oh, that's very nice!
transformers
9,495
closed
tf trainer dataset cardinality issue - potentially a bug
In `trainer_tf.py`, line 138, inside the method `def get_train_tfdataset(self) -> tf.data.Dataset:`, we have

`self.num_train_examples = self.train_dataset.cardinality(self.train_dataset).numpy()`

However, in the official tf documentation, [cardinality](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#cardinality) is defined as `cardinality()`, which takes no argument. I got the following error:

```
File "/home/imo/Desktop/transformers/src/transformers/trainer_tf.py", line 138, in get_train_tfdataset
    self.num_train_examples = self.train_dataset.cardinality(self.train_dataset).numpy()
TypeError: cardinality() takes 1 positional argument but 2 were given
```

I think the current version on master is a bug, which should be changed to `self.train_dataset.cardinality().numpy()`.

Could you confirm, @jplu? And if it is a bug, let's fix it. Thank you.
01-09-2021 19:36:14
01-09-2021 19:36:14
I can confirm! Good catch!<|||||>I'll resume the work on creating `test_trainer_tf.py`; I promise I will finish it this time. After that, it might be easier to catch the errors in `tf_trainer.py`.<|||||>I'll take care of this!<|||||>OK, @jplu. Thank you for letting me know about it (I did some checking and didn't find it on master, so I thought it was not done yet).
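For reference, a tiny sketch of the `tf.data` API in question; as the proposed fix says, `cardinality()` is an instance method that takes no arguments:

```python
# Sketch: Dataset.cardinality() takes no arguments and returns a scalar tensor.
import tensorflow as tf

ds = tf.data.Dataset.range(100).batch(10)
print(ds.cardinality().numpy())  # 10

infinite = tf.data.Dataset.range(5).repeat()
print(infinite.cardinality() == tf.data.INFINITE_CARDINALITY)  # tf.Tensor(True, shape=(), dtype=bool)
```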
transformers
9,494
closed
New Updated DistilGPT-2 Finetuning and Generation
https://github.com/huggingface/transformers/pull/3177 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 --> @patrickvonplaten
01-09-2021 11:54:48
01-09-2021 11:54:48
The failing test is fixed on master, I believe.
transformers
9,493
closed
Added a new DistilGPT2 fine-tuning and generation Tutorial
https://github.com/huggingface/transformers/pull/3177 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 --> @patrickvonplaten
01-09-2021 08:16:50
01-09-2021 08:16:50
The tutorial has issues due to old code. Will make another pull request with new code.
transformers
9,492
closed
Problems with using LongFormer
I am according to the official longformer lot (https://github.com/allenai/longformer) provides methods to use, I use in the code of parts as follows: ` tokenizer_class = BertTokenizer model_class = LongformerModel # directory is fine pretrained_weights = self.pretrainedBertPath tokenizer = tokenizer_class.from_pretrained(pretrained_weights) model = model_class.from_pretrained('longformer-base-4096', gradient_checkpointing=True) # add_special_tokens will add start and end token input_ids = torch.tensor([tokenizer.encode(text, add_special_tokens=False)]) ` This warning appeared: ` Some weights of the model checkpoint at longformer-base-4096 were not used when initializing LongformerModel: ['roberta.embeddings.word_embeddings.weight', 'roberta.embeddings.position_embeddings.weight', 'roberta.embeddings.token_type_embeddings.weight', 'roberta.embeddings.LayerNorm.weight', 'roberta.embeddings.LayerNorm.bias', 'roberta.encoder.layer.0.attention.self.query.weight', 'roberta.encoder.layer.0.attention.self.query.bias', 'roberta.encoder.layer.0.attention.self.key.weight', 'roberta.encoder.layer.0.attention.self.key.bias', 'roberta.encoder.layer.0.attention.self.value.weight', 'roberta.encoder.layer.0.attention.self.value.bias', 'roberta.encoder.layer.0.attention.self.query_global.weight', 'roberta.encoder.layer.0.attention.self.query_global.bias', 'roberta.encoder.layer.0.attention.self.key_global.weight', 'roberta.encoder.layer.0.attention.self.key_global.bias', 'roberta.encoder.layer.0.attention.self.value_global.weight', 'roberta.encoder.layer.0.attention.self.value_global.bias', 'roberta.encoder.layer.0.attention.output.dense.weight', 'roberta.encoder.layer.0.attention.output.dense.bias', 'roberta.encoder.layer.0.attention.output.LayerNorm.weight', 'roberta.encoder.layer.0.attention.output.LayerNorm.bias', 'roberta.encoder.layer.0.intermediate.dense.weight', 'roberta.encoder.layer.0.intermediate.dense.bias', 'roberta.encoder.layer.0.output.dense.weight', 'roberta.encoder.layer.0.output.dense.bias', 'roberta.encoder.layer.0.output.LayerNorm.weight', 'roberta.encoder.layer.0.output.LayerNorm.bias', 'roberta.encoder.layer.1.attention.self.query.weight', 'roberta.encoder.layer.1.attention.self.query.bias', 'roberta.encoder.layer.1.attention.self.key.weight', 'roberta.encoder.layer.1.attention.self.key.bias', 'roberta.encoder.layer.1.attention.self.value.weight', 'roberta.encoder.layer.1.attention.self.value.bias', 'roberta.encoder.layer.1.attention.self.query_global.weight', 'roberta.encoder.layer.1.attention.self.query_global.bias', 'roberta.encoder.layer.1.attention.self.key_global.weight', 'roberta.encoder.layer.1.attention.self.key_global.bias', 'roberta.encoder.layer.1.attention.self.value_global.weight', 'roberta.encoder.layer.1.attention.self.value_global.bias', 'roberta.encoder.layer.1.attention.output.dense.weight', 'roberta.encoder.layer.1.attention.output.dense.bias', 'roberta.encoder.layer.1.attention.output.LayerNorm.weight', 'roberta.encoder.layer.1.attention.output.LayerNorm.bias', 'roberta.encoder.layer.1.intermediate.dense.weight', 'roberta.encoder.layer.1.intermediate.dense.bias', 'roberta.encoder.layer.1.output.dense.weight', 'roberta.encoder.layer.1.output.dense.bias', 'roberta.encoder.layer.1.output.LayerNorm.weight', 'roberta.encoder.layer.1.output.LayerNorm.bias', 'roberta.encoder.layer.2.attention.self.query.weight', 'roberta.encoder.layer.2.attention.self.query.bias', 'roberta.encoder.layer.2.attention.self.key.weight', 'roberta.encoder.layer.2.attention.self.key.bias', 
'roberta.encoder.layer.2.attention.self.value.weight', 'roberta.encoder.layer.2.attention.self.value.bias', 'roberta.encoder.layer.2.attention.self.query_global.weight', 'roberta.encoder.layer.2.attention.self.query_global.bias', 'roberta.encoder.layer.2.attention.self.key_global.weight', 'roberta.encoder.layer.2.attention.self.key_global.bias', 'roberta.encoder.layer.2.attention.self.value_global.weight', 'roberta.encoder.layer.2.attention.self.value_global.bias', 'roberta.encoder.layer.2.attention.output.dense.weight', 'roberta.encoder.layer.2.attention.output.dense.bias', 'roberta.encoder.layer.2.attention.output.LayerNorm.weight', 'roberta.encoder.layer.2.attention.output.LayerNorm.bias', 'roberta.encoder.layer.2.intermediate.dense.weight', 'roberta.encoder.layer.2.intermediate.dense.bias', 'roberta.encoder.layer.2.output.dense.weight', 'roberta.encoder.layer.2.output.dense.bias', 'roberta.encoder.layer.2.output.LayerNorm.weight', 'roberta.encoder.layer.2.output.LayerNorm.bias', 'roberta.encoder.layer.3.attention.self.query.weight', 'roberta.encoder.layer.3.attention.self.query.bias', 'roberta.encoder.layer.3.attention.self.key.weight', 'roberta.encoder.layer.3.attention.self.key.bias', 'roberta.encoder.layer.3.attention.self.value.weight', 'roberta.encoder.layer.3.attention.self.value.bias', 'roberta.encoder.layer.3.attention.self.query_global.weight', 'roberta.encoder.layer.3.attention.self.query_global.bias', 'roberta.encoder.layer.3.attention.self.key_global.weight', 'roberta.encoder.layer.3.attention.self.key_global.bias', 'roberta.encoder.layer.3.attention.self.value_global.weight', 'roberta.encoder.layer.3.attention.self.value_global.bias', 'roberta.encoder.layer.3.attention.output.dense.weight', 'roberta.encoder.layer.3.attention.output.dense.bias', 'roberta.encoder.layer.3.attention.output.LayerNorm.weight', 'roberta.encoder.layer.3.attention.output.LayerNorm.bias', 'roberta.encoder.layer.3.intermediate.dense.weight', 'roberta.encoder.layer.3.intermediate.dense.bias', 'roberta.encoder.layer.3.output.dense.weight', 'roberta.encoder.layer.3.output.dense.bias', 'roberta.encoder.layer.3.output.LayerNorm.weight', 'roberta.encoder.layer.3.output.LayerNorm.bias', 'roberta.encoder.layer.4.attention.self.query.weight', 'roberta.encoder.layer.4.attention.self.query.bias', 'roberta.encoder.layer.4.attention.self.key.weight', 'roberta.encoder.layer.4.attention.self.key.bias', 'roberta.encoder.layer.4.attention.self.value.weight', 'roberta.encoder.layer.4.attention.self.value.bias', 'roberta.encoder.layer.4.attention.self.query_global.weight', 'roberta.encoder.layer.4.attention.self.query_global.bias', 'roberta.encoder.layer.4.attention.self.key_global.weight', 'roberta.encoder.layer.4.attention.self.key_global.bias', 'roberta.encoder.layer.4.attention.self.value_global.weight', 'roberta.encoder.layer.4.attention.self.value_global.bias', 'roberta.encoder.layer.4.attention.output.dense.weight', 'roberta.encoder.layer.4.attention.output.dense.bias', 'roberta.encoder.layer.4.attention.output.LayerNorm.weight', 'roberta.encoder.layer.4.attention.output.LayerNorm.bias', 'roberta.encoder.layer.4.intermediate.dense.weight', 'roberta.encoder.layer.4.intermediate.dense.bias', 'roberta.encoder.layer.4.output.dense.weight', 'roberta.encoder.layer.4.output.dense.bias', 'roberta.encoder.layer.4.output.LayerNorm.weight', 'roberta.encoder.layer.4.output.LayerNorm.bias', 'roberta.encoder.layer.5.attention.self.query.weight', 'roberta.encoder.layer.5.attention.self.query.bias', 
'roberta.encoder.layer.5.attention.self.key.weight', 'roberta.encoder.layer.5.attention.self.key.bias', 'roberta.encoder.layer.5.attention.self.value.weight', 'roberta.encoder.layer.5.attention.self.value.bias', 'roberta.encoder.layer.5.attention.self.query_global.weight', 'roberta.encoder.layer.5.attention.self.query_global.bias', 'roberta.encoder.layer.5.attention.self.key_global.weight', 'roberta.encoder.layer.5.attention.self.key_global.bias', 'roberta.encoder.layer.5.attention.self.value_global.weight', 'roberta.encoder.layer.5.attention.self.value_global.bias', 'roberta.encoder.layer.5.attention.output.dense.weight', 'roberta.encoder.layer.5.attention.output.dense.bias', 'roberta.encoder.layer.5.attention.output.LayerNorm.weight', 'roberta.encoder.layer.5.attention.output.LayerNorm.bias', 'roberta.encoder.layer.5.intermediate.dense.weight', 'roberta.encoder.layer.5.intermediate.dense.bias', 'roberta.encoder.layer.5.output.dense.weight', 'roberta.encoder.layer.5.output.dense.bias', 'roberta.encoder.layer.5.output.LayerNorm.weight', 'roberta.encoder.layer.5.output.LayerNorm.bias', 'roberta.encoder.layer.6.attention.self.query.weight', 'roberta.encoder.layer.6.attention.self.query.bias', 'roberta.encoder.layer.6.attention.self.key.weight', 'roberta.encoder.layer.6.attention.self.key.bias', 'roberta.encoder.layer.6.attention.self.value.weight', 'roberta.encoder.layer.6.attention.self.value.bias', 'roberta.encoder.layer.6.attention.self.query_global.weight', 'roberta.encoder.layer.6.attention.self.query_global.bias', 'roberta.encoder.layer.6.attention.self.key_global.weight', 'roberta.encoder.layer.6.attention.self.key_global.bias', 'roberta.encoder.layer.6.attention.self.value_global.weight', 'roberta.encoder.layer.6.attention.self.value_global.bias', 'roberta.encoder.layer.6.attention.output.dense.weight', 'roberta.encoder.layer.6.attention.output.dense.bias', 'roberta.encoder.layer.6.attention.output.LayerNorm.weight', 'roberta.encoder.layer.6.attention.output.LayerNorm.bias', 'roberta.encoder.layer.6.intermediate.dense.weight', 'roberta.encoder.layer.6.intermediate.dense.bias', 'roberta.encoder.layer.6.output.dense.weight', 'roberta.encoder.layer.6.output.dense.bias', 'roberta.encoder.layer.6.output.LayerNorm.weight', 'roberta.encoder.layer.6.output.LayerNorm.bias', 'roberta.encoder.layer.7.attention.self.query.weight', 'roberta.encoder.layer.7.attention.self.query.bias', 'roberta.encoder.layer.7.attention.self.key.weight', 'roberta.encoder.layer.7.attention.self.key.bias', 'roberta.encoder.layer.7.attention.self.value.weight', 'roberta.encoder.layer.7.attention.self.value.bias', 'roberta.encoder.layer.7.attention.self.query_global.weight', 'roberta.encoder.layer.7.attention.self.query_global.bias', 'roberta.encoder.layer.7.attention.self.key_global.weight', 'roberta.encoder.layer.7.attention.self.key_global.bias', 'roberta.encoder.layer.7.attention.self.value_global.weight', 'roberta.encoder.layer.7.attention.self.value_global.bias', 'roberta.encoder.layer.7.attention.output.dense.weight', 'roberta.encoder.layer.7.attention.output.dense.bias', 'roberta.encoder.layer.7.attention.output.LayerNorm.weight', 'roberta.encoder.layer.7.attention.output.LayerNorm.bias', 'roberta.encoder.layer.7.intermediate.dense.weight', 'roberta.encoder.layer.7.intermediate.dense.bias', 'roberta.encoder.layer.7.output.dense.weight', 'roberta.encoder.layer.7.output.dense.bias', 'roberta.encoder.layer.7.output.LayerNorm.weight', 'roberta.encoder.layer.7.output.LayerNorm.bias', 
'roberta.encoder.layer.8.attention.self.query.weight', 'roberta.encoder.layer.8.attention.self.query.bias', 'roberta.encoder.layer.8.attention.self.key.weight', 'roberta.encoder.layer.8.attention.self.key.bias', 'roberta.encoder.layer.8.attention.self.value.weight', 'roberta.encoder.layer.8.attention.self.value.bias', 'roberta.encoder.layer.8.attention.self.query_global.weight', 'roberta.encoder.layer.8.attention.self.query_global.bias', 'roberta.encoder.layer.8.attention.self.key_global.weight', 'roberta.encoder.layer.8.attention.self.key_global.bias', 'roberta.encoder.layer.8.attention.self.value_global.weight', 'roberta.encoder.layer.8.attention.self.value_global.bias', 'roberta.encoder.layer.8.attention.output.dense.weight', 'roberta.encoder.layer.8.attention.output.dense.bias', 'roberta.encoder.layer.8.attention.output.LayerNorm.weight', 'roberta.encoder.layer.8.attention.output.LayerNorm.bias', 'roberta.encoder.layer.8.intermediate.dense.weight', 'roberta.encoder.layer.8.intermediate.dense.bias', 'roberta.encoder.layer.8.output.dense.weight', 'roberta.encoder.layer.8.output.dense.bias', 'roberta.encoder.layer.8.output.LayerNorm.weight', 'roberta.encoder.layer.8.output.LayerNorm.bias', 'roberta.encoder.layer.9.attention.self.query.weight', 'roberta.encoder.layer.9.attention.self.query.bias', 'roberta.encoder.layer.9.attention.self.key.weight', 'roberta.encoder.layer.9.attention.self.key.bias', 'roberta.encoder.layer.9.attention.self.value.weight', 'roberta.encoder.layer.9.attention.self.value.bias', 'roberta.encoder.layer.9.attention.self.query_global.weight', 'roberta.encoder.layer.9.attention.self.query_global.bias', 'roberta.encoder.layer.9.attention.self.key_global.weight', 'roberta.encoder.layer.9.attention.self.key_global.bias', 'roberta.encoder.layer.9.attention.self.value_global.weight', 'roberta.encoder.layer.9.attention.self.value_global.bias', 'roberta.encoder.layer.9.attention.output.dense.weight', 'roberta.encoder.layer.9.attention.output.dense.bias', 'roberta.encoder.layer.9.attention.output.LayerNorm.weight', 'roberta.encoder.layer.9.attention.output.LayerNorm.bias', 'roberta.encoder.layer.9.intermediate.dense.weight', 'roberta.encoder.layer.9.intermediate.dense.bias', 'roberta.encoder.layer.9.output.dense.weight', 'roberta.encoder.layer.9.output.dense.bias', 'roberta.encoder.layer.9.output.LayerNorm.weight', 'roberta.encoder.layer.9.output.LayerNorm.bias', 'roberta.encoder.layer.10.attention.self.query.weight', 'roberta.encoder.layer.10.attention.self.query.bias', 'roberta.encoder.layer.10.attention.self.key.weight', 'roberta.encoder.layer.10.attention.self.key.bias', 'roberta.encoder.layer.10.attention.self.value.weight', 'roberta.encoder.layer.10.attention.self.value.bias', 'roberta.encoder.layer.10.attention.self.query_global.weight', 'roberta.encoder.layer.10.attention.self.query_global.bias', 'roberta.encoder.layer.10.attention.self.key_global.weight', 'roberta.encoder.layer.10.attention.self.key_global.bias', 'roberta.encoder.layer.10.attention.self.value_global.weight', 'roberta.encoder.layer.10.attention.self.value_global.bias', 'roberta.encoder.layer.10.attention.output.dense.weight', 'roberta.encoder.layer.10.attention.output.dense.bias', 'roberta.encoder.layer.10.attention.output.LayerNorm.weight', 'roberta.encoder.layer.10.attention.output.LayerNorm.bias', 'roberta.encoder.layer.10.intermediate.dense.weight', 'roberta.encoder.layer.10.intermediate.dense.bias', 'roberta.encoder.layer.10.output.dense.weight', 'roberta.encoder.layer.10.output.dense.bias', 
'roberta.encoder.layer.10.output.LayerNorm.weight', 'roberta.encoder.layer.10.output.LayerNorm.bias', 'roberta.encoder.layer.11.attention.self.query.weight', 'roberta.encoder.layer.11.attention.self.query.bias', 'roberta.encoder.layer.11.attention.self.key.weight', 'roberta.encoder.layer.11.attention.self.key.bias', 'roberta.encoder.layer.11.attention.self.value.weight', 'roberta.encoder.layer.11.attention.self.value.bias', 'roberta.encoder.layer.11.attention.self.query_global.weight', 'roberta.encoder.layer.11.attention.self.query_global.bias', 'roberta.encoder.layer.11.attention.self.key_global.weight', 'roberta.encoder.layer.11.attention.self.key_global.bias', 'roberta.encoder.layer.11.attention.self.value_global.weight', 'roberta.encoder.layer.11.attention.self.value_global.bias', 'roberta.encoder.layer.11.attention.output.dense.weight', 'roberta.encoder.layer.11.attention.output.dense.bias', 'roberta.encoder.layer.11.attention.output.LayerNorm.weight', 'roberta.encoder.layer.11.attention.output.LayerNorm.bias', 'roberta.encoder.layer.11.intermediate.dense.weight', 'roberta.encoder.layer.11.intermediate.dense.bias', 'roberta.encoder.layer.11.output.dense.weight', 'roberta.encoder.layer.11.output.dense.bias', 'roberta.encoder.layer.11.output.LayerNorm.weight', 'roberta.encoder.layer.11.output.LayerNorm.bias', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias', 'lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight'] - This IS expected if you are initializing LongformerModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing LongformerModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of LongformerModel were not initialized from the model checkpoint at longformer-base-4096 and are newly initialized: ['embeddings.word_embeddings.weight', 'embeddings.position_embeddings.weight', 'embeddings.token_type_embeddings.weight', 'embeddings.LayerNorm.weight', 'embeddings.LayerNorm.bias', 'encoder.layer.0.attention.self.query.weight', 'encoder.layer.0.attention.self.query.bias', 'encoder.layer.0.attention.self.key.weight', 'encoder.layer.0.attention.self.key.bias', 'encoder.layer.0.attention.self.value.weight', 'encoder.layer.0.attention.self.value.bias', 'encoder.layer.0.attention.self.query_global.weight', 'encoder.layer.0.attention.self.query_global.bias', 'encoder.layer.0.attention.self.key_global.weight', 'encoder.layer.0.attention.self.key_global.bias', 'encoder.layer.0.attention.self.value_global.weight', 'encoder.layer.0.attention.self.value_global.bias', 'encoder.layer.0.attention.output.dense.weight', 'encoder.layer.0.attention.output.dense.bias', 'encoder.layer.0.attention.output.LayerNorm.weight', 'encoder.layer.0.attention.output.LayerNorm.bias', 'encoder.layer.0.intermediate.dense.weight', 'encoder.layer.0.intermediate.dense.bias', 'encoder.layer.0.output.dense.weight', 'encoder.layer.0.output.dense.bias', 'encoder.layer.0.output.LayerNorm.weight', 'encoder.layer.0.output.LayerNorm.bias', 'encoder.layer.1.attention.self.query.weight', 'encoder.layer.1.attention.self.query.bias', 'encoder.layer.1.attention.self.key.weight', 'encoder.layer.1.attention.self.key.bias', 'encoder.layer.1.attention.self.value.weight', 'encoder.layer.1.attention.self.value.bias', 'encoder.layer.1.attention.self.query_global.weight', 'encoder.layer.1.attention.self.query_global.bias', 'encoder.layer.1.attention.self.key_global.weight', 'encoder.layer.1.attention.self.key_global.bias', 'encoder.layer.1.attention.self.value_global.weight', 'encoder.layer.1.attention.self.value_global.bias', 'encoder.layer.1.attention.output.dense.weight', 'encoder.layer.1.attention.output.dense.bias', 'encoder.layer.1.attention.output.LayerNorm.weight', 'encoder.layer.1.attention.output.LayerNorm.bias', 'encoder.layer.1.intermediate.dense.weight', 'encoder.layer.1.intermediate.dense.bias', 'encoder.layer.1.output.dense.weight', 'encoder.layer.1.output.dense.bias', 'encoder.layer.1.output.LayerNorm.weight', 'encoder.layer.1.output.LayerNorm.bias', 'encoder.layer.2.attention.self.query.weight', 'encoder.layer.2.attention.self.query.bias', 'encoder.layer.2.attention.self.key.weight', 'encoder.layer.2.attention.self.key.bias', 'encoder.layer.2.attention.self.value.weight', 'encoder.layer.2.attention.self.value.bias', 'encoder.layer.2.attention.self.query_global.weight', 'encoder.layer.2.attention.self.query_global.bias', 'encoder.layer.2.attention.self.key_global.weight', 'encoder.layer.2.attention.self.key_global.bias', 'encoder.layer.2.attention.self.value_global.weight', 'encoder.layer.2.attention.self.value_global.bias', 'encoder.layer.2.attention.output.dense.weight', 'encoder.layer.2.attention.output.dense.bias', 'encoder.layer.2.attention.output.LayerNorm.weight', 'encoder.layer.2.attention.output.LayerNorm.bias', 'encoder.layer.2.intermediate.dense.weight', 'encoder.layer.2.intermediate.dense.bias', 'encoder.layer.2.output.dense.weight', 'encoder.layer.2.output.dense.bias', 'encoder.layer.2.output.LayerNorm.weight', 'encoder.layer.2.output.LayerNorm.bias', 'encoder.layer.3.attention.self.query.weight', 'encoder.layer.3.attention.self.query.bias', 'encoder.layer.3.attention.self.key.weight', 
'encoder.layer.3.attention.self.key.bias', 'encoder.layer.3.attention.self.value.weight', 'encoder.layer.3.attention.self.value.bias', 'encoder.layer.3.attention.self.query_global.weight', 'encoder.layer.3.attention.self.query_global.bias', 'encoder.layer.3.attention.self.key_global.weight', 'encoder.layer.3.attention.self.key_global.bias', 'encoder.layer.3.attention.self.value_global.weight', 'encoder.layer.3.attention.self.value_global.bias', 'encoder.layer.3.attention.output.dense.weight', 'encoder.layer.3.attention.output.dense.bias', 'encoder.layer.3.attention.output.LayerNorm.weight', 'encoder.layer.3.attention.output.LayerNorm.bias', 'encoder.layer.3.intermediate.dense.weight', 'encoder.layer.3.intermediate.dense.bias', 'encoder.layer.3.output.dense.weight', 'encoder.layer.3.output.dense.bias', 'encoder.layer.3.output.LayerNorm.weight', 'encoder.layer.3.output.LayerNorm.bias', 'encoder.layer.4.attention.self.query.weight', 'encoder.layer.4.attention.self.query.bias', 'encoder.layer.4.attention.self.key.weight', 'encoder.layer.4.attention.self.key.bias', 'encoder.layer.4.attention.self.value.weight', 'encoder.layer.4.attention.self.value.bias', 'encoder.layer.4.attention.self.query_global.weight', 'encoder.layer.4.attention.self.query_global.bias', 'encoder.layer.4.attention.self.key_global.weight', 'encoder.layer.4.attention.self.key_global.bias', 'encoder.layer.4.attention.self.value_global.weight', 'encoder.layer.4.attention.self.value_global.bias', 'encoder.layer.4.attention.output.dense.weight', 'encoder.layer.4.attention.output.dense.bias', 'encoder.layer.4.attention.output.LayerNorm.weight', 'encoder.layer.4.attention.output.LayerNorm.bias', 'encoder.layer.4.intermediate.dense.weight', 'encoder.layer.4.intermediate.dense.bias', 'encoder.layer.4.output.dense.weight', 'encoder.layer.4.output.dense.bias', 'encoder.layer.4.output.LayerNorm.weight', 'encoder.layer.4.output.LayerNorm.bias', 'encoder.layer.5.attention.self.query.weight', 'encoder.layer.5.attention.self.query.bias', 'encoder.layer.5.attention.self.key.weight', 'encoder.layer.5.attention.self.key.bias', 'encoder.layer.5.attention.self.value.weight', 'encoder.layer.5.attention.self.value.bias', 'encoder.layer.5.attention.self.query_global.weight', 'encoder.layer.5.attention.self.query_global.bias', 'encoder.layer.5.attention.self.key_global.weight', 'encoder.layer.5.attention.self.key_global.bias', 'encoder.layer.5.attention.self.value_global.weight', 'encoder.layer.5.attention.self.value_global.bias', 'encoder.layer.5.attention.output.dense.weight', 'encoder.layer.5.attention.output.dense.bias', 'encoder.layer.5.attention.output.LayerNorm.weight', 'encoder.layer.5.attention.output.LayerNorm.bias', 'encoder.layer.5.intermediate.dense.weight', 'encoder.layer.5.intermediate.dense.bias', 'encoder.layer.5.output.dense.weight', 'encoder.layer.5.output.dense.bias', 'encoder.layer.5.output.LayerNorm.weight', 'encoder.layer.5.output.LayerNorm.bias', 'encoder.layer.6.attention.self.query.weight', 'encoder.layer.6.attention.self.query.bias', 'encoder.layer.6.attention.self.key.weight', 'encoder.layer.6.attention.self.key.bias', 'encoder.layer.6.attention.self.value.weight', 'encoder.layer.6.attention.self.value.bias', 'encoder.layer.6.attention.self.query_global.weight', 'encoder.layer.6.attention.self.query_global.bias', 'encoder.layer.6.attention.self.key_global.weight', 'encoder.layer.6.attention.self.key_global.bias', 'encoder.layer.6.attention.self.value_global.weight', 'encoder.layer.6.attention.self.value_global.bias', 
'encoder.layer.6.attention.output.dense.weight', 'encoder.layer.6.attention.output.dense.bias', 'encoder.layer.6.attention.output.LayerNorm.weight', 'encoder.layer.6.attention.output.LayerNorm.bias', 'encoder.layer.6.intermediate.dense.weight', 'encoder.layer.6.intermediate.dense.bias', 'encoder.layer.6.output.dense.weight', 'encoder.layer.6.output.dense.bias', 'encoder.layer.6.output.LayerNorm.weight', 'encoder.layer.6.output.LayerNorm.bias', 'encoder.layer.7.attention.self.query.weight', 'encoder.layer.7.attention.self.query.bias', 'encoder.layer.7.attention.self.key.weight', 'encoder.layer.7.attention.self.key.bias', 'encoder.layer.7.attention.self.value.weight', 'encoder.layer.7.attention.self.value.bias', 'encoder.layer.7.attention.self.query_global.weight', 'encoder.layer.7.attention.self.query_global.bias', 'encoder.layer.7.attention.self.key_global.weight', 'encoder.layer.7.attention.self.key_global.bias', 'encoder.layer.7.attention.self.value_global.weight', 'encoder.layer.7.attention.self.value_global.bias', 'encoder.layer.7.attention.output.dense.weight', 'encoder.layer.7.attention.output.dense.bias', 'encoder.layer.7.attention.output.LayerNorm.weight', 'encoder.layer.7.attention.output.LayerNorm.bias', 'encoder.layer.7.intermediate.dense.weight', 'encoder.layer.7.intermediate.dense.bias', 'encoder.layer.7.output.dense.weight', 'encoder.layer.7.output.dense.bias', 'encoder.layer.7.output.LayerNorm.weight', 'encoder.layer.7.output.LayerNorm.bias', 'encoder.layer.8.attention.self.query.weight', 'encoder.layer.8.attention.self.query.bias', 'encoder.layer.8.attention.self.key.weight', 'encoder.layer.8.attention.self.key.bias', 'encoder.layer.8.attention.self.value.weight', 'encoder.layer.8.attention.self.value.bias', 'encoder.layer.8.attention.self.query_global.weight', 'encoder.layer.8.attention.self.query_global.bias', 'encoder.layer.8.attention.self.key_global.weight', 'encoder.layer.8.attention.self.key_global.bias', 'encoder.layer.8.attention.self.value_global.weight', 'encoder.layer.8.attention.self.value_global.bias', 'encoder.layer.8.attention.output.dense.weight', 'encoder.layer.8.attention.output.dense.bias', 'encoder.layer.8.attention.output.LayerNorm.weight', 'encoder.layer.8.attention.output.LayerNorm.bias', 'encoder.layer.8.intermediate.dense.weight', 'encoder.layer.8.intermediate.dense.bias', 'encoder.layer.8.output.dense.weight', 'encoder.layer.8.output.dense.bias', 'encoder.layer.8.output.LayerNorm.weight', 'encoder.layer.8.output.LayerNorm.bias', 'encoder.layer.9.attention.self.query.weight', 'encoder.layer.9.attention.self.query.bias', 'encoder.layer.9.attention.self.key.weight', 'encoder.layer.9.attention.self.key.bias', 'encoder.layer.9.attention.self.value.weight', 'encoder.layer.9.attention.self.value.bias', 'encoder.layer.9.attention.self.query_global.weight', 'encoder.layer.9.attention.self.query_global.bias', 'encoder.layer.9.attention.self.key_global.weight', 'encoder.layer.9.attention.self.key_global.bias', 'encoder.layer.9.attention.self.value_global.weight', 'encoder.layer.9.attention.self.value_global.bias', 'encoder.layer.9.attention.output.dense.weight', 'encoder.layer.9.attention.output.dense.bias', 'encoder.layer.9.attention.output.LayerNorm.weight', 'encoder.layer.9.attention.output.LayerNorm.bias', 'encoder.layer.9.intermediate.dense.weight', 'encoder.layer.9.intermediate.dense.bias', 'encoder.layer.9.output.dense.weight', 'encoder.layer.9.output.dense.bias', 'encoder.layer.9.output.LayerNorm.weight', 'encoder.layer.9.output.LayerNorm.bias', 
'encoder.layer.10.attention.self.query.weight', 'encoder.layer.10.attention.self.query.bias', 'encoder.layer.10.attention.self.key.weight', 'encoder.layer.10.attention.self.key.bias', 'encoder.layer.10.attention.self.value.weight', 'encoder.layer.10.attention.self.value.bias', 'encoder.layer.10.attention.self.query_global.weight', 'encoder.layer.10.attention.self.query_global.bias', 'encoder.layer.10.attention.self.key_global.weight', 'encoder.layer.10.attention.self.key_global.bias', 'encoder.layer.10.attention.self.value_global.weight', 'encoder.layer.10.attention.self.value_global.bias', 'encoder.layer.10.attention.output.dense.weight', 'encoder.layer.10.attention.output.dense.bias', 'encoder.layer.10.attention.output.LayerNorm.weight', 'encoder.layer.10.attention.output.LayerNorm.bias', 'encoder.layer.10.intermediate.dense.weight', 'encoder.layer.10.intermediate.dense.bias', 'encoder.layer.10.output.dense.weight', 'encoder.layer.10.output.dense.bias', 'encoder.layer.10.output.LayerNorm.weight', 'encoder.layer.10.output.LayerNorm.bias', 'encoder.layer.11.attention.self.query.weight', 'encoder.layer.11.attention.self.query.bias', 'encoder.layer.11.attention.self.key.weight', 'encoder.layer.11.attention.self.key.bias', 'encoder.layer.11.attention.self.value.weight', 'encoder.layer.11.attention.self.value.bias', 'encoder.layer.11.attention.self.query_global.weight', 'encoder.layer.11.attention.self.query_global.bias', 'encoder.layer.11.attention.self.key_global.weight', 'encoder.layer.11.attention.self.key_global.bias', 'encoder.layer.11.attention.self.value_global.weight', 'encoder.layer.11.attention.self.value_global.bias', 'encoder.layer.11.attention.output.dense.weight', 'encoder.layer.11.attention.output.dense.bias', 'encoder.layer.11.attention.output.LayerNorm.weight', 'encoder.layer.11.attention.output.LayerNorm.bias', 'encoder.layer.11.intermediate.dense.weight', 'encoder.layer.11.intermediate.dense.bias', 'encoder.layer.11.output.dense.weight', 'encoder.layer.11.output.dense.bias', 'encoder.layer.11.output.LayerNorm.weight', 'encoder.layer.11.output.LayerNorm.bias', 'pooler.dense.weight', 'pooler.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ` I would like to ask whether the presence of this warning will affect the results? How can I remove this warning? Thanks!
01-09-2021 07:24:17
01-09-2021 07:24:17
Hey @joy20182018, We cannot guarantee that our library is in sync with other libraries like `https://github.com/allenai/longformer`. Please make sure you follow the advice as written on: https://huggingface.co/transformers/model_doc/longformer.html<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
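As a concrete illustration of that advice, here is a minimal sketch of loading the checkpoint hosted on the model hub (assuming `allenai/longformer-base-4096` is the checkpoint being converted); loading it by its hub identifier should map the weight names onto `LongformerModel` without the "newly initialized" warning:

```python
from transformers import LongformerModel, LongformerTokenizer

# Loading by the hub identifier keeps the checkpoint's weight names aligned
# with the LongformerModel architecture, so no weights are re-initialized.
tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
```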
transformers
9,491
closed
[trainer] round numbers in trainer state
This PR rounds very long fractions in trainer state e.g., ``` {'loss': 14.846837043762207, 'learning_rate': 6e-06, 'epoch': 0.3333333333333333} ``` to: * epoch 2 decimals * loss 4 decimals resulting in: ``` {'loss': 14.8468, 'learning_rate': 6e-06, 'epoch': 0.33} ``` If you want any other small tweaks for me to add please let me know. Fixes: #9475 @sgugger
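For illustration, a minimal sketch of the rounding described above (the dict is just an example; this is not the actual `Trainer` code):

```python
logs = {"loss": 14.846837043762207, "learning_rate": 6e-06, "epoch": 0.3333333333333333}

# Round for display: 4 decimals for the loss, 2 for the epoch.
logs["loss"] = round(logs["loss"], 4)
logs["epoch"] = round(logs["epoch"], 2)

print(logs)  # {'loss': 14.8468, 'learning_rate': 6e-06, 'epoch': 0.33}
```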
01-08-2021 23:16:32
01-08-2021 23:16:32
transformers
9,490
closed
Using Huggingface library with DeepSpeed
I'm not completely sure whether this is a problem with this library, but maybe you could help. Trying to run T5-large from Hugging Face's library with the DeepSpeed library, I got a strange result: when I switch to fp16 mode, the training loss becomes NaN, as do some of the tensors in the model's feature outputs. I'm not sure; could it be a Transformers library fault? The original example I started from uses pytorch_pretrained_bert, and it works well. Training with FP32 does not produce any NaN troubles. I have some code, adapted from the DeepSpeedExamples code: https://github.com/exelents/try_t5 If somebody would like to help and try to run it, here is the compiled binary dataset: https://drive.google.com/file/d/1oxCxYCuCWebmaUQ_s9il7EDBkisL7x-_/view?usp=sharing https://drive.google.com/file/d/1WCzxAnp2bEllbQ0_2d_6hoq5tQjxBFXh/view?usp=sharing
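Not part of the original report, but a small debugging sketch that can help narrow down where the non-finite values first appear when running in fp16 (names are illustrative):

```python
import torch

def add_overflow_hooks(model):
    """Print the name of every module whose output contains inf/nan."""
    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and not torch.isfinite(output).all():
                print(f"non-finite values in the output of: {name}")
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

# Call add_overflow_hooks(model) before the forward pass, then inspect the printed names.
```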
01-08-2021 21:28:09
01-08-2021 21:28:09
There is an open PR by @patil-suraj for T5 FP16 https://github.com/huggingface/transformers/pull/9487 And here is an open PR for deepspeed Integration by @stas00 https://github.com/huggingface/transformers/pull/9211<|||||>Thank you!<|||||>Moving my answers from https://github.com/huggingface/transformers/pull/9487 as they were irrelevant to the PR itself: Context: @exelents struggles with making rtx-3090 work with pytorch and getting: ``` nvcc fatal : Unsupported gpu architecture 'compute_86 ``` I explained how I made it to work. ------------------------------ So I just did: ``` pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U ``` Then inside deepspeed github clone: ``` rm -rf build TORCH_CUDA_ARCH_LIST="6.1;8.6" DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e . ``` 8.6 corresponds to rtx-3090 arch. You can remove 6.1, this is just my 2nd 1070 card's arch. And you can remove `-e` if you don't want the develop install. You can install it normally from pypi too: ``` pip install deepspeed ``` it'll use PTX/JIT - I tested it to work just fine - the explicit way from source repo just builds the most optimal specific version for my hardware and pre-compiles all features, which takes much longer to build. Now inside the deepspeed PR brach https://github.com/huggingface/transformers/pull/9211, I run against a small t5 model: ``` export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 deepspeed --num_gpus=2 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 --n_test 100 --deepspeed ds_config.json --fp16 --save_steps 1 ``` All works. t5-base works too. Follow my steps and see if you can use your rtx-3090 card first. Then compare to what you are doing differently.<|||||>I have been using pt-nightly w/ rtx-3090 for the last 2 months, so yes it works. pt-1.7.1 doesn't work. For building modules that build pytorch extensions like deepspeed, and apex and fairscale I use cuda-11.1. Let me know what you're trying to build and I will tell you how to do it. I'm still on 11.1 for building extensions, while I know 11.2 is out since I'm not sure 11.1 is compatible with 11.2. 11.0 is compatible with 11.1 so one can use it to build against pt-nightly w/ 11.0 after hacking the build script.<|||||>> I change current cuda in my system to 11.0 version (cuda-toolkit-11-0 package in Ubuntu) > Then I install latest pytorch nighty by a command which you propose. I think the difference is that you need cuda-11.1 and not cuda-11.0 system-wide. This is where our setups diverge I think. Careful though, there is 11.2 out there. I'm on ubuntu, I'm not at all sure it'd work w/ pt-nightly, that's why I'm not upgrading mine. for rtx-3090 to work - tf requires cuda-11.1 - pt works with cuda-11.0 - pt extensions need cuda-11.1: apex, fairscale, deepspeed, The first 2 require hacking their build script to support 11.1 w/ pt built w/ 11.0. 
deepspeed works out of box. note: If someone reads this at a later time this will probably become incorrect once pt-nightly builds w/ cuda-11.2 - then you should be able to install 11.2 system-wide and hopefully the extensions will just work.<|||||>Moving my posts from PR #9487 due to they are irrelevant. I don't know what is my problem. I even tried solution made by @stas00 in #9211 but I still have the same problem. Maybe problem is I built Pytorch from source and forgot some option? I did it because pypi's version don't support Cuda 11.2 and supported cuda 11.0 don't support my gpu (rtx 3090)? Maybe I need install something to enable fp16 support?<|||||>> I use pytorch-nightly w/ cuda-11.0 which works with rtx-3090: > > ``` > pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U > ``` > > pt-nightly w/ cuda-11.2 should be released really soon now. You can track it here [pytorch/pytorch#50232](https://github.com/pytorch/pytorch/issues/50232) It doesn't work. Pytorch nighty gets an error: `nvcc fatal : Unsupported gpu architecture 'compute_86` That's mean that Cuda 11.0 doesn't support RTX 3090<|||||>> I have been using it for the last 2 months, so yes it works. pt-1.7.1 doesn't work. > > Chances are that you have more than one pytorch installed and you have the non-nightly version loaded, check your: > > ``` > print(torch.__version__) > ``` here is installed version: 1.8.0.dev20210109+cu110 > For building extensions like deepspeed, and apex and fairscale I use cuda-11.1 Okay, I'll try cuda 11.1, maybe it'll help. <|||||>> Excellent. so what exactly do you do when you get that error? I change current cuda in my system to 11.0 version (cuda-toolkit-11-0 package in Ubuntu) Then I install latest pytorch nighty by a command which you propose. Latest, I run deepspeed training script proposed in #9211 issue. But in parameters there I change model name to t5-large and remove language parameters from parameters fed to model This code in utils.py: ``` if data_args.src_lang is not None: self.dataset_kwargs["src_lang"] = data_args.src_lang if data_args.tgt_lang is not None: self.dataset_kwargs["tgt_lang"] = data_args.tgt_lang ``` This way I get a minimal training code that should run T5-large. It don't take my dataset like in two examples I have shown before, but it should work. What I see is only error "platform not supported" on pt nighty build, or NaNs in output tensors in version which I installed from source. Here is training script: https://gist.github.com/exelents/9dd3e6dec64dc0d640b85a7e0cfa53e9<|||||>Thank you for making this extra effort, @exelents! We got out of the PR's way now. 
Please let me know if you had success with: https://github.com/huggingface/transformers/issues/9490#issuecomment-757346485 taking into account this: https://github.com/huggingface/transformers/issues/9490#issuecomment-757346664 <|||||>Okay, @stas00 I have installed cuda-toolkit-11-1, installed Pytorch nighty version: `pip install --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cu110/torch_nightly.html -U` Then I compiled from source DeepSpeed with Cuda 11.1 and Nvidia 8.6 computing platform: `TORCH_CUDA_ARCH_LIST="8.6" DS_BUILD_OPS=1 pip install --no-cache -v --disable-pip-version-check -e .` Now everything works well, on the loss scale 256.0 upd: Also, I have installed Huggingface Transformers library from source, on brach from PR #9211 merged with branch from PR #9487 It seems that maybe DeepSpeed doesn't support new Cuda 11.2, or because I compiled PyTorch and deepspeed on Cuda 11.2 with TORCH_CUDA_ARCH_LIST=8.0 instead of 8.6.<|||||>Glad to hear it works now! > TORCH_CUDA_ARCH_LIST=8.0 Most likely this! When you build from source you need to add +PTX `8.0+PTX` for this newer arch to work if you don't specify 8.6 explicitly. It basically tells the cuda compiler to allow newer archs to be supported as well and will compile the extension during first run-time via JIT and cache and re-use it. This is how pytorch nightly is built (i.e. it includes `+PTX`). The whole PTX has been recently documented in https://pytorch.org/docs/master/cpp_extension.html#torch.utils.cpp_extension.CUDAExtension
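A quick sanity check that complements the build commands above (my suggestion, not from the thread): confirm which CUDA build and GPU architectures the installed PyTorch actually supports before rebuilding the extensions.

```python
import torch

print(torch.__version__)                    # e.g. 1.8.0.dev20210109+cu110
print(torch.version.cuda)                   # CUDA version PyTorch was built against
print(torch.cuda.get_device_capability(0))  # (8, 6) for an RTX 3090
print(torch.cuda.get_arch_list())           # compute archs the current build supports
```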
transformers
9,489
closed
fix(wandb): fix config
# What does this PR do? Fix an issue introduced with PR #9441. There was just 2 lines to switch related to wandb config detection. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-08-2021 19:22:08
01-08-2021 19:22:08
Failures in the tests are unrelated so merging.
transformers
9,488
closed
Make doc styler detect lists on rst and better support for Windows
# What does this PR do? The new lines for the lists in rst files were not actually added because I made a mistake, this PR fixes that. @patrickvonplaten it adds some new lines in the benchmarking files which I think are okay, but let me know if I should write some special code to get the scripts to ignore them. Also, changed the line that added the new lines before doc special words since it seems to be not working properly on Windows. Let's see if this version is better! (Failures are because master is red at the time of this PR.) Fixes #9438
01-08-2021 19:19:28
01-08-2021 19:19:28
transformers
9,487
closed
[T5] enable T5 fp16
# What does this PR do? This PR enables fp16 for T5 models by clamping hidden states to the max value of the current data type. As detailed in #9295, T5 produces large (`inf`) activations in 3 places: 1. Output of `T5LayerFF` 2. Output of `T5LayerSelfAttention` 3. Output of `T5LayerCrossAttention` To avoid these `inf` activations, this PR clamps the `hidden_states` after the above 3 outputs.
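A simplified sketch of the clamping pattern described above (the exact margin and placement in the PR may differ):

```python
import torch

def clamp_fp16(hidden_states):
    # Keep fp16 activations strictly below the dtype's max so the following
    # ops don't overflow to inf (which later turns into nan in the loss).
    if hidden_states.dtype == torch.float16:
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```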
01-08-2021 17:54:38
01-08-2021 17:54:38
This is great!<|||||>Dear @patil-suraj Your PR works well for t5 model, thank you for your work. But now I tried new t5 model version released recently by Google: google/t5-v1_1-xl The same code after loading google/t5-v1_1-xl instead of t5-3b is going to return a lot "overflow" errors. Can you tell me, should your code fix fp16 on google/t5-v1_1-xl model? Here is training code: https://github.com/exelents/try_t5_qa Run ./run-qa-3b.sh Upd: I run my code on Transformers's branch from your current PR #9487 merged with PR #9211 needed for deepspeed integration. Can you confirm a problem, or it's just mine?<|||||>> Dear @patil-suraj > Your PR works well for t5 model, thank you for your work. > But now I tried new t5 model version released recently by Google: google/t5-v1_1-xl > The same code after loading google/t5-v1_1-xl instead of t5-3b is going to return a lot "overflow" errors. > > Can you tell me, should your code fix fp16 on google/t5-v1_1-xl model? > Here is training code: > https://github.com/exelents/try_t5_qa > Run ./run-qa-3b.sh > > Upd: I run my code on Transformers's branch from your current PR #9487 merged with PR #9211 needed for deepspeed integration. > Can you confirm a problem, or it's just mine? Hey @exelents, can you include a code snippet to reproduce your error as well as the full stack trace of your error?<|||||>@patrickvonplaten , @exelents as stated in #9432 This fix works for following models and versions, with apex `01` and `native amp` - T5v1: t5-small, t5-base, t5-large - T5v1_1: google/t5-v1_1-small, google/t5-v1_1-base - MT5: google/mt5-small, google/mt5-base Just did a small experiment with `t5-v1_1-large` and it still gives `nan` loss after 200 steps, so might not work for `xl`, also, @exelents by overflow error do you mean the gradient overflow warning thrown by `apex` ?<|||||>> @patrickvonplaten , @exelents > > as stated in #9432 > > This fix works for following models and versions, with apex `01` and `native amp` > > * T5v1: t5-small, t5-base, t5-large > * T5v1_1: google/t5-v1_1-small, google/t5-v1_1-base > * MT5: google/mt5-small, google/mt5-base > > Just did a small experiment with `t5-v1_1-large` and it still gives `nan` loss after 200 steps, so might not work for `xl`, > > also, @exelents by overflow error do you mean the gradient overflow warning thrown by `apex` ? Ah ok, we still see `nan's` with `t5-v1_1-large` then :-/ Do you think this could be fixed by adding one more clamp statement? @patil-suraj <|||||>> Hey @exelents, can you include a code snippet to reproduce your error as well as the full stack trace of your error? My code is here: https://github.com/exelents/try_t5_qa It requires deepspeed to run, as well as code from #9211 PR (deepspeed integration) be merged. Use run-qa-3b.sh to test. Here is error stack: https://gist.github.com/exelents/10f1d03e61059ddf2dfba7068114c93a Look at the end - we have a message after every step: `[2021-01-11 16:58:18,163] [INFO] [stage2.py:1361:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 256.0, reducing to 128.0` Wait a second, I'll try to check loss value tensor.<|||||>> Do you think this could be fixed by adding one more clamp statement? I'm again trying to locate where exactly in the model this happen. In case it's the same as above (first `inf` then `nan` ) then we could fix it by adding one more clamp<|||||>I have checked a loss value, and it seems in is not NaN. It got values like "48.7500" or "40.9688" but there are vaild values. Despite that I see messages like "OVERFLOW! 
Rank 0 Skipping step. Attempted loss scale: 1024.0, reducing to 512.0", which seems to mean that something bad happened with the model's loss.<|||||>> Attempted loss scale: 1024.0, reducing to 512.0", which seems to mean that something bad happened with the model's loss. Those warnings don't mean anything went wrong; with dynamic loss scaling it is expected that some loss scale values are too big at the beginning of training.
transformers
9,486
closed
Update run_glue for do_predict with local test data (#9442)
# What does this PR do? Currently, run_glue.py cannot use the test set (`do_predict`) unless we give it a GLUE task name. This PR will allow us to use a local test dataset (an example invocation is sketched below). As commented in #9442, I tried to achieve the functionality with only simple changes. - It still works with only the local train and valid files (in other words, this PR does not break the current behavior). - If we add `--do_predict` without adding specific params, we will get an error saying that we need either the GLUE task name or the path of a local test file. Fixes #9442 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Thank you for your kind comments on the issue. I have tried to keep it simple and hope there is no problem as an example script.
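The example invocation referenced above, sketched for a local-files-only run (the `--test_file` flag name and the model checkpoint are assumptions on my part, based on the description of this PR):

```bash
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --train_file train.csv \
  --validation_file dev.csv \
  --test_file test.csv \
  --do_train --do_eval --do_predict \
  --max_seq_length 128 \
  --output_dir ./output
```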
01-08-2021 17:44:16
01-08-2021 17:44:16
Error messages of the CircleCI are: ``` -- Docs: https://docs.pytest.org/en/stable/warnings.html =========================== short test summary info ============================ FAILED tests/test_pipelines_conversational.py::SimpleConversationPipelineTests::test_history_cache FAILED tests/test_pipelines_conversational.py::SimpleConversationPipelineTests::test_integration_torch_conversation ==== 2 failed, 4207 passed, 1744 skipped, 734 warnings in 190.84s (0:03:10) ==== ``` ``` FAILED tests/test_pipelines_conversational.py::SimpleConversationPipelineTests::test_history_cache ==== 1 failed, 4178 passed, 1774 skipped, 735 warnings in 260.31s (0:04:20) ==== ``` I'm sorry but I'd like to ask you if `run_glue.py` is related to the conversation pipeline. <|||||>@sgugger @LysandreJik Thank you for reviewing and merging!
transformers
9,485
closed
ProphetNetNgramAttention: Number of attention heads
## Information Model I am using (Bert, XLNet ...): ProphetNet The ProphetNet Ngram attention layer seems to refer to a wrong number of heads. The `ProphetNetNgramProphetNetSelfAttention` (which seems to be a typo by the way; maybe `ProphetNetNgramSelfAttention` would be more appropriate?) is part of the decoder, and therefore I would expect it to contain a number of attention heads equal to the configuration parameter `num_decoder_attention_heads`. However, when instantiated at https://github.com/huggingface/transformers/blob/02e05fb0a532e572b56ba75dad6ba3db625bbdeb/src/transformers/models/prophetnet/modeling_prophetnet.py#L759, it uses the property `num_attention_heads`, which equals the number of **encoder** attention heads. I assume that the correct value should be `config.num_decoder_attention_heads`. (Luckily?) no issue showed up in most models because pretrained models have the same number of encoder and decoder heads. Looking at the reference implementation, it does seem that the **decoder** number of attention heads is used for the Ngram attention (see https://github.com/microsoft/ProphetNet/blob/1d36bc5c4f334b0ed9b90fdf3a64785c174f5c45/GLGE_baselines/script/prophetnet/ngram_s2s_model.py#L586). ### Who can help @patrickvonplaten ? Again, I'm not sure, since I do not see an owner for ProphetNet. I would be happy to submit a small PR referencing `config.num_decoder_attention_heads` rather than this property if you agree with this change. Thanks!
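To illustrate why the bug stays silent for the released checkpoints (a small check of mine, assuming the config exposes separate encoder/decoder head counts as described above): the two values only diverge when a config is created with different head counts, which the pretrained models don't do.

```python
from transformers import ProphetNetConfig

# Hypothetical config where the encoder and decoder head counts differ.
config = ProphetNetConfig(num_encoder_attention_heads=16, num_decoder_attention_heads=8)

# The ngram self-attention lives in the decoder, so it should use the decoder value.
print(config.num_encoder_attention_heads)  # 16
print(config.num_decoder_attention_heads)  # 8
```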
01-08-2021 17:06:54
01-08-2021 17:06:54
Hey @guillaume-be, you're 100% correct about both the naming: We should remove one `ProphetNet` and also about the config. Thanks a lot for reporting this. I'll open a PR
transformers
9,484
closed
[Flax] Adapt Flax models to new structure
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> As discussed in https://github.com/huggingface/transformers/pull/9172, Flax model should get a design that is most similar to PyTorch and thus should use `def setup(...)` instead of `nn.compact(...)`. This PR refactors the model architecture of Bert & Roberta accordingly. The next step is now to add a general conversion method flax<>pytorch which might require some more follow-up changes to the naming of the weights. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
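For context, the difference between the two Flax styles on a toy module (a sketch unrelated to the actual BERT/RoBERTa code in this PR):

```python
import flax.linen as nn

class MLPCompact(nn.Module):
    @nn.compact
    def __call__(self, x):
        # Submodules are declared inline, at call time.
        x = nn.Dense(128)(x)
        return nn.Dense(2)(x)

class MLPSetup(nn.Module):
    def setup(self):
        # Submodules are declared up front, PyTorch-style, which makes the
        # parameter tree easier to map onto PyTorch state dicts.
        self.hidden = nn.Dense(128)
        self.out = nn.Dense(2)

    def __call__(self, x):
        return self.out(self.hidden(x))
```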
01-08-2021 16:13:27
01-08-2021 16:13:27
Will wait until https://github.com/huggingface/transformers/pull/10775 is merged, then rebase and then merge.<|||||>@patrickvonplaten I like the new structure but it seems this PR broke the flax example: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_flax.py - This line (https://github.com/huggingface/transformers/blob/896d7be97401a85dc0ffc5460afd707e8e092781/examples/language-modeling/run_mlm_flax.py#L577) will raise the error ``` TypeError: __init__() got an unexpected keyword argument 'dropout_rate' ``` - In addition, this line https://github.com/huggingface/transformers/blob/896d7be97401a85dc0ffc5460afd707e8e092781/src/transformers/models/bert/modeling_flax_bert.py#L254 uses an undefined variable `self.dropout_rate`. I think we should make more test cases and make sure the examples are runnable. <|||||>I am very interested in the jax/flax integration. Could you also take a look at my PR? https://github.com/huggingface/transformers/pull/10796 If you are collaborative and welcome contributions from me, I can contribute more and improve the flax examples.
transformers
9,483
closed
Fixing tests. It seems master changed something in the warnings.
# What does this PR do? Trying to keep warning tests for now. Should be discarded if it becomes too hard to maintain. 60 started to trigger a new warning, saying that the input_ids length was longer than model max_length. I'm not really sure which commit triggered this, but it did not occur in the original PR <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-08-2021 13:59:10
01-08-2021 13:59:10
I don't understand this: `"60 started to trigger a new warning, saying that the input_ids length was longer than model max_length."`<|||||>Would be nice to find what triggered this to be sure we didn't introduce a bug no?<|||||>@patrickvonplaten I think we're good. It's this commit 79bbcc5260c3acde3e7156966ba836afcbfd8808 that triggered the extra warning.
transformers
9,482
closed
Reformat the TF serving outputs
# What does this PR do? This PR properly reformat the `serving_output` methods.
01-08-2021 13:20:29
01-08-2021 13:20:29
Merged the PR to unblock TF Bart - Split PR. However, merging to master made the TF templates test fail, see: https://github.com/huggingface/transformers/runs/1676878391 . @jplu, I think they need some updating.
transformers
9,481
closed
dataset not being sent to device when using Trainer (distributed)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-99-generic-x86_64-with-glibc2.27 - Python version: 3.8.0 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.4.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: distrubuted ### Who can help Trainer: @sgugger Text Generation /t5: @patrickvonplaten examples/seq2seq: @patil-suraj ## Information The intention is to train a t5 model (preferably as last as possible) in a distributed setting using the HF Trainer. However, when setting the model_parallel to True the training breaks. related issues might be: https://github.com/huggingface/transformers/issues/9229 https://github.com/huggingface/transformers/issues/6821 However, do note that the script works perfectly fine training on multiple GPU in a non distributed fashion (setting model_parallel to False). ## To reproduce I have created a minimal script for reproducing the behavior: ``` import transformers from transformers import ( MT5ForConditionalGeneration, MT5Model, Trainer, TrainingArguments, T5Tokenizer ) ``` <details> <summary> Click to see (minimal) dataset creation </summary> ``` import datasets # making minimal test for example sake def make_test_dataset(tokenizer="google/mt5-small"): if isinstance(tokenizer, str): tokenizer = T5Tokenizer.from_pretrained(tokenizer) ds = datasets.load_dataset("dane") def __tokenizer_input(batch): return tokenizer(batch['text'], padding="max_length", max_length=256, # actual max is 235 truncation=True) def __tokenizer_output(batch): tok = tokenizer(batch['text'], padding="max_length", max_length=256, truncation=True) tok["labels"] = tok.pop("input_ids") return tok # filter out empty strings (bug reported and fixed) ds = ds.filter(lambda batch: bool(batch["text"])) # tokenize both datasets (eos: </s> is added by tokenizer) ds = ds.map(__tokenizer_input, batched=True, batch_size=len(ds)) ds = ds.map(__tokenizer_output, batched=True, batch_size=len(ds)) return ds dataset = make_test_dataset() dataset.set_format(type='torch', columns=['input_ids', 'attention_mask', 'labels']) ``` </details> ``` model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small") #using a small model for example training_args = TrainingArguments( output_dir='./results', num_train_epochs=1, logging_dir='./logs', evaluation_strategy="epoch", model_parallel=True # work fine when set to False ) trainer = Trainer( model=model, args=training_args, train_dataset=dataset["train"], eval_dataset=dataset["test"] ) trainer.remove_callback(transformers.integrations.WandbCallback) # removing wandb for conv. 
trainer.train() ``` Results: ``` RuntimeError: Input, output and indices must be on the current device ``` <details> <summary> Click to see full traceback </summary> ``` RuntimeError Traceback (most recent call last) ~/github/EDP-Efficient-Danish-Preprocessing/tmp.py in 64 65 trainer.remove_callback(transformers.integrations.WandbCallback) ---> 66 trainer.train() ~/.Envs/EDP/lib/python3.8/site-packages/transformers/trainer.py in train(self, model_path, trial) 797 tr_loss += self.training_step(model, inputs) 798 else: --> 799 tr_loss += self.training_step(model, inputs) 800 self._total_flos += self.floating_point_ops(inputs) 801 ~/.Envs/EDP/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs) 1137 loss = self.compute_loss(model, inputs) 1138 else: -> 1139 loss = self.compute_loss(model, inputs) 1140 1141 if self.args.n_gpu > 1: ~/.Envs/EDP/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs) 1161 Subclass and override for custom behavior. 1162 """ -> 1163 outputs = model(**inputs) 1164 # Save past state if it exists 1165 # TODO: this needs to be fixed and made cleaner later. ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/.Envs/EDP/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, head_mask, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1422 if encoder_outputs is None: 1423 # Convert encoder inputs in embeddings if needed -> 1424 encoder_outputs = self.encoder( 1425 input_ids=input_ids, 1426 attention_mask=attention_mask, ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/.Envs/EDP/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 858 if inputs_embeds is None: 859 assert self.embed_tokens is not None, "You have to initialize the model with valid token embeddings" --> 860 inputs_embeds = self.embed_tokens(input_ids) 861 862 batch_size, seq_length = input_shape ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/modules/sparse.py in forward(self, input) 122 123 def forward(self, input: Tensor) -> Tensor: --> 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, 126 self.norm_type, self.scale_grad_by_freq, self.sparse) ~/.Envs/EDP/lib/python3.8/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1850 # remove once script supports set_grad_enabled 1851 
_no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1853 1854 RuntimeError: Input, output and indices must be on the current device ``` </details> ## Expected behavior A training model PS: I personally found that the model_parallel name was slightly confusing. I assume a more fitting name would be model_distributed (but this is a minor thing) Thanks for great work
01-08-2021 11:40:02
01-08-2021 11:40:02
The `model_parallel` argument has nothing to do with training in a parallel fashion (and is going to be deleted very soon since you're not the first user its name confuses). To use parallel training with: - PyTorch DataParallel, there is nothing to do, the Trainer does it automatically - PyTorch DistributedDataParallel, you should launch your script with the `python -m torch.distributed.launch` command (see the examples).<|||||>Glad to hear about the name. I was aware that DataParallel was used by default. By examples you must refer to: https://huggingface.co/transformers/examples.html but it doesn't seem to provide any information on `torch.distributed.launch` am I missing something? will try to run with flag, but would love if I could find some documentation on this Thanks for the quick response <|||||>you can find the commands/docs to launch distributed training in the examples [readme](https://github.com/huggingface/transformers/tree/master/examples#distributed-training-and-mixed-precision)<|||||>Thanks, I will try this out as soon as our GPU's are available again!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
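For reference, a typical DistributedDataParallel launch line for a Trainer-based script (script name and values are placeholders):

```bash
python -m torch.distributed.launch --nproc_per_node=2 \
  your_training_script.py \
  --per_device_train_batch_size 8 \
  --output_dir ./results
```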
transformers
9,480
closed
request for run_text_classification.py
# 🚀 Feature request There is a run_tf_text_classification.py file under text_classification examples, but no run_text_classification.py.
01-08-2021 11:02:03
01-08-2021 11:02:03
Hi! The `run_glue.py` script does what you're looking for.<|||||>> Hi! The `run_glue.py` script does what you're looking for. Let's say I have train.csv, dev.csv and test.csv, how can I do it without modifying the code? Thank you for your patience.<|||||>Please use the [forums](https://discuss.huggingface.co/) for questions around the script. Running it with the -h option will give you the list of arguments it accepts, in particular `--train_file` and `--validation_file`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,479
closed
Makes HfArgumentParser compatible with Python 3.9
Python 3.9 changed the format of the string serialization of `typing.Optional`. For example, `str(typing.Optional[str])` is `typing.Union[str, NoneType]` in python 3.8 and `typing.Optional[str]` in Python 3.9.
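A small illustration of the behaviour difference, together with the kind of version-independent check that avoids comparing string representations (a sketch, not necessarily the code in this PR):

```python
import typing

# The string form differs across versions:
#   Python 3.8: "typing.Union[str, NoneType]"
#   Python 3.9: "typing.Optional[str]"
print(str(typing.Optional[str]))

def is_optional(tp):
    # Version-independent detection of Optional[X].
    return typing.get_origin(tp) is typing.Union and type(None) in typing.get_args(tp)

print(is_optional(typing.Optional[str]))  # True
print(is_optional(int))                   # False
```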
01-08-2021 10:53:22
01-08-2021 10:53:22
LGTM
transformers
9,478
closed
Fix TF s2s models
# What does this PR do? This PR aims to fix the Seq2Seq models so that they can be served through TF Serving. The problem is stated by @patrickvonplaten in #9313. The reason it failed is that we use a model as a layer in the `TFXXXForConditionalGeneration` models. When building a graph, TensorFlow's tracing mechanism calls all the layers one by one. In order to know which inputs each layer needs, the tracing mechanism checks whether a layer has a custom input signature; if not, it defaults to a signature where only the first argument is mandatory. Here lies the problem: the Seq2Seq models need two mandatory arguments (`input_ids` and `decoder_input_ids`, or `inputs_embeds` and `decoder_inputs_embeds`), so the tracing fails. The fix is to manually set the expected input signature of the base model when instantiating it in `__init__`. To be harmonized with the required serving, the same signature is used.
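A sketch of the kind of input signature involved (names and dtypes are illustrative; in the PR the signature is set on the base model itself rather than on a standalone function):

```python
import tensorflow as tf

serving_signature = [
    {
        "input_ids": tf.TensorSpec((None, None), tf.int32, name="input_ids"),
        "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
        "decoder_input_ids": tf.TensorSpec((None, None), tf.int32, name="decoder_input_ids"),
        "decoder_attention_mask": tf.TensorSpec((None, None), tf.int32, name="decoder_attention_mask"),
    }
]

@tf.function(input_signature=serving_signature)
def serving(inputs):
    # With both encoder and decoder inputs declared, tracing no longer assumes
    # that only the first argument is mandatory. A real model would call the
    # seq2seq network here; returning the ids keeps the sketch self-contained.
    return {"encoder_input_ids": inputs["input_ids"]}
```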
01-08-2021 10:09:12
01-08-2021 10:09:12
@patrickvonplaten Should I remove the following hack in BART? ```python if inputs["decoder_input_ids"] is None and inputs["input_ids"] is not None: inputs["decoder_input_ids"] = shift_tokens_right( inputs["input_ids"], self.config.pad_token_id, self.config.eos_token_id ) ```<|||||>> In general I really don't like the tf.cond(condition, do_fn_one, do_fn_two) design. I think I understand that it is sometimes necessary, but I really like to keep the usage of this function to a minimum in general. The functional approach is very different to our general library design and make the code much much harder to read. It always creates an abstraction by having to wrap parts of the code into a function with no args, like def attn_mask_from_inp_ids() which is not easy to follow and to me always looks like a hack. In Bart we manage to do this part of the code without the usage of tf.cond and Bart has the same exact logic as LED has there -> so we can make it easier I think. I understand your point and agree with you and I share you opinion on this, and unfortunately if you come to control flow (conditions and loops) there are some strict rules that one cannot overcome. `tf.cond` is somehow mandatory for autograph. A solution I think that should work would be to force the `layer.call` function to be run in graph mode with `@tf.function` which takes care of making itself the translation of all these conditions and loops. This work in some cases, let's see if it works... Does it sounds a proper solution for you?<|||||>> @patrickvonplaten Should I remove the following hack in BART? > > ```python > if inputs["decoder_input_ids"] is None and inputs["input_ids"] is not None: > inputs["decoder_input_ids"] = shift_tokens_right( > inputs["input_ids"], self.config.pad_token_id, self.config.eos_token_id > ) > ``` Please don't - it's needed for some use-cases in Bart and for backward comp<|||||>Actually, one thing I'd like to know more in general about our models in TF is the following: "Can we use normal if-else statements in the forward pass"? I always thought that the answer is: "Yes we can as long as the output type and shape of each case is the same" So for me statements like: ```python if shape_list(input_ids) > n: attention_mask = torch.zeros(shape_list(input_ids)) else: attention_mask = torch.ones(shape_list(input_ids)) ``` (this code snippet doesn't exist -> it's just an example) are totally fine. Is the assumption correct @jplu ? Or can we in general **never** use normal if-else statements in TF's forward pass and have to rely on `tf.cond(....)`? This would really surprise me as we're having tons of if statements everywhere in the TF code... <|||||>The general answer is yes, but it has some conditions. If you run this condition in eager mode, it will works by default (you can basically do almost anything in eager mode) If you run this condition in graph mode you have two solution to make it works: 1. Either use `tf.cond` 2. Or to wrap your condition into a function decorated with `tf.function`. This will have to effect to apply the Autograph library over the content of your decorated function. Autograph will automatically converts `if-then` clauses, loops, `break`, `return`, `continue`, and more. You can have more information here https://www.tensorflow.org/guide/function#autograph_transformations<|||||>Now that we all agree on a solution, I will apply it for all the models 👍 <|||||>Ok, LGTM!! Feel free to merge whenever you feel it^^
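To make the two options discussed above concrete, a toy example (unrelated to the actual model code): the same data-dependent branch written once with `tf.cond` and once as a plain `if` inside a `tf.function`, where Autograph performs the conversion.

```python
import tensorflow as tf

def mask_with_cond(input_ids, n):
    # Explicit functional control flow.
    return tf.cond(
        tf.shape(input_ids)[1] > n,
        lambda: tf.zeros_like(input_ids),
        lambda: tf.ones_like(input_ids),
    )

@tf.function
def mask_with_autograph(input_ids, n):
    # Autograph rewrites this Python `if` into the equivalent graph-level cond;
    # both branches must produce tensors of the same shape and dtype.
    if tf.shape(input_ids)[1] > n:
        out = tf.zeros_like(input_ids)
    else:
        out = tf.ones_like(input_ids)
    return out
```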
transformers
9,477
closed
rename "gpu" --> "device"
# What does this PR do? Rename the arg names from "per_gpu" to "per_device" such that it aligns with the instruction in the readme https://github.com/huggingface/transformers/tree/master/examples/text-classification#xnli
01-08-2021 09:06:57
01-08-2021 09:06:57
Could you run the code quality scripts for the code quality test? `make style && make quality`, after installing the latest code quality versions: `pip install -U .[quality]`<|||||>Looks like there is some styling issue. Could you run `make style` on your branch?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,476
closed
Improve LayoutLM
# What does this PR do?
- [x] Improve documentation of `LayoutLM`, explaining how people can normalize bounding boxes before passing them to the model, add links to the various datasets on which the model achieves state-of-the-art results, add code examples in the documentation for the various models
- [x] Add notebook to the list of community notebooks showcasing how to fine-tune `LayoutLMForTokenClassification` on the [FUNSD](https://guillaumejaume.github.io/FUNSD/) dataset (on which the model achieves SOTA results)
- [x] Add integration tests, which confirm that the model outputs the same tensors as the original implementation on the same input data
- [x] Add `LayoutLMForSequenceClassification`, which makes it possible to fine-tune LayoutLM for document image classification tasks (such as the [RVL-CDIP dataset](https://www.cs.cmu.edu/~aharley/rvl-cdip/)), extra tests included.

Fixes the following issues:
- #9228
- #9097
- #8866
- #8524

## Who can review?
@LysandreJik, @patrickvonplaten, @sgugger
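As a pointer for the bounding-box normalization mentioned in the first item, here is a minimal sketch (function and variable names are illustrative) of scaling pixel boxes to the 0–1000 coordinate range LayoutLM expects:

```python
# Scale pixel coordinates to LayoutLM's 0-1000 coordinate system.
def normalize_box(box, page_width, page_height):
    x0, y0, x1, y1 = box
    return [
        int(1000 * x0 / page_width),
        int(1000 * y0 / page_height),
        int(1000 * x1 / page_width),
        int(1000 * y1 / page_height),
    ]

print(normalize_box([48, 84, 156, 108], page_width=1224, page_height=1584))
```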
01-08-2021 08:52:02
01-08-2021 08:52:02
Thanks for the reviews, I've addressed all comments. There are 2 things remaining:
- in the code examples, I use both `tokenize()` and `convert_tokens_to_ids` as the bounding boxes (which are at word-level) need to be converted to token-level. Is there a better solution?
```
words = ["Hello", "world"]
normalized_word_boxes = [637, 773, 693, 782], [698, 773, 733, 782]
tokens = []
token_boxes = []
for word, box in zip(words, normalized_word_boxes):
    word_tokens = tokenizer.tokenize(word)
    tokens.extend(word_tokens)
    token_boxes.extend([box] * len(word_tokens))
```
- according to @sgugger the input data on which the integration tests are run are maybe too long, and black formatting causes them to be flattened vertically. Could you maybe fix this @LysandreJik? <|||||>I pushed the reformat you asked for @NielsRogge, make sure to pull before doing any more changes!<|||||>Ok thank you, so the only thing remaining is to make the code examples more efficient? Is there a way to make the code block (see comment above) better?
transformers
9,475
closed
[trainer] fractional epoch
Running ``` export BS=20; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_predict --do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 5 --n_train 100 --n_val 100 --n_test 100 --fp16 --save_steps 1 ``` on master, gives: ``` {'loss': 14.846837043762207, 'learning_rate': 6e-06, 'epoch': 0.3333333333333333} ``` epoch can't be fractional.
01-08-2021 02:44:25
01-08-2021 02:44:25
The fraction (and float) `'epoch': 0.3333333333333333` comes from here: https://github.com/huggingface/transformers/blob/1c19b423bf274a465f95725a79819bf82f71329e/src/transformers/trainer.py#L899

@sgugger - is this by design or should it be `ceil`: `'epoch': 1`
```
self.state.epoch = math.ceil(epoch + (step + 1) / steps_in_epoch)
```
or `floor`: `'epoch': 0`
```
self.state.epoch = math.floor(epoch + (step + 1) / steps_in_epoch)
```
`'epoch': 0.3333333333333333` is telling me it's somewhere in the first epoch but isn't done yet? Perhaps it's just fine, it's just very odd to see the epoch not being an int. Thanks. <|||||>This is not a bug, it's completely normal. See the [documentation](https://huggingface.co/transformers/main_classes/callback.html#trainerstate) of `TrainerState.epoch`.<|||||>Ah, OK. Some rounding then perhaps - `0.3333333333333333` is just too loud. 2 decimals?<|||||>Sure, there is no formatting at all for those results, but we can add some.<|||||>Anything else to format while I'm at it? loss I guess - 4 decimals, right? `{'loss': 14.846837043762207, 'learning_rate': 6e-06, 'epoch': 0.3333333333333333} `<|||||>I don't see why not.
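For reference, a tiny sketch of the kind of formatting discussed here (a suggestion only, not the change that was merged):

```python
logs = {"loss": 14.846837043762207, "learning_rate": 6e-06, "epoch": 1 / 3}
formatted = {
    "loss": round(logs["loss"], 4),          # 4 decimals for the loss
    "learning_rate": logs["learning_rate"],  # leave the learning rate as-is
    "epoch": round(logs["epoch"], 2),        # 2 decimals for the fractional epoch
}
print(formatted)  # {'loss': 14.8468, 'learning_rate': 6e-06, 'epoch': 0.33}
```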
transformers
9,474
closed
Fast imports part 3
# What does this PR do? This is the last PR to make the import of transformers fast, deferring the imports of torch/tensorflow to when they are necessary. It does the same work as #9446 but in each intermediate init, so that ``` from transformers import BertModel ``` only imports torch and not TensorFlow (and is thus very fast). The templates are adapted to the new init format, so users adding models don't have to worry about this. In passing, I noticed that `tokenization_utils_base` was importing everything at init, so I deferred imports there to only do them when necessary. There might be a few places like this left, but we can address those later on. Fixes #8733
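For context, a minimal, illustrative sketch of the deferred-import idea using a module-level `__getattr__` (PEP 562); the package and submodule names are hypothetical and this is not the actual implementation in the library:

```python
# mypackage/__init__.py -- hypothetical package illustrating lazy imports
import importlib

# Map public names to the submodule (and hence the heavy backend) that defines them.
_LAZY_OBJECTS = {"BertModel": "modeling_bert", "TFBertModel": "modeling_tf_bert"}

def __getattr__(name):
    # Only triggered when the attribute is actually requested, so
    # `from mypackage import BertModel` never imports the TensorFlow submodule.
    if name in _LAZY_OBJECTS:
        module = importlib.import_module(f".{_LAZY_OBJECTS[name]}", __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```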
01-07-2021 21:39:53
01-07-2021 21:39:53
transformers
9,473
closed
[Generation Tests] Small speed-up by just generating two tokens
# What does this PR do? @LysandreJik @sgugger I originally thought that the PR https://github.com/huggingface/transformers/commit/c89f1bc92e340600bde526b7ff54ad692b4e48c9 made the PyTorch tests much slower, but after checking the time of `run_tests_torch` in 10+ merges to master before and after this commit, I noticed that the PR didn't really affect the PyTorch testing time. The testing time varies quite a bit, but it seemed on average to be a bit higher after the merged PR, so in this PR I want to reduce the testing time for generation a bit. The generation length is reduced by one, which cuts the testing time of all generation tests by roughly 30% without any loss in testing coverage / cases. Generating two tokens is enough => the first token can be generated without `past_key_values`, but the second token has to be generated with `past_key_values` if `use_cache` is enabled, and all generation steps following this one can only behave the same way. So we should always test at least two tokens, but don't really need to test more in general generation tests that apply to all models.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
01-07-2021 20:58:12
01-07-2021 20:58:12
Just noticed that generation tests are completely irrelevant for the overall testing time...no generation test takes more than 0.5 seconds and 95 % of the generation tests take less than 0.05 seconds
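For reference, a small sketch of the "two new tokens" pattern the PR description lays out — the checkpoint and prompt are only for illustration, not what the test suite uses:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("Hello there", return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_length=input_ids.shape[-1] + 2,  # token 1: no past_key_values, token 2: with the cache
    use_cache=True,
    do_sample=False,
)
print(output.shape)  # two new tokens appended to the prompt
```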
transformers
9,472
closed
[Generation] Fix bug for manual decoder_input_ids + warning message
# What does this PR do? Some improvements to the design of how `decoder_input_ids` are extracted, which solve: https://github.com/huggingface/transformers/issues/9400 Also adds a clearer warning to prevent hard-to-understand errors such as the one shown in: https://github.com/huggingface/transformers/issues/9464

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
01-07-2021 19:31:27
01-07-2021 19:31:27
Also checked that slow tests are passing
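For context, a hedged example of the use case this touches — passing `decoder_input_ids` manually to `generate()` on an encoder-decoder model. The checkpoint and prompt are illustrative only, not taken from the PR:

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

input_ids = tokenizer("My friends are cool", return_tensors="pt").input_ids
# Manually provided decoder prompt; extracting this correctly inside
# generate() is what the PR improves.
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])

output = model.generate(input_ids, decoder_input_ids=decoder_input_ids, max_length=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```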
transformers
9,471
closed
model.generate() has the same speed on CPU and GPU
Hi, I find that model.generate() of BART and T5 has roughly the same running speed when running on CPU and GPU. Why doesn't GPU give faster speed? Thanks!

## Environment info
- `transformers` version: 4.1.1
- Python version: 3.6
- PyTorch version (GPU?): 1.3.1
- Using GPU in script?: yes

### Who can help
TextGeneration: @TevenLeScao
Bart: @patrickvonplaten
Pegasus: @patrickvonplaten
mBART: @patrickvonplaten
T5: @patrickvonplaten

## Information
Model I am using (Bert, XLNet ...): BART and T5

## To reproduce
```python
import time
from transformers import BartTokenizer, BartForConditionalGeneration

device = 'cpu'
# change to GPU
# device = 'cuda:0'

text_to_summarize = "My friends are cool but they eat too many carbs."
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')

inputs = tokenizer(text_to_summarize, return_tensors='pt')
inputs = inputs['input_ids'].to(device)
model = model.to(device)

start = time.time()
summary_ids = model.generate(inputs)
print("Time spent (s): ", time.time() - start)
```

## Expected behavior
I expected running on GPU should give me much faster speed. But running on GPU gave me roughly the same speed as CPU, both around 0.3s in this case.
01-07-2021 18:58:54
01-07-2021 18:58:54
Just realize that I used a single input... Issued closed<|||||>Thanks, your post helped me so much! I'm using BloomModel on AWS Lambda Function but Lambda doesn't support GPU. So I write the code like that: device = 'cpu' #topic variable is already given prompt = f' About {Topic} is what I think: ' inputs = tokenizer(prompt, return_tensors='pt') inputs = inputs['input_ids'].to(device) model = model.to(device) sample = model.generate(inputs, max_length=100, temperature=0.9, repetition_penalty = 2.0) output = tokenizer.decode(sample[0], truncate_before_pattern=[r"\n\n^#", "^'''", "\n\n\n"])
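As a follow-up sketch (not from the original thread): with a single short input the GPU barely gets any work, so a fairer comparison batches the input and synchronizes around the timed region. The checkpoint and batch size here are arbitrary choices:

```python
import time
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

device = "cuda:0" if torch.cuda.is_available() else "cpu"
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base").to(device)

texts = ["My friends are cool but they eat too many carbs."] * 32  # batch of 32
inputs = tokenizer(texts, return_tensors="pt", padding=True).input_ids.to(device)

if device.startswith("cuda"):
    torch.cuda.synchronize()  # make sure pending GPU work doesn't pollute the timing
start = time.time()
model.generate(inputs)
if device.startswith("cuda"):
    torch.cuda.synchronize()
print("Time spent (s):", time.time() - start)
```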
transformers
9,470
closed
max_target length for question answering system
Please could you tell me the max target length for question answering systems? I was trying it out and it does not work if the target length is more than 47. This is the [notebook](https://colab.research.google.com/drive/1JzsuPb68L-G4nsMXu57XkwbmzAVrizSS?usp=sharing).
01-07-2021 18:44:00
01-07-2021 18:44:00
This doesn't seem to be related to your length, but rather to this:
```py
data['labels'].squeeze():  60
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-12-33cc96af719d> in <module>()
      2 input = tokenizer.decode(data['input_ids'].squeeze(), skip_special_tokens=True)
      3 print("data['labels'].squeeze(): ", len(data['labels'].squeeze()))
----> 4 label = tokenizer.decode(data['labels'].squeeze(), skip_special_tokens=True)
      5 print("data keys: ", data.keys(),"\n")
      6 lines = textwrap.wrap("Query:\n%s\n" % data['question'], width=150)

5 frames
/usr/local/lib/python3.6/dist-packages/sentencepiece/__init__.py in _func(v, n)
    492   def _func(v, n):
    493     if type(n) is int and (n < 0 or n >= v.piece_size()):
--> 494       raise IndexError('piece id is out of range.')
    495     return func(v, n)
    496 

IndexError: piece id is out of range.
```
Your tokenizer doesn't manage to decode your label.<|||||>@LysandreJik yes, I understand that, but I did not understand what I did wrong at first. Then I realized that labels that are -100 should be set back to 0. Anyway, better to figure it out late than not at all.
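A short sketch of the fix described in the last comment — mapping the `-100` positions back to a valid token id before decoding. T5 is assumed here (where the pad token id is 0) and the names are illustrative, not taken from the notebook:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
labels = tokenizer("a short answer", return_tensors="pt").input_ids.squeeze().tolist()
labels[-1] = -100  # pretend this position was masked out for the loss

# Replace the ignored positions with the pad token so decode() doesn't choke
cleaned = [t if t != -100 else tokenizer.pad_token_id for t in labels]
print(tokenizer.decode(cleaned, skip_special_tokens=True))
```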
transformers
9,469
closed
Cannot Evaluate While Training Using the Trainer
@sgugger

## Environment info
- `transformers` version: 4.0.0
- Platform: AWS Amazon Linux
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

## Information
Model I am using (Bert, XLNet ...): Third Party Model (Routing Transformer)

The problem arises when using:
* [ ] my own modified scripts: I am using a custom Trainer to train the third-party model. The training loop runs smoothly without evaluation. But as soon as I try to evaluate while training, it stops training after a couple of evaluations even though it has not reached `max_steps`. I copy my args and trainer below. I understand that in previous versions there was an `evaluate_during_training` flag that some have suggested as a fix until Sep 20, but that flag doesn't seem to exist anymore. Any help or pointers would be appreciated.

The task I am working on is:
* [ ] my own task or dataset: Masked Language Modeling

## To reproduce
Steps to reproduce the behavior:
```
custom_args = MLMArguments(
    output_dir='../models/',
    mask_ratio=0.2,
    do_train=True,
    do_eval=True,
    max_steps=2000000,
    save_steps=50,
    logging_steps=5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    logging_dir='../logs/',
    evaluation_strategy="steps",
    prediction_loss_only=True
)

checkpoint_callback = TrainerCallback()
tb_callback = TensorBoardCallback()

custom_trainer = MLMTrainer(
    rt_model,
    args=custom_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    callbacks=[checkpoint_callback, tb_callback]
)
custom_trainer.train()
```

Step | Training Loss | Validation Loss
-- | -- | --
5 | 10.033914 | 10.017973
10 | 10.028201 | 9.935969

and at this point it prints the output and stops training.
01-07-2021 18:17:35
01-07-2021 18:17:35
Hi there, I would like to help you, but the code you are providing is not runable on my side (MLMTrainer, MLMArguments, train_dataset, eval_dataset and rt_model for instance are not defined). Could you please post a complete and short reproducer of the bug?
transformers
9,468
closed
Have RAG return generator cross-attentions when output_attentions=True
# 🚀 Have RAG return generator cross-attentions when output_attentions=True This feature request is for the RAG code to be modified so that if `output_attentions=True`, it returns the generator's cross-attentions in addition to the attentions it already returns. ## Motivation I'm interested in extracting the generator's attentions from a RAG generator model. Currently, `transformers` allows you to extract the generator's encoder attentions and decoder attentions, but not its cross-attentions. For example, inside `modeling_rag.py`, the return objects, such as [RetrievAugLMMarginOutput](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/modeling_rag.py#L38), have fields for these other attentions, but not the cross-attentions. Because both T5 and BART can output cross-attentions, I think they could simply propagate up through the RAG code. Is there a reason this isn't already the case? Or could I do a PR to include the cross-attentions along with the other attentions in the model output? ## Your contribution On my own fork of `transformers`, I've already added this feature and would happily submit a PR!
01-07-2021 18:02:28
01-07-2021 18:02:28
@patrickvonplaten, @lhoestq Any feedback on this? <|||||>Feel free to open a PR indeed :) What do you think @patrickvonplaten ? I guess it can be part of the RetrievAugLMMarginOutput attributes.<|||||>Hey @dblakely, It would be great if you could open a PR. Both Bart and T5 already return the `cross_attentions`, so it should be a pretty easy change by just adding ```python generator_cross_attentions=gen_outputs.cross_attentions, ``` here: https://github.com/huggingface/transformers/blob/fac7cfb16a437a97584f6a14c3856b2e06bf0eaa/src/transformers/models/rag/modeling_rag.py#L657 and then adding `generator_cross_attentions` to all output classes as suggested by @lhoestq
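To make the request concrete, here is a hypothetical usage sketch. `generator_cross_attentions` is the proposed field and did not exist at the time of this thread; the dummy-index retriever setup and the input/target strings are only for illustration:

```python
from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
)
model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

input_dict = tokenizer.prepare_seq2seq_batch(
    "who holds the record in 100m freestyle", "michael phelps", return_tensors="pt"
)
outputs = model(input_dict["input_ids"], labels=input_dict["labels"], output_attentions=True)

print(len(outputs.generator_enc_attentions))    # generator encoder attentions, already returned
print(len(outputs.generator_cross_attentions))  # hypothetical field proposed in this issue
```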
transformers
9,467
closed
Unable to train sequence classification task using TFTrainer
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-123-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.4.0 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> Trainer: @sgugger Tensorflow: @jplu ## Information Model I am using (Bert, XLNet ...): distilbert-base-cased The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The following is how I load the model and the trainer ```{python} from sklearn.metrics import accuracy_score, precision_recall_fscore_support from datasets import load_from_disk import tensorflow as tf from transformers import TFAutoModelForSequenceClassification, AutoTokenizer from transformers import TFTrainingArguments, TFTrainer # Load dataset def load_dataset(data_dir, split="train", batch_size=32, shuffle=100): dataset = load_from_disk(data_dir)[split] label_type = tf.int32 input_names = ["input_ids", "attention_mask", "token_type_ids"] def gen(): for ex in dataset: d = {k: v for k, v in ex.items() if v is not None} label = d.pop("tag") yield (d, label) tf_dataset = tf.data.Dataset.from_generator( gen, ({k: tf.int32 for k in input_names}, label_type), ({k: tf.TensorShape([None]) for k in input_names}, tf.TensorShape([])), ) tf_dataset = tf_dataset.apply(tf.data.experimental.assert_cardinality(len(dataset))) return tf_dataset # Load the model def load_model(name="distilbert-base-cased", num_labels=11, learning_rate=3e-5): tokenizer = AutoTokenizer.from_pretrained(name) tokenizer.add_special_tokens({"bos_token": "<s>", "eos_token": "</s>"}) model = TFAutoModelForSequenceClassification.from_pretrained( name, num_labels=num_labels ) model.resize_token_embeddings(len(tokenizer)) optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate) loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) model.compile(optimizer=optimizer, loss=loss) model.summary() return model #metrics def compute_metrics(pred): labels = pred.label_ids predictions = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support( labels, predictions, average="weighted" ) acc = accuracy_score(labels, predictions) return {"accuracy": acc, "f1": f1, "precision": precision, "recall": recall} # train model def train_model( model_args, data_dir, model_dir, logs_dir, batch_size=32, num_epochs=10 ): training_args = TFTrainingArguments( output_dir=model_dir, num_train_epochs=num_epochs, do_train=True, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, evaluation_strategy="steps", warmup_steps=500, weight_decay=0.01, logging_dir=logs_dir, dataloader_num_workers=15, ) datasets = { "train": load_dataset(data_dir=data_dir, split="train", batch_size=batch_size), "val": load_dataset( data_dir=data_dir, split="validation", batch_size=batch_size ), } model = load_model(**model_args) trainer = TFTrainer( model=model, args=training_args, train_dataset=datasets["train"], eval_dataset=datasets["val"], 
compute_metrics=compute_metrics, ) trainer.train() return trainer ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) A multi-class classification task to classify a sentence into one of 11 known categories ## To reproduce Steps to reproduce the behavior: 1. A classification task with more the 2 categories - i.e. num_labels > 2. 2. Use pretrained distill bert model for sequence classification 3. Load the dataset and finetune the model with TFTrainer. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace from the error. ``` ValueError: in user code: /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/transformers/trainer_tf.py:678 distributed_training_steps * self.args.strategy.run(self.apply_gradients, inputs) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/transformers/trainer_tf.py:641 apply_gradients * self.optimizer.apply_gradients(list(zip(gradients, self.model.trainable_variables))) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/transformers/optimization_tf.py:232 apply_gradients * return super(AdamWeightDecay, self).apply_gradients(zip(grads, tvars), name=name, **kwargs) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:604 apply_gradients ** self._create_all_weights(var_list) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:783 _create_all_weights self._create_slots(var_list) /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/adam.py:127 _create_slots self.add_slot(var, 'm') /root/.local/share/virtualenvs/ai-dialogue-acts-classifier-ZQsBEC0q/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:844 add_slot .format(strategy, var)) ValueError: Trying to create optimizer slot variable under the scope for tf.distribute.Strategy (<tensorflow.python.distribute.one_device_strategy.OneDeviceStrategy object at 0x7efd64765bd0>), which is different from the scope used for the original variable (<tf.Variable 'tf_distil_bert_for_sequence_classification/distilbert/embeddings/tf_distil_bert_for_sequence_classification/distilbert/embeddings/word_embeddings/weight:0' shape=(28998, 768) dtype=float32, numpy= array([[-0.02513016, -0.03304445, -0.00243959, ..., -0.01084836, -0.04682418, -0.00948554], [-0.00482445, -0.02148623, -0.00871447, ..., -0.02602929, -0.03786189, -0.02410287], [-0.01653061, -0.01786226, 0.00105964, ..., -0.01637051, -0.03567044, -0.03141942], ..., [ 0.01190545, -0.02329331, -0.02250608, ..., -0.02713599, -0.04355597, 0.00010529], [ 0.00688736, 0.02267248, 0.02263871, ..., -0.00735895, -0.00814128, 0.00426289], [ 0.00320692, -0.0061747 , 0.01624888, ..., 0.00641411, 0.00060032, 0.01258053]], dtype=float32)>). Make sure the slot variables are created under the same strategy scope. 
This may happen if you're restoring from a checkpoint outside the scope ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The same model trains successfully when trained as a tf.keras model with a batched tfdataset. ``` # modified training code to use the keras model instance that trains the model successfully. def train_model( model_args, data_dir, model_dir, logs_dir, batch_size=32, num_epochs=10 ): training_args = TFTrainingArguments( output_dir=model_dir, num_train_epochs=num_epochs, do_train=True, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, evaluation_strategy="steps", warmup_steps=500, weight_decay=0.01, logging_dir=logs_dir, dataloader_num_workers=15, ) datasets = { "train": load_dataset(data_dir=data_dir, split="train", batch_size=batch_size), "val": load_dataset( data_dir=data_dir, split="validation", batch_size=batch_size ), } model = load_model(**model_args) history = model.fit(datasets["train"].batch(32), verbose=1) return history ```
01-07-2021 17:05:44
01-07-2021 17:05:44
Hello! This is because you do not instantiate your model in the created strategy. You already have an example on how to train such models in the [repo](https://github.com/huggingface/transformers/tree/master/examples/text-classification)<|||||>Hi @jplu, Are you referring to this: https://github.com/huggingface/transformers/blob/f33a6f34461fea61b579a7ec732fcd174b2b41cd/examples/text-classification/run_tf_text_classification.py#L263 i.e. do i just need to wrap `load_model` in the above code in the context manager `with training_args.strategy.scope():` resulting in ``` # train model def train_model( model_args, data_dir, model_dir, logs_dir, batch_size=32, num_epochs=10 ): training_args = TFTrainingArguments( output_dir=model_dir, num_train_epochs=num_epochs, do_train=True, per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size * 2, evaluation_strategy="steps", warmup_steps=500, weight_decay=0.01, logging_dir=logs_dir, dataloader_num_workers=15, ) datasets = { "train": load_dataset(data_dir=data_dir, split="train", batch_size=batch_size), "val": load_dataset( data_dir=data_dir, split="validation", batch_size=batch_size ), } with training_args.strategy.scope(): model = load_model(**model_args) trainer = TFTrainer( model=model, args=training_args, train_dataset=datasets["train"], eval_dataset=datasets["val"], compute_metrics=compute_metrics, ) trainer.train() return trainer ``` If not could you point me to the right place?<|||||>Yes this is what I meant :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,466
closed
RuntimeError when running Reformer model
## Environment info - `transformers` version: 2.10.0 - Platform: Linux-5.4.0-1034-aws-x86_64-with-debian-buster-sid - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help --> @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Reformer The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the example code from [here](https://huggingface.co/google/reformer-crime-and-punishment?text=My+name+is+Julien+and+I+like+to): ``` model = ReformerModelWithLMHead.from_pretrained("google/reformer-crime-and-punishment") tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") tok.decode(model.generate(tok.encode("A few months later", return_tensors="pt"), do_sample=True,temperature=0.7, max_length=100)[0]) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior RuntimeError: Overflow when unpacking long (more details below) ``` <ipython-input-33-0a824540b4e0> in <module> 2 tok = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment") 3 tok.decode(model.generate(tok.encode("A few months later", return_tensors="pt"), ----> 4 do_sample=True,temperature=0.7)[0]) ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) 13 def decorate_context(*args, **kwargs): 14 with self: ---> 15 return func(*args, **kwargs) 16 return decorate_context 17 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, num_return_sequences, attention_mask, decoder_start_token_id, use_cache, **model_specific_kwargs) 1179 attention_mask=attention_mask, 1180 use_cache=use_cache, -> 1181 model_specific_kwargs=model_specific_kwargs, 1182 ) 1183 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_utils.py in _generate_no_beam_search(self, input_ids, cur_len, max_length, min_length, do_sample, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, batch_size, encoder_outputs, attention_mask, use_cache, model_specific_kwargs) 1221 ) 1222 -> 1223 outputs = self(**model_inputs) 1224 next_token_logits = outputs[0][:, -1, :] 1225 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), 
~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(self, input_ids, position_ids, attention_mask, head_mask, inputs_embeds, num_hashes, labels, do_output_hidden_states, do_output_attentions) 1738 num_hashes=num_hashes, 1739 do_output_hidden_states=do_output_hidden_states, -> 1740 do_output_attentions=do_output_attentions, 1741 ) 1742 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(self, input_ids, attention_mask, position_ids, head_mask, inputs_embeds, num_hashes, do_output_hidden_states, do_output_attentions) 1588 num_hashes=num_hashes, 1589 do_output_hidden_states=do_output_hidden_states, -> 1590 do_output_attentions=do_output_attentions, 1591 ) 1592 sequence_output = encoder_outputs.hidden_states ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(self, hidden_states, attention_mask, head_mask, num_hashes, do_output_hidden_states, do_output_attentions) 1324 all_attentions, 1325 do_output_hidden_states, -> 1326 do_output_attentions, 1327 ) 1328 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(ctx, hidden_states, layers, attention_mask, head_mask, num_hashes, all_hidden_states, all_attentions, do_output_hidden_states, do_output_attentions) 1220 head_mask=layer_head_mask, 1221 num_hashes=num_hashes, -> 1222 do_output_attentions=do_output_attentions, 1223 ) 1224 attn_output = layer_outputs.attn_output ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in forward(***failed resolving arguments***) 1111 # for dropout and save seed for forward fn in backward 1112 # to have correct dropout -> 1113 self._init_feed_forward_seed() 1114 # Y_2 = X_2 + g(Y_1) 1115 hidden_states = hidden_states + self.feed_forward(attn_output) ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/transformers/modeling_reformer.py in _init_feed_forward_seed(self) 1075 else: 1076 # CPU -> 1077 self.feed_forward_seed = int(torch.seed() % sys.maxsize) 1078 torch.manual_seed(self.feed_forward_seed) 1079 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/random.py in seed() 43 44 if not torch.cuda._is_in_bad_fork(): ---> 45 torch.cuda.manual_seed_all(seed) 46 47 return seed ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/cuda/random.py in manual_seed_all(seed) 111 default_generator.manual_seed(seed) 112 --> 113 _lazy_call(cb) 114 115 ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/cuda/__init__.py in _lazy_call(callable) 133 def _lazy_call(callable): 134 if is_initialized(): --> 135 callable() 
136 else: 137 # Don't store the actual traceback to avoid memory cycle ~/anaconda3/envs/Reformer/lib/python3.6/site-packages/torch/cuda/random.py in cb() 109 for i in range(device_count()): 110 default_generator = torch.cuda.default_generators[i] --> 111 default_generator.manual_seed(seed) 112 113 _lazy_call(cb) RuntimeError: Overflow when unpacking long ```
01-07-2021 17:00:22
01-07-2021 17:00:22
Hey @albusdemens, could you maybe update your transformers version to 4.0.0?<|||||>Thanks @patrickvonplaten, that fixed the issue!
transformers
9,465
closed
[README] Add new models
# What does this PR do? Adds LED and BlenderbotSmall to the Readme.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
01-07-2021 16:04:41
01-07-2021 16:04:41
transformers
9,464
closed
UnboundLocalError when generating sequences
## Environment info - `transformers` version: 4.2.0dev0 - Platform: macOS-11.1-x86_64-i386-64bit - Python version: 3.8.7 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten ## Information Model I am using GPT2LMHeadModel: The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Generate sequences using with the following snippet: ```python model.generate( input_ids, do_sample=False, num_beams=beam_width, num_return_sequences=beam_width, early_stopping=False, output_scores=True, return_dict_in_generate=True, ) ``` Generate The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Try to generate sequences as mentioned above. ### Traceback ``` Traceback (most recent call last): File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py", line 394, in run_asgi result = await app(self.scope, self.receive, self.send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/uvicorn/middleware/proxy_headers.py", line 45, in __call__ return await self.app(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/fastapi/applications.py", line 199, in __call__ await super().__call__(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/applications.py", line 111, in __call__ await self.middleware_stack(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/errors.py", line 181, in __call__ raise exc from None File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/errors.py", line 159, in __call__ await self.app(scope, receive, _send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/cors.py", line 86, in __call__ await self.simple_response(scope, receive, send, request_headers=headers) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/middleware/cors.py", line 142, in simple_response await self.app(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/exceptions.py", line 82, in __call__ raise exc from None File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/exceptions.py", line 71, in __call__ await self.app(scope, receive, sender) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/routing.py", line 566, in __call__ await route.handle(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/routing.py", line 227, in handle await self.app(scope, receive, send) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/routing.py", line 41, in app response = await func(request) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/fastapi/routing.py", line 201, in app raw_response = await run_endpoint_function( File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/fastapi/routing.py", line 150, in 
run_endpoint_function return await run_in_threadpool(dependant.call, **values) File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/starlette/concurrency.py", line 34, in run_in_threadpool return await loop.run_in_executor(None, func, *args) File "/usr/local/opt/[email protected]/Frameworks/Python.framework/Versions/3.8/lib/python3.8/concurrent/futures/thread.py", line 57, in run result = self.fn(*self.args, **self.kwargs) File "./server.py", line 47, in doSampleGPT2 results = sampleGPT2v2(model=model, tokenizer=tokenizer, sequence=src) File "./ccompletion/samplers/gpt2sampler.py", line 151, in sampleGPT2v2 outputs = model.generate( File "/Users/miguelvictor/.virtualenvs/transformers/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context return func(*args, **kwargs) File "/Users/miguelvictor/Projects/transformers/src/transformers/generation_utils.py", line 943, in generate return self.beam_search( File "/Users/miguelvictor/Projects/transformers/src/transformers/generation_utils.py", line 1655, in beam_search input_ids, beam_scores, next_tokens, next_indices, pad_token_id=pad_token_id, eos_token_id=eos_token_id UnboundLocalError: local variable 'next_tokens' referenced before assignment ``` ## Expected behavior No errors raised.
01-07-2021 15:18:00
01-07-2021 15:18:00
Hey @miguelvictor, the problem is that `max_length` is set to a value that is too small. You need to increase either, ```model.config.max_length``` or pass a `max_length` parameter to `generate()` that is longer than your input_ids.<|||||>Ohh... my bad. Thank you!
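For readers hitting the same error, a hedged sketch of the adjusted call — `max_length` is made longer than the prompt as suggested above; the checkpoint and prompt are illustrative:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("def hello_world():", return_tensors="pt").input_ids
beam_width = 3
outputs = model.generate(
    input_ids,
    do_sample=False,
    num_beams=beam_width,
    num_return_sequences=beam_width,
    max_length=input_ids.shape[-1] + 20,  # longer than the prompt, avoiding the error above
    early_stopping=False,
    output_scores=True,
    return_dict_in_generate=True,
)
print(outputs.sequences.shape)
```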
transformers
9,463
closed
FileNotFoundError when instantiating RagRetriever
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @lhoestq ## Information I am trying to use RAG but I am having issues downloading the compressed index. ## To reproduce ```python from transformers import RagRetriever retriever = RagRetriever.from_pretrained('facebook/rag-sequence-nq', dataset='wiki_dpr', index_name='compressed') ``` Which results in: ```python FileNotFoundError: Couldn't find file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wiki_dpr/psgs_w100.nq.compressed/0.0.0/psgs_w100.nq.IVF4096_HNSW128_PQ128-IP-train.faiss ```
01-07-2021 14:28:13
01-07-2021 14:28:13
Hi ! Thanks for reporting Can you try again ? I fixed the missing file<|||||>Thanks a lot! I'll try right away (with my internet speed, it should take ~1h30)<|||||>Yup, I can confirm now the issue is resolved. Thanks a lot! Shall I close the issue?<|||||>Hey @poccio, usually, always feel free to close issues that you opened. As maintainers, we don't always feel comfortable closing an issue since it's not always clear whether the author's issue is solved. So if it's solved for you, it's great if you close it :-) Thanks for reporting the issue.
transformers
9,462
closed
Fix scatter import
Scatter is wrongly spelled
01-07-2021 13:56:15
01-07-2021 13:56:15
transformers
9,461
closed
Error while loading finetuned distilbert model: embedding dimension mismatch
## Environment info
- `transformers` version: 3.3.1
- Python version: 3.6.9
- Tensorflow version (GPU?): 2.3.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help
@LysandreJik @patil-suraj @jplu

## Information
I am using `TFAutoModelForSequenceClassification` with the `distilbert-base-multilingual-cased` model. For fine-tuning I have frozen the embedding layer. Fine-tuning is successful and I have saved the weights using `save_pretrained`. However, after fine-tuning, when I load the model for inference using `TFAutoModelForSequenceClassification` or `TFDistilBertForSequenceClassification` it throws an error. I did not face any issues with `TFXLMRobertaForSequenceClassification` and `jplu/tf-xlm-roberta-base` when trying the same thing.
## To reproduce ``` tokenizer = AutoTokenizer.from_pretrained("distilbert-base-multilingual-cased") tokenizer_output = tokenizer.batch_encode_plus(train_texts, max_length=100, padding="max_length", truncation=True,return_attention_mask=True, add_special_tokens=True) input_ids, attention_mask = tokenizer_output["input_ids"], tokenizer_output["attention_mask"] config = AutoConfig.from_pretrained("distilbert-base-multilingual-cased", num_labels=num_classes,label2id=label2id, id2label=id2label,finetuning_task="text-classification") model = TFAutoModelForSequenceClassification.from_pretrained("distilbert-base-multilingual-cased", config=config) # freezing embedding layers model.layers[0].embeddings.trainable = False loss = tf.keras.losses.CategoricalCrossentropy(from_logits=True) metric = tf.keras.metrics.Accuracy() optimizer = tf.keras.optimizers.Adam(learning_rate=2e-6, epsilon=1e-08) model.compile(loss=loss, optimizer=optimizer, metrics=[metric]) model.fit([input_ids, attention_mask], train_labels, epochs=10, batch_size=16) model.save_pretrained("model") #Throws error for both cases #loaded_model = TFAutoModelForSequenceClassification.from_pretrained("model") loaded_model = TFDistilBertForSequenceClassification.from_pretrained("model") ``` Error while loading saved model: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-29-0ad4a4ce38ca> in <module> 2 3 #config = AutoConfig.from_pretrained(os.path.join("/home/Rajat/Rohan/models/xlmr104", "model")) ----> 4 model = TFDistilBertForSequenceClassification.from_pretrained(os.path.join("/home/Rajat/Rohan/models/dbert103", "model")) 5 6 t2 = time.time() /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 614 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357 615 try: --> 616 model.load_weights(resolved_archive_file, by_name=True) 617 except OSError: 618 raise OSError( /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in load_weights(self, filepath, by_name, skip_mismatch, options) 2207 if by_name: 2208 hdf5_format.load_weights_from_hdf5_group_by_name( -> 2209 f, self.layers, skip_mismatch=skip_mismatch) 2210 else: 2211 hdf5_format.load_weights_from_hdf5_group(f, self.layers) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group_by_name(f, layers, skip_mismatch) 784 symbolic_weights[i])) + 785 ', but the saved weight has shape ' + --> 786 str(weight_values[i].shape) + '.') 787 788 else: ValueError: Layer #0 (named "distilbert"), weight <tf.Variable 'tf_distil_bert_for_sequence_classification_9/distilbert/embeddings/word_embeddings/weight:0' shape=(119547, 768) dtype=float32, numpy= array([[ 0.00207364, 0.01255192, 0.01065131, ..., 0.0182375 , -0.01671835, -0.02844721], [ 0.0333954 , 0.03589885, -0.03751937, ..., -0.01915496, -0.00888181, -0.00063128], [ 0.01174717, 0.00945629, -0.01179059, ..., 0.03340805, -0.00715566, -0.02317093], ..., [ 0.01775699, -0.01719745, -0.03220321, ..., 0.00817569, -0.00393617, -0.00730391], [ 0.03056052, -0.00136884, -0.02507194, ..., 0.01245719, -0.00362111, -0.01495665], [ 0.03703629, 0.01664717, -0.01278388, ..., 0.02537051, 0.02492457, 0.01191532]], dtype=float32)> has shape (119547, 768), but the saved weight has shape (768, 768). 
``` <!-- A clear and concise description of what you would expect to happen. -->
01-07-2021 11:46:01
01-07-2021 11:46:01
Hello! First of all, can you try with the source version in order to see if the problem still occurs.<|||||>Hello @jplu , Just running these 4 lines throws error. ``` model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-multilingual-cased") model.layers[0].embeddings.trainable = False model.save_pretrained("model") loaded_model = TFDistilBertForSequenceClassification.from_pretrained("model") ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-8-aa416dc4d078> in <module> ----> 1 loaded_model = TFDistilBertForSequenceClassification.from_pretrained("model") /usr/local/lib/python3.6/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 614 # see https://github.com/tensorflow/tensorflow/blob/00fad90125b18b80fe054de1055770cfb8fe4ba3/tensorflow/python/keras/engine/network.py#L1339-L1357 615 try: --> 616 model.load_weights(resolved_archive_file, by_name=True) 617 except OSError: 618 raise OSError( /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py in load_weights(self, filepath, by_name, skip_mismatch, options) 2207 if by_name: 2208 hdf5_format.load_weights_from_hdf5_group_by_name( -> 2209 f, self.layers, skip_mismatch=skip_mismatch) 2210 else: 2211 hdf5_format.load_weights_from_hdf5_group(f, self.layers) /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group_by_name(f, layers, skip_mismatch) 784 symbolic_weights[i])) + 785 ', but the saved weight has shape ' + --> 786 str(weight_values[i].shape) + '.') 787 788 else: ValueError: Layer #0 (named "distilbert"), weight <tf.Variable 'tf_distil_bert_for_sequence_classification_1/distilbert/embeddings/word_embeddings/weight:0' shape=(119547, 768) dtype=float32, numpy= array([[ 0.00801877, -0.01047559, -0.03101005, ..., 0.02595956, -0.01114979, 0.0103603 ], [ 0.00097553, -0.00474179, -0.0065623 , ..., 0.03424093, -0.0189246 , 0.01545161], [-0.02869349, -0.03147252, -0.02191292, ..., 0.00606783, 0.0091517 , 0.00140686], ..., [ 0.00324067, 0.01025188, -0.0173355 , ..., 0.00799547, 0.00298822, -0.00772437], [ 0.00393043, 0.02751113, 0.00989435, ..., 0.00630352, -0.01590282, 0.00017761], [-0.02440546, -0.02454552, 0.01318205, ..., -0.02244014, 0.02798119, -0.006583 ]], dtype=float32)> has shape (119547, 768), but the saved weight has shape (768, 768). ``` ​ <|||||>With which transformers version?<|||||>I'm using this docker image `huggingface/transformers-tensorflow-gpu:3.3.1`<|||||>Ok I just tried your code snipped on the source version and it works as expected, so it looks like this issue has already been fixed. Then I suggest you to update your container to the last release.<|||||>Thanks<|||||>I'm getting the same error, with code that hasn't changed since it worked. I guess the model downloaded by from_pretrained() is no longer compatible with older tokenizers or transformers code. tokenizers==0.8.1.rc1 transformers==3.0.2 But what to upgrade to so that it works? I'm trying to maintain compatibility with javascript tokenizers 0.6.2 because there are other version issues there on the nodejs side. However it seems this may not be possible. Just got same error with: tokenizers==0.8.1.rc2 transformers==3.3.1 <|||||>Can't even get working with latest tokenizers and transformers. 
Although upgrading change the error to: `ValueError: cannot reshape array of size 22268928 into shape (30522,768)`<|||||>Reproduction script: ``` from transformers import DistilBertConfig, TFDistilBertModel config = DistilBertConfig(dropout=0.2, attention_dropout=0.2) config.output_hidden_states = False print('loaading') transformer_model = TFDistilBertModel.from_pretrained( "distilbert-base-cased", config=config ) print('loaded') ``` Python: 3.8.4 absl-py==0.10.0 astunparse==1.6.3 datasets==1.8.0 filelock==3.0.12 gast==0.3.3 h5py==2.10.0 keras==2.4.3 keras-applications==1.0.8 keras-preprocessing==1.1.2 numpy==1.19.2 opt-einsum==3.3.0 protobuf==3.12.2 regex==2020.7.14 requests==2.24.0 sacremoses==0.0.43 sentencepiece==0.1.94 six==1.15.0 scikit-learn==0.24.2 scipy==1.4.1 tensorboard==2.4.0 tensorflow==2.4.0 termcolor==1.1.0 tokenizers==0.10.3 tqdm==4.48.0 transformers==4.7.0 wrapt==1.12.1 <|||||>Interestingly `distibert-base-uncased` works just not `distilbert-base-cased`. Maybe missing some config?<|||||>+1
transformers
9,460
closed
[TFGPT2] - Fix flaky past_key_values test
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR attempts to fix the flaky TFGPT2 test: [tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_gpt2_model_past_large_inputs](https://app.circleci.com/pipelines/github/huggingface/transformers/18086/workflows/2c889716-285d-489f-9c1b-03c99155ea37/jobs/145873) To be honest, I'm not really sure what is/was going on there. I don't see an obvious bug in any of the test and `TFGPT2` wasn't changed for a long time -> so not sure what's going on. Also before doing the changes in the PR the test failed 1/20 times in my bash loop. The only change in this PR is to change the batch_size from 13 to 1 as it's done in other TF tests as well (see: https://github.com/huggingface/transformers/blob/a400fe8931cce276df74c7c7a5ee4dd28b5674ec/tests/test_modeling_tf_t5.py#L203). => so I think the test should have passed previously as well (there should be no difference between batch_size 1 and 13 ...) After the change, I ran the test 60 times in a loop and it never failed - we should still keep an eye on it though. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
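For context, the property the flaky test checks is roughly the following: feeding the model a suffix while reusing the cached key/values from a prefix should give the same hidden states as one full forward pass. A rough standalone sketch, not the actual test code (the keyword used to pass the cache has been `past`/`past_key_values` depending on the transformers version, so treat the call below as an assumption about recent versions):

```python
import tensorflow as tf
from transformers import GPT2Config, TFGPT2Model

config = GPT2Config(n_layer=2, n_head=2, n_embd=32, vocab_size=99)
model = TFGPT2Model(config)

prefix_ids = tf.constant([[5, 8, 13, 2, 7]])  # batch_size = 1, as in the updated test
next_ids = tf.constant([[11, 4, 9]])

# one pass over the whole sequence
full = model(tf.concat([prefix_ids, next_ids], axis=-1))

# two passes, reusing the cache computed on the prefix
prefix_out = model(prefix_ids, use_cache=True)
cached = model(next_ids, past_key_values=prefix_out.past_key_values)

# the hidden states for the last tokens should agree up to numerical noise
tf.debugging.assert_near(
    full.last_hidden_state[:, -3:], cached.last_hidden_state, atol=1e-3
)
```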
01-07-2021 11:34:59
01-07-2021 11:34:59
transformers
9,459
closed
[LED Test] fix common inputs pt for flaky pt-tf led test
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes flaky led test: [tests/test_modeling_tf_led.py::TFLEDModelTest::test_pt_tf_model_equivalence](https://app.circleci.com/pipelines/github/huggingface/transformers/18159/workflows/e39565cd-188f-406a-bc8c-3db64a5829c5/jobs/146772/steps). It's the classic bug for Longformer, I forgot to set the global attention mask correctly for the common inputs for PT ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
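For readers who hit the same issue in their own code: LED (like Longformer) expects an explicit global attention mask, and PT/TF outputs only match when both sides receive the same one. A minimal usage sketch (illustrative only, not the test code; the checkpoint name is just the public base model):

```python
import torch
from transformers import LEDTokenizer, LEDForConditionalGeneration

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

inputs = tokenizer("A very long document to summarize ...", return_tensors="pt")

# give the first (<s>) token global attention, as recommended for LED
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
```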
01-07-2021 11:03:51
01-07-2021 11:03:51
@LysandreJik @sgugger - should fix flaky TFLED test.<|||||>Thanks!
transformers
9,458
closed
Closed
01-07-2021 11:03:50
01-07-2021 11:03:50
transformers
9,457
closed
[Blenderbot] Model yields weird results
As discussed with @Narsil offline, Blenderbot seems to yield weird generation results. I think we have to dive deeper into the original `ParlAI` lib and make sure that there is no flaw in the model or the generate function. This is also on my to-do list. Pinging @patil-suraj and @Narsil for notice.
01-07-2021 10:38:19
01-07-2021 10:38:19
Yes, I'm actually investigating this, also see #9365<|||||>Any new insights into this issue? <|||||>Yes, most of the work was done here: https://github.com/huggingface/transformers/pull/10002 and https://github.com/huggingface/transformers/pull/9984

It was mostly linked to something that was not supported by the `generate` function (namely `encoder_no_repeat_ngram_size`) at the time.

I've seen a few issues creep up again about Blenderbot (namely questioning the separation scheme of conversation items). I didn't have time to dive into it again to double-check, but at the time of the mentioned PRs, the separation scheme was triple-checked against the `master` branch of ParlAI (the questioning was referring to the docs, which can always be outdated).

Also keep in mind that ParlAI actually uses more machinery to prevent the model from outputting too much odd stuff. There's a hardcoded banned-word list + an actual model to detect anything inappropriate (maybe more; what I found was way out of scope for transformers and also extremely specific to Blenderbot).

The "persona" thing is usable within transformers, but it does rely on tricks. A "persona" is actually just a prompt at the start of the conversation looking like "your persona: You live in a mansion". So prefixing your conversation with "your persona: You live in a mansion Hi there!" should yield the same results as Blenderbot.

Check the ParlAI implementation to confirm (I'm not sure about the actual casing used and so on).<|||||>Thanks for the reply @Narsil as well as the links to the related PRs. Yes, I'm aware of ParlAI's implementation of a safety detector. Thanks also for the point about the persona implementation - that is what I assumed, but it's great that you've confirmed it.

Just to check, is the separation scheme a total of three spaces between turns? (2 in the join operator plus an extra at the start of each sentence) This is what I see in `tests/test_pipelines_conversational.py`.

If so, the [documentation](https://huggingface.co/transformers/model_doc/blenderbot.html#tfblenderbotforconditionalgeneration) may be outdated, as it uses `</s> <s>` between turns, which produces different results. <|||||>Yes, I confirmed that it was 3 spaces.
It's supposed to be 4 spaces, but if I remember correctly, it was actually 2 + 1 hardcoded. I checked at the token level in the end, and it's 228, 228 all the time.

Found the persona code; the sentence split was a bit more spread out and I can't find it right away.

It's somewhere in there: https://github.com/facebookresearch/ParlAI/blob/master/parlai/core/torch_generator_agent.py if you want to start inspecting live code.<|||||>> Yes, I confirmed that it was 3 spaces.
> It's supposed to be 4 spaces, but if I remember correctly, it was actually 2 + 1 hardcoded. I checked at the token level in the end, and it's 228, 228 all the time.
>
> Found the persona code; the sentence split was a bit more spread out and I can't find it right away.
>
> It's somewhere in there: https://github.com/facebookresearch/ParlAI/blob/master/parlai/core/torch_generator_agent.py if you want to start inspecting live code.

Perfect, thanks for the reference. I just managed to do some poking around in the ParlAI library and confirmed the delimiter token in the history object. It is also what you found.
```python from parlai.core.agents import create_agent_from_model_file blender_agent = create_agent_from_model_file("zoo:blender/blender_400Mdistill/model", {"skip_generation": False}) print(blender_agent.history.delimiter_tok) # Output: [228, 228] ``` For persona, looks like they just separate all the persona details with newlines, and bundle it into the first turn. E.g. your persona: I like cheese`\n`your persona: I am from New York City`[228, 228]`Hi, where are you from`[228, 228]`Hi, I'm from the city of new york city. How about you? Do you like cheese?`[228,228]`do you like cheese?`[228, 228]`Yes, I love cheese. It is one of my favorite foods. What is your favorite food? Reference: https://github.com/facebookresearch/ParlAI/issues/2872 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
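For completeness, here is roughly what the same conversation format looks like on the transformers side, using the three-space delimiter measured above as tokens [228, 228]. This is a hedged sketch: the persona prefix formatting is taken from the ParlAI discussion in this thread and may drift between versions, and `facebook/blenderbot-400M-distill` is assumed to correspond to the `zoo:blender/blender_400Mdistill` checkpoint used above.

```python
from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

turns = [
    "your persona: I am from New York City",
    "Hi, where are you from?",
]
prompt = "   ".join(turns)  # three spaces between turns

inputs = tokenizer([prompt], return_tensors="pt")
reply_ids = model.generate(**inputs)
print(tokenizer.batch_decode(reply_ids, skip_special_tokens=True))
```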
transformers
9,456
closed
[EncoderDecoder] Make sure `use_cache` is set to `True` for all Bert2Bert, Roberta2Roberta by default
At the moment, loading a Bert2Bert with:

```python
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "bert-base-cased")
```

does not automatically set `use_cache` to True, so the user "silently" ends up with a much slower than optimal inference speed. Also, none of the Bert2Bert configs online have `use_cache` set to True. This should be changed at least for the heavily used Bert2Bert models. I'll try to take care of that in the next couple of days.

Also pinging @patil-suraj for information. Thanks @Narsil for bringing up the topic.
01-07-2021 10:24:02
01-07-2021 10:24:02
@patrickvonplaten Can we instead set `use_cache` to `True` by default in `generate`? That way we won't need to rely on `config` Right now, the `generate` [docstring](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L689) says that it defaults to `True`, but it's set to `None` https://github.com/huggingface/transformers/blob/28d74872cc049e0cbee3fafd15cbbabfe348ebd4/src/transformers/generation_utils.py#L618 <|||||>Hmm, that goes a bit against the philosophy because we never "set" any variables in `generate()`. We should do it in `EncoderDecoderConfig` and in `from_encoder_decoder_pretrained`. Note that all args in `generate()` are set to `None`, but default to the respective config defaults which should be set correctly<|||||>Also `use_cache` is newly introduced in bert/roberta config and is `True` by default, so even if the model's config file online doesn't have `use_cache` it should still be `True,` no? Could you maybe provide an example where the above issue occurs?<|||||>@patil-suraj, you're 100% right! I initially thought it's a problem because `EncoderDecoderConfig` does not have a `use_cache` param set to `True`, but it doesn't actually matter since `model.decoder.config.use_cache` will always be set to `True` by default which forces `use_cache` to be True in the decoder which makes it return the `past_key_values` => so all good then - thanks a lot for double-checking this :-)
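To double-check that conclusion in practice, a quick hedged sketch (using `cls_token_id` as the decoder start token is the usual Bert2Bert convention, not something this issue prescribes):

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-cased", "bert-base-cased"
)

# the decoder config is what forces past_key_values to be returned during generation
print(model.decoder.config.use_cache)  # True by default on recent versions

input_ids = tokenizer("Caching speeds up generation.", return_tensors="pt").input_ids
output_ids = model.generate(
    input_ids,
    decoder_start_token_id=tokenizer.cls_token_id,
    use_cache=True,  # being explicit costs nothing
)
```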
transformers
9,455
closed
Rename `nlp` variables into more appropriate names
# 🚀 Feature request

In the `pipelines` tests and documentation, the objects are recurrently named `nlp`; the goal is to rename them to something more appropriate.

<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->

## Motivation

```python
nlp = pipeline(task='conversational', model='XXX')
```

This is a bit pretentious, as the object does not actually cover all of NLP, and a better name would help users understand it too. For instance, the `conversational` task pipeline could be named `conversational_agent`. Or maybe a name that is still generic but less pretentious, like `pipe` or `pipeline` (caveat: those are less clear about what they intend to achieve).

The goal is to rename all occurrences of `nlp` to better names within both the tests and the documentation.

<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->

## Your contribution

The better names could be discussed here.

<!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
01-07-2021 09:19:36
01-07-2021 09:19:36
Thanks for creating the issue! Here are examples of what good names are in my humble opinion: ``` classifier = pipeline("sentiment-analysis") unmasker = pipeline("fill-mask") text_generator = pipeline("text-generation") ``` In short, something that describes the task it achieves.<|||||>Hi guys, I'm new and I'd like to start helping out. Can I take over this request? And to clarify, you're referring to the references primarily in the /tests directory and in the /docs directory?<|||||>Hi @terrenceedmonds , Thanks for taking this on ! Yes, for both directories, but docs are also found within docstrings within the `src/transformers/pipelines` directory.<|||||>Hii. Is this issue still open? I want to take this issue. Also, this will be my first contribution. Any help in getting me started will be highly appreciated.<|||||>Let's see if @terrenceedmonds wants to to finish it first (the PR was almost ready to merge).<|||||>Can I work on this issue, if @terrenceedmonds is not working on it?<|||||>Yes, you can go ahead!
transformers
9,454
closed
[Docs] Improve model sharing doc
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> #9431 was merged too early - I should have waited for @julien-c feedback. This PR corrects the docs accordingly. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-07-2021 09:17:01
01-07-2021 09:17:01
LGTM! Thanks for fixing
transformers
9,453
closed
Prophetnet optimization
# What does this PR do? This PR proposes an optimization for the ProphetNet model. The current implementation calculates an attention bias mask by looping through the position to unmask. It performs a high number of assignments (`ngram` * `sequence_length`) which can be in the order of ~1000. Single tensor assignments, especially on accelerators, are inefficient. This PR proposes a vectorized implementation which performs at most `ngram` assignments, which should be significantly lower than `ngram * sequence_length`. A quick experiment shown at https://gist.github.com/guillaume-be/e6b862c701fac1b54765e7af7e71c641 shows that: 1. this `ngram_attention_bias` calculation is very expensive, taking close to 230ms (!) on a GPU 2. the vectorized implementation is several orders of magnitude faster (the same calculation takes less than 1ms on the same example) ## Who can review? @patrickvonplaten maybe you would be a good candidate? I could not find anyone assigned for ProphetNet edit: pushed some further optimization, further accelerating by ~40%
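To illustrate the principle behind the change with a toy example (this is not the actual ProphetNet bias code, just the general pattern): replacing per-position assignments with a couple of tensor ops is what removes the Python-loop overhead, especially on accelerators.

```python
import torch

seq_len = 512
neg_inf = float("-inf")

# loop version: one small assignment per position (slow, especially on GPU)
mask_loop = torch.full((seq_len, seq_len), neg_inf)
for i in range(seq_len):
    mask_loop[i, : i + 1] = 0.0

# vectorized version: a single masked_fill over an upper-triangular boolean mask
upper = torch.ones(seq_len, seq_len).triu(1).bool()
mask_vec = torch.zeros(seq_len, seq_len).masked_fill(upper, neg_inf)

assert torch.equal(mask_loop, mask_vec)
```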
01-07-2021 09:12:33
01-07-2021 09:12:33
All slow tests are passing! Very nice PR - thanks a mille @guillaume-be
transformers
9,452
closed
Error when running run_clm.py on Python3.9/MacOS
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.1.1 - Platform: macOS-11.0-arm64-arm-64bit - Python version: 3.9.1 - PyTorch version (GPU?): 1.8.0a0+c20b916 (False) - Tensorflow version (GPU?): not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [yes ] the official example scripts: (give details below) The tasks I am working on is: * [no] an official GLUE/SQUaD task: language-modeling task; dataset: wikitext ## To reproduce Steps to reproduce the behavior: 1. install transformers from the master branch of version 4.1.1 2. run examples/language-modeling/run_clm.py 3. arguments are as following: `--model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir test-clm/` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` /Users/liyucheng/miniforge3/bin/python /Users/liyucheng/projects/comments_generation/run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir test-clm/ Traceback (most recent call last): File "/Users/liyucheng/projects/comments_generation/run_clm.py", line 388, in <module> main() File "/Users/liyucheng/projects/comments_generation/run_clm.py", line 145, in main parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/hf_argparser.py", line 52, in __init__ self._add_dataclass_arguments(dtype) File "/Users/liyucheng/miniforge3/lib/python3.9/site-packages/transformers/hf_argparser.py", line 85, in _add_dataclass_arguments elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List): File "/Users/liyucheng/miniforge3/lib/python3.9/typing.py", line 829, in __subclasscheck__ return issubclass(cls, self.__origin__) TypeError: issubclass() arg 1 must be a class Process finished with exit code 1 ``` This error is bizarre cause it only occurs on my OSX and I cannot reproduce it on my PC. I think the main reason is about the decorator `dataset`, but I am not sure about that. Thanks for any helps.
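One quick way to narrow this down is to print how the argument dataclasses are typed on each interpreter and compare (a minimal sketch; the interesting part is whether `field.type` shows up as a string, `typing.Optional[...]`, or `typing.Union[...]`):

```python
import dataclasses
import sys

from transformers import TrainingArguments

print(sys.version)
for f in dataclasses.fields(TrainingArguments)[:5]:
    # compare this output between the Mac/3.9 and PC/3.7 environments
    print(f.name, repr(f.type))
```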
01-07-2021 07:49:42
01-07-2021 07:49:42
It seems quite cryptic, but maybe @sgugger has already been confronted with that issue, so pinging him here.<|||||>Never seen this before. There is some code in the HfArgumentParser to make it work with Python 3.9 that was added by @julien-c, so maybe he has more insight?<|||||>I want to provide some more information about this issue. The field of the corresponding argument `--model_name_or_path` on my Mac/Python 3.9 looks like the following:
```
Field(name='model_name_or_path',type=typing.Optional[str],default=None,default_factory=<dataclasses._MISSING_TYPE object at 0x1065a6220>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "The model checkpoint for weights initialization.Don't set if you want to train a model from scratch."}),_field_type=_FIELD)
```
However, it is different on my PC/Python 3.7.9:
```
Field(name='model_name_or_path',type=typing.Union[str, NoneType],default=None,default_factory=<dataclasses._MISSING_TYPE object at 0x00000227D9888A48>,init=True,repr=True,hash=None,compare=True,metadata=mappingproxy({'help': "The model checkpoint for weights initialization.Don't set if you want to train a model from scratch."}),_field_type=_FIELD)
```
The critical difference is the `type` attribute. The function in `transformers/hf_argparser.py` does not handle `type=typing.Optional[str]` appropriately. But I have no idea why the `type` attribute has a different value when I run it on Mac/Python 3.9.1.<|||||>#9479 will fix this I believe.<|||||>Closed by #9479!
transformers
9,451
closed
[trainer] remove `--model_parallel`
Per @sgugger's request removing `--model_parallel` in trainer, as it was never tested or made to work with the trainer. We will get back to it in the future. This PR doesn't introduce breaking changes, since `--model_parallel` never worked (well other than in my MP PRs that have been parked for now, since they are very inefficient and we are looking for a better approach, rather than waste time on sorting those out). @LysandreJik, @sgugger
01-07-2021 04:59:14
01-07-2021 04:59:14
Thanks for putting it back. Since we're in a PR on this test alone, can we "fix" it to ignore the `args.model_parallel` argument? This argument will be removed/renamed (I'd prefer the first option as it's not useful) since peoples are confusing it with something that will enable `DataParallel`. The test can be replaced by `model.is_parallelizable and model.parallel` I believe, with the current API.<|||||>2 things: 1. you must be referring to `self.model_parallel`? But it will be always `False` unless `model.parallelize()` is called! So while you can rename the argument, you can't remove it, the user needs to activate this explicitly and the trainer then must activate MP with `model.parallelize()` Wrt `DataParallel`. Why are we turning it on automatically in first place? Why not make it manual and call it `--data_parallel` - no more confusion. Loud and clear: - `--model_parallel` - `--data_parallel` 2. As we discovered last night current trainer doesn't work at all with --model_parallel - see https://github.com/huggingface/transformers/pull/9211#discussion_r553172405 there is no activation of that parallel mode - nobody calls `model.parallelize()` so it's very broken I change this code last night to; ``` if self.args.model_parallel: if model.is_parallelizable: model.parallelize() else: raise ValueError( f"{model.__class__.__name__} implementation currently doesn't support model parallelism, therefore --model_parallel cl arg cannot be used" ) ``` and it doesn't work when I try: ``` rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 ./finetune_trainer.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval --do_train --evaluation_strategy=steps --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size 4 --per_device_train_batch_size 4 --predict_with_generate --eval_steps 25000 --save_steps 25000 --sortish_sampler --task translation_en_to_ro --test_max_target_length 128 --val_max_target_length 128 --warmup_steps 1 --n_train 2 --n_val 2 --n_test 2 --do_predict --model_parallel ``` It doesn't look it ever worked... i.e. MP works when setup up manually but doesn't work in trainer. p.s. I tagged you on that discussion - not sure if you saw it.<|||||>> i.e. MP works when setup up manually but doesn't work in trainer. > As we discovered last night current trainer doesn't work at all with --model_parallel - see #9211 (comment) there is no activation of that parallel mode - nobody calls model.parallelize() so it's very broken That's not a discovery on my side, that is exactly why I keep saying that the argument `--model_parallel` should be removed. It doesn't actually do anything and is confusing for the user. The call to `model.parallelize()` can always be done outside of `Trainer` IMO, which is why the test can be changed as suggested. We can think of integrating it inside the Trainer later, when the API is stable and actually used, for now I don't see the point of adding this. > Wrt DataParallel. Why are we turning it on automatically in first place? 
Why not make it manual and call it --data_parallel That would be a big breaking change in the API, and beginners actually want to have the parallelism work out of the box when they have several GPUs, so I don't see why change something that works.<|||||>> The call to model.parallelize() can always be done outside of Trainer IMO, which is why the test can be changed as suggested. It doesn't work > Wrt DataParallel. Why are we turning it on automatically in first place? Why not make it manual and call it --data_parallel > > That would be a big breaking change in the API, and beginners actually want to have the parallelism work out of the box when they have several GPUs, so I don't see why change something that works. OK, then the flag should be there with the default On? Surely a user should be able not to run DP and it's not possible at the moment.<|||||>OK, so I did remove `--model_parallel` - no problem in `trainer.py` since I used `model.is_parallelizable and model.parallel` instead - and I now understand that the point is that the user has to activate `model.parallelize()` themselves before passing the `model` to the trainer - i.e. no examples scripts will support MP at the moment. The problem is `training_args.py` - how do I deal with: ``` if not self.model_parallel: train_batch_size = per_device_batch_size * max(1, self.n_gpu) else: train_batch_size = per_device_batch_size ``` `self` is args here, and there is no `trainer` object. Suggestions? But I guess I need to first figure out how to make MP work in trainer at all, I doesn't look it was ever tried or tested. As it fails for me.<|||||>FWIW, `--model_parallel` works just fine with my Bart MP PR: https://github.com/huggingface/transformers/pull/9384#issuecomment-756300194 in case someone needs it. I suspect t5 MP wasn't tested/made to work with `generate` tools (beam search, etc.) - **edit** It works now in this PR https://github.com/huggingface/transformers/pull/9323 - but super slow in beam search! <|||||>OK, I committed the bulk of it, and @sgugger will push some magic to deal with `training_args.py` tests should be failing I think until he does that. <|||||>So now I can see I can jokingly blame my initial mistake on @sgugger since he wanted it removed all along and so I unconsciously did it during rebasing and he unconsciously saw this as the right thing to do during the review ;) It's all Freud's fault anyway ;)<|||||>I added a wrapped first, but it looked out of place so I introduced and documented a new attribute: `self.is_model_parallel` - hope it's loud and clear.<|||||>@sgugger, I must be doing something wrong - that docstring section of `Important attributes` that I started in model_wrapped PR gets wrapped all funny - so I tried to add bullets and then it gets all messed up, as it bunches it all up into one paragraph. If I add new lines then `make docs` fails. Your magic touch is needed. Thank you.<|||||>and here is why I removed `init=False` in https://github.com/huggingface/transformers/pull/9451/commits/a7a39216e99aae60238962ec3d6c96ecf23da42b The tests were failing with: ``` TypeError: __init__() got an unexpected keyword argument '_n_gpu' ``` https://circle-production-customer-artifacts.s3.amazonaws.com/picard/forks/5bdabdd888af1f000130874a/278[…]cc8b6d6c390aab800d0cc1350f731a19529ac82f48 <|||||>Thank you for fixing the docs, @sgugger!
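For reference, the pattern this leaves us with is the user opting into model parallelism explicitly before handing the model to the Trainer. A hedged sketch, assuming a multi-GPU machine and a model that implements `parallelize()` (such as T5 or GPT-2):

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# the attributes the Trainer now checks instead of a --model_parallel flag
print(model.is_parallelizable, model.model_parallel)  # True, False

model.parallelize()          # split the layers over the visible GPUs
print(model.model_parallel)  # True -> the Trainer treats the model as already placed
```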
transformers
9,450
closed
Some layers of the pretrained Albert model albert-base-v2 didn't match the architecture of AlbertForMaskedLM in the latest transformers 4.1.1.
albert: @LysandreJik

## Information

Model I am using is Albert:

The problem arises when using:
* [x] my own modified scripts: (give details below)

When I load the pretrained albert-base-v2 model, I find that some of the metadata of `model.state_dict` does not match the latest AlbertForMaskedLM model of transformers. It seems that the pretrained model was not retrained after the Albert code change. The AlbertAttention class in transformers 2.2.0 is:

```python
class AlbertAttention(BertSelfAttention):
    def __init__(self, config):
        super(AlbertAttention, self).__init__(config)

        self.output_attentions = config.output_attentions
        self.num_attention_heads = config.num_attention_heads
        self.hidden_size = config.hidden_size
        self.attention_head_size = config.hidden_size // config.num_attention_heads
        self.dropout = nn.Dropout(config.attention_probs_dropout_prob)
        ......
```

It has one dropout layer, `self.dropout`. However, the AlbertAttention class in transformers 4.1.1 is:

```python
class AlbertAttention(nn.Module):
    def __init__(self, config):
        super().__init__()
        if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"):
            raise ValueError(
                "The hidden size (%d) is not a multiple of the number of attention "
                "heads (%d)" % (config.hidden_size, config.num_attention_heads)
            )

        self.num_attention_heads = config.num_attention_heads
        self.hidden_size = config.hidden_size
        self.attention_head_size = config.hidden_size // config.num_attention_heads
        self.all_head_size = self.num_attention_heads * self.attention_head_size

        self.query = nn.Linear(config.hidden_size, self.all_head_size)
        self.key = nn.Linear(config.hidden_size, self.all_head_size)
        self.value = nn.Linear(config.hidden_size, self.all_head_size)

        self.attention_dropout = nn.Dropout(config.attention_probs_dropout_prob)
        self.output_dropout = nn.Dropout(config.hidden_dropout_prob)
        ......
```

It has two dropout layers, `self.attention_dropout` and `self.output_dropout`. When I load the pretrained Albert model, I find it still follows the architecture from transformers 2.2.0, so the unmatched layers cannot load pretrained parameters, which makes the model loaded from albert-base-v2 reach only very low accuracy during training.
01-07-2021 04:05:17
01-07-2021 04:05:17
Hello! When you speak of `unmatched layers`, do you mean the dropout layers? These layers have no weights. Furthermore, when setting the verbosity level to `INFO` and loading the `albert-base-v2` weights in current `master`'s `AlbertForMaskedLM`:
```py
>>> from transformers import AlbertForMaskedLM
>>> from transformers import logging
>>> logging.set_verbosity_info()
>>> model = AlbertForMaskedLM.from_pretrained("albert-base-v2")
[...]
All model checkpoint weights were used when initializing AlbertForMaskedLM.

All the weights of AlbertForMaskedLM were initialized from the model checkpoint at albert-base-v2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use AlbertForMaskedLM for predictions without further training.
```
This tells you that all weights were correctly initialized. It would seem the issue comes from somewhere else, or maybe I have misunderstood your issue? Could you expand on how you identify the "unmatched layers that cannot load pretrained parameters"?<|||||>Thanks. I made a mistake; the dropout layers truly have no weights. But I still have a question. I am using AlbertForMaskedLM on the CLOTH dataset. When I load the pretrained model albert-base-v2, the training accuracy starts from 0.28; when I load the pretrained model albert-xxlarge-v2, the training accuracy starts from 0.79. Is that normal? Thanks a lot.<|||||>I do not have any experience with the CLOTH dataset, but taking a quick look at it, it seems to be a cloze task, which is one of the pre-training objectives of the ALBERT model. It isn't surprising to me that the largest ALBERT model gets better results with no fine-tuning.<|||||>Yes, the larger pretrained model deserves better performance. But the base model starts from only 0.28, which is almost like randomly choosing an answer in the cloze task (random is 0.25). After convergence, the accuracy can reach 0.77. It seems the pretrained model doesn't provide any prior, just like training from scratch. Anyway, thanks a lot.
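As a side note, the point that the dropout modules carry no weights can be checked directly on the module tree. A small sketch (the tiny config values are arbitrary and only there to keep the random model small; the attribute path follows the 4.x implementation shown above):

```python
from transformers import AlbertConfig, AlbertForMaskedLM

config = AlbertConfig(hidden_size=64, num_attention_heads=4, intermediate_size=128, num_hidden_layers=2)
model = AlbertForMaskedLM(config)
attention = model.albert.encoder.albert_layer_groups[0].albert_layers[0].attention

# only query/key/value/dense/LayerNorm carry parameters; the dropout modules
# appear in the module tree but never need to match anything in the checkpoint
print([name for name, _ in attention.named_parameters()])
```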
transformers
9,449
closed
[make fixup] a more reliable version of branching point discovery
This PR replaces: ``` git merge-base --fork-point master ``` with: ``` git merge-base master HEAD ``` in `utils/get_modified_files.py` (which is used by `make fixup`) As I reported in https://github.com/huggingface/transformers/issues/9425 the former method sometimes doesn't work when used with `gh pr checkout` or `git-pr`, rendering the relatively recently added git ` --fork-point` feature unreliable. I have re-tested and the new way works for any of: 1. `gh pr checkout` 2. `git-pr` 3. `git pr` 4. normal local git branch So this is what we are doing now to get only the modified files of the current branch: ``` git diff --name-only $(git merge-base master HEAD) ``` If we get complex branches that have various re-merges we will want to find not the most recent ancestor which the above gives, but the oldest ancestor - after some research found this: https://stackoverflow.com/a/4991675/9201239, which suggests: ``` diff --changed-group-format='' <(git rev-list --first-parent "${1:-master}") <(git rev-list --first-parent "${2:-HEAD}") | head -1 ``` and in the simple case where there is just one common ancestor it will find it too. So let's keep this as an option if you find the current solution isn't satisfactory. Fixes: #9425 @LysandreJik
01-07-2021 03:53:39
01-07-2021 03:53:39
> why --fork-point doesn't work with the GitHub CLI

I wasn't able to figure out what exactly those 2 tools do differently, but yes, `--fork-point` only works when specific conditions are met in the reflog, and it fails when some entries (the sha we are after) are missing from it. I suppose `gh` and `git-pr` fetch just part of the reflog?

Apparently there are multiple causes. The first one is described at https://stackoverflow.com/a/53981615/9201239 and it links to a discussion with additional causes.<|||||>I see! Thank you, this is interesting!
transformers
9,448
closed
Cannot use TransfoXLLMHeadModel with the Trainer class because it returns a non-scalar loss
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Yes through Trainer class - Using distributed or parallel set-up in script?: No ### Who can help @TevenLeScao ## Information Model I am using: TransfoXLLMHeadModel The problem arises when using: - [ ] the official example scripts: (give details below) - [x] my own modified scripts: (give details below) The tasks I am working on is: - [x] my own task or dataset: (give details below) I am using a set of music data encoded as a language modeling problem. I have a Pytorch Dataset that returns a dictionary with the keys `input_ids`and `labels` from its `__getitem__` method which are 1D tensors that contain the example sequence to train on and predict. ## To reproduce Steps to reproduce the behavior: 1. Create a Pytorch dataset whose `__getitem__` method returns a dictionary with `input_ids` and `labels` with 1D Tensors ```python class ExampleDataset(Dataset): def __getitem__(self, index): sample = self.encodings[index] return {'input_ids': torch.tensor(sample.ids), 'labels': torch.tensor(sample.ids), 'mems': None} example_dataset = ExampleDataset() example_dataset[0] # { "input_ids": torch.tensor(0, 1,3 5, ... 330, 330), "labels": torch.tensor(0, 1, 3, 5, .. 330, 330) } # len: 512 len: 512 # '330' is pad token ``` 2. Instantiate the needed config and model ```python from transformers import TransfoXLConfig, TransfoXLLMHeadModel configuration = TransfoXLConfig( dropatt=0.1, vocab_size=len(tokenizer.get_vocab()), # Current size of vocab mem_len=512, # WordLevel tokenizers.Tokenizer d_inner=2048, n_layer=12, d_embed=512, n_head=8, d_head=64, cutoffs=[] ) test_conf = TransfoXLConfig(vocab_size=len(tokenizer.get_vocab())) model = TransfoXLLMHeadModel(configuration) model.resize_token_embeddings(len(tokenizer.get_vocab())) ``` 3. Instatiate TrainingArguments and Trainer, begin training ```python train_args = TrainingArguments( overwrite_output_dir=True, # Change this to continue training, ie load from checkpoint output_dir = 'example-train', do_train = True, num_train_epochs=2, per_device_train_batch_size=1, ) trainer = Trainer( model=model, args=train_args, train_dataset=example_dataset, ) trainer.train() ``` ```error /usr/local/lib/python3.6/dist-packages/transformers/modeling_transfo_xl.py:445: UserWarning: This overload of nonzero is deprecated: nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:766.) 
indices_i = mask_i.nonzero().squeeze() --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-11-3435b262f1ae> in <module> ----> 1 trainer.train() /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in train(self, model_path, trial) 773 tr_loss += self.training_step(model, inputs) 774 else: --> 775 tr_loss += self.training_step(model, inputs) 776 self._total_flos += self.floating_point_ops(inputs) 777 /usr/local/lib/python3.6/dist-packages/transformers/trainer.py in training_step(self, model, inputs) 1124 scaled_loss.backward() 1125 else: -> 1126 loss.backward() 1127 1128 return loss.detach() /usr/local/lib/python3.6/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 183 products. Defaults to ``False``. 184 """ --> 185 torch.autograd.backward(self, gradient, retain_graph, create_graph) 186 187 def register_hook(self, hook): /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 119 grad_tensors = list(grad_tensors) 120 --> 121 grad_tensors = _make_grads(tensors, grad_tensors) 122 if retain_graph is None: 123 retain_graph = create_graph /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in _make_grads(outputs, grads) 45 if out.requires_grad: 46 if out.numel() != 1: ---> 47 raise RuntimeError("grad can be implicitly created only for scalar outputs") 48 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format)) 49 else: RuntimeError: grad can be implicitly created only for scalar outputs ``` ## Expected behavior The model should be able to successfully complete training.
01-07-2021 03:48:06
01-07-2021 03:48:06
According to the error here, this seems to be because the output of the `TransfoXLLMHeadModel` is not a scalar. Taking a look at this model's loss output, named `losses`, it is an array of size `[bsz, tgt_len - 1]`. Maybe @TevenLeScao or @sgugger can chime in on what the best procedure would be here; from a quick look, the loss needs to be reduced. It seems this should be happening in the `Trainer` itself, but I'll let @sgugger decide.<|||||>`TransfoXLLMHeadModel` is not compatible with `Trainer` as it does not output a loss. The model should be fixed to output one loss and not the losses, like all the other ones (which would be a breaking change).<|||||>I see, thank you for your replies. So to make this model compatible, I would need to create a custom `Trainer` class which overrides the `training_step` method and reduces the `losses` output to a scalar? How should I reduce the set? Would it be simpler to just train with a different causal language model from the library?<|||||>I think it would be easier to use another model, in all honesty. If you really want this one, you can use a subclass of `Trainer` and override the `compute_loss` function. There is an example of this in the [documentation](https://huggingface.co/transformers/main_classes/trainer.html). I think taking the mean would be a proper reduction.<|||||>Thank you for your help. I've changed the title to better reflect the issue. You can close this ticket if you'd prefer this flagged a different way. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
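For anyone who does want to keep Transfo-XL with the Trainer, the suggested subclass could look roughly like this (an untested sketch; newer Trainer versions also pass a `return_outputs` argument to `compute_loss`):

```python
from transformers import Trainer

class TransfoXLTrainer(Trainer):
    def compute_loss(self, model, inputs):
        outputs = model(**inputs)
        # TransfoXLLMHeadModel returns per-token losses of shape [bsz, tgt_len - 1],
        # so reduce them to the scalar loss the training loop expects
        return outputs.losses.mean()
```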
transformers
9,447
closed
Urgent: please help with a memory issue during save
Hi, I am getting very large memory usage when saving/evaluating a T5 model, resulting in the job being killed. This is very urgent, as I will lose access to train the models. Please help. Thanks.
01-07-2021 03:00:03
01-07-2021 03:00:03
Hi @juliahane, it would be hard to answer without knowing the details. Could you post the command that you are using, env info, which T5 model, training details, etc.?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,446
closed
Transformers fast import part 2
# What does this PR do?

This is the second attempt to allow a fast import of transformers by deferring the imports of dependencies to when they are actually needed (almost but not quite, see below). It results in the line `import transformers` running in 239ms instead of 2.3s, so quite a nice speedup.

To do this, the main init is changed to have a big private dictionary that maps module names to public object names instead of directly importing those objects. A submodule or object is then only imported when explicitly requested, which means the line `import transformers` by itself doesn't import any of the dependencies.

This mechanism is incompatible with absolute imports inside the library, hence the big diff, as I had to change quite a few `from transformers.xxx import yyy` to `from .xxx import yyy`. Also, this misses the last piece needed to be completely efficient: the intermediate init (in models) should use the mechanism to avoid importing TensorFlow when we only request a PyTorch model. This will be done in another PR, as this one is already quite big by itself.

The script that creates the dummy objects needed some updates because it used to parse the init. I took this opportunity to also refactor the duplicated code. Obviously the templates also needed an update.

The rework of the init also makes it important to have the intermediate init of `models` be nonempty, otherwise things like
```
import transformers
auto_module = transformers.models.auto
```
will break. I don't think this is a big inconvenience (especially since the update template will fill this for the user).
01-06-2021 22:28:47
01-06-2021 22:28:47
Just followed a bug back to this PR, wanted to send a message here since it seemed relevant to ping @sgugger The check for version in file_utils.py: `if version.parse(sys.version) < version.parse("3.8"):` doesn't seem to be reliable for me (or on multiple machines and images I have) Specifically: ``` Python 3.8.5 (default, Aug 6 2020, 14:13:36) [GCC 9.3.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import sys >>> from packaging import version >>> sys.version '3.8.5 (default, Aug 6 2020, 14:13:36) \n[GCC 9.3.0]' >>> version.parse("3.8.5") <Version('3.8.5')> >>> version.parse("3.8.5") < version.parse("3.8") # expect false False >>> version.parse(sys.version) < version.parse("3.8") # expect false True ``` Instead, it seems more reliable/functional to not rely on `packaging.version` at all and instead do `sys.version_info < (3, 8)`. I can also put in an Issue if that's a more appropriate way to raise / flag a concern. Just thought i'd ping here since i was able to trace it back to this PR from today. <|||||>Oh, thanks for reporting! Will add this to #9474 which should be merged tomorrow.
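For reference, the shape of the suggested check (a sketch only; `importlib_metadata` here stands in for the backport package the real code falls back to on older interpreters, and is an assumption about what the version gate guards):

```python
import sys

# tuple comparison is immune to whatever extra build info sys.version carries
if sys.version_info < (3, 8):
    import importlib_metadata
else:
    import importlib.metadata as importlib_metadata

print(importlib_metadata.version("transformers"))
```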
transformers
9,445
closed
Loading fine-tuned models
Since the transformers update, I am unable to load a newly trained model.

OSError: Unable to load weights from pytorch checkpoint file for './model_source_450_v2' at './model_source_450_v2/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True.

I have tried setting `from_tf=True`, but the model still does not load successfully. The model was created in PyTorch.
01-06-2021 19:09:35
01-06-2021 19:09:35
Hello, could you please provide all the information requested in the issue template? The environment is important, so is your code. Which update did you do? To v4.1.1? From which version? Thank you.<|||||>Working in Google Colab, so the second to most recent version and then the most recent version. Using BertForSequenceClassification and fine-tuning the model I'm trying to output and reload. ``` #Save a trained model, configuration and tokenizer using `save_pretrained()`. model_to_save = model.module if hasattr(model, 'module') else model # Take care of distributed/parallel training model_to_save.save_pretrained(output_dir) tokenizer.save_pretrained(output_dir) # Copy the model files to a directory in your Google Drive. !cp -r './model_source_450_v2/' "./drive/My Drive" ``` Then this code for a GPU node on a supercomputer which works on previously created model files ``` from transformers import AutoTokenizer, AutoModel import torch import random # setting device to GPU if available device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') print('using device: ', device) print() """model_source_450_v2 files * config.json * pytorch_model.bin * special_tokens_map.json * tokenizer_config.json * vocab.txt """ #set modelpath modelpath = "./model_source_450_v2" #location of fully trained model from transformers import BertTokenizer, BertModel # Retrieve fine-tuned BERT. bert_model = BertModel.from_pretrained(modelpath, output_hidden_states = True) bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') bert_model.eval() bert_model.to(device) ```<|||||>Could you share your PyTorch versions as well? On both setups. PyTorch changed their saved models format, so you may have the issue of saving in a newer torch version (>= 1.6.0), and reloading in an older (<1.6.0) torch version<|||||>on supercomputer: "import torch; print(torch.__version__)" 1.7.0 on colab: 1.7.0+cu101<|||||>Hmmm, I'm having a hard time understanding what might be happening from the stack-trace. You wouldn't happen to have the entire stack-trace, would you? If you do, please share it. Is it possible the file was corrupted between saving and loading?<|||||>I tried resaving and had the same issue. ``` using device: cuda Traceback (most recent call last): File "/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/transformers/modeling_utils.py", line 951, in from_pretrained state_dict = torch.load(resolved_archive_file, map_location="cpu") File "/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/torch/serialization.py", line 587, in load with _open_zipfile_reader(opened_file) as opened_zipfile: File "/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/torch/serialization.py", line 242, in __init__ super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer)) RuntimeError: [enforce fail at inline_container.cc:145] . PytorchStreamReader failed reading zip archive: failed finding central directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "gpu_v2.py", line 77, in <module> bert_model = BertModel.from_pretrained(modelpath, File "/afs/crc.nd.edu/user/d/dheryadi/mcob/lib/python3.8/site-packages/transformers/modeling_utils.py", line 953, in from_pretrained raise OSError( OSError: Unable to load weights from pytorch checkpoint file for './model_source_450_v2' at './model_source_450_v2/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. 
```<|||||>Do you manage to reload the checkpoint without moving it to the new "supercomputer environment" ? The error seems to be with PyTorch rather than with Transformers given the error message: ``` PytorchStreamReader failed reading zip archive: failed finding central directory ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||> I have the same error, is anyone able to solve this problem line 477, in load_state_dict raise OSError( OSError: Unable to load weights from pytorch checkpoint file for '{Mydict}.cache\huggingface\transformers\4a74c6c9128ba518e61fbdf559d03e64b6bd0ad6db588419dfd865ace535942a.a48b7b4437be34e24274c9cf6cf57e2424d3f1eec537ec03b905e6f01d19ed77' at '{Mydict}.cache\huggingface\transformers\4a74c6c9128ba518e61fbdf559d03e64b6bd0ad6db588419dfd865ace535942a.a48b7b4437be34e24274c9cf6cf57e2424d3f1eec537ec03b905e6f01d19ed77'. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. Hey Please can you help me to solve this problem
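For anyone landing here with the same `PytorchStreamReader ... failed finding central directory` error: it usually means the `.bin` file was truncated or corrupted in transit, so comparing checksums on both machines and trying a plain `torch.load` is a quick way to confirm (a hedged sketch; the path below is the one from this thread):

```python
import hashlib

import torch

def md5(path):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

path = "./model_source_450_v2/pytorch_model.bin"
print(md5(path))  # run on both machines; the hashes should match
state_dict = torch.load(path, map_location="cpu")  # fails on a corrupted file
```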
transformers
9,444
closed
Fix init
# What does this PR do? "RobertaPreTrainedModel" is missing in models' __init__.py. It is needed, in case we need to create a subclass of the same like "RobertaForTokenClassification". <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Model Cards: @julien-c -->
01-06-2021 18:52:54
01-06-2021 18:52:54
Yes, let's wait for #9446 to be merged please :-)<|||||>@LysandreJik yes, I'm not sure about the CI errors, as I'm not very familiar with that part. I ran the tests in PyCharm and they passed. Can we check the logs to see if we can find a clue there?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
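To illustrate why exporting `RobertaPreTrainedModel` matters, here is a rough sketch of the kind of subclass the PR description has in mind. The class name and head are made up for illustration; only the import path and base classes are real:

```python
import torch.nn as nn

from transformers import RobertaConfig, RobertaModel
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel


class RobertaForCustomTokenTask(RobertaPreTrainedModel):
    """Hypothetical token-level head reusing the Roberta encoder."""

    def __init__(self, config: RobertaConfig):
        super().__init__(config)
        self.roberta = RobertaModel(config, add_pooling_layer=False)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids=None, attention_mask=None):
        outputs = self.roberta(input_ids, attention_mask=attention_mask)
        return self.classifier(outputs.last_hidden_state)
```

Once `RobertaPreTrainedModel` is exported in `__init__.py`, the second import above can become a plain `from transformers import RobertaPreTrainedModel`, which is what this PR enables.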
transformers
9,443
closed
[GenerationOutputs] Fix GenerationOutputs Tests
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> The `GenerationOutputs` PR: https://github.com/huggingface/transformers/pull/9150 was not rebased, so that the cicrle ci on master is red now. This PR fixes it. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-06-2021 17:30:07
01-06-2021 17:30:07
This PR actually made me fix two additional bugs: 1) `past_key_values` for `BertForCausalLM`; 2) T5 should not return cross attentions when used as an encoder-only model -> make sure an encoder-only model never has `config.is_decoder=True`.
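For context, a hedged sketch of the structured generation outputs whose tests this PR fixes; `t5-small` is only an example checkpoint, and the exact fields returned depend on the search strategy used:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")
outputs = model.generate(
    **inputs,
    return_dict_in_generate=True,  # return an output object instead of a bare tensor
    output_scores=True,
    output_attentions=True,
)

print(outputs.sequences)              # generated token ids
print(len(outputs.cross_attentions))  # one entry per generated step (encoder-decoder models only)
```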
transformers
9,442
closed
[examples/text-classification] `do_predict` for the test set of local datasets
# 🚀 Feature request

It seems that `run_glue.py` handles the train and validation sets for local CSV/JSON files, but it has no argument for the test set of a local dataset.

https://github.com/huggingface/transformers/blob/7a9f1b5c99e9a5d1772649d029acdf5160419239/examples/text-classification/run_glue.py#L90-L95

I think the script is intended to be used not only for train/validation but also for test, since the test sets of the `glue` tasks are downloaded as shown in https://huggingface.co/docs/datasets/loading_datasets.html#selecting-a-configuration, and the script already has a `--do_predict` option for them. If there is no particular reason for not being able to read the test set of a local dataset, would it be ok for me to add the feature? Or is there some intention behind the current implementation?

## Motivation

I'd like to train, validate, and test on my own local dataset.

## Your contribution

I think some modifications like the ones below may be enough to add the feature.

```python
test_file: Optional[str] = field(
    default=None, metadata={"help": "A csv or a json file containing the test data."}
)
```

```python
datasets = load_dataset(
    "csv",
    data_files={"train": data_args.train_file, "validation": data_args.validation_file, "test": data_args.test_file},
)
```

```python
# if data_args.task_name is not None:
#     test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"]
test_dataset = datasets["test_matched" if data_args.task_name == "mnli" else "test"]
```

Thank you in advance.
01-06-2021 16:19:37
01-06-2021 16:19:37
As long as it's kept simple (we want short and focused example scripts, so that they are easy to understand and tweak), I don't mind adding this feature. Feel free to open a PR with your suggestions!<|||||>Thanks, I'll try a modification as simple as possible, and if it can fix this issue without making the example difficult to understand, I’ll open a PR!
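If this lands, the prediction step itself could look roughly like the sketch below. This is hedged: `trainer`, `training_args` and `test_dataset` are assumed to be the objects `run_glue.py` already defines, and the `argmax` only applies to classification tasks:

```python
import numpy as np

if training_args.do_predict:
    # test_dataset is the "test" split loaded from the local CSV/JSON file
    predictions = trainer.predict(test_dataset=test_dataset).predictions
    predictions = np.argmax(predictions, axis=1)
    for index, prediction in enumerate(predictions):
        print(f"{index}\t{prediction}")
```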
transformers
9,441
closed
Fast transformers import part 1
# What does this PR do? This PR is the first step for a fast `import transformers`. It changes all the test for `is_xxx_available` to avoid importing `xxx` and makes sure all integrations are only imported when needed (apart from comet ml which needs to be imported first). The second test will be a bit more complex, to avoid importing torch and tf unless necessary, and will touch all inits like in [this repo](https://github.com/sgugger/lazy_init).
01-06-2021 16:00:06
01-06-2021 16:00:06
PR looks very clean! I'm no real import expert, so I'll leave it up to @LysandreJik and @sgugger :-) But I very much welcome this change. I think it's even cleaner now that the libraries are no longer public attributes.<|||||>rebased this into the deepspeed branch, and all was good until I tried:
```
from .integrations import is_deepspeed_available
```
inside `training_args.py`, and got:
```
Traceback (most recent call last):
  File "./finetune_trainer.py", line 23, in <module>
    from transformers import (
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/__init__.py", line 2092, in __getattr__
    return super().__getattr__(name)
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/file_utils.py", line 1452, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/__init__.py", line 2086, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/trainer_seq2seq.py", line 24, in <module>
    from .trainer import Trainer
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/trainer.py", line 32, in <module>
    from .integrations import (  # isort: split
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/integrations.py", line 55, in <module>
    from .trainer_callback import TrainerCallback  # noqa: E402
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/trainer_callback.py", line 28, in <module>
    from .training_args import TrainingArguments
  File "/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/training_args.py", line 24, in <module>
    from .integrations import is_deepspeed_available
ImportError: cannot import name 'is_deepspeed_available' from partially initialized module 'transformers.integrations' (most likely due to a circular import) (/mnt/nvme1/code/huggingface/transformers-deepspeed/src/transformers/integrations.py)
```
I fixed that by moving the import to the middle of the file, where I needed it.<|||||>When I run `from transformers import DistilBertTokenizerFast` I see imports of tensorflow and tensorboard, so what was the purpose of this PR? I only need a tokenizer, not the bloatware that every model in transformers uses. transformers-4.14.1<|||||>Thanks for raising the issue @evrial, this patch https://github.com/huggingface/transformers/pull/14855 will be released in v4.15 sometime this week. Please open a new issue with the issue you're facing next time so that we may get to it faster.<|||||>> Thanks for raising the issue @evrial, this patch #14855 will be released in v4.15 sometime this week.
>
> Please open a new issue with the issue you're facing next time so that we may get to it faster.

Thanks! God bless and merry Christmas!
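For readers wondering how the lazy part works, here is a stripped-down sketch of the module-level `__getattr__` pattern (PEP 562) that the linked `lazy_init` repo and the follow-up PR rely on. The mapping below is a toy example, not the real `transformers` layout:

```python
import importlib

# maps public names to the submodule that actually defines them
_class_to_module = {
    "BertModel": "models.bert.modeling_bert",
    "BertTokenizerFast": "models.bert.tokenization_bert_fast",
}


def __getattr__(name):
    # only called for names not found through the normal lookup, so the heavy
    # submodules (and torch/tensorflow behind them) are imported on first use
    if name in _class_to_module:
        module = importlib.import_module("." + _class_to_module[name], __name__)
        return getattr(module, name)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

Placed in a package `__init__.py`, this keeps the bare `import transformers` cheap while still letting `from transformers import BertModel` work on first use.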
transformers
9,440
closed
Remove nested lxmert
# What does this PR do? Remove duplicate of LXMERT
01-06-2021 15:57:57
01-06-2021 15:57:57
transformers
9,439
closed
Adding Stochastic Weight Averaging to transformer optimizers
# 🚀 Feature request I would like to train my models with the SWA optimizer. According to this [paper](https://arxiv.org/pdf/1803.05407.pdf), SWA leads to better models and wider optima. ## Motivation As humans, we all want better results :). I think adding this feature would lead to better models at no extra cost, and it may be easy to implement.
01-06-2021 15:10:25
01-06-2021 15:10:25
In the meantime, I have started working on adding SWA to huggingface. If I get better results after some experiments, I will create a PR. You can check my work [here](https://github.com/hasansalimkanmaz/transformers/tree/add-SWA-optimizer). Any feedback will be appreciated.<|||||>Based on my experiments, I couldn't produce better results with SWA, so I am closing this issue. My implementation is quite custom and I didn't go through all the tests, so I will not create a PR given the lack of clear benefits.
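For anyone who wants to try this independently, here is a minimal sketch using the SWA utilities that ship with recent PyTorch (`torch.optim.swa_utils`) around a Hugging Face model. The schedule values are illustrative, and `train_dataloader` is assumed to yield batches that include `labels`:

```python
import torch
from torch.optim.swa_utils import AveragedModel, SWALR
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

swa_model = AveragedModel(model)      # keeps the running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=1e-5)
swa_start = 2                         # epoch after which averaging kicks in

for epoch in range(4):
    for batch in train_dataloader:    # assumed: dicts of tensors with a "labels" key
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    if epoch >= swa_start:
        swa_model.update_parameters(model)
        swa_scheduler.step()

# swa_model.module now holds the averaged weights
```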
transformers
9,438
closed
Doc styling util adds parasite new lines
## Environment info - `transformers` version: 4.2.0dev0 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: Nope - Using distributed or parallel set-up in script?: Nope ### Who can help @sgugger ## Information Running the python util to style docs adds parasite new lines in every single docstring. See: ```bash $ python utils/style_doc.py src/transformers docs/source --max_len 119 --check_only Traceback (most recent call last): File "utils/style_doc.py", line 491, in <module> main(*args.files, max_len=args.max_len, check_only=args.check_only) File "utils/style_doc.py", line 479, in main raise ValueError(f"{len(changed)} files should be restyled!") ValueError: 345 files should be restyled! ``` See this commit for an example of what it does: https://github.com/huggingface/transformers/pull/9150/commits/b4dedd5ca25f043c66d12c774fa00a34c74dffb2 ## To reproduce Steps to reproduce the behavior: 1. Checkout and update master branch 2. run `python utils/style_doc.py src/transformers docs/source --max_len 119 --check-only` from transformers root Output: ```python Traceback (most recent call last): File "utils/style_doc.py", line 491, in <module> main(*args.files, max_len=args.max_len, check_only=args.check_only) File "utils/style_doc.py", line 479, in main raise ValueError(f"{len(changed)} files should be restyled!") ValueError: 345 files should be restyled! ``` It might have something to do with Windows or a particular setup of my machine because behavior cannot be reproduced by @patrickvonplaten. ## Expected behavior On master branch, documentation should not need to be restyled
01-06-2021 13:23:54
01-06-2021 13:23:54
@sgugger do you maybe have an idea here? Now that I see that it's Window I think this could be the reason<|||||>Mmm, @jplu didn't have any issue with this I believe. Sorry, reading again, you're not running make style but launching the script directly. There is nothing there to properly support Windows special line endings so it would probably require rewriting it from scratch to fix this. Can you use WSL for the styling?<|||||>I can run properly `python utils/style_doc.py src/transformers docs/source --max_len 119` on Windows (not on WSL) without any error. In order to properly run all the make targets on Windows I use the steps given in the [CONTRIBUTING readme](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#develop-on-windows).<|||||>Ah, there was one special magic option from GitHub for the line endings that could help, @LysandreJik do you remember which one? We had this problem with Niels on TAPAS.<|||||>It is the `core.autocrlf` config, you can see how to set it in the doc https://git-scm.com/book/fr/v2/Personnalisation-de-Git-Configuration-de-Git<|||||>Asked Lysandre and running `git config core.autocrlf false` solved the issue last time a contributor ran into it. @SBrandeis could you test if it does solve the issue for you? If that's the case, we'll add it to the `CONTRIBUTING` guide.<|||||>Setting `git config core.autocrlf` to `false` did not solve my issue, neither did running the python util from WSL. <|||||>Just tried on my Windows laptop and I'm unable to reproduce, the line runs just fine on my side :-/, so it must be something else. The `newline="\n"` that jplu added everywhere we open a file should make it so that there is no different line endings problem in the first place, but there is still something here... From the diff, it comes from [this regex](https://github.com/huggingface/transformers/blob/1c19b423bf274a465f95725a79819bf82f71329e/utils/style_doc.py#L417) but there should be no weird `\r` from Windows at this stage. Anyhow, will rework the regex as a for loop and it should work everywhere hopefully.<|||||>(Not sure the PR above will actually fix the issue since I can't reproduce, please confirm if it does or no @SBrandeis )<|||||>Hi @sgugger, thanks a lot for the PR. Unfortunately, it does not solve the issue on my side (the style_doc util still updates 196 files). Since none of @jplu, @patrickvonplaten and you can reproduce the issue, it must be related to my particular setup. Not sure what is the cause though, but I'll let you know if I figure this out !<|||||>@SBrandeis as we are both on Windows, do you want we check that together offline?<|||||>I actually forgot to push the changes in #9488 because it was Friday evening and my brain was dead :-/ Will open a new PR.<|||||>@SBrandeis #9516 actually contains the code I wanted you to test, so if you could try again on this branch?<|||||>@jplu helped me troubleshoot this (thanks @jplu !) Turns out my `git` was misconfigured, running `git config --global core.autocrlf input` solved my issue 😓 I'll add a note in the `CONTRIBUTING.md` guide.
transformers
9,437
closed
Can't find pretrained model for TFPegasusForConditionalGeneration
## Environment info - `transformers` version: 4.1.1 - Platform: Windows-10-10.0.18362-SP0 - Python version: 3.8.0 - PyTorch version (GPU?): 1.7.0+cu101 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Pegasus: @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): pegasus-xsum The problem arises when using: * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] my own task or dataset: (give details below) ## To reproduce Try to download model for TFPegasusForConditionalGeneration Steps to reproduce the behavior: 1. Choose pegasus-xsum model 2. Fetch pretrained model 3. ``` import tensorflow as tf from transformers import TFPegasusForConditionalGeneration, PegasusTokenizer src_text = [ """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.""" ] model_name = 'google/pegasus-xsum' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = TFPegasusForConditionalGeneration.from_pretrained(model_name) batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest') translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ``` ## Expected behavior The model should download. Instead the model cannot be found `404 Client Error: Not Found for url: https://huggingface.co/google/pegasus-xsum/resolve/main/tf_model.h5 `
01-06-2021 13:03:27
01-06-2021 13:03:27
Hey @demongolem, yes sadly those models were not yet uploaded in TF. Could you instead just run: ```python import tensorflow as tf from transformers import TFPegasusForConditionalGeneration, PegasusTokenizer src_text = [ """ PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow.""" ] model_name = 'google/pegasus-xsum' tokenizer = PegasusTokenizer.from_pretrained(model_name) model = TFPegasusForConditionalGeneration.from_pretrained(model_name, from_pt=True) batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors="tf") translated = model.generate(**batch) tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True) ``` Also note that you have to pass `return_tensors="tf"` in the tokenizer.<|||||>Thanks @patrickvonplaten . The above code does do as necessary for me. Thanks for pointing out the `return_tensors` part as well, I missed that one.
transformers
9,436
closed
RuntimeError: The size of tensor a (128) must match the size of tensor b (32) at non-singleton dimension 1
# 📚 Migration from pytorch-pretrained-bert to transfomers ## Information <!-- Important information --> Model I am using (Bert, XLNet ...): bert Language I am using the model on (English, Chinese ...):english The problem arises when using: * [ ] the official example scripts: (give details below) when I use this model my code work correctly ```py from pytorch_pretrained_bert.modeling import BertForSequenceClassification model = BertForSequenceClassification.from_pretrained(args.bert_model, num_labels=num_labels) ``` but when I change to below I get the error, what Is the problem ```py from transformers import BertTokenizer, BertForSequenceClassification model = BertForSequenceClassification.from_pretrained('bert-base-cased', num_labels=num_labels) ``` ``` error : Traceback (most recent call last): File "paragraph_selection/train.py", line 293, in <module> loss = model(input_ids, segment_ids, input_mask, label_ids) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 1375, in forward return_dict=return_dict, File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 862, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.6/dist-packages/transformers/models/bert/modeling_bert.py", line 204, in forward embeddings += position_embeddings RuntimeError: The size of tensor a (128) must match the size of tensor b (32) at non-singleton dimension 1 ``` * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## Details <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> ## Environment info colab - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): ## Checklist - [ ] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [ ] I checked if a related official extension example runs on my machine.
01-06-2021 11:51:58
01-06-2021 11:51:58
Hi, could you please provide either: - a reproducible code example - some more information related to your script. The error doesn't happen at the model load, but here: `File "paragraph_selection/train.py", line 293, in <module>`. - What is your `num_labels` It's complicated to identify the issue here, but could you try replacing the following line: ```py loss = model(input_ids, segment_ids, input_mask, label_ids) ``` with: ```py loss = model(input_ids, attention_mask=input_mask, token_type_ids=segment_ids, labels=label_ids) ```<|||||>yes, your code is correct thank you<|||||>**RuntimeError: The size of tensor a (128) must match the size of tensor b (32) at non-singleton dimension 3** Main.py import torch.nn as nn import torch from torchvision import models from utils import save_net,load_net class CSRNet(nn.Module): def __init__(self, load_weights=False): super(CSRNet, self).__init__() self.seen = 0 self.frontend_feat = [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 'M', 512, 512, 512, 'M', 512,512,512,'M'] self.backend_feat = [512, 512, 512,256,128,64] self.frontend = make_layers(self.frontend_feat) self.backend = make_layers(self.backend_feat,in_channels = 512,dilation = True) self.output_layer = nn.Conv2d(64, 1, kernel_size=1) if not load_weights: mod = models.vgg16(pretrained = True) self._initialize_weights() for i in range(len(self.frontend.state_dict().items())): list(self.frontend.state_dict().items())[i][1].data[:] = list(mod.state_dict().items())[i][1].data[:] def forward(self,x): x = self.frontend(x) x = self.backend(x) x = self.output_layer(x) return x def _initialize_weights(self): for m in self.modules(): if isinstance(m, nn.Conv2d): nn.init.normal_(m.weight, std=0.01) if m.bias is not None: nn.init.constant_(m.bias, 0) elif isinstance(m, nn.BatchNorm2d): nn.init.constant_(m.weight, 1) nn.init.constant_(m.bias, 0) def make_layers(cfg, in_channels = 3,batch_norm=False,dilation = False): if dilation: d_rate = 2 else: d_rate = 1 layers = [] for v in cfg: if v == 'M': layers += [nn.MaxPool2d(kernel_size=2, stride=2)] else: conv2d = nn.Conv2d(in_channels, v, kernel_size=3, padding=d_rate,dilation = d_rate) if batch_norm: layers += [conv2d, nn.BatchNorm2d(v), nn.ReLU(inplace=True)] else: layers += [conv2d, nn.ReLU(inplace=True)] in_channels = v return nn.Sequential(*layers) <|||||>Train.py import sys import os import warnings from model import CSRNet from utils import save_checkpoint import torch import torch.nn as nn from torch.autograd import Variable from torchvision import datasets, transforms import numpy as np import argparse import json import cv2 import dataset import time parser = argparse.ArgumentParser(description='PyTorch CSRNet') parser.add_argument('train_json', metavar='TRAIN', help='path to train json') parser.add_argument('test_json', metavar='TEST', help='path to test json') parser.add_argument('--pre', '-p', metavar='PRETRAINED', default=None,type=str, help='path to the pretrained model') parser.add_argument('gpu',metavar='GPU', type=str, help='GPU id to use.') parser.add_argument('task',metavar='TASK', type=str, help='task id to use.') def main(): global args,best_prec1 best_prec1 = 1e6 args = parser.parse_args() args.original_lr = 1e-7 args.lr = 1e-7 args.batch_size = 1 args.momentum = 0.95 args.decay = 5*1e-4 args.start_epoch = 0 args.epochs = 400 args.steps = [-1,1,100,150] args.scales = [1,1,1,1] args.workers = 4 args.seed = time.time() args.print_freq = 30 with open(args.train_json, 'r') as outfile: train_list = json.load(outfile) with open(args.test_json, 
'r') as outfile: val_list = json.load(outfile) os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu torch.cuda.manual_seed(args.seed) model = CSRNet() model = model.cuda() criterion = nn.MSELoss(size_average=False).cuda() optimizer = torch.optim.SGD(model.parameters(), args.lr, momentum=args.momentum, weight_decay=args.decay) if args.pre: if os.path.isfile(args.pre): print("=> loading checkpoint '{}'".format(args.pre)) checkpoint = torch.load(args.pre) args.start_epoch = checkpoint['epoch'] best_prec1 = checkpoint['best_prec1'] model.load_state_dict(checkpoint['state_dict']) optimizer.load_state_dict(checkpoint['optimizer']) print("=> loaded checkpoint '{}' (epoch {})" .format(args.pre, checkpoint['epoch'])) else: print("=> no checkpoint found at '{}'".format(args.pre)) for epoch in range(args.start_epoch, args.epochs): adjust_learning_rate(optimizer, epoch) train(train_list, model, criterion, optimizer, epoch) prec1 = validate(val_list, model, criterion) is_best = prec1 < best_prec1 best_prec1 = min(prec1, best_prec1) print(' * best MAE {mae:.3f} ' .format(mae=best_prec1)) save_checkpoint({ 'epoch': epoch + 1, 'arch': args.pre, 'state_dict': model.state_dict(), 'best_prec1': best_prec1, 'optimizer' : optimizer.state_dict(), }, is_best,args.task) def train(train_list, model, criterion, optimizer, epoch): losses = AverageMeter() batch_time = AverageMeter() data_time = AverageMeter() train_loader = torch.utils.data.DataLoader( dataset.listDataset(train_list, shuffle=True, transform=transforms.Compose([ transforms.ToTensor(),transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]), train=True, seen=model.seen, batch_size=args.batch_size, num_workers=args.workers), batch_size=args.batch_size) print('epoch %d, processed %d samples, lr %.10f' % (epoch, epoch * len(train_loader.dataset), args.lr)) model.train() end = time.time() for i,(img, target)in enumerate(train_loader): data_time.update(time.time() - end) img = img.cuda() img = Variable(img) output = model(img) target = target.type(torch.FloatTensor).unsqueeze(0).cuda() target = Variable(target) loss = criterion(output, target) losses.update(loss.item(), img.size(0)) optimizer.zero_grad() loss.backward() optimizer.step() batch_time.update(time.time() - end) end = time.time() if i % args.print_freq == 0: print('Epoch: [{0}][{1}/{2}]\t' 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\t' 'Data {data_time.val:.3f} ({data_time.avg:.3f})\t' 'Loss {loss.val:.4f} ({loss.avg:.4f})\t' .format( epoch, i, len(train_loader), batch_time=batch_time, data_time=data_time, loss=losses)) def validate(val_list, model, criterion): print ('begin test') test_loader = torch.utils.data.DataLoader( dataset.listDataset(val_list, shuffle=False, transform=transforms.Compose([ transforms.ToTensor(),transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ]), train=False), batch_size=args.batch_size) model.eval() mae = 0 for i,(img, target) in enumerate(test_loader): img = img.cuda() img = Variable(img) output = model(img) mae += abs(output.data.sum()-target.sum().type(torch.FloatTensor).cuda()) mae = mae/len(test_loader) print(' * MAE {mae:.3f} ' .format(mae=mae)) return mae def adjust_learning_rate(optimizer, epoch): """Sets the learning rate to the initial LR decayed by 10 every 30 epochs""" args.lr = args.original_lr for i in range(len(args.steps)): scale = args.scales[i] if i < len(args.scales) else 1 if epoch >= args.steps[i]: args.lr = args.lr * scale if epoch == args.steps[i]: break else: break for param_group in 
optimizer.param_groups: param_group['lr'] = args.lr class AverageMeter(object): """Computes and stores the average and current value""" def __init__(self): self.reset() def reset(self): self.val = 0 self.avg = 0 self.sum = 0 self.count = 0 def update(self, val, n=1): self.val = val self.sum += val * n self.count += n self.avg = self.sum / self.count if __name__ == '__main__': main() <|||||>> Hi, could you please provide either: > > * a reproducible code example > * some more information related to your script. The error doesn't happen at the model load, but here: `File "paragraph_selection/train.py", line 293, in <module>`. > * What is your `num_labels` > > It's complicated to identify the issue here, but could you try replacing the following line: > > ```python > loss = model(input_ids, segment_ids, input_mask, label_ids) > ``` > > with: > > ```python > loss = model(input_ids, attention_mask=input_mask, token_type_ids=segment_ids, labels=label_ids) > ``` Thank you very much for your help
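To make the migration point above explicit, here is a small self-contained sketch of the new-style call. Note that in recent `transformers` versions the model returns an output object, so the loss is read from `outputs.loss` instead of being the bare return value that `pytorch-pretrained-bert` gave you. The number of labels and the example sentence are placeholders:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

inputs = tokenizer("A short example sentence", return_tensors="pt",
                   padding="max_length", max_length=128)
labels = torch.tensor([1])

# keyword arguments avoid mixing up attention_mask and token_type_ids,
# which is what caused the size-mismatch error above
outputs = model(**inputs, labels=labels)
loss, logits = outputs.loss, outputs.logits
```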
transformers
9,435
closed
Fix URLs to TAPAS notebooks
# What does this PR do? As I updated the repository structure of my [Transformers tutorials](https://github.com/NielsRogge/Transformers-Tutorials) repository, some URLs related to TAPAS need to be updated. Thanks @mrm8488 for already updating one URL in #9413.
01-06-2021 10:00:56
01-06-2021 10:00:56
transformers
9,434
closed
Making Conversation possible to create directly a full conversation
# What does this PR do? - Currently conversations contain some state (`conversation.history` namely). - There is no obvious way to create a conversation from pure logs aside from mutating state. - The actual result is still buggy because `history` is not correctly updated by the Conversation object. Objectives of this PR: - Enable creation of a Conversation from existing exchanges. ```Conversation("Why do you recommend it ?", past_user_inputs=["Can you recommend a book ?"], generated_responses=["I recommend reading the Lord of the Rings."])``` - Keep relatively close to previous code. - Fix the bug, that simply discarded history if you created a Conversation through mutation of state. (**Could be backward incompat**) - `history` renamed `_history` + `_index` as it's now treated as a cache variable (namely to prevent recreating tokens of the conversation all the time. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @mfuntowicz @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-06-2021 09:24:32
01-06-2021 09:24:32
That's really cool! Also pinging @guillaume-be here as I believe he is the original author of the pipeline :-)<|||||>Also, @Narsil do you know if it's possible to have a chat bot widget in the inference API for this pipeline? I think it would be really nice to play around with Blenderbot and DialoGPT<|||||>> Also, @Narsil do you know if it's possible to have a chat bot widget in the inference API for this pipeline? I think it would be really nice to play around with Blenderbot and DialoGPT

@patrickvonplaten it's in the pipes, but I've not yet created the widget for huggingface.co, the `api-inference` is ready though. @patrickvonplaten, @sgugger can you please re-review. There is a sort of major bug, where we used `tokenizer.encode(inputs, add_special_tokens=False)` so that BOS and EOS were **not** added on models that required them (instead EOS was added "manually" by the pipeline, leading to poor results on Blenderbot for instance). Ping @mfuntowicz to make sure we can safely remove that or if there was a strong reason for bypassing tokenizer logic there.<|||||>Also changed the tokenizer behavior to use a real one.<|||||>Thanks for looping me in! It looks like there are a lot of changes, a few comments on my side:
- regarding the change from
```
inputs = self.tokenizer(inputs, add_special_tokens=False, padding=False).get("input_ids", [])
for input in inputs:
    input.append(self.tokenizer.eos_token_id)
```
to:
```
inputs = self.tokenizer(inputs, **kwargs).get("input_ids", [])
```
are you sure that the behaviour remains correct for DialoGPT? As far as I know DialoGPT uses the GPT2 tokenizer, which does not add an `eos` automatically at the end of the encoded input. Tests for BlenderBot were added in https://github.com/huggingface/transformers/blob/74f6f91a9dc944b1f8872a0d22abd60050aa41bc/tests/test_pipelines_conversational.py#L102 and I did not observe poor performance back then - did something change? Also note that BlenderBot does not seem to require a BOS token (https://github.com/huggingface/transformers/blob/f33a6f34461fea61b579a7ec732fcd174b2b41cd/src/transformers/models/blenderbot/tokenization_blenderbot.py#L57)
- The `if len(new_input) > max_length - self.min_length_for_response` was set up to let the history leave some space for future responses. Is this now done as part of the further history capabilities?
- Could you please clarify the need for `_get_history` instead of accessing the history directly?
- Regarding the title of the PR, if you are interested I added this feature to the Rust version of this pipeline a few months ago. The approach seems simpler than the changes proposed here, am I missing something? See https://github.com/guillaume-be/rust-bert/blob/7890d2daffea8e2c792a2e8930294e403b2321dd/src/pipelines/conversation.rs#L416 for reference (I see from your activity that you are familiar with Rust!)

Thanks!<|||||>Hi @guillaume-be, those changes do not belong in this PR anyway. I'll make a separate PR following this one; we should continue the discussion over there.<|||||>It seems the tests are failing on `master` since this merge: https://app.circleci.com/pipelines/github/huggingface/transformers/18333/workflows/72042bfe-4d42-42de-8389-bc0d1cc5494c/jobs/148896<|||||>Yes, a rebase before testing was missing; another commit introduced a new warning, which broke the test. I am not sure what the strategy is concerning warnings and tests. I've tried to be conservative (meaning explicitly testing them), but I know it might become cumbersome at some point, and I can remove those checks if needed.
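As a usage illustration of the constructor this PR adds, here is a hedged sketch; the DialoGPT checkpoint is only an example and any conversational model should work:

```python
from transformers import Conversation, pipeline

chatbot = pipeline("conversational", model="microsoft/DialoGPT-medium")

# a conversation rebuilt from existing logs, as enabled by this PR
conversation = Conversation(
    "Why do you recommend it?",
    past_user_inputs=["Can you recommend a book?"],
    generated_responses=["I recommend reading the Lord of the Rings."],
)

conversation = chatbot(conversation)
print(conversation.generated_responses[-1])
```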
transformers
9,433
closed
Removing duplicated code for Translation,Summarization and Text2TextGeneration pipelines
# What does this PR do? `TranslationPipeline`, `SummarizationPipeline` and `Text2TextGenerationPipeline` share quite a bit of code for the generation part. This PR aims to remove that code duplication to prevent future errors in argument handling while preserving, documentation for all methods and functions and the full behavior. Translation and Summarization now inherit from Text2TextGenerationPipeline. They retain their own docstrings to be more readable in the docs. New function `check_inputs` has appeared which does all the current variation between the 3 classes, basically by raising different warnings based on inputs and underlying model config. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @sgugger Edit: Sorry about the diff, just gave a look, it's totally unreadable mostly because I reordered the classes so that the Base classe (Text2TextGenerationPipeline) is before the subclasses. I'll happily switch that back to make review on the actual code easier (and maybe change back later the order for cleaner code in the end) Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-06-2021 09:09:09
01-06-2021 09:09:09
Feel free to merge whenever @Narsil
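Since Summarization and Translation now inherit from `Text2TextGenerationPipeline`, all three tasks expose the same call pattern; a quick hedged sketch (the checkpoints are examples only):

```python
from transformers import pipeline

# all three tasks now share the Text2TextGenerationPipeline code path
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")
translator = pipeline("translation_en_to_de", model="t5-small")
text2text = pipeline("text2text-generation", model="t5-small")

article = "A fairly long article would go here ..."
print(summarizer(article, max_length=60, min_length=10))
print(translator("How are you?"))
print(text2text("translate English to German: How are you?"))
```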
transformers
9,432
closed
Enable TruncationStrategy override for pipelines
# What does this PR do? Right now truncation argument for tokenizers was not overridable, which leads to poor UX on some pipelines, most notably Summarization. Summaries trigger an error on text that end up with too many tokens for the underlying model. Current strategy is just to enable the argument to be overrided as truncating by default is not necessarily good either. More complex strategies are required to "solve" the problem (chunk original text into chunk of ~max_length, drop if some chunk is small enough <0.1 max_length?, then concatenate result summaries ?). The current PR is a small step in that direction. There should not be any backward incompatibilities with current changes. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> @LysandreJik @patrickvonplaten ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-06-2021 08:52:05
01-06-2021 08:52:05
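A hedged sketch of the behaviour this PR enables: overriding the truncation strategy from the pipeline call so over-long inputs no longer error out (the checkpoint is only an example, and the exact keyword signature is an assumption based on the PR description):

```python
from transformers import pipeline
from transformers.tokenization_utils_base import TruncationStrategy

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

very_long_text = "word " * 5000  # longer than the model's maximum input length

# the truncation strategy can now be overridden per call instead of raising
summary = summarizer(very_long_text, truncation=TruncationStrategy.LONGEST_FIRST)
print(summary[0]["summary_text"])
```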
transformers
9,431
closed
[Docs] Add useful links to model sharing
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR extends the **model_sharing** doc by two additional links that point to helper scripts to more efficiently change multiple configs and upload organization-specific repos. Since some people have been asking for these kinds of scripts, I think it makes sense to link them here. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 -->
01-06-2021 08:32:09
01-06-2021 08:32:09
transformers
9,430
closed
T5-base uses a lot of memory to train
Hi, I am using transformers 3.5.1. t5-base uses a lot of memory: I cannot even train it on 4 V100 GPUs with a batch size of 32. Could you clarify whether there is a known memory issue with this model? Thanks
01-06-2021 07:57:22
01-06-2021 07:57:22
Hi @juliahane Could you post the training command? Is batch size 32 the total batch size or `per_device_batch_size`? In my experiments I was able to use at most 4 for `per_device_batch_size` with `max_input_length` 512 and `max_target_length` 64. Also, this kind of question should be asked on the [forum](https://discuss.huggingface.co/t/t5-finetuning-tips/684) as it's not a bug or issue. [This](https://discuss.huggingface.co/t/t5-finetuning-tips/684) discussion might help.<|||||>This is the per-device batch size, with max_length = 128. To me this is a bug, that the model requires this much memory; with the small model I can run the same thing with batch size 64.<|||||>Could you tell me if Adafactor saves memory?<|||||>Hi @juliahane it is indeed the case that adafactor improves memory usage, which is why the original author uses it. You can check out the [paper](https://arxiv.org/abs/1804.04235) on adafactor for more info, but the abstract says most of it. My intuition here is that adafactor (or a similar memory-efficient optimizer) is required to train the large t5 models.<|||||>Thank you, very helpful, I will try it.<|||||>In the thread they say to set autoscaling to off; do you know @kenneth how I can do that? Apart from this, I could not find more suggestions for saving GPU memory in that thread. Thanks<|||||>I assume it is the:
```
scale_parameter (bool, optional, defaults to True) – If True, learning rate is scaled by root mean square
```
in the adafactor ([documentation](https://huggingface.co/transformers/main_classes/optimizer_schedules.html#adafactor-pytorch)) but maybe @patil-suraj could confirm this? But as they used the scaling in the original paper I couldn't imagine it being highly influential.<|||||>For reference the optimizer I use is:
```
optimizer = transformers.Adafactor(model.parameters(), lr=0.001, relative_step=False,
                                   warmup_init=False, decay_rate=0.0, clip_threshold=1.0)
scheduler = None
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
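Building on the Adafactor snippet just above, a hedged sketch of plugging such an optimizer into the `Trainer`; the model size, the hyperparameters and `train_dataset` are placeholders, not recommendations:

```python
from transformers import Adafactor, T5ForConditionalGeneration, Trainer, TrainingArguments

model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = Adafactor(
    model.parameters(), lr=1e-3, relative_step=False, warmup_init=False,
    decay_rate=0.0, clip_threshold=1.0,
)

args = TrainingArguments(output_dir="out", per_device_train_batch_size=4, num_train_epochs=1)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,   # assumed: a tokenized seq2seq dataset with labels
    optimizers=(optimizer, None),  # (optimizer, lr_scheduler); Trainer creates a scheduler if None
)
trainer.train()
```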
transformers
9,429
closed
Apache Hadoop (HDFS) File Loading from_pretrained
# 🚀 Feature request Loading configuration file for **transformers.AutoConfig** and **transformers.AutoModelForSequenceClassification** using the function **from_pretrained** by giving the HDFS file path ## Motivation In case of file that is not locally available, the library utilizes the **get_from_cache** function inside **transformers/file_utils.py** file to try to download the model from the remote resource. But, in case of no ETag being present in the header of the response returned, an OSError("Distant resource does not have an ETag, we won't be able to reliably ensure reproducibility.") exception is raised. In case of HDFS, this ETag validation shouldn't be considered as a mandatory requirement but as an optional one. Either another mechanism should be used to ensure the reliability of the resource or this ETag check should be made optional. Please see below the code-snippet screenshot of the aforementioned file. ![image](https://user-images.githubusercontent.com/44267622/103738038-262dc080-5015-11eb-98f4-047be7b81bdd.png) Additional information. Apache Hadoop Version: 2.7.7, rc1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
01-06-2021 06:49:06
01-06-2021 06:49:06
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
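Until something like this is supported natively, a hedged sketch of the usual workaround: copy the model directory out of HDFS and point `from_pretrained` at the local copy. The HDFS path, the local path, and the use of the `hdfs dfs -get` CLI are assumptions about the reporter's setup:

```python
import subprocess

from transformers import AutoConfig, AutoModelForSequenceClassification

hdfs_path = "hdfs:///models/my-finetuned-bert"  # illustrative path
local_path = "/tmp/my-finetuned-bert"

# pull the model files (config.json, pytorch_model.bin, ...) onto local disk
subprocess.run(["hdfs", "dfs", "-get", hdfs_path, local_path], check=True)

# loading from a local directory never goes through the remote ETag check
config = AutoConfig.from_pretrained(local_path)
model = AutoModelForSequenceClassification.from_pretrained(local_path, config=config)
```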
transformers
9,428
closed
Improve documentation coverage for Herbert
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #9035 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ albert, bert, XLM: @LysandreJik GPT2: @LysandreJik, @patrickvonplaten tokenizers: @mfuntowicz Trainer: @sgugger Benchmarks: @patrickvonplaten Model Cards: @julien-c examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @patrickvonplaten, @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten T5: @patrickvonplaten Rag: @patrickvonplaten, @lhoestq EncoderDecoder: @patrickvonplaten Longformer, Reformer: @patrickvonplaten TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten examples/seq2seq: @patil-suraj examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger FSMT: @stas00 --> @sgugger
01-06-2021 05:36:34
01-06-2021 05:36:34
Still not letting me assign you @sgugger :(
transformers
9,427
closed
Improve documentation coverage for Phobert
# What does this PR do? Fixes #9035 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
01-06-2021 03:37:42
01-06-2021 03:37:42
Can't pin you for review @sgugger, so tagging you!
transformers
9,426
closed
Is it possible to export a pytorch .pt file after finetuning a model?
Hi, considering the trouble I am having with the tflite interpreter in issue https://github.com/huggingface/transformers/issues/9392 , I was wondering if I will have better luck trying PyTorch Mobile, since the base models are PyTorch to begin with. But to use the PyTorch converter I need a saved ``.pt`` file, while the checkpoints saved during training are in ``.bin`` format. Is there any way to get an exported PyTorch ``.pt`` file from a checkpoint folder? Thanks
01-06-2021 00:47:40
01-06-2021 00:47:40
Just rename your `.bin` to `.pt`<|||||>Hi @julien-c Sorry to come back to this, but I am having trouble with it. The .bin file also has a much larger accompanying ``optimizer`` file that I assume holds the weights. I am trying to deploy a fine-tuned model to Google Cloud, and even when using a custom prediction routine to load the entire folder for DistilGPT2, the folder size exceeds the limit of ``500MB``. Is there a way to export the fine-tuned model to be used alone, either as a standalone PyTorch model or a TensorFlow model? I searched but could not find any documentation on this. I would appreciate any help, or a pointer towards relevant documentation. I'm using ``transformers==2.8.0`` Thank you<|||||>The optimizer file does not contain the weights of your model, but the state of the optimizer during your training. If you do not plan on continuing training, then you can safely discard that file. You can find more information about [optimizers here](https://pytorch.org/docs/stable/optim.html).
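For readers landing here, a rough sketch of one way to turn a fine-tuned checkpoint folder into a standalone traced ``.pt`` file for PyTorch Mobile. The paths, the `torchscript=True` flag and the assumption that the tokenizer was saved alongside the checkpoint are illustrative and may need adjusting for your model and transformers version:
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

checkpoint_dir = "path/to/checkpoint-folder"  # contains pytorch_model.bin + config.json

tokenizer = GPT2Tokenizer.from_pretrained(checkpoint_dir)
# torchscript=True makes the model return plain tuples, which tracing requires.
model = GPT2LMHeadModel.from_pretrained(checkpoint_dir, torchscript=True)
model.eval()

# Trace with a representative example input and save a standalone .pt file
# (no optimizer state included).
example_input_ids = tokenizer.encode("Hello world", return_tensors="pt")
traced = torch.jit.trace(model, example_input_ids)
traced.save("distilgpt2-finetuned.pt")
```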
transformers
9,425
closed
[utils/get_modified_files.py] fails with a few PR checkout tools
I have noticed that when using [gh cli](https://github.com/cli/cli) to checkout a pr ``` git merge-base --fork-point master ``` fails, which breaks `utils/get_modified_files.py` e.g.: ``` gh pr checkout 9423 python utils/get_modified_files.py Traceback (most recent call last): File "utils/get_modified_files.py", line 27, in <module> fork_point_sha = subprocess.check_output("git merge-base --fork-point master".split()).decode("utf-8") File "/home/stas/anaconda3/envs/main-38/lib/python3.8/subprocess.py", line 411, in check_output return run(*popenargs, stdout=PIPE, timeout=timeout, check=True, File "/home/stas/anaconda3/envs/main-38/lib/python3.8/subprocess.py", line 512, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['git', 'merge-base', '--fork-point', 'master']' returned non-zero exit status 1. ``` So `make fixup` fails to check the modified files then. It fails if I use `git-pr` too (this tool is from the git-extras package). It works fine if I use the native `git pr`. This needs to be investigated. Until this is resolved if you use those tools please use `make style` / `make quality`, but `make fixup` should work just fine elsewhere.
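One possible workaround, shown only as a sketch and not necessarily the fix that ends up in the repo, is to fall back to a plain `merge-base` when the reflog-based `--fork-point` lookup fails:
```python
import subprocess

def get_fork_point_sha() -> str:
    try:
        # Fast path used today; relies on the reflog and can fail after
        # `gh pr checkout` or `git-pr`.
        output = subprocess.check_output("git merge-base --fork-point master".split())
    except subprocess.CalledProcessError:
        # Fallback: a plain merge-base does not depend on the reflog.
        output = subprocess.check_output("git merge-base master HEAD".split())
    return output.decode("utf-8").strip()
```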
01-05-2021 20:56:04
01-05-2021 20:56:04
transformers
9,424
closed
improve readme text to private models/versioning/api
01-05-2021 19:55:18
01-05-2021 19:55:18
👍
transformers
9,423
closed
Upgrade styler to better handle lists
# What does this PR do? This PR upgrades the doc styling script to automatically add new lines before lists. This makes the script more robust as it will avoid reformatting those lists and make them appear properly once sphinx has done its thing. In passing a few badly formatted docstrings/doc pages are fixed, just waiting for some input from @patrickvonplaten for the problems in LED/Longformer. Fixes #9408
01-05-2021 19:25:34
01-05-2021 19:25:34
I suppose I need to stick to the common bullet format, as it couldn't handle this `\d\) ` style of bullets. Leading to this rewrite: ``` -1) Optimizer State Partitioning (stage 1) -2) Add Gradient Partitioning (stage 2) +1) Optimizer State Partitioning (stage 1) 2) Add Gradient Partitioning (stage 2) ``` This is not a problem - will fix the style.<|||||>We also need a new line injector for bulleted lists in .rst checker pretty please. In .rst I had: ``` Miscellaneous notes: - DeepSpeed works with the PyTorch Trainer but not TF Trainer. - While DeepSpeed has a pip installable PyPI package, ``` the style wrapper broke the bullets and made them into one paragraph/line. ``` Miscellaneous notes: - DeepSpeed works with the PyTorch Trainer but not TF Trainer. - While DeepSpeed has a pip installable PyPI package, ``` same problem as with docstring - it's missing a new line again. Could we do the same fix for .rst to inject a new line before bullets if an unwary writer forgot to add one? Thank you! <|||||>Mmm, the patch should be applied to the rst files too (can't link to the diff but it's line 384 of the last file in the diff shown by GitHub).<|||||>I re-based just in case, and no, it still doesn't insert the line. Here is the exact para: ``` Miscellaneous notes: * DeepSpeed works with the PyTorch Trainer but not TF Trainer. * While DeepSpeed has a pip installable PyPI package, it is highly recommended that it be `installed from source <https://github.com/microsoft/deepspeed#installation>`__ to best match your hardware and also to enable features like 1-bit Adam, which aren't available in the pypi distribution. ```<|||||>Indeed, I made some stupid mistake, #9488 should fix this.
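For context, the kind of blank-line injection being asked for here can be sketched in a few lines. This is a toy illustration only, not the repo's actual styler code:
```python
import re

BULLET = re.compile(r"^\s*([-*+]|\d+[.)])\s+")

def add_blank_line_before_lists(text: str) -> str:
    # Ensure an empty line precedes the first item of a list so that sphinx
    # renders it as a list instead of folding it into the previous paragraph.
    lines = text.split("\n")
    result = []
    for i, line in enumerate(lines):
        starts_list = BULLET.match(line) and (i == 0 or not BULLET.match(lines[i - 1]))
        if starts_list and result and result[-1].strip():
            result.append("")
        result.append(line)
    return "\n".join(result)
```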
transformers
9,422
closed
[Announcement] Changing model type of Barthez
We are currently undergoing some major refactoring of Bart-like models as shown in: https://github.com/huggingface/transformers/pull/9343. After the refactoring, the Barthez models would not work anymore with the `AutoModel` and `AutoModelForSeq2SeqLM` classes because Barthez actually corresponds more to the MBart model structure than to the Bart structure (compare to the PR in https://github.com/huggingface/transformers/pull/9343), but has `bart` and `BartForConditionalGeneration` defined as its default models. In order to make the Barthez models work after merging the PR, the model type needs to be changed online to `mbart` for those models: https://huggingface.co/models?search=barthez . Since MBart is identical to Bart prior to merging the above PR, the change won't affect older versions. I want to do the change soon, just wanted to ping you @moussaKam. Please do let me know if you are not happy with it or have any questions
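For reference, loading after the switch is expected to look roughly like the sketch below. This is only an illustration of the new mapping; nothing changes for library versions released before the refactor:
```python
from transformers import AutoModelForSeq2SeqLM, MBartForConditionalGeneration

# Once the online config says model_type="mbart", the auto class resolves to
# the MBart architecture instead of Bart.
model = AutoModelForSeq2SeqLM.from_pretrained("moussaKam/barthez")

# Equivalent explicit class after the refactor.
model = MBartForConditionalGeneration.from_pretrained("moussaKam/barthez")
```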
01-05-2021 16:54:20
01-05-2021 16:54:20
Applied the change<|||||>Hi @patrickvonplaten. Sorry for the late reply. Actually I tested the model with `BartForConditionalGeneration` and everything was working well. On the other hand, after the modification I am getting the following error: ``` Unrecognized configuration class for this kind of AutoModel: AutoModelForMaskedLM. Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig. ``` Maybe we only need to change `model_type` (even if I am not sure why) and not the architecture, because [mBART](https://huggingface.co/facebook/mbart-large-cc25/blob/main/config.json) itself is using `BartForConditionalGeneration`. We still have the problem of the tokenizer when using AutoTokenizer: ``` Tokenizer class BarthezTokenizer does not exist or is not currently imported. ``` Is it possible to force the api to import and use `BarthezTokenizer` instead of `AutoTokenizer`?<|||||>Hey @moussaKam, Thanks for your answer! Yeah the `AutoTokenizer` is still a problem and actually showcases a deeper problem we're having for the `AutoTokenziers` in the lib. We'll need a new design, something like proposed here: https://github.com/huggingface/transformers/pull/9305 to fix this issue. It's on my Todo list. <|||||>Regarding the error with `AutoTokenizer` I cannot reproduce it :-/ Could you maybe provide code snippet showcasing the problem?<|||||>Hi @patrickvonplaten, Here's a snippet: ```python text_sentence = "Paris est la capitale de la <mask>" import torch from transformers import ( AutoTokenizer, BartForConditionalGeneration ) barthez_tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez") barthez_model = BartForConditionalGeneration.from_pretrained("moussaKam/barthez") input_ids = torch.tensor( [barthez_tokenizer.encode(text_sentence, add_special_tokens=True)] ) mask_idx = torch.where(input_ids == barthez_tokenizer.mask_token_id)[1].tolist()[0] predict = barthez_model.forward(input_ids)[0] barthez_tokenizer.decode(predict[:, mask_idx, :].topk(5).indices[0]) ``` ``` ----> 9 barthez_tokenizer = AutoTokenizer.from_pretrained("moussaKam/barthez") 10 barthez_model = BartForConditionalGeneration.from_pretrained("moussaKam/barthez") 11 ~/anaconda3/envs/transformers/lib/python3.8/site-packages/transformers-4.1.1-py3.8.egg/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 357 358 if tokenizer_class is None: --> 359 raise ValueError( 360 "Tokenizer class {} does not exist or is not currently imported.".format(tokenizer_class_candidate) 361 ) ValueError: Tokenizer class BarthezTokenizer does not exist or is not currently imported. ``` The expected output (if we use BarthezTokenizer instead of AutoTokenizer): ``` 'France culture francophonie gastronomie mode' ```<|||||>Ok, @LysandreJik found a nice fix for the tokenizer. 
Regarding the model, I think from now on we should use `MBart` for Barthez since after the new release Bart is not compatible with Barthez anymore<|||||>However, there seems to be an issue remaining with the `BarthezTokenizer`, as the code shared by @moussaKam outputs the following in v4.1.0: ``` France culture francophonie gastronomie mode ``` but outputs the following on `master`: ``` ompeolin corporelleenfin1-1 ``` It also mentions the following: ``` Some weights of the model checkpoint at moussaKam/barthez were not used when initializing BartForConditionalGeneration: ['encoder.layer_norm.weight', 'encoder.layer_norm.bias', 'decoder.layer_norm.weight', 'decoder.layer_norm.bias'] ```<|||||>My bad, changing from `BartForConditionalGeneration` to `MBartForConditionalGeneration` fixes the issue.<|||||>Yeah, Barthez is the only model that is not longer compatible with Bart looking forward - we have to stick to MBart. But the model architecture corresponds 1-to-1 to MBart, so I think it's fine. Hope it's ok for you @moussaKam <|||||>It's OK @patrickvonplaten if BARThez works well with `AutoModel`. Currently the shared code outputs (on the master): 'France culture francophonie gastronomie mode' if we use `MBartForConditionalGeneration` 'édappraiav comme' if we use `AutoModel` 'ompeolin corporelleenfin1-1' if we use `BartForConditionalGeneration`<|||||>Ah yeah, so instead of `AutoModel`, you'll have to use `AutoModelForSeq2SeqLM`. And it should not work anymore on master with `BartForConditionalGeneration`, but only with `MBartForConditionalGeneration`. Is the output of `MBartForConditionalGeneration` correct/reasonable in your opinion? => so the model classes to use in the future are `AutoModelForSeq2SeqLM` (as before) and `MBartForConditionalGeneration` (this worked before as well), but now `BartForConditionalGeneration` should not work anymore. If you could verify that this is actually the case on master now, that would be super nice<|||||>yes the output is reasonable with `MBartForConditionalGeneration` and `AutoModelForSeq2SeqLM`. However we still have one last (I hope) problem when using `pipeline`. 
The following code returns an error: ```python from transformers import pipeline pbase = pipeline(task="fill-mask", model="moussaKam/barthez") src_text = ["Paris est la capitale de la <mask>"] results = [x["token_str"] for x in pbase(src_text)] ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-12-d7b2e5a78b7c> in <module> 1 from transformers import pipeline 2 ----> 3 pbase = pipeline(task="fill-mask", model="moussaKam/barthez") 4 src_text = ["Paris est la capitale de la <mask>"] 5 results = [x["token_str"] for x in pbase(src_text)] /datadisks/datadisk1/transformers/src/transformers/pipelines/__init__.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs) 403 ) 404 --> 405 model = model_class.from_pretrained(model, config=config, revision=revision, **model_kwargs) 406 if task == "translation" and model.config.task_specific_params: 407 for key in model.config.task_specific_params: /datadisks/datadisk1/transformers/src/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1123 pretrained_model_name_or_path, *model_args, config=config, **kwargs 1124 ) -> 1125 raise ValueError( 1126 "Unrecognized configuration class {} for this kind of AutoModel: {}.\n" 1127 "Model type should be one of {}.".format( ValueError: Unrecognized configuration class <class 'transformers.models.mbart.configuration_mbart.MBartConfig'> for this kind of AutoModel: AutoModelForMaskedLM. Model type should be one of LayoutLMConfig, DistilBertConfig, AlbertConfig, BartConfig, CamembertConfig, XLMRobertaConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, MPNetConfig, TapasConfig. ``` We got the same error when using the the inference [api](https://huggingface.co/moussaKam/barthez?text=Paris+est+la+%3Cmask%3E+de+la+France.).<|||||>Ah yeah, that's something unrelated to the Bart Split PR I think. Do you mind opening a new issue where you can copy paste your code example from above? Feel free to tag me on it :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
9,421
closed
Store transformers version info when saving the model
# What does this PR do? This PR stores the transformers version info in the model config. It makes debugging saved models from the model hub easier without affecting any actual function. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
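A quick way to see the effect is to save any config and inspect the resulting `config.json`. The exact field name is assumed here for illustration; the point is that the saved file now records which library version produced it:
```python
import json
import os
from transformers import BertConfig

os.makedirs("tmp-config", exist_ok=True)
config = BertConfig()
config.save_pretrained("tmp-config")  # writes tmp-config/config.json

with open("tmp-config/config.json") as f:
    saved = json.load(f)
# Field name assumed, e.g. a "transformers_version" entry.
print(saved.get("transformers_version"))
```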
01-05-2021 16:39:37
01-05-2021 16:39:37
transformers
9,420
closed
Transformer models for semantic parsing
Hi! Thank you for your awesome work! I want to perform semantic parsing. Unfortunately, I couldn't find any examples in the Hugging Face repo for that. Could you please let me know how I should proceed? I suppose I could use a Seq2Seq EncoderDecoder model like BERT2BERT and fine-tune it for semantic parsing, or do you think there is a better way? For more context, I have natural language grounding descriptions and I want to generate logical parse trees from them. In the literature, there are a few tree-transformer-based techniques and a Seq2Tree technique, which I think Hugging Face does not support yet (or does it?). Thanks :)
01-05-2021 16:20:53
01-05-2021 16:20:53
Hi @ayushjain1144 That's an interesting question; it would be better to ask it on the [forum](https://discuss.huggingface.co/)
transformers
9,419
closed
New serving
# What does this PR do? This PR proposes a new way to create a saved model that can be properly served via TF Serving. The logic behind it is to create a `serving` method that is used to build the expected saved model with a proper input signature. Currently the saved models are very limited: - the input sequence length is limited to exactly 5 tokens - the input parameters are limited to only `input_ids` - when `output_attentions` or `output_hidden_states` was set to True, the saved model output contained as many outputs as the number of attentions or hidden states This PR fixes these 3 issues. A new behavior is also introduced: when doing `model.save_pretrained(...)`, a saved model version is also created at the same time as the `.h5` weights file. The proposed logic allows anybody to create their own input signature simply by overwriting the new `serving` method. For example, the default inputs for BERT are now `input_ids`, `attention_mask` and `token_type_ids`; if one wants to replace `input_ids` by `inputs_embeds`, a new model has to be created overwriting the `serving` method like: ``` class CustomBertModel(TFBertModel): @tf.function( input_signature=[ { "inputs_embeds": tf.TensorSpec((None, None, 768), tf.float32, name="inputs_embeds"), "attention_mask": tf.TensorSpec((None, None), tf.int32, name="attention_mask"), "token_type_ids": tf.TensorSpec((None, None), tf.int32, name="token_type_ids"), } ] ) def serving(self, inputs): output = self.call(inputs) return self.serving_output(output) model = CustomBertModel.from_pretrained("bert-base-cased") model.save_pretrained("saving_path") ``` Slow/quick tests are passing. EDIT: ping @sgugger @patrickvonplaten and @LysandreJik for review.
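As a rough usage sketch, the exported signature can then be inspected with TensorFlow directly. The location of the SavedModel inside the save directory and the `serving_default` key are assumptions for illustration and may differ depending on the version you run:
```python
import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-cased")
model.save_pretrained("saving_path")  # with this PR, also exports a SavedModel

# Adjust the path to wherever the SavedModel ends up inside "saving_path".
loaded = tf.saved_model.load("saving_path/saved_model/1")
serving_fn = loaded.signatures["serving_default"]
# Expect dynamic sequence length and the three default BERT inputs.
print(serving_fn.structured_input_signature)
```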
01-05-2021 16:02:06
01-05-2021 16:02:06
Humm looks like the quick test I have added to make sure that a saved model can be properly created is a bit too long for at least one of the models. @LysandreJik any idea how I can figure out for which model?<|||||>Ok, the test `test_saved_model_creation` is skipped if it needs more than 30sec to be executed for a model. For now these models are skipped: BART BlenderBot Funnel Longformer Lxmert Marian MBart Mobilebert Pegasus T5 Let's see if the test becomes faster once I will optimise these model like I did for BERT. LGTM!<|||||>Cool, maybe @sgugger can take a look as well :-) <|||||>Oops forgot to rebase and then the changes for the LED model is missing, and also the changes in the Seq2Seq template. Please wait my next push before merging.<|||||>I should have addressed all the comments. The saved model creation tests are silent for the Seq2Seq models until I find a proper fix.<|||||>Great should we merge @jplu @LysandreJik @? - it's blocking the TF-Bart Split PR a bit. <|||||>For me it is good to merge if there are no other comments :)<|||||>Cool merging then<|||||>I had some more comments actually! With the short names, most of the outputs of the serving methods fit on one line now. black does not put things back on the same line once it has split on several, so it's not fixed by the quality scripts. I also think it would make future maintenance easier to add the # Copied from comments for dupe code.<|||||>They are all on one line (when it is possible, which means not too many characters to fit in). I will open a PR to take care of adding the `#copied from` comments once I finish to fix the S2S models.<|||||>See comment above, and this is just one example, most of those now fit in one line with your last changes.
transformers
9,418
closed
New TF embeddings (cleaner and faster)
# What does this PR do? This PR proposes a better implementation of the embedding layer for the BERT-like TF models. Another benefit of this cleanup is better computational performance: ``` model = TFBertForMaskedLM.from_pretrained("bert-base-cased") cProfile.run("model(model.dummy_inputs)") # current master 56150 function calls (55318 primitive calls) in 0.096 seconds # with new embeddings implem 55732 function calls (54891 primitive calls) in 0.080 seconds ``` This new implementation should be compatible with the incoming rework of the resizing proposed in #9193. Similar work will be applied to `TFSharedEmbeddings` in a follow-up PR. All slow/quick tests pass. EDIT: I don't know why GitHub has some issues letting me pin the reviewers, so pinging @LysandreJik @sgugger and @patrickvonplaten
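For readers who want a mental model before the review comments below, here is an illustrative sketch, not the PR's actual code, of the kind of standalone word-embedding layer this refactor introduces; the `weight` attribute name follows the naming settled on during review:
```python
import tensorflow as tf

class WordEmbeddings(tf.keras.layers.Layer):
    # Illustrative only: a dedicated word-embedding layer whose weight can be
    # reused elsewhere (e.g. tied to the LM head via tf.matmul).
    def __init__(self, vocab_size, hidden_size, **kwargs):
        super().__init__(**kwargs)
        self.vocab_size = vocab_size
        self.hidden_size = hidden_size

    def build(self, input_shape):
        self.weight = self.add_weight(
            name="weight", shape=(self.vocab_size, self.hidden_size)
        )
        super().build(input_shape)

    def call(self, input_ids):
        return tf.gather(self.weight, input_ids)

# Example: WordEmbeddings(30522, 768)(tf.constant([[1, 2, 3]])) -> (1, 3, 768)
```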
01-05-2021 11:25:33
01-05-2021 11:25:33
I like this PR in general! Just wondering about two things: 1) Do we need this `get_config` function? 2) Not a huge fan of the `Add()` keras layer...does this really improve performance much?<|||||>Good point @LysandreJik! Basically here most of the models share the similar embedding computation that stay inside their respective file. What has been exported is just the specific computation, which means that `WordEmbeddings`, `PositionalEmbeddings` and `TokenTypeEmbeddings` are always the same doesn't matter who is using it. The same logic that is currently applied to `TFSharedEmbeddings`.<|||||>> Just reviewed the general approach on one model for now and I have some questions before going further. If I understand correctly, the computation of the three different types of embeddings is split in three different ways to maximize the speedup but I wonder if it's documented from TF or just some tests on one particular setup. Before adding the extra complexity, I would like to be sure it brings a speedup on almost all possible environments (CPU, GPU, multi-GPU, TPU) without any loss in memory footprint (one-hot encoding the token type ids seems harmless, but we never know). I basically took example on the official implementation of Transformer encoder available in the Google Repo https://github.com/tensorflow/models/tree/master/official/nlp/keras_nlp . After having done several experiments (only on CPU and GPU though), I end up to extract from this an optimal version for each embedding. > As for putting those in modeling utils versus the model file, I agree with Lysandre that this breaks our philosophy of putting everything in each model file. I emitted the same reserves for TFSharedEmbeddings when it was introduced. I don't mind to copy/paste the same layers in all the concerned files if it is the recommended way. @sgugger @LysandreJik Will you be more confident if I create a version for each model and add the comment `# copied from ....` everytime it is a strong copy/paste? > I don't understand how it can be used above (line 420) in a tf.matmul if it's a layer and not a weight. Now the `get_input_embeddings` returns a `WordEmbeddings` layer that has a `word_embeddings` attribute. If you look at the Bert model for example, the layer `TFBertLMPredictionHead` takes a `WordEmbeddings` layer as `input_embeddings` and use the `WordEmbeddings.word_embeddings` attribute into the `tf.matmul`.<|||||>> Now the `get_input_embeddings` returns a `WordEmbeddings` layer that has a `word_embeddings` attribute. If you look at the Bert model for example, the layer `TFBertLMPredictionHead` takes a `WordEmbeddings` layer as `input_embeddings` and use the `WordEmbeddings.word_embeddings` attribute into the `tf.matmul`. So this part confuses me. Why name `word_embeddings` the weights inside the `WordEmbeddings`? It causes so much headache when reading the code afterward as we keep seeing some `word_embeddings` attributes which might either be an embedding layer or a weight. Also, how does the new organization not screw up pretrained weights? From what I understand, the old `world_embeddings` in the `BertEmbeddings` layer used to be a weight and now it's a layer with an added `world_embeddings` attribute?<|||||>> So this part confuses me. Why name word_embeddings the weights inside the WordEmbeddings? It causes so much headache when reading the code afterward as we keep seeing some word_embeddings attributes which might either be an embedding layer or a weight. 
I agree it is confusing, if you prefer it can be called `weight` such as in `TFSharedEmbeddings` I think it would be a more suitable name. This renaming will make easier the kind of checking (from the incoming PR on ebd resizing) ```python def _get_word_embedding_weight(self, embedding_layer): if hasattr(embedding_layer, "word_embeddings"): return embedding_layer.word_embeddings elif hasattr(embedding_layer, "weight"): return embedding_layer.weight elif hasattr(embedding_layer, "decoder"): return embedding_layer.decoder else: # Here we build the word embeddings weights if not exists. # And then we retry to get the attribute once built. self(self.dummy_inputs) if hasattr(embedding_layer, "word_embeddings"): return embedding_layer.word_embeddings elif hasattr(embedding_layer, "weight"): return embedding_layer.weight elif hasattr(embedding_layer, "decoder"): return embedding_layer.decoder else: return None ``` No more `word_embeddings` or `weight`, only `weight`. What do you think? > Also, how does the new organization not screw up pretrained weights? From what I understand, the old world_embeddings in the BertEmbeddings layer used to be a weight and now it's a layer with an added world_embeddings attribute? This is because before we where using a [name score](https://www.tensorflow.org/api_docs/python/tf/name_scope) and not anymore in this PR. Let's say that defining a name scope or creating a layer represents the same thing. In both cases the weight is named `'tf_bert_model/bert/embeddings/word_embeddings/weight:0'` until now the `word_embeddings` part of the naming was because the embeddings was created in the context of `tf.name_scope("word_embeddings"):` , in this PR it has the same name but because of the name of the new `WordEmbeddings` layer.<|||||>Yes, having only "weight" makes more sense to me, and it would make the code easier to read. Thanks for explaining why the name of the weight doesn't change for loading!<|||||>I found another advantage of these new embedding computation. It allows our models to be compiled in XLA_GPU and XLA_TPU which was not the case before. Small proof test on a machine with a GPU: ```python from transformers import TFBertModel import tensorflow as tf model = TFBertModel.from_pretrained("bert-base-cased") @tf.function(experimental_compile=True) def run(): return model(model.dummy_inputs) outputs = run() ``` On master fails with: ``` tensorflow.python.framework.errors_impl.InvalidArgumentError: Trying to access resource _AnonymousVar4 located in device /job:localhost/replica:0/task:0/device:CPU:0 from device /job:localhost/replica:0/task:0/device:GPU:0 [Op:__inference_run_4637] ``` On this PR works as expected. The reason is because the `tf.keras.layers.Embeddings` layers are initialized when the model is instanciated instead of being initialized at build time.<|||||>Now, each model has its own `WordEmbedding`, `TokenTypeEmbeddings` and `PositionEmbedding` layer in the model file decorated with the comment `#Copied from...` and the `words_embeddings` weights have been renamed into `weight` to make it more understandable and aligned with the name in `TFSharedEmbeddings`.<|||||>> LGTM in general. One thing I'm not 100% sure about is whether we really need to add keras layers like tf.keras.layers.Add() if we start doing this for the embeddings now, I'm wondering if we should do the same for all residual connections in the self-attention blocks In the absolute, yes we should. 
In an ideal world, every time TF proposes a function/layer for doing something we should use it, as it is part of the optimization process. I know and understand that it might seem confusing and start to diverge from what PT looks like.
transformers
9,417
closed
shift_tokens_right in BART, FSMT incompatible with DataCollatorForLanguageModelling
## Environment info - `transformers` version: 4.2.0dev0 - Platform: Linux-4.14.81.bm.15-amd64-x86_64-with-debian-9.11 - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten ## Information I'm trying to train Bart from scratch on a masked language modelling task. I understand that this is currently not supported by HF, but I'm working on it and would like to bring up certain "blockers" that currently prevent this. The Bart shift_tokens_right implementation looks like this: ``` def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int): """ Shift input ids one token to the right, and wrap the last non pad token (usually <eos>). """ prev_output_tokens = input_ids.clone() assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined." # replace possible -100 values in labels by `pad_token_id` prev_output_tokens.masked_fill_(prev_output_tokens == -100, pad_token_id) index_of_eos = (prev_output_tokens.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1) decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze() prev_output_tokens[:, 1:] = prev_output_tokens[:, :-1].clone() prev_output_tokens[:, 0] = decoder_start_tokens return prev_output_tokens ``` The `shift_tokens_right` implementation assumes that anything that was filled as `-100` was a pad token, using that to try to find the index of the EOS token. This is not always true. In `DataCollatorForLanguageModelling`, which is used in the example script, we see https://github.com/huggingface/transformers/blob/748006c0b35d64cdee23a3cdc2107a1ce64044b5/src/transformers/data/data_collator.py#L303 This causes errors when trying to train a Bart model on language modelling. ``` Traceback (most recent call last): File "math_explain/masked_lm.py", line 281, in <module> main() File "math_explain/masked_lm.py", line 236, in main train_result = trainer.train() File "/home/tiger/.local/lib/python3.7/site-packages/transformers/trainer.py", line 815, in train tr_loss += self.training_step(model, inputs) File "/home/tiger/.local/lib/python3.7/site-packages/transformers/trainer.py", line 1157, in training_step loss = self.compute_loss(model, inputs) File "/home/tiger/.local/lib/python3.7/site-packages/transformers/trainer.py", line 1181, in compute_loss outputs = model(**inputs) File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/tiger/.local/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1233, in forward decoder_input_ids = shift_tokens_right(labels, self.config.pad_token_id) File "/home/tiger/.local/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 75, in shift_tokens_right decoder_start_tokens = prev_output_tokens.gather(1, index_of_eos).squeeze() RuntimeError: index -1 is out of bounds for dimension 1 with size 213 ``` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below)
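For illustration, a variant that takes an explicit decoder start token and never infers the EOS position from pad tokens side-steps the issue. This is only a sketch in the spirit of the refactored implementation, not a claim about the exact code that was merged:
```python
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
    # Shift right and prepend an explicit decoder start token instead of
    # guessing it from the position of the last non-pad token, so label
    # tensors containing -100 no longer break the shift.
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    shifted.masked_fill_(shifted == -100, pad_token_id)
    return shifted
```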
01-05-2021 09:38:44
01-05-2021 09:38:44
Hey @jethrokuan, we've merged a big BART PR yesterday just as a heads up, I think this might solve this problem for Bart -> could you check again?<|||||>@patrickvonplaten I think it does solve this problem: code looks good and runs fine, model results not great, but possibly a mistake of mine. Thanks!
transformers
9,416
closed
Why was DataCollatorForNextSentencePrediction removed ?
# 🚀 Feature request I want to ask why DataCollatorForNextSentencePrediction was removed. That class was implemented in the pull request below. https://github.com/huggingface/transformers/pull/6572 It was very useful for me, but this feature is not included in the latest version. Does anyone know why it was removed? Or is there an alternative feature? ## Motivation I need the NSP feature to conduct complete pre-training.
01-05-2021 09:04:30
01-05-2021 09:04:30
This class is not necessary anymore: it did the same thing as `DataCollatorForLanguageModeling` while keeping the `nsp_labels`, and `DataCollatorForLanguageModeling` will keep any extra keys (like `nsp_labels`) you pass to it. So you can just replace it with `DataCollatorForLanguageModeling`.<|||||>Thank you for your quick reply. Do you mean you just have to use TextDatasetForNextSentencePrediction before DataCollatorForLanguageModeling to conduct NSP?<|||||>Yes.<|||||>That makes sense. Thank you very much for your help!
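Putting the answer above into code, a rough sketch of the replacement setup. The file path and block size are placeholders, and how the extra label keys are handled may depend on your transformers version:
```python
from transformers import (
    BertConfig,
    BertForPreTraining,
    BertTokenizer,
    DataCollatorForLanguageModeling,
    TextDatasetForNextSentencePrediction,
    Trainer,
    TrainingArguments,
)

tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
model = BertForPreTraining(BertConfig())

# The dataset builds the sentence pairs and NSP labels; the collator applies
# MLM masking and keeps the extra keys it does not know about.
dataset = TextDatasetForNextSentencePrediction(
    tokenizer=tokenizer, file_path="train.txt", block_size=128
)
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pretraining-output"),
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
```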
transformers
9,415
closed
About Multi GPU
## Environment info - `transformers` version: 3.5.0 - Platform: Linux-3.10.0-514.el7.x86_64-x86_64-with-centos-7.3.1611-Core - Python version: 3.6.4 - PyTorch version (GPU?): 1.7.0 - Using GPU in script?: Y - Using distributed or parallel set-up in script?: Y ### Who can help @LysandreJik, @sgugger ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. code ``` from transformers import RobertaConfig config = RobertaConfig( vocab_size=34492, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, type_vocab_size=1, position_embedding_type="absolute" ) from transformers import RobertaTokenizerFast tokenizer = RobertaTokenizerFast.from_pretrained("tokenizer", max_len=512) from transformers import RobertaForMaskedLM model = RobertaForMaskedLM(config=config) from datetime import datetime from transformers import LineByLineTextDataset train_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="train.txt", block_size=tokenizer.max_len_single_sentence ) eval_dataset = LineByLineTextDataset( tokenizer=tokenizer, file_path="eval.txt", block_size=tokenizer.max_len_single_sentence ) from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling( tokenizer=tokenizer, mlm=True, mlm_probability=0.15 ) from transformers import Trainer, TrainingArguments training_args = TrainingArguments( output_dir="output", overwrite_output_dir=True, do_train=True, do_eval=True, evaluation_strategy="epoch", learning_rate=6e-4, adam_beta1=0.9, adam_beta2=0.98, adam_epsilon=1e-6, per_device_train_batch_size=200, per_device_eval_batch_size=200, num_train_epochs=14, disable_tqdm=True ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_dataset, eval_dataset=eval_dataset, ) trainer.train() ``` 2. 
log ``` True 0: 1 0: Tesla V100-PCIE-32GB 1: True 1: 1 1: Tesla V100-PCIE-32GB 0: build dataset : 0:00:07.309346 0: build dataset : 0:00:00.328179 1: build dataset : 0:00:07.945437 1: build dataset : 0:00:00.364413 0: {'loss': 10.5927734375, 'learning_rate': 0.0005999489187808615, 'epoch': 0.0011918951132300357} 1: {'loss': 10.57164478302002, 'learning_rate': 0.0005999489187808615, 'epoch': 0.0011918951132300357} 0: {'loss': 6.9972802049411325, 'learning_rate': 0.000574459390430785, 'epoch': 0.5959475566150179} 1: {'loss': 7.002058581503216, 'learning_rate': 0.000574459390430785, 'epoch': 0.5959475566150179} 0: {'eval_loss': 7.694924354553223, 'epoch': 1.0} 1: {'eval_loss': 7.6843767166137695, 'epoch': 1.0} 0: {'loss': 6.86546826171875, 'learning_rate': 0.0005489187808615699, 'epoch': 1.1918951132300357} 1: {'loss': 6.86378662109375, 'learning_rate': 0.0005489187808615699, 'epoch': 1.1918951132300357} 0: {'loss': 6.84359375, 'learning_rate': 0.0005233781712923548, 'epoch': 1.7878426698450536} 1: {'loss': 6.842060546875, 'learning_rate': 0.0005233781712923548, 'epoch': 1.7878426698450536} 0: {'eval_loss': 7.635812759399414, 'epoch': 2.0} 1: {'eval_loss': 7.633483409881592, 'epoch': 2.0} 0: {'loss': 6.812291015625, 'learning_rate': 0.0004978375617231397, 'epoch': 2.3837902264600714} 1: {'loss': 6.811927734375, 'learning_rate': 0.0004978375617231397, 'epoch': 2.3837902264600714} 0: {'loss': 6.8180390625, 'learning_rate': 0.0004722969521539247, 'epoch': 2.9797377830750893} 1: {'loss': 6.8180390625, 'learning_rate': 0.0004722969521539247, 'epoch': 2.9797377830750893} 0: {'eval_loss': 7.621339797973633, 'epoch': 3.0} 1: {'eval_loss': 7.620441436767578, 'epoch': 3.0} 0: {'loss': 6.8016015625, 'learning_rate': 0.0004467563425847096, 'epoch': 3.575685339690107} 1: {'loss': 6.8015546875, 'learning_rate': 0.0004467563425847096, 'epoch': 3.575685339690107} 0: {'eval_loss': 7.575932025909424, 'epoch': 4.0} 1: {'eval_loss': 7.5758209228515625, 'epoch': 4.0} 0: {'loss': 6.81323828125, 'learning_rate': 0.0004212157330154946, 'epoch': 4.171632896305125} 1: {'loss': 6.81312109375, 'learning_rate': 0.0004212157330154946, 'epoch': 4.171632896305125} 0: {'loss': 6.80004296875, 'learning_rate': 0.0003956751234462795, 'epoch': 4.767580452920143} 1: {'loss': 6.80001953125, 'learning_rate': 0.0003956751234462795, 'epoch': 4.767580452920143} 0: {'eval_loss': 7.579530715942383, 'epoch': 5.0} 1: {'eval_loss': 7.579504013061523, 'epoch': 5.0} 0: {'loss': 6.79704296875, 'learning_rate': 0.0003701345138770645, 'epoch': 5.363528009535161} 1: {'loss': 6.79696875, 'learning_rate': 0.0003701345138770645, 'epoch': 5.363528009535161} 0: {'loss': 6.796515625, 'learning_rate': 0.0003445939043078495, 'epoch': 5.959475566150179} 1: {'loss': 6.79640234375, 'learning_rate': 0.0003445939043078495, 'epoch': 5.959475566150179} 0: {'eval_loss': 7.59311580657959, 'epoch': 6.0} 1: {'eval_loss': 7.593157768249512, 'epoch': 6.0} 0: {'loss': 6.7975078125, 'learning_rate': 0.0003190532947386344, 'epoch': 6.5554231227651965} 1: {'loss': 6.7974375, 'learning_rate': 0.0003190532947386344, 'epoch': 6.5554231227651965} 0: {'eval_loss': 7.5591912269592285, 'epoch': 7.0} 1: {'eval_loss': 7.559223175048828, 'epoch': 7.0} 0: {'loss': 6.8036171875, 'learning_rate': 0.00029351268516941936, 'epoch': 7.151370679380214} 1: {'loss': 6.803546875, 'learning_rate': 0.00029351268516941936, 'epoch': 7.151370679380214} 0: {'loss': 6.79696875, 'learning_rate': 0.0002679720756002043, 'epoch': 7.747318235995232} 1: {'loss': 6.7969921875, 'learning_rate': 
0.0002679720756002043, 'epoch': 7.747318235995232} 0: {'eval_loss': 7.575222492218018, 'epoch': 8.0} 1: {'eval_loss': 7.574929714202881, 'epoch': 8.0} 0: {'loss': 6.796890625, 'learning_rate': 0.00024243146603098925, 'epoch': 8.34326579261025} 1: {'loss': 6.7968515625, 'learning_rate': 0.00024243146603098925, 'epoch': 8.34326579261025} 0: {'loss': 6.788359375, 'learning_rate': 0.00021689085646177421, 'epoch': 8.939213349225268} 1: {'loss': 6.788375, 'learning_rate': 0.00021689085646177421, 'epoch': 8.939213349225268} 0: {'eval_loss': 7.567000389099121, 'epoch': 9.0} 1: {'eval_loss': 7.566658973693848, 'epoch': 9.0} 0: {'loss': 6.794640625, 'learning_rate': 0.00019135024689255915, 'epoch': 9.535160905840286} 1: {'loss': 6.7945859375, 'learning_rate': 0.00019135024689255915, 'epoch': 9.535160905840286} 0: {'eval_loss': 7.5506415367126465, 'epoch': 10.0} 1: {'eval_loss': 7.550570487976074, 'epoch': 10.0} 0: {'loss': 6.78496875, 'learning_rate': 0.0001658096373233441, 'epoch': 10.131108462455304} 1: {'loss': 6.7848984375, 'learning_rate': 0.0001658096373233441, 'epoch': 10.131108462455304} 0: {'loss': 6.7898984375, 'learning_rate': 0.00014026902775412904, 'epoch': 10.727056019070321} 1: {'loss': 6.7898203125, 'learning_rate': 0.00014026902775412904, 'epoch': 10.727056019070321} 0: {'eval_loss': 7.568336486816406, 'epoch': 11.0} 1: {'eval_loss': 7.568056583404541, 'epoch': 11.0} 0: {'loss': 6.79440625, 'learning_rate': 0.000114728418184914, 'epoch': 11.32300357568534} 1: {'loss': 6.7943984375, 'learning_rate': 0.000114728418184914, 'epoch': 11.32300357568534} 0: {'loss': 6.78665625, 'learning_rate': 8.918780861569896e-05, 'epoch': 11.918951132300357} 1: {'loss': 6.786703125, 'learning_rate': 8.918780861569896e-05, 'epoch': 11.918951132300357} 0: {'eval_loss': 7.579376220703125, 'epoch': 12.0} 1: {'eval_loss': 7.5791497230529785, 'epoch': 12.0} 0: {'loss': 6.79565625, 'learning_rate': 6.364719904648391e-05, 'epoch': 12.514898688915375} 1: {'loss': 6.795640625, 'learning_rate': 6.364719904648391e-05, 'epoch': 12.514898688915375} 0: {'eval_loss': 7.5773115158081055, 'epoch': 13.0} 1: {'eval_loss': 7.577144622802734, 'epoch': 13.0} 0: {'loss': 6.795859375, 'learning_rate': 3.810658947726885e-05, 'epoch': 13.110846245530393} 1: {'loss': 6.795796875, 'learning_rate': 3.810658947726885e-05, 'epoch': 13.110846245530393} 0: {'loss': 6.79365625, 'learning_rate': 1.2565979908053806e-05, 'epoch': 13.70679380214541} 1: {'loss': 6.793703125, 'learning_rate': 1.2565979908053806e-05, 'epoch': 13.70679380214541} 0: {'eval_loss': 7.550729751586914, 'epoch': 14.0} 0: {'epoch': 14.0} 0: train time : 2:00:32.638885 1: {'eval_loss': 7.550601482391357, 'epoch': 14.0} 1: {'epoch': 14.0} 1: train time : 2:00:54.112366 ``` ## Expected behavior Hello! I am not sure this is BUG, but I don't know where I can ask a question about this. So, if it is not appropriate, please tell me how can I get the answer about this. I wrote the code like above and I have two GPUs. I understand that transformers automatically allocate data to each GPU, so I don't need to set up anything in code. But, the log seems like each GPU train separate model. I expect that each GPU is trained by scattered data, and gather the loss. However, GPU0's loss and GPU1's loss have same values. Also, there is no time different between using two gpus(7302sec) and single gpu(7252sec). Is there anything I can do for reducing training time with two GPUs?
01-05-2021 05:31:12
01-05-2021 05:31:12
Hi there, those kinds of questions should be asked on the [forums](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests only, so closing this. For a quick answer (move the discussion to the forums for a longer discussion!), it's normal that each GPU takes a slightly different time to train as all CUDA operations are asynchronous and the program is launched twice in parallel to be executed by both. It's also normal to see two different losses as the loss is not gathered across devices during training, only the gradients.<|||||>Thank you for the answer. I will move this to forums.
transformers
9,414
closed
Fix link to Evaluate TAPAS Notebook
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
01-05-2021 05:26:54
01-05-2021 05:26:54
transformers
9,413
closed
Fix link to Notebook to fine-tune TAPAS
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
01-05-2021 05:23:43
01-05-2021 05:23:43
transformers
9,412
closed
[model parallel] add experimental warning
This PR documents that model parallelism is experimental and can change at any moment, so that we are not committing to any APIs until we have sorted this out and it appears to be stable. This applies in particular to the device map, which is far from being sorted out. @sgugger
01-05-2021 04:39:46
01-05-2021 04:39:46
Thanks!
transformers
9,411
closed
[examples/text-classification] Fix a bug for using own regression dataset
# What does this PR do? This PR is to fix https://github.com/huggingface/transformers/issues/9393 Fix a bug in `run_glue.py` so that it can be used for our own dataset of regression tasks. close #9393 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Thank you for checking the issue and giving the comment.
01-05-2021 03:17:09
01-05-2021 03:17:09
@sgugger Thank you for checking and merging this PR!