Dataset columns:
repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
12,225
closed
Pegasus pretraining in fp16 results in NaN loss
## Environment info `transformers` version: 4.5.1 - Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using: pegasus The problem arises when using: * [ ] my own modified scripts: ``` config = PegasusConfig(**pegasus_config_kwargs) model = PegasusForConditionalGeneration(config=config) ``` and then using Trainer with fp16 on. The trainer args I'm using: ```json { "logging_strategy": "steps", "logging_steps": 20, "save_strategy": "steps", "save_steps": 5000, "num_train_epochs": 2, "lr_scheduler_type": "linear", "warmup_steps": 10000, "learning_rate": 0.001, "dataloader_num_workers": 8, "per_device_train_batch_size": 16, "gradient_accumulation_steps": 16, "group_by_length": true, "adafactor": true, "fp16": true } ``` The tasks I am working on is: * [ ] my own task or dataset ## To reproduce I was trying to pretrain pegasus in fp16 from scratch using a modified script. The training is much faster, around 40% speedup, but after almost 3 days, training was 10% into a second epoch, a NaN loss happened. Debugging the place where overflow occurred I guess is possible, but will be troublesome. Do you know what could be the problem or if someone is working on problems with fp16 on pegasus? I've seen for example that it could be a problem when using pretrained checkpoints (https://discuss.huggingface.co/t/finetuning-for-fp16-compatibility/977), but shouldn't it work when initializing model from config, like below? ``` config = PegasusConfig(**pegasus_config_kwargs) model = PegasusForConditionalGeneration(config=config) ``` Training without fp16 works fine.
06-17-2021 10:27:02
06-17-2021 10:27:02
Yeah, not sure to what extent it is feasible to prevent this as Pegasus was pretrained in `bfloat16` cc @stas00 <|||||>But I'm pretraining a freshly initialized model, so I think the problem shouldn't be with the `bfloat16` casting<|||||>That's interesting. We have primarily debugged bf16-pretrained models that almost all had this issue as Patrick says. So this means the model's design is somehow not fp16-friendly. Could you take a last checkpoint that was still good and run it with `DebugUnderflowOverflow` https://huggingface.co/transformers/debugging.html#underflow-and-overflow-detection and report back the failing trace - which will show us where the under/over-flow occurs in the model. <|||||>I will debug it, thanks for the link on how to do it, but probably will have the results in like ~2 weeks time, because now I'm waiting for the results of training without mixed precision. <|||||>I've run training from checkpoint with debugging, like below: ``` DebugUnderflowOverflow(model) trainer = Trainer(**args) trainer.train(resume_from_checkpoint=checkpoint_path) ``` And NaN's happened after around 2 days of training. I've redirected all stdout to file, but the problem is that there wasn't any output from the DebugUnderflowOverflow, no near first nans or other place in file. Console also didn't show anything. ``` Logging step 78200 {'loss': 3.6594, 'learning_rate': 0.00028089413749472796, 'epoch': 0.54} Logging step 78220 {'loss': nan, 'learning_rate': 0.0002806832560101223, 'epoch': 0.54} ``` Assuming that I've used DebugUnderflowOverflow correctly, do you have any ideas what might be the source of these nans? Disclaimer about the experiment: Last checkpoint I had was just before the end of the first epoch, the next one was after NaN's started, so I took the last one, but because we have 'dynamic' tokenization it would take long time just to get to the previous point. So I've used option ignore_data_skip which didn't forward the dataset to the checkpoint's data point but just started training on the whole dataset again. I think it shouldn't matter for the purpose of debugging NaNs because in the first run model has seen whole training dataset without throwing NaN's.<|||||>You can validating that the tracing works with: https://huggingface.co/transformers/debugging.html#specific-batch-absolute-mix-and-max-value-tracing This will just report all min/max values of the desired batches - e.g. batch 0, so that you know it's configured correctly and outputs the data it would if there were to be NaNs. Let's validate that it works first and if it does, then hopefully a trace of one batch could shed some light. If it's really long probably make an attachment to your comment. e.g. it's possible that the weights are all not-NaNs, but the loss still somehow gets pushed into over/underflow.<|||||>Turns out I didn't attach the debugger properly for the first time ^^. A way to validate helped, thanks. Here are all frames printed by the debugger after detecting inf. [overflow_debug.txt](https://github.com/huggingface/transformers/files/6777740/overflow_debug.txt) Not sure where things go wrong.<|||||>The last frame is: ``` model.encoder.layers.10 PegasusEncoderLayer 2.44e-04 6.86e+04 input[0] 0.00e+00 3.40e+38 input[1] 9.77e-04 inf output[0] ``` The weird thing is that, input[1] is probably attention_mask and not sure why and where some of its values are set to inf. I think in the encoder layer it should be 0 or 1, indicating padding masking? 
<|||||>Other thing I don't fully understand is that ``` model.encoder.layers.10.fc2 Linear 1.52e-08 3.28e+00 weight 3.00e-04 1.71e+00 bias 0.00e+00 1.80e+02 input[0] 0.00e+00 6.06e+04 output model.encoder.layers.10 PegasusEncoderLayer 2.44e-04 6.86e+04 input[0] 0.00e+00 3.40e+38 input[1] 9.77e-04 inf output[0] ``` it looks like second layer of feed forward layer returns output that is still acceptable in fp16, but then the whole layer returns inf. So I assume that the overflow occurred somewhere between this line (4.5.1 version) https://github.com/huggingface/transformers/blob/4bae96ec2bee265f938fc262201538819419089a/src/transformers/models/pegasus/modeling_pegasus.py#L337 and the return. But there is a check to clamp any possible overflows.<|||||>I'd say the next step is to inject the `detect_overflow` between the suspect lines of code, as shown at the very end of: https://huggingface.co/transformers/debugging.html#underflow-and-overflow-detection as shown in this example: ``` from debug_utils import detect_overflow class T5LayerFF(nn.Module): [...] def forward(self, hidden_states): forwarded_states = self.layer_norm(hidden_states) detect_overflow(forwarded_states, "after layer_norm") forwarded_states = self.DenseReluDense(forwarded_states) detect_overflow(forwarded_states, "after DenseReluDense") return hidden_states + self.dropout(forwarded_states) ``` and then you will know exactly where things overflow. And once identified you can either turn off the `autocast` off for that line of code, or to change the operation to always cast to fp32, as in `some_torch_op(...., dtype=torch.float32)` if it's a torch op that is. For `autocast` turning off example please see https://github.com/huggingface/transformers/pull/10956/files<|||||>So I ran some more tests with `detect_overflow`. Turns out that scaling up inside `F.dropout` pushes already high values from output of 2nd linear layer (which is fp16) into inf. The next unexpected thing is that, the inf check https://github.com/huggingface/transformers/blob/4bae96ec2bee265f938fc262201538819419089a/src/transformers/models/pegasus/modeling_pegasus.py#L340 should be moved a few lines of code up. As it is now, during the dtype check, `hidden_states` is already promoted to fp32 after the residual add. In residual add `residual` is fp32 and `hidden_states` is fp16 with possible overflows that get carried to fp32 result. Moving the check up will patch the overflows I've been seeing. I also think about adding a second check before the first residual add in the encoder, as some values are rather high (2.2e4). And then I'll keep my fingers crossed that nothing overflows in the decoder as I haven't looked into the scale of values there. <|||||>I'm glad to hear that you can now easily tell where things overflow, @kolakows. Please remember that the original code was trained in a different dtype regime (bf16 or fp32/tf32) and so the designers of the model haven't had to deal with fp16 and that's why changes are needed to be applied to the original port. This same story happens to pretty much all models of this kind (i.e. not designed to be trained with fp16 in mind). I trust you will be able to tweak the code to overcome this. You can approach this in 3 ways 1. explicit upcasting as you suggested 2. turning off `autocast` for the duration of the "sensitive" few lines of code. 3. yet another approach is to change the loss function to punish the high weights and encourage the model to use weights in a safe fp16 range, e.g. 
for t5 https://github.com/huggingface/transformers/pull/10956#issuecomment-820712267 - which may or may not work here and of course need to think how to add the extra component in a sensible way. Then you can PR the changes and hopefully others will enjoy the fruit of your hard labour. Thank you!<|||||>Thank you for guiding me on how to debug the model and pointing out possible fixes. It took me some time to wrap my head around fp16. I think now I have a clear understanding on how to approach it. For now I made simple patches and will be running some more training and see how it goes. If I get some nice results, I'll post some summary here and do a PR.<|||||>A while back I also made a short study comparing bf16 and fp16, so it might be useful too to understand the limitations of fp16: https://github.com/stas00/ml-ways/blob/master/numbers/bfloat16-vs-float16-study.ipynb
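For reference, a minimal sketch of the clamp-before-residual idea discussed in this thread; the exact placement inside `modeling_pegasus.py` is what the thread is working out, so treat this as an illustration rather than the library's final fix:

```python
import torch

def clamp_fp16_overflow(hidden_states: torch.Tensor) -> torch.Tensor:
    # In fp16, large fc2 outputs can overflow to inf; clamping *before* the residual add
    # keeps the inf from being carried into the (fp32) residual sum.
    if hidden_states.dtype == torch.float16 and (
        torch.isinf(hidden_states).any() or torch.isnan(hidden_states).any()
    ):
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```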
transformers
12,224
closed
Support for torch 1.9.0
This PR adds support for torch 1.9.0. It upgrades the CPU CI to use torch 1.9.0, and the GPU CI to use PyTorch's 1.9.0 docker image to run tests. As discussed with @michaelbenayoun, this puts a hard requirement on having a specific torch version for torch fx to be run. The idea is that: - The torch fx support in `transformers` is currently experimental, and will be updated *without* backwards compatibility requirements - To that end, it should always support the latest PyTorch version and not the earlier ones. - However PyTorch 1.9.0 will not be supported due to https://github.com/pytorch/pytorch/pull/59569 - To that end, we setup a specific version requirement on `torch` in order to offer torch FX support. Running on torch 1.8.0 and torch 1.8.1, as well as the various torch 1.8.1-cu111 and other 1.8.x versions works correctly. Running on torch < 1.8 or torch > 1.8 returns: ``` Traceback (most recent call last): File "<input>", line 1, in <module> File "/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/lysandre/.config/JetBrains/PyCharm2021.1/scratches/torchfx.py", line 6, in <module> traced_model = symbolic_trace( File "/home/lysandre/transformers/src/transformers/modeling_fx_utils.py", line 374, in symbolic_trace tracer = HFTracer(batch_size=batch_size, sequence_length=sequence_length, num_choices=num_choices) File "/home/lysandre/transformers/src/transformers/modeling_fx_utils.py", line 152, in __init__ raise ImportError( ImportError: Found an incompatible version of torch. Found version 1.9.0, but only version 1.8 is supported. ```
06-17-2021 10:00:30
06-17-2021 10:00:30
Shouldn't the fx tests be skipped correspondingly? I see the CI logs show that they all passed with 1.9.0 - how is that possible?<|||||>The `is_torch_fx_available` returns `False` as the versions aren't compatible. The tests for torch.fx require `is_torch_fx_available` to be `True` in order to run! Yes, switching back to > 1.9.0 once the issue is fixed works for me.<|||||>OK, but the tests were reported as passed and not skipped. So another todo for down the road is to add a skip rule, so that we don't get a misleading report and have a skipped test appearing as passed. Don't have to do it now.<|||||>They should have a decorator for that, rather than the in-test check. Would be better reported as skipped, indeed!
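A minimal sketch of the skip decorator idea from the last comment; the 1.8.x range mirrors what this PR describes, and the helper name is illustrative, not the one that was eventually added:

```python
import unittest

import torch
from packaging import version

def require_torch_fx_compatible(test_case):
    # Report torch.fx tests as skipped (instead of silently passing) when the
    # installed torch version is outside the supported 1.8.x range described above.
    torch_version = version.parse(torch.__version__.split("+")[0])
    supported = version.parse("1.8") <= torch_version < version.parse("1.9")
    if not supported:
        return unittest.skip("torch.fx support currently requires torch 1.8.x")(test_case)
    return test_case
```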
transformers
12,223
closed
Argument `never_split` not working on `AutoTokenizer`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.1 - Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-10.7 - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - tokenizers: @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased', never_split={'lol'}) tokenizer.tokenize("lol That's funny") """ ['lo', '##l', 'that', "'", 's', 'funny'] """ ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The expected output should be ```python ['lol', 'that', "'", 's', 'funny'] ``` I know by using the `BertTokenizer` the `never_split` argument works e.g. ```python from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-large-uncased', never_split={'lol'}) tokenizer.tokenize("lol That's funny") """ ['lol', 'that', "'", 's', 'funny'] """ ``` But I want to use the `AutoTokenizer` for another model, `nghuyong/ernie-2.0-en`, and it doesn't work there either. <!-- A clear and concise description of what you would expect to happen. -->
06-17-2021 09:53:29
06-17-2021 09:53:29
Ah, I believe the fast tokenizers do not have the `never_split` option. In order to achieve this I would add the tokens to the vocabulary instead cc @n1t0 is there another way to handle this? ```py >>> tokenizer.tokenize("lol that's funny") ['lo', '##l', 'that', "'", 's', 'funny'] >>> tokenizer.add_tokens(["lol"]) 1 >>> tokenizer.tokenize("lol that's funny") ['lol', 'that', "'", 's', 'funny'] ```<|||||>Thanks! The suggestion works for the token `lol`. However another token that I do not want to be split is `...` and the suggestion does not work for this as shown below. ```python >>> from transformers import AutoTokenizer >>> tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased') >>> tokenizer.tokenize("... that's funny") ['.', '.', '.', 'that', "'", 's', 'funny'] >>> tokenizer.add_tokens(["..."]) 0 >>> tokenizer.tokenize("... that's funny") ['.', '.', '.', 'that', ``` However, again it does work using the `BertTokenizer` and the `never_split` argument e.g. ```python >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained('bert-large-uncased', never_split={'...'}) >>> tokenizer.tokenize("... That's funny") ['...', 'that', "'", 's', 'funny'] ``` Is there another workaround?<|||||>Hi @udeepam, I'm not sure to understand the goal of doing this. ```python >>> tokenizer = AutoTokenizer.from_pretrained('bert-large-uncased', use_fast=False, never_split={'lol'}) >>> tokenizer.tokenize("lol That's funny") ['lol', 'that', "'", 's', 'funny'] >>> tokens = tokenizer.encode("lol That's funny") >>> tokens [101, 100, 2008, 1005, 1055, 6057, 102] >>> tokenizer.convert_ids_to_tokens(tokens) ['[CLS]', '[UNK]', 'that', "'", 's', 'funny', '[SEP]'] ``` As you can see, the tokenizer doesn't split the `lol` token, but it doesn't know it. So it ends up being an `[UNK]` token. If it knew it, it wouldn't have split it in the first place. Is it the behavior you expect to get? Unfortunately, I don't see any other workaround than what @LysandreJik proposed in the first place.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,222
closed
[WIP] enabling `inference_mode` for pipelines for potentially improved perf.
# What does this PR do? This won't work on torch==1.7.1 but does on >=1.8.1 (LTS). Question is, should we enable this with a compatibility layer, or simply do nothing. I think we need a bit of benchmarking to assess the value of this change first. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-17-2021 08:55:25
06-17-2021 08:55:25
@Narsil Curious if you already observed improved performance through this mode? <|||||>So small, not really worth it right now. (a few percent tops) Main roadblock is that the context manager does not exist in torch 1.7 which is still supported by transformers. (So enabling it would mean adding more logic in transformers to basically use inference_mode when available else, no_grad).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
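A sketch of the compatibility layer mentioned above, i.e. falling back to `no_grad` on torch versions that do not expose `inference_mode` (this is an illustration, not what the PR shipped):

```python
import torch

def inference_context():
    # Prefer torch.inference_mode when the installed torch exposes it,
    # otherwise fall back to the regular no_grad context manager.
    if hasattr(torch, "inference_mode"):
        return torch.inference_mode()
    return torch.no_grad()

# sketch of how a pipeline forward pass could use it:
# with inference_context():
#     model_outputs = model(**model_inputs)
```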
transformers
12,221
closed
Tokenizer encoding skips � character
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - tokenizers: @LysandreJik ## Information Model I am using (Bert, XLNet ...): Electra The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") c = "foo � bar" print(f"c[4:5]={c[4:5]}") e = tokenizer(c, return_offsets_mapping=True) print(repr(e)) """ {'input_ids': [101, 29379, 3347, 102], 'token_type_ids': [0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1], 'offset_mapping': [(0, 0), (0, 3), (6, 9), (0, 0)]} """ i = e.char_to_token(4) print(f"i={repr(i)}") # i=None ``` ## Expected behavior Problem: � character was not encoded by the tokenizer. � character should be encoded as some token <UNK> or otherwise. Said character appears in the SquadV2 dataset with ID `5acd29f507355d001abf3774`: ``` Question What is the glyph that Apple's Last Resort font displays? Context Rendering software which cannot process a Unicode character appropriately often displays it as an open rectangle, or the Unicode "replacement character" (U+FFFD, �), to indicate the position of the unrecognized character. Some systems have made attempts to provide more information about such characters. The Apple's Last Resort font will display a substitute glyph indicating the Unicode range of the character, and the SIL International's Unicode Fallback font will display a box showing the hexadecimal scalar value of the character. Answer � ```
06-17-2021 08:48:39
06-17-2021 08:48:39
Maybe @n1t0 has an idea!<|||||>This is totally expected behavior. This tokenizer uses the same cleanup steps that were used in BERT, and this character is specifically removed. Cf here on line 492: https://github.com/huggingface/transformers/blob/32dbb2d/src/transformers/models/bert/tokenization_bert.py#L487-L498
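An illustrative paraphrase of the cleanup step linked above (not the exact library code): the replacement character U+FFFD and control characters are dropped before tokenization, which is why the character never reaches the wordpiece stage.

```python
import unicodedata

def clean_text_like_bert(text: str) -> str:
    # Drop NUL, U+FFFD and control characters, keep everything else.
    output = []
    for char in text:
        cp = ord(char)
        if cp == 0 or cp == 0xFFFD or unicodedata.category(char).startswith("C"):
            continue
        output.append(char)
    return "".join(output)

print(repr(clean_text_like_bert("foo \ufffd bar")))  # 'foo  bar': the \ufffd is silently removed
```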
transformers
12,220
closed
[Trainer.py] tr_loss in trainer with distributed training
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4, not very sure - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.7 - Tensorflow version (GPU?): // - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Distributed training with single node with multi-gpu ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger ## Information Model I am using (Bert, XLNet ...): Bart The problem arises when using: trainer.py ## To reproduce Steps to reproduce the behavior: 1. python -m torch.distributed.launch --nproc_per_node=2 xxx 2. observe the tr_loss <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I'm not sure if it's a bug or a misunderstanding. In `trainer.py`, the `tr_loss` printed in the distributed training is the loss caused in rank = 0. Do we need to reduce the `tr_loss`? ![image](https://user-images.githubusercontent.com/23735761/122360089-b4c14900-cf88-11eb-9d2d-1e8a5b724a17.png)
06-17-2021 08:29:27
06-17-2021 08:29:27
Since it's averaged over all the training mini-batches, it should be a good representation of the real training loss. I'd personally avoid adding any complexity such as a new reduce operation here, since a user can always evaluate on the training set to get the "real" training loss if they absolutely need to. Does that make sense?<|||||>Thank you for your reply. I got it.
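For completeness, if someone did want the cross-rank average asked about here, a minimal sketch of the reduce operation (not part of the Trainer, just an illustration):

```python
import torch
import torch.distributed as dist

def mean_across_ranks(loss: torch.Tensor) -> torch.Tensor:
    # Average a scalar loss over all processes; a no-op when not running distributed.
    if dist.is_available() and dist.is_initialized():
        loss = loss.detach().clone()
        dist.all_reduce(loss, op=dist.ReduceOp.SUM)
        loss = loss / dist.get_world_size()
    return loss
```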
transformers
12,219
closed
Enabling users to provide their own `stopping_criteria` + `logits_processor` to `generate`.
# What does this PR do? Fixes #12118 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-17-2021 07:27:46
06-17-2021 07:27:46
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten (Not urgent, get some rest :))<|||||>Sorry for the late reply here @Narsil - I'm happy with the PR I think :-) If we could add a test that would be great<|||||>@patrickvonplaten Should I merge this?<|||||>I think we shouldn't check anything. If you defined something we pass it `as-is` IMO. It's a power-user feature, the doc specifically mentions this: https://github.com/huggingface/transformers/pull/12219/files#diff-b7601d397d5d60326ce61a9c91beaa2afa026014141052b32b07e1d044fbbe17R801<|||||>But also happy to drop the PR, the issue didn't seem to generate that much traction. If we're scared to introduce new range of bugs, hard to understand stuff, maybe let's just drop it.<|||||>I think it would be nice to merge the PR, but it just doesn't make much sense to me that a default, always-defined value like `max_length=20` would overwrite something that's passed via the `logits_processor`. So instead of dropping the PR we can just ensure that `logits_processor` and `stopping_criteria` that are passed have priority which is intuitive and sensible to me. <|||||>So, you think, we should ```python if logits_processor is None: logits_processor = self._get_logits_processor(...) ``` instead? Makes sense. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Leaving it as closed for now - reopening in case the community expresses interest in this PR again...<|||||>Thanks a lot for taking this over @lvwerra ! Let me know if you need any help with the remaining tests<|||||>Superseded by https://github.com/huggingface/transformers/pull/14779#issuecomment-997914237
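A sketch of the "use the user-passed list as-is, otherwise build the defaults" variant floated in the exchange above (the helper and its signature are illustrative, not the code that was eventually merged):

```python
from transformers import LogitsProcessorList, MinLengthLogitsProcessor

def resolve_logits_processor(config, custom_logits_processor=None):
    # If the caller supplied their own LogitsProcessorList, it wins outright;
    # only otherwise do we fall back to processors derived from the model config.
    if custom_logits_processor is not None:
        return custom_logits_processor
    defaults = LogitsProcessorList()
    if getattr(config, "min_length", 0) and getattr(config, "eos_token_id", None) is not None:
        defaults.append(MinLengthLogitsProcessor(config.min_length, config.eos_token_id))
    return defaults
```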
transformers
12,218
closed
T5 model seq2seq text generation using word embeddings instead of token_ids does not work
Hi there, I trained a MT5ForConditionalGeneration model. During training, I used my own embeddings for encoding (but default embeddings for decoding). However, when I try to generate output using generate function, it will give me an error message. I will post the code and error message in the following: Here is the code for model training: `outputs = self.encoder2(inputs_embeds=context, attention_mask=input_mask, labels=padded_labels)` Where the context is similar to batch of token_ids but instead they are embeddings. The labels are target sequence token_ids. The training works fine without any issues. And here is the line I tried to generate using the model: `outputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1)` And once the program hits the above line, I will get the following error message: > outputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1) > File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context > return func(*args, **kwargs) > File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 913, in generate > input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id > File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 422, in _prepare_decoder_input_ids_for_generation > torch.ones((input_ids.shape[0], 1), dtype=torch.long, device=input_ids.device) * decoder_start_token_id > AttributeError: 'NoneType' object has no attribute 'shape' It seems the model is not handling this case property. Any help would be appreciated. Thanks
06-17-2021 06:27:27
06-17-2021 06:27:27
Hey @jerry3chen, Can you post a fully reproducible code snippet so that I can take a look? :-)<|||||>Hi @patrickvonplaten, I will post some more detailed codes. But this is downstream task so it is probably not ideal to have all of the code. I will just post down all of the parts that involve the t5model. Here is where I initialized the t5 model ` enc2 = MT5ForConditionalGeneration.from_pretrained('google/mt5-small') ` Then is it passed to a bigger model: ` model = Gat2Seq(enc,enc2,vocab.word2id('<pad>'),vocab.word2id('</s>')) ` ` class Gat2Seq(nn.Module): def __init__(self, encoder, encoder2, pad_idx, eos_idx, teacher_forcing = 0.5): super().__init__() self.encoder = encoder self.encoder2 = encoder2 ` During training, I have: `context = self.encoder(graph, art_lengths) outputs = self.encoder2(inputs_embeds=context, attention_mask=input_mask, labels=padded_labels)` Where context is the shape of [8, 50, 512] coming from previous encoder(8 is the batch size, 50 is the sentence max length, 512 is the embedding size default from mt5tokenizer). padded_labels has shape of [8, 20](8 is the batch size, 20 is the maximum target sequence length). It is batch of target sentence token_ids that I want the model to generate. I wanted the t5model to treated the context as embedded tokens and does it's own encode/decode for text generation. The training step works fine and I am able to see reasonable decrease in outputs.loss. Finally when I have some trained models, I ran this time to generate text: ` outputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1) ` Where context here is exact the same as the one used in training. However, I will get the following error when program hits the generation line: > File "pred.py", line 452, in <module> > main() > File "pred.py", line 448, in main > setup_predicting(model, data_loader, hps, vocab, f.split('/')[-1] + '_model_output.txt') > File "pred.py", line 64, in setup_predicting > run_predicting(model, data_loader, hps, vocab, save_f) > File "pred.py", line 118, in run_predicting > raise e > File "pred.py", line 106, in run_predicting > outputs = model.forward(G,lengths,labels,predicting=True) # [n_snodes, 2] > File "/scratch/jerryc/jerryc/gat2seq/HeterSumGraph-master-mod-att-TV-char/HiGraphMod.py", line 470, in forward > outputs = self.encoder2.generate(input_ids=None, inputs_embeds=context, attention_mask=input_mask, bos_token_id=0, pad_token_id=0, eos_token_id=1) > File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context > return func(*args, **kwargs) > File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 913, in generate > input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id > File "/scratch/jerryc/jerryc/venv_py3.7/lib/python3.7/site-packages/transformers/generation_utils.py", line 422, in _prepare_decoder_input_ids_for_generation > torch.ones((input_ids.shape[0], 1), dtype=torch.long, device=input_ids.device) * decoder_start_token_id > AttributeError: 'NoneType' object has no attribute 'shape' Hope this is enough for you to diagnose the issue. Thanks, Jerry<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>hello, I face the same problem. Could you give me any suggestions?<|||||>Hey @jerry3chen, @yuto3o, Could you please provide a complete, but **minimal** reproducible code snippet, so that I can easily reproduce the bug? Small non-executeable code snippets are not enough to efficiently debug the problem. Thanks!<|||||>@patrickvonplaten @yuto3o @jerry3chen Hello, I also face the same problem. However, I found that the error doesn't occur if I pass `decoder_input_ids` consisting of `pad_token_id` to the `generate`. The minimal reproducible code snippets are as follows. My environment ``` transformers 4.12.0 torch 1.8.0 ``` **reproducible code for the error** ```py from transformers import ( T5ForConditionalGeneration, T5Tokenizer, ) model = T5ForConditionalGeneration.from_pretrained("sonoisa/t5-base-japanese") tokenizer = T5Tokenizer.from_pretrained("sonoisa/t5-base-japanese", is_fast=True) # the example sentence is "It's sunny today" in English tokenized_inputs = tokenizer(["今日は良い天気です"], return_tensors='pt') # create input embedding instead of passing input_ids inputs_embeds = model.get_input_embeddings()(tokenized_inputs["input_ids"]) output_ids = model.generate( inputs_embeds=inputs_embeds, attention_mask=tokenized_inputs["attention_mask"] ) ``` > --------------------------------------------------------------------------- > AttributeError Traceback (most recent call last) > <ipython-input-32-e369f62c37b6> in <module> > 1 inputs_embeds = model.get_input_embeddings()(tokenized_inputs["input_ids"]) > ----> 2 output_ids = model.generate( > 3 inputs_embeds=inputs_embeds, > 4 attention_mask=tokenized_inputs["attention_mask"] > 5 ) > > ~/anaconda3/envs/aitd/lib/python3.8/site-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs) > 25 def decorate_context(*args, **kwargs): > 26 with self.__class__(): > ---> 27 return func(*args, **kwargs) > 28 return cast(F, decorate_context) > 29 > > ~/anaconda3/envs/aitd/lib/python3.8/site-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, max_new_tokens, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, synced_gpus, **model_kwargs) > 911 input_ids = model_kwargs.pop("decoder_input_ids") > 912 else: > --> 913 input_ids = self._prepare_decoder_input_ids_for_generation( > 914 input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id > 915 ) > > ~/anaconda3/envs/aitd/lib/python3.8/site-packages/transformers/generation_utils.py in _prepare_decoder_input_ids_for_generation(self, input_ids, decoder_start_token_id, bos_token_id) > 422 decoder_start_token_id = self._get_decoder_start_token_id(decoder_start_token_id, bos_token_id) > 423 decoder_input_ids = ( > --> 424 torch.ones((input_ids.shape[0], 1), dtype=torch.long, device=input_ids.device) * decoder_start_token_id > 425 ) > 426 return decoder_input_ids > > AttributeError: 'NoneType' object has no attribute 'shape' > **How to 
fix it** ```py from transformers import ( T5ForConditionalGeneration, T5Tokenizer, ) model = T5ForConditionalGeneration.from_pretrained("sonoisa/t5-base-japanese") tokenizer = T5Tokenizer.from_pretrained("sonoisa/t5-base-japanese", is_fast=True) tokenized_inputs = tokenizer(["今日は良い天気です"], return_tensors='pt') # It's sunny today inputs_embeds = model.get_input_embeddings()(tokenized_inputs["input_ids"]) # **NOTE**: pad_token_id is used as decoder_start_token_id dummy_decoder_input_ids = torch.tensor([[tokenizer.pad_token_id]]) output_ids = model.generate( inputs_embeds=inputs_embeds, attention_mask=tokenized_inputs["attention_mask"], decoder_input_ids=dummy_decoder_input_ids ) ``` > #output_ids > tensor([[ 0, 32099, 876, 4, 5, 2262, 32098, 876, 4, 2262, > 1]]) **When I pass `input_ids` to `generate`** I can get the same result when I pass `input_ids`. ```py from transformers import ( T5ForConditionalGeneration, T5Tokenizer, ) model = T5ForConditionalGeneration.from_pretrained("sonoisa/t5-base-japanese") tokenizer = T5Tokenizer.from_pretrained("sonoisa/t5-base-japanese", is_fast=True) tokenized_inputs = tokenizer(["今日は良い天気です"], return_tensors='pt') # It's sunny today output_ids = model.generate( input_ids=tokenized_inputs["input_ids"], attention_mask=tokenized_inputs["attention_mask"] ) ``` > #output_ids > tensor([[ 0, 32099, 876, 4, 5, 2262, 32098, 876, 4, 2262, > 1]])<|||||>@ichiroex, Thanks for the nicely reproducible code snippet - this is indeed a bug and should be fixed.<|||||>PR to fix this: #14443 <|||||>@patrickvonplaten Thank you!!
transformers
12,217
closed
fix pt-1.9.0 `add_` deprecation
This PR fixes a new pt-1.9.0 `add_` deprecation in several places. The deprecation warnings: ``` UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:1025.) exp_avg_sq.mul_(beta2t).add_(1.0 - beta2t, update) ``` The new API is at https://pytorch.org/docs/stable/generated/torch.Tensor.add_.html ## Backward compatibility alert I tracked this API down to need pt-1.5.0 or higher Requesting an easier way to do this kind of process: https://github.com/pytorch/pytorch/issues/60149 I still have no idea which minimal pytorch version `transformers` is meant to support. Merging this PR will push it at least to `torch>=1.5.0`. Last I [checked](https://github.com/huggingface/transformers/pull/7985) some 8 months ago we barely supported `torch>=1.4.0`. If you're OK with `torch>=1.5.0` then we should revive and update [this](https://github.com/huggingface/transformers/pull/7985) or make a new PR or fix it here. ## Readability Unfortunately since we have to use the named arg now, the autoformatter makes the code less readable, by forcing whitespace in the expression. I wrote these as: ``` exp_avg.mul_(group["beta1"]).add_(update, alpha=1-group["beta1"]) ``` to make it clear, it's an expression, but it made it into: ``` exp_avg.mul_(group["beta1"]).add_(update, alpha=1 - group["beta1"]) ``` now it looks like alpha is 1. grrrr. perhaps `()` are needed for improved readability. i.e.: ``` exp_avg.mul_(group["beta1"]).add_(update, alpha=(1 - group["beta1"])) ``` @sgugger, @LysandreJik
06-17-2021 03:39:41
06-17-2021 03:39:41
Can we had an import error in AdaFactor to error if the version is les than 1.5 then? It seems the code is only there.<|||||>Sure, I'm just not sure where we are at `transformers`-wise with minimal pt version, so it might be simpler to require pt-1.5+, but the suggestion you made works too for now. Would it help to add `()` for `alpha` as described in the last section of OP? <|||||>Yes, I missed that part. Adding parenthesis is fine!<|||||>@sgugger, it's in AdamW too - it's just whoever coded it hasn't checked back-compat (they didn't know), i.e. search for `add_` - so I think we either need a wrapper or cut off at pt-1.5.0 project-wise. Found this now, as I was adding `()` for clarity. See the new diff.<|||||>as discussed on slack for now adding: ``` require_version("torch>=1.5.0") # add_ with alpha ``` for AdamW and Adafactor.
transformers
12,216
closed
Fix blenderbot checkpoint convert codes.
#12203
06-17-2021 00:18:16
06-17-2021 00:18:16
transformers
12,215
closed
Missing PredictionHeadTransform for BertGenerationDecoder
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0 ### Who can help @LysandreJik ## Information Model I am using (Bert, XLNet ...): Bert, BertForGeneration It seems the [`BertPredictionHeadTransform`](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert/modeling_bert.py#L645) layer (dense+layer norm) is not used in [BertGenerationDecoder](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert_generation/modeling_bert_generation.py#L430), while it is used in [the original BERT](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert/modeling_bert.py#L657). Is this expected? ## To reproduce Steps to reproduce the behavior: ```python3 from transformers import BertForPreTraining, BertGenerationDecoder bert = BertForPreTraining.from_pretrained('bert-base-uncased') bert >>> .... (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) (pooler): BertPooler( (dense): Linear(in_features=768, out_features=768, bias=True) (activation): Tanh() ) ) (cls): BertPreTrainingHeads( (predictions): BertLMPredictionHead( (transform): BertPredictionHeadTransform( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) (decoder): Linear(in_features=768, out_features=30522, bias=True) ) (seq_relationship): Linear(in_features=768, out_features=2, bias=True) ) ) bertdecoder = BertGenerationDecoder.from_pretrained('bert-base-uncased', is_decoder=True) bertdecoder >>> .... (output): BertOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) ) (lm_head): BertGenerationOnlyLMHead( (decoder): Linear(in_features=768, out_features=30522, bias=True) ) ) ``` ## Expected behavior BertGenerationDecoder has the same transform layer before the final LM head. ```python3 (transform): BertPredictionHeadTransform( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True) ) ```
06-16-2021 23:37:41
06-16-2021 23:37:41
Similarly, `token_type_embeddings` is also missing for [BertGenerationEmbeddings](https://github.com/huggingface/transformers/blob/v4.6.0/src/transformers/models/bert_generation/modeling_bert_generation.py#L133).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @j-min, `BertForGeneration` was added so that the checkpoints of https://huggingface.co/blog/warm-starting-encoder-decoder can be used in Transformers. Those models don't really need `token_type_ids` since they are generation models, and also the lm head is different.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,214
closed
Getting 404 Client Error when loading BaptisteDoyen/camembert-base-xnli
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.1 - Platform: Linux-5.11.16-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.5 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information The problem arises when using: * [ ] the official example scripts: https://huggingface.co/BaptisteDoyen/camembert-base-xlni The tasks I am working on is: * [ ] an official GLUE/SQUaD task ## To reproduce Steps to reproduce the behavior: ``` from transformers import pipeline classifier = pipeline("zero-shot-classification", model="BaptisteDoyen/camembert-base-xnli") ``` returns: ``` 404 Client Error: Not Found for url: https://huggingface.co/BaptisteDoyen/camembert-base-xnli/resolve/main/config.json ```
06-16-2021 22:39:13
06-16-2021 22:39:13
There's a typo in your model identifier: ```diff - BaptisteDoyen/camembert-base-xlni + BaptisteDoyen/camembert-base-xnli ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,213
closed
[Question] When pretraining a language model, can I choose to mask specific words?
Hi there, I apologize if this is answered anywhere. I need to pretrain a language model with some specific words masked. I was wondering if this is currently supported? Since language models are trained in an unsupervised way, I saw in the examples that the provided datasets don't need any labels. However, I was thinking if it would be possible to create my own (sentence_with_masks, masked_words) pairs. If library isn't currently supporting that, may anyone point me to a file so that I can make my modifications? Thanks in advance!
06-16-2021 22:37:44
06-16-2021 22:37:44
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
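Since the issue went stale without an answer, here is a hedged sketch of one way to build (sentence_with_masks, masked_words) pairs yourself: mask only chosen words and set the labels to -100 everywhere else, so the MLM loss is computed only on those positions. The checkpoint and word list are just examples, and the sketch assumes each target word maps to a single wordpiece.

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example checkpoint
words_to_mask = {"paris", "london"}  # example target words (assumed single wordpieces)

def mask_specific_words(text: str):
    enc = tokenizer(text, return_tensors="pt")
    input_ids = enc["input_ids"]
    labels = torch.full_like(input_ids, -100)  # -100 = position ignored by the MLM loss
    for i, token_id in enumerate(input_ids[0].tolist()):
        if tokenizer.convert_ids_to_tokens(token_id) in words_to_mask:
            labels[0, i] = token_id                     # predict the original token here
            input_ids[0, i] = tokenizer.mask_token_id   # replace it with [MASK] in the input
    enc["labels"] = labels
    return enc

batch = mask_specific_words("I visited Paris last summer.")
```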
transformers
12,212
open
Clearer indication for overridden method in generation
The expectation for the `prepare_inputs_for_generation` function to be overridden can be made clearer by changing https://github.com/huggingface/transformers/blob/700cee344691afc41f68aa18fedea463b22f95f1/src/transformers/generation_utils.py#L369-L374 to raise a `NotImplementedError` that provides the information mentioned in the function's comment. @patrickvonplaten
06-16-2021 21:31:49
06-16-2021 21:31:49
Also putting this in the "Fix generation docs" task basket<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
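A sketch of what the proposed error could look like (the class name is illustrative; the actual default lives on the generation mixin):

```python
class GenerationHookSketch:
    # Illustration of the suggestion: fail loudly when the hook is not overridden,
    # instead of silently passing input_ids through.
    def prepare_inputs_for_generation(self, input_ids, **kwargs):
        raise NotImplementedError(
            f"{self.__class__.__name__} must override `prepare_inputs_for_generation` "
            "to be usable with `generate()` (e.g. to handle past key values and attention masks)."
        )
```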
transformers
12,211
closed
[WIP] tweak model repo saving
06-16-2021 21:28:45
06-16-2021 21:28:45
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This was incorporated by @sgugger and @LysandreJik in another PR
transformers
12,210
open
Better documentation for generation parameter defaults
# Generation default params documentation It's very hard to follow how the generation parameters are set when running generation. When looking at the official function: https://github.com/huggingface/transformers/blob/700cee344691afc41f68aa18fedea463b22f95f1/src/transformers/generation_utils.py#L644 all parameters default to `None`, but are then later overwritten by the config's default parameters, *e.g.* here: https://github.com/huggingface/transformers/blob/700cee344691afc41f68aa18fedea463b22f95f1/src/transformers/generation_utils.py#L878 . This is very hard to trace or follow. We should at least put a warning or note that clearly states that all generation parameters (and actually all forward) parameters **always** default to the config. What do you think @LysandreJik @patil-suraj @sgugger ? If you agree, I'll open a PR for it :-)
06-16-2021 21:09:14
06-16-2021 21:09:14
Fine by me! I think it can just be stated at the beginning before each arg is documented.<|||||>Putting this is the "Improve generation task basket" so that this is handled once the generation docs are improved as well<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
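For readers tracing the behaviour, a toy illustration of the fallback pattern the proposed note would document (values are made up):

```python
class DummyGenerationConfig:
    max_length = 20  # made-up defaults standing in for the model config
    num_beams = 1

def resolve(arg_value, config_value):
    # Any generation argument left as None silently falls back to the config.
    return arg_value if arg_value is not None else config_value

config = DummyGenerationConfig()
print(resolve(None, config.max_length))  # 20: taken from the config
print(resolve(4, config.num_beams))      # 4: the explicitly passed value wins
```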
transformers
12,209
closed
The kernel appears to have died. It will restart automatically. from transformers import pipeline
I am working in a Jupyter notebook. With the code: `from transformers import pipeline` I get: "The kernel appears to have died. It will restart automatically." Can someone explain what I have to do to fix this? I have already installed tensorflow and transformers.
06-16-2021 20:19:17
06-16-2021 20:19:17
Could you share a colab with a reproducible code example so that we may take a look? Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,208
closed
AutoTokenizer: infer the class from the tokenizer config if possible
# What does this PR do? This PR adds the functionality to load a tokenizer with `AutoTokenizer.from_pretrained` after saving it locally (without saving the model config in the same folder). To do this, the proper tokenizer class is saved in `tokenizer_config.json` and the `AutoTokenizer.from_pretrained` method will first look in this file before defaulting to the model config (like before).
06-16-2021 20:09:22
06-16-2021 20:09:22
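The workflow this PR enables looks like the following (a sketch; the local directory name is arbitrary):

```python
from transformers import AutoTokenizer, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./my-tokenizer")  # writes tokenizer_config.json including the tokenizer class

# No model config.json is needed in ./my-tokenizer: AutoTokenizer first reads the
# tokenizer class recorded in tokenizer_config.json and instantiates it directly.
reloaded = AutoTokenizer.from_pretrained("./my-tokenizer")
```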
transformers
12,207
closed
Pipeline update & tests
Image classification models that have fewer than 5 labels currently cannot run with the pipeline defaults, as the pipeline uses a top_k of 5 by default. This PR caps top_k at the number of labels of the model.
06-16-2021 19:59:32
06-16-2021 19:59:32
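A sketch of the idea (not the exact diff): cap the requested number of classes before taking the top-k. Here `probs` is assumed to be a 1-D tensor of softmaxed scores.

```python
import torch

def postprocess(probs: torch.Tensor, model, top_k: int = 5):
    # Never request more classes than the model defines.
    top_k = min(top_k, model.config.num_labels)
    scores, ids = probs.topk(top_k)
    return [
        {"score": score.item(), "label": model.config.id2label[idx.item()]}
        for score, idx in zip(scores, ids)
    ]
```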
transformers
12,206
closed
Add TFHubertModel
# What does this PR do? This PR adds the TFHubert Model. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @sgugger @Rocketknight1 Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-16-2021 19:43:56
06-16-2021 19:43:56
Hey @will-rice, Wow that was quick! :D Can you remove the [WIP] whenever the PR is ready for review? :-)<|||||>One thing that's different from the PyTorch version is I couldn't use the copy comments because I added the type to the config arguments in Wav2Vec2. If I retained the copy comments it would overwrite HubertConfig with Wav2Vec2Config. Which makes sense, but I wondered if there was a way to fix this so I could keep the copy comments, but ignore the config type.<|||||>I added back the WIP based on the TFWav2Vec2 [bugs](https://github.com/huggingface/transformers/issues/12264#issuecomment-864611327). I will update this with the fixes when those are corrected. <|||||>@patrickvonplaten I believe this one is ready for review now. I updated it with the wav2vec2 bug fixes.<|||||>@patrickvonplaten I can definitely add the copy comments. The issue I ran into was due to Wav2Vec2Config typing in TFWav2Vec2 so the copy script overwrites the TFHubertConfig. I didn't look in depth at the copy code, but I was thinking that we could allow the copy to ignore typing.<|||||>Removing the config typing from TFWav2Vec2 would work though and that's how it is in PyTorch.<|||||>> Removing the config typing from TFWav2Vec2 would work though and that's how it is in PyTorch. Ah you can add something like `with Wav2Vec2->Hubert` which should correctly replace the class name when copying<|||||>Left a comment here: https://github.com/huggingface/transformers/pull/12206/files?file-filters%5B%5D=.py#r667074787 :-) That's how it should work well with the configs<|||||>> Left a comment here: https://github.com/huggingface/transformers/pull/12206/files?file-filters%5B%5D=.py#r667074787 :-) That's how it should work well with the configs 🤦‍♂️ Thanks! I will update it now.
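For reference, the copy-check mechanism being discussed works through a comment of this form; the class names below are illustrative only, the point is the `with Wav2Vec2->Hubert` suffix, which tells the script to substitute the names before comparing against the source class.

```python
import tensorflow as tf

# Copied from transformers.models.wav2vec2.modeling_tf_wav2vec2.TFWav2Vec2EncoderLayer with Wav2Vec2->Hubert
class TFHubertEncoderLayer(tf.keras.layers.Layer):
    pass  # body copied from the Wav2Vec2 layer, with the names substituted
```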
transformers
12,205
closed
[Docs] fixed broken link
# What does this PR do? Fixes #12200 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Documentation: @sgugger
06-16-2021 17:41:30
06-16-2021 17:41:30
Thanks again!
transformers
12,204
closed
(#12203) Fix blenderbot checkpoint convert codes.
https://github.com/huggingface/transformers/issues/12203
06-16-2021 17:03:04
06-16-2021 17:03:04
transformers
12,203
closed
blenderbot checkpoint convert script has bug.
- error ![image](https://user-images.githubusercontent.com/38183241/122261506-70459700-cf0f-11eb-9c51-d30fc6362352.png) - original parlai checkpoint ![image](https://user-images.githubusercontent.com/38183241/122261541-79ceff00-cf0f-11eb-938f-0be13cadd471.png) - so the code should be fixed as below. ```python def rename_layernorm_keys(sd): keys = [ "encoder.norm_embeddings.weight", "encoder.norm_embeddings.bias", "decoder.norm_embeddings.weight", "decoder.norm_embeddings.bias", ] for k in keys: v = sd.pop(k) new_k = "model." + k.replace("norm_embeddings", "layer_norm") assert new_k not in sd sd[new_k] = v IGNORE_KEYS = ["START"] ```
06-16-2021 17:01:53
06-16-2021 17:01:53
Now I have almost fixed the bug. I will PR soon.
transformers
12,202
closed
Training in google colab with TPU using TFTrainer fails with
## Environment info - `transformers` version: 4.6.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Using GPU in script?: Using TPU - Using distributed or parallel set-up in script?: I assume Yes, under the hood ### Who can help - trainer: @sgugger @Rocketknight1 ## Information Model I am using (Albert): The problem arises when using: * [ ] my own modified scripts The tasks I am working on is: * [ ] my own task or dataset ## To reproduce I'm trying to train classification model on TPU using TFTrainer, it fails with the following error: > Trying to run metric.update_state in replica context when the metric was not created in TPUStrategy scope. Make sure the keras Metric is created in TPUstrategy scope. I tried training without eval and it finishes without an error but the model is not really trained and results are poor. Also tried to train with eval and without compute_metrics but the same error is thrown. ``` from transformers import TFTrainer, TFTrainingArguments from transformers import TFAutoModelForSequenceClassification def compute_metrics(pred): labels = pred.label_ids preds = pred.predictions.argmax(-1) precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='weighted') acc = accuracy_score(labels, preds) return { 'accuracy': acc, 'precision': precision, 'recall': recall, 'f1': f1 } training_args = TFTrainingArguments( tpu_num_cores=8, output_dir=output_dir, # output directory num_train_epochs=3, # total number of training epochs per_device_train_batch_size=3, # batch size per device during training per_device_eval_batch_size=3, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir=logging_dir, # directory for storing logs logging_steps=10, evaluation_strategy="steps", eval_steps=500, save_steps=3000, load_best_model_at_end=True, metric_for_best_model="f1", learning_rate=1e-5 ) with training_args.strategy.scope(): model = TFAutoModelForSequenceClassification.from_pretrained(modelName, num_labels=len(label_dict), output_attentions=False, output_hidden_states=False) trainer = TFTrainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above compute_metrics=compute_metrics, train_dataset=train_dataset, # training dataset eval_dataset=val_dataset, # evaluation dataset ) trainer.train() ``` ## Expected behavior I would expect to train successfully on TPU
06-16-2021 14:43:58
06-16-2021 14:43:58
Hi! We're trying to move away from using TFTrainer for TensorFlow and instead train models with the native Keras API. We have a full example using the Keras approach here: https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification Training on TPU with this example works correctly, but there are some issues with Keras predictions on TPU that we're actively working on. If you encounter these (the output object contains None fields that should contain values), you can try moving any `predict` calls out of the `strategy.scope()`, or saving the model and doing the predictions on a GPU or CPU instance instead.<|||||>Is there any chance this will be fixed? TF/Trainer has many things that are useful and easier to use.<|||||>Unfortunately, we're probably going to be moving away from TFTrainer entirely - it's actually likely to be deprecated in the very near future! We will, however, be making ongoing adjustments to our models and data preprocessing to ensure people's workflows remain smooth!<|||||>Sounds good. Thank you very much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hi! We're trying to move away from using TFTrainer for TensorFlow and instead train models with the native Keras API. We have a full example using the Keras approach here: https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification > > Training on TPU with this example works correctly, but there are some issues with Keras predictions on TPU that we're actively working on. If you encounter these (the output object contains None fields that should contain values), you can try moving any `predict` calls out of the `strategy.scope()`, or saving the model and doing the predictions on a GPU or CPU instance instead. `predict` works slowly outside of `strategy.scope()`. Is there any other way to make `predict` working with TPU ? I tried to create custom loop for prediction using `tf.function` - it doesn't work with TPU.<|||||>Not easily, unfortunately. This is a known issue at our end and we're hoping to implement a fix, but in the meantime you can try exporting your trained model to a GPU instance and running `predict()` there.
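A rough sketch of the Keras-based workflow suggested in the replies above. The TPU setup, toy dataset, and compile arguments are assumptions for illustration (and may need adjusting depending on the transformers version), not taken from the thread; the key point is that `predict()` runs outside the TPU strategy scope.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Standard Colab-style TPU setup (assumption for the sketch)
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
texts, labels = ["great movie", "terrible movie"], [1, 0]
enc = dict(tokenizer(texts, padding="max_length", max_length=64, truncation=True, return_tensors="tf"))
train_dataset = tf.data.Dataset.from_tensor_slices((enc, labels)).batch(2, drop_remainder=True)

with strategy.scope():
    model = TFAutoModelForSequenceClassification.from_pretrained("albert-base-v2", num_labels=2)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-5),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
    model.fit(train_dataset, epochs=1)

# As suggested above: run predict() outside the TPU strategy scope (or on a
# CPU/GPU instance) to avoid the known issues with Keras predictions on TPU.
predictions = model.predict(train_dataset)
```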
transformers
12,201
closed
ValueError: char_to_token() is not available when using Python based tokenizers ; XLNetTokenizer and encodings.char_to_token bug ;
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - transformers version: 4.6.1 - Platform: Windows - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1 , GPU enabled - Tensorflow version (GPU?): NA - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @LysandreJik ## Information Model I am using (Bert, XLNet ...): XLNet , "xlnet-base-cased" The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) my own modified script, but the issue can be reproduced as given below. encodings.char_to_token(i, answers[i]['answer_start']) The error I get is : ValueError: char_to_token() is not available when using Python based tokenizers - This issue is very similar to #9326 The tasks I am working on is: * [SQUAD ] an official GLUE/SQUaD task: (give the name) * A self-curated QA dataset in SQUaD format Steps to reproduce the behavior: Run the code snippet given below : ``` import json from pathlib import Path from transformers import XLNetTokenizer, XLNetForQuestionAnsweringSimple import torch def read_squad(path): path = Path(path) with open(path, 'rb') as f: squad_dict = json.load(f) contexts = [] questions = [] answers = [] for group in squad_dict['data']: for passage in group['paragraphs']: context = passage['context'] for qa in passage['qas']: question = qa['question'] for answer in qa['answers']: contexts.append(context) questions.append(question) answers.append(answer) return contexts, questions, answers train_contexts, train_questions, train_answers = read_squad('train-v2.0.json') val_contexts, val_questions, val_answers = read_squad('dev-v2.0.json') def add_end_idx(answers, contexts): for answer, context in zip(answers, contexts): gold_text = answer['text'] start_idx = answer['answer_start'] end_idx = start_idx + len(gold_text) # sometimes squad answers are off by a character or two – fix this if context[start_idx:end_idx] == gold_text: answer['answer_end'] = end_idx elif context[start_idx - 1:end_idx - 1] == gold_text: answer['answer_start'] = start_idx - 1 answer['answer_end'] = end_idx - 1 # When the gold label is off by one character elif context[start_idx - 2:end_idx - 2] == gold_text: answer['answer_start'] = start_idx - 2 answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters add_end_idx(train_answers, train_contexts) add_end_idx(val_answers, val_contexts) device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') model_name = "xlnet-base-cased" tokenizer = XLNetTokenizer.from_pretrained(model_name) model = XLNetForQuestionAnsweringSimple.from_pretrained(model_name) model.to(device) train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True, max_length= 512) val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True, max_length= 512) def add_token_positions(encodings, answers): start_positions = [] end_positions = [] for i in range(len(answers)): start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1)) # if None, the answer passage has been truncated if start_positions[-1] is None: start_positions[-1] = tokenizer.model_max_length if end_positions[-1] is None: end_positions[-1] = tokenizer.model_max_length 
encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) add_token_positions(train_encodings, train_answers) add_token_positions(val_encodings, val_answers) ``` ![image](https://user-images.githubusercontent.com/19966604/122236270-ffd15280-cedb-11eb-8549-3b2856c03d4f.png) ## Expected behavior - encodings.char_to_token(i, answers[i]['answer_start']) should return some value - char_to_token should not be None in this case, as it is not with other tokenizers ValueError: char_to_token() is not available when using Python based tokenizers encodings._encoding seems to be None
06-16-2021 14:23:32
06-16-2021 14:23:32
Why are you not using the fast tokenizer? The error message tells you that the feature `char_to_token` is not available for the slow (i.e. python) tokenizers because nobody has implemented it yet.<|||||>@cronoik , I ran the same with Fast Tokenizer (XLNetTokenizerFast) on "xlnet-base-cased" , although char_to_token() was available this time , there seems to be some problem with XLNetTokenizerFast . ``` start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1)) ``` While debugging this snippet from the above code, I observed using **XLNetTokenizerFast on "xlnet-base-cased"** that `encodings.char_to_token(i, answers[i]['answer_start'])` is None for most of the cases . (90%) . The output is None , hence encoding["start_position"] and "end_position" have erronous values . and just changing the model, i.e, using **AutoTokenizer on "roberta-base"** , Unlike above I saw these valuse to be finite and not None . And I was further able to fine tune the model . Do you have some insight on this ? <|||||>There is to 95% nothing wrong with the tokenizer. You are just using it the wrong way. Please give us an example that leads to None. The `char_to_token` returns none when you ask for a whitespace position and you use a tokenizer that does not support whitespace.<|||||>Sure, Try these 2 code snippets , with XLNetTokenizerFast: ``` import json from pathlib import Path from transformers import XLNetTokenizerFast, XLNetForQuestionAnsweringSimple # from transformers import BigBirdTokenizerFast, BigBirdForQuestionAnswering import torch def read_squad(path): path = Path(path) with open(path, 'rb') as f: squad_dict = json.load(f) contexts = [] questions = [] answers = [] for group in squad_dict['data']: for passage in group['paragraphs']: context = passage['context'] for qa in passage['qas']: question = qa['question'] for answer in qa['answers']: contexts.append(context) questions.append(question) answers.append(answer) return contexts, questions, answers train_contexts, train_questions, train_answers = read_squad('train-v2.0.json') val_contexts, val_questions, val_answers = read_squad('dev-v2.0.json') def add_end_idx(answers, contexts): for answer, context in zip(answers, contexts): gold_text = answer['text'] start_idx = answer['answer_start'] end_idx = start_idx + len(gold_text) # sometimes squad answers are off by a character or two – fix this if context[start_idx:end_idx] == gold_text: answer['answer_end'] = end_idx elif context[start_idx - 1:end_idx - 1] == gold_text: answer['answer_start'] = start_idx - 1 answer['answer_end'] = end_idx - 1 # When the gold label is off by one character elif context[start_idx - 2:end_idx - 2] == gold_text: answer['answer_start'] = start_idx - 2 answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters add_end_idx(train_answers, train_contexts) add_end_idx(val_answers, val_contexts) device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu') model_name = "xlnet-base-cased" tokenizer = XLNetTokenizerFast.from_pretrained(model_name) model = XLNetForQuestionAnsweringSimple.from_pretrained(model_name) model.to(device) train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True, max_length= 512) val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True, max_length= 512) def add_token_positions(encodings, answers): start_positions = [] end_positions = [] for i in 
range(len(answers)): start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1)) # if None, the answer passage has been truncated if start_positions[-1] is None: start_positions[-1] = tokenizer.model_max_length if end_positions[-1] is None: end_positions[-1] = tokenizer.model_max_length encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) add_token_positions(train_encodings, train_answers) add_token_positions(val_encodings, val_answers) ``` With roberta-base: ``` import json from pathlib import Path from transformers import AutoTokenizer, AutoModelForQuestionAnswering # from transformers import BigBirdTokenizerFast, BigBirdForQuestionAnswering import torch def read_squad(path): path = Path(path) with open(path, 'rb') as f: squad_dict = json.load(f) contexts = [] questions = [] answers = [] for group in squad_dict['data']: for passage in group['paragraphs']: context = passage['context'] for qa in passage['qas']: question = qa['question'] for answer in qa['answers']: contexts.append(context) questions.append(question) answers.append(answer) return contexts, questions, answers train_contexts, train_questions, train_answers = read_squad('train-v2.0.json') val_contexts, val_questions, val_answers = read_squad('dev-v2.0.json') def add_end_idx(answers, contexts): for answer, context in zip(answers, contexts): gold_text = answer['text'] start_idx = answer['answer_start'] end_idx = start_idx + len(gold_text) # sometimes squad answers are off by a character or two – fix this if context[start_idx:end_idx] == gold_text: answer['answer_end'] = end_idx elif context[start_idx - 1:end_idx - 1] == gold_text: answer['answer_start'] = start_idx - 1 answer['answer_end'] = end_idx - 1 # When the gold label is off by one character elif context[start_idx - 2:end_idx - 2] == gold_text: answer['answer_start'] = start_idx - 2 answer['answer_end'] = end_idx - 2 # When the gold label is off by two characters add_end_idx(train_answers, train_contexts) add_end_idx(val_answers, val_contexts) device = torch.device('cpu') model_name = "roberta-base" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForQuestionAnswering.from_pretrained(model_name) model.to(device) train_encodings = tokenizer(train_contexts, train_questions, truncation=True, padding=True, max_length= 512) val_encodings = tokenizer(val_contexts, val_questions, truncation=True, padding=True, max_length= 512) def add_token_positions(encodings, answers): start_positions = [] end_positions = [] for i in range(len(answers)): start_positions.append(encodings.char_to_token(i, answers[i]['answer_start'])) end_positions.append(encodings.char_to_token(i, answers[i]['answer_end'] - 1)) # if None, the answer passage has been truncated if start_positions[-1] is None: start_positions[-1] = tokenizer.model_max_length if end_positions[-1] is None: end_positions[-1] = tokenizer.model_max_length encodings.update({'start_positions': start_positions, 'end_positions': end_positions}) add_token_positions(train_encodings, train_answers) add_token_positions(val_encodings, val_answers) ``` In the two snippets above, check the value of "start_positions" , "end_positions" variables in "add_token_positions" function after its final iteration , and compare them . Its tokenizer.model_max_length for most cases in XLNet one. 
Now why is it that the tokenizer (specifically, encodings.char_to_token(i, answers[i]['answer_start'])) returns finite values for roberta, but None with the other tokenizer? All that was changed was the tokenizer: encodings.char_to_token(i, answers[i]['answer_start']) is None for the xlnet model and not for roberta. <|||||>Please give us an example of your text that produces None. You have already shown us your code. <|||||>[train-v2.0.txt](https://github.com/huggingface/transformers/files/6671676/train-v2.0.txt) Consider this slice of the SQuAD 2.0 dataset, roughly 65 contexts and their qas (change the file extension from .txt to .json). I'm working on the complete SQuAD 2.0 dataset, but this json will reproduce the issue.<|||||>much help <|||||>Can someone help me understand the purpose of the "add_token_position" function? I've read multiple articles and watched videos and they all mention "we need to add the token position" but I honestly don't understand that explanation. For example, if we try to fine-tune a bert-base-uncased, the start_position for train_context[0] is 67 and the end_position is 70 (subtracting 1 to account for the space). I'm fairly certain these numbers represent indices, but indices of what and in what list? Thanks for your help. <|||||>Any update @cronoik on why the XLNet tokenizer is returning None? It is still returning the same.<|||||>Sorry, I found the reaction of @akar5h very unfriendly and decided to ignore this issue. I'll look into it later.
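One defensive workaround when `char_to_token` returns `None` for an exact character index (as mentioned above, this can happen when the index points at whitespace for some tokenizers) is to probe neighbouring characters. A sketch, assuming a fast tokenizer; it is a workaround, not a diagnosis of the XLNet behaviour reported here:

```python
def safe_char_to_token(encodings, batch_index, char_index, step=1):
    """Probe nearby characters when the exact index maps to no token.

    Use step=+1 for a start position and step=-1 for an end position.
    """
    for offset in range(16):
        probe = char_index + step * offset
        if probe < 0:
            break
        token_index = encodings.char_to_token(batch_index, probe)
        if token_index is not None:
            return token_index
    return None  # e.g. the answer was truncated away entirely
```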
transformers
12,200
closed
[Docs] Broken Link in the Benchmarks.rst
## Issue info In the documentation at the [Benchmarks page](https://huggingface.co/transformers/benchmarks.html), the last link is broken due to the reordering of the examples folders. It is ``` With the new `benchmark` tools, it is easier than ever to share your benchmark results with the community :prefix_link:`here <examples/benchmarking/README.md>`. ``` and should be changed to ``` With the new `benchmark` tools, it is easier than ever to share your benchmark results with the community - :prefix_link:`Pytorch Benchmarking Results<examples/pytorch/benchmarking/README.md>`. - :prefix_link:`Tensorflow Benchmarking Results<examples/tensorflow/benchmarking/README.md>`. ``` Alternatively, a separate documentation page can be created for benchmarking the results of the models. Please let me know if I can help or if it is being covered by another ongoing effort. ## Who can help? @patrickvonplaten @sgugger
06-16-2021 14:18:25
06-16-2021 14:18:25
Don't hesitate to submit a PR with a fix!
transformers
12,199
closed
[WIP] TensorFlow variant of DataCollatorForLanguageModeling.
Co-authored-by: Dalton Walker <[email protected]> # What does this PR do? We didn't see any support for TensorFlow within the DataCollatorForLanguageModeling data class. Integrating directly with TensorFlow seems useful for TensorFlow users and avoids the necessity for tensor conversion. This PR adds a TFDataCollatorForLangaugeModeling data class that integrates directly with TensorFlow tensors and paves the way for further TFDataCollator conversions. (Reopened PR #12179) ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik @Rocketknight1 @sgugger Anyone in the community is free to review the PR.
06-16-2021 13:00:58
06-16-2021 13:00:58
Thanks a lot for your PR! Before I review more in detail, could you provide an example of use of this API? Data-collators are very PyTorch-ic so I want to make sure this is something that can actually be used in TensorFlow without too many contorsions.<|||||>> Thanks a lot for your PR! > > Before I review more in detail, could you provide an example of use of this API? Data-collators are very PyTorch-ic so I want to make sure this is something that can actually be used in TensorFlow without too many contorsions. Absolutely! We are currently in the process of pretraining Bert with a custom dataset in a domain specific language. We are going to make use of the TFBertForPreTraining Model to achieve this as well as a custom trained Tokenizer. (https://huggingface.co/transformers/model_doc/bert.html#tfbertforpretraining) Specifically we started with the collator for language modeling to make our training data consistent with MLM and NSP tasks. The collator provided that functionality along with batching but only for PyTorch. We wanted to provide the functionality that existed for PyTorch for TensorFlow users, and plan on completing the entire API for TensorFlow support if desired. If you need specific implementation details we are willing to expand further. <|||||>Do you have an example of data preprocessing a bit similar to the [run_mlm](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py) script we have in PyTorch? That would be helpful to see this TF data collator in action.<|||||>> Do you have an example of data preprocessing a bit similar to the [run_mlm](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py) script we have in PyTorch? That would be helpful to see this TF data collator in action. We are going to move this PR into a WIP so we can address your question. <|||||>In answer to your question @sgugger, our objective is to integrate the collator with TFTrainer. Currently PyTorch users enjoy this functionality but TensorFlow users do not have the built-in functionality that deserves to be there (unless we are mistaken, and if so apologize). Our idea is to implement the following change in TFTrainer/get_train_tfdataset: ``` if tf_collate_fn is None: ds = ( self.train_dataset.repeat() .shuffle(self.num_train_examples, seed=self.args.seed) .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last) .prefetch(tf.data.experimental.AUTOTUNE) ) else ds = ( self.train_dataset.repeat() .shuffle(self.num_train_examples, seed=self.args.seed) .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last) .map(tf_collate_fn) .prefetch(tf.data.experimental.AUTOTUNE) ) ``` or we could implement the dataset conversion in the collator: ``` if not tf_collate_fn is None: ds = tf_collate_fn(ds) else: ds = ( self.train_dataset.repeat() .shuffle(self.num_train_examples, seed=self.args.seed) .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last) .prefetch(tf.data.experimental.AUTOTUNE) ) ``` This would provide an avenue for TensorFlow users to train any models requiring collator functionality in TFTrainer. Any advice or alternative solutions are welcome! <|||||>We plan to drop the TFTrainer pretty soon to the profit of using Keras, but this could still be useful as we will still rely on the datasets. 
I think the best API would be to apply it to a TensorFlow dataset but @Rocketknight1 might have other views.<|||||>Our intention is to drop TFTrainer to do training through Keras instead, and as a result in TF we want the input to come from tf.data.Dataset objects rather than custom collators. A lot of things like multi-GPU or TPU training in Keras expect tf.data.Dataset input, and will coerce the input into a Dataset if you don't supply it as one.<|||||>@Rocketknight1 Understood. So providing a collator that could be passed to Dataset.map is the way to go if we want the option. Or are you saying that such an operation should be performed before TFTrainer? I just want to clarify before we continue with a PR. <|||||>We want to avoid TFTrainer entirely in future, so yeah - any kind of custom collator should return a Dataset, or should work through Dataset.map(). This is something we're in the process of updating through our library - there's still a lot of usages of TFTrainer that I'm cleaning up over time!<|||||>Thank you for your quick response! We will continue with the PR going down the .map route. Even though TFTrainer is depreciating, some may still find it beneficial in the meantime. Cheers!<|||||>@LysandreJik @Rocketknight1 @sgugger @sdwalker62 and I have made our working commit for the data_tf_collator.py functioning with tf.data.Dataset. We had quite a few commits within our test-branch that has slightly cluttered the PR, so if you want us to make another PR to help focus in on the code that matters most let us know. Otherwise, the two scripts to primarily look at are data_tf_collator.py and test_data_tf_collator.py. Let us know if you have any questions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey! This isn't something I want to go stale, but I lost track of it when I saw you were still adding commits! Are you happy with it as-is, and ready for a review?<|||||>That is no problem! And we are ready for a review at your convenience. <|||||>Hi! I'm reviewing now. This is actually quite timely - we're planning a general revamp of all the data collators to support both Tensorflow and JAX, as well as support for our Dataset objects to automatically convert to `tf.data.Dataset`, which will almost certainly include the new data collation functions as part of the `tf.data` pipeline. The downside is that we haven't decided how exactly to structure the code yet, so we might ask you to move or rename this class, but hopefully we can use almost all of the code here as part of the revamp!<|||||>> Hi! I'm reviewing now. This is actually quite timely - we're planning a general revamp of all the data collators to support both Tensorflow and JAX, as well as support for our Dataset objects to automatically convert to `tf.data.Dataset`, which will almost certainly include the new data collation functions as part of the `tf.data` pipeline. > > The downside is that we haven't decided how exactly to structure the code yet, so we might ask you to move or rename this class, but hopefully we can use almost all of the code here as part of the revamp! That is perfect, we are glad we could help out! 
We will happily move/rename/or restructure the code in any way that best suits your revamp and the rest of your codebase :smile: <|||||>So I've been thinking this over a bit more - my guess is that `tokenizer.pad` probably cannot/shouldn't be compiled with tf.function. It's effectively a totally arbitrary function, and every new model we add might have a different one, so we couldn't make any guarantee that AutoGraph will play nicely with it, even though in testing it seemed to work for me on a few common cases. For the same reasons, we shouldn't try to reimplement `tokenizer.pad` like you did with `tf_pad_tokens`, because at any moment a model could come along that would require a fresh rewrite of that. Given that we need to call a block of arbitrary Python code, that means we can't guarantee that the collation function will be compilable with `tf.function` or `Dataset.map()`, although we could still use it in a `tf.data` pipeline by either using it when the data is loaded with `from_generator`, or wrapping it in `py_function` to allow it to be used in `Dataset.map()`. I think we should go for the following: 1. The function should take input as either tf.Tensor or nested (possibly variable-length) lists. It could optionally accept `np.ndarray` or `tf.ragged.RaggedTensor` too. 2. No `tf.function` anywhere - code is pure Python 3. We can possibly have some kind of 'master' function that takes an argument like `return_tensors` and will call the framework-specific collators based on the argument value, but this is something we can implement later. That's a lot of changes, though I'm hopeful we could keep a lot of your code here as-is. Do you think it makes sense, or do you have any objections to any of it?<|||||>In the meantime, I'm going to be working on this too - I'll take a different `DataCollator` class and try to write a TF equivalent of it tomorrow. If I run into any issues there I'll let you know.<|||||>Hey, I've rewritten a few of the classes in our preferred style, but left the language modelling ones alone for now, you can see them here: https://github.com/huggingface/transformers/pull/13105 We'd like to push ahead with this fairly soon, so if you'd like, you can try adjusting this PR to a similar style. If not, we can close this PR and I'll add the rest to my PR tomorrow. Either way, thank you for the contribution - whether or not we use the code directly, this PR was helpful in drawing our attention to the problem and to possible approaches for writing data collators that support frameworks besides Torch!<|||||>> Hey, I've rewritten a few of the classes in our preferred style, but left the language modelling ones alone for now, you can see them here: #13105 > > We'd like to push ahead with this fairly soon, so if you'd like, you can try adjusting this PR to a similar style. If not, we can close this PR and I'll add the rest to my PR tomorrow. Either way, thank you for the contribution - whether or not we use the code directly, this PR was helpful in drawing our attention to the problem and to possible approaches for writing data collators that support frameworks besides Torch! This afternoon we started finalizing and adding some of those changes you've suggested in another branch. Once done, we will also adjust the code to match your preferred style shown in your new PR. We can merge those changes into the this PR here and you can feel free to just use this code in your PR or as a starting point for your revisions. 
Either way, no hard feelings, and we are glad we could help out in any way!<|||||>I'm happy for you to submit your code, and I'll avoid any classes you're touching when I make my own PR! Which ones would you like to handle?<|||||>Hey! We'd like to push to get this in soon, so we can proceed with a general overhaul of our TF data pipelines. At the same time, I know you're contributing code for free, and the rush is mostly caused by my own disorganization, so I don't want to force deadlines on you or anything! We'd like to move on and merge everything by Monday, so if you want to add any code today or this weekend, I'll grab it at that point and pull it into my PR. If not, then don't worry - what you've added up to now will already be quite helpful for the final PR, and we'll make sure that both of you get correct author/contributor credits for it regardless!<|||||>Hey there! 😃 We just made some code changes to integrate more closely with your style and had all of our tests pass. We are finishing up lunch and then will go through a final review before updating the PR. <|||||>@sdwalker62 and I just pushed up our revisions based on your review and recent PR. We changed the name of the file to TFDataCollatorForMaskedLanguageModeling. Hopefully, this helps with your upcoming merge this Monday! Let us know if you need anything else, and we look forward to contributing to more things in the future! :smile: <|||||>Thank you! We're just finishing off an upstream PR to `Datasets`, at which point I'll be merging your code into the other DataCollator PR and getting the rest of the team to review it.<|||||>Hey, just to update you: The code has been incorporated into my local copy, and I'm working on adding some other methods we need before I push it all to the other PR. I'll tag you as soon as that commit is in!<|||||>Code is all in at #13105. I'm very likely to steal some of the test code from this PR too once we incorporate tests for all the classes, so I'll make sure you're acknowledged as contributors for that too!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
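As a rough illustration of the `py_function` route discussed above (the toy corpus, mask token id, and shapes are assumptions for the sketch, not the API that was finally merged):

```python
import numpy as np
import tensorflow as tf

# Toy pre-tokenized corpus (assumption): 64 sequences of length 128.
encoded_input_ids = np.random.randint(5, 1000, size=(64, 128)).astype(np.int32)
MASK_TOKEN_ID = 4          # assumed [MASK] id
MLM_PROBABILITY = 0.15

def collate_batch(input_ids):
    # Runs eagerly inside tf.py_function, so arbitrary Python
    # (tokenizer.pad, numpy masking, ...) is allowed here.
    input_ids = input_ids.numpy().copy()
    labels = input_ids.copy()
    mask = np.random.rand(*input_ids.shape) < MLM_PROBABILITY
    labels[~mask] = -100               # only compute loss on masked positions
    input_ids[mask] = MASK_TOKEN_ID
    return input_ids, labels

def tf_collate_fn(input_ids):
    masked, labels = tf.py_function(collate_batch, [input_ids], [tf.int32, tf.int32])
    masked.set_shape(input_ids.shape)
    labels.set_shape(input_ids.shape)
    return {"input_ids": masked, "labels": labels}

dataset = (
    tf.data.Dataset.from_tensor_slices(encoded_input_ids)
    .batch(8, drop_remainder=True)
    .map(tf_collate_fn)
    .prefetch(tf.data.AUTOTUNE)
)
```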
transformers
12,198
closed
Enabling AutoTokenizer for HubertConfig.
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-16-2021 12:47:38
06-16-2021 12:47:38
transformers
12,197
closed
XLM-RoBERTa MLM Trainer not saving 'sentencepiece.bpe.model' file
## Environment info (Colab) - `transformers` version: 4.7.0.dev0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ## Information Model I am using xlm-roberta-base: The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Example: [RoBERTa/BERT/DistilBERT and masked language modeling, using HuggingFace Trainer with your own train file](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Dataset: csv file with only one column named "text", containing one sentence per row. ## To reproduce Steps to reproduce the behavior: 1. Follow the instructions displayed in this [pytorch language-modeling examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) page (RoBERTa/BERT/DistilBERT and masked language modeling, using HuggingFace Trainer with your own train file). Used command: `!python train.py --model_name_or_path xlm-roberta-base --train_file custom_train_dset.csv --save_steps 300000 --line_by_line --do_train --output_dir xlm-roberta-base-mlm-tuned-example` ## Expected behavior I did the very same thing, but with less data(same custom dataset, less rows) two days ago(2021/06/14) and I got the desired output: ![image](https://user-images.githubusercontent.com/60230715/122219560-f0bfb580-ce85-11eb-99a7-982c474d8050.png) Now, this is the output that I am getting(**wrong**): ![image](https://user-images.githubusercontent.com/60230715/122218776-287a2d80-ce85-11eb-8878-622f58f8fb39.png) @sgugger
06-16-2021 12:38:24
06-16-2021 12:38:24
Without seeing your training script, it's impossible to diagnose what went wrong. I just tried a `tokenizer.save_pretrained(...)` with this model and I get all the files.<|||||>Hey @sgugger, thanks for the quick reply! I was making a very stupid mistake (a typo) and hadn't noticed it until now. I was using 'roberta-base' instead of 'xlm-roberta-base'; that is why there was no 'sentencepiece.bpe.model' file when saving it. Sorry for taking your time!
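For completeness, a quick way to double-check which files a tokenizer writes out (a sketch using the model name from the report; the output directory is arbitrary):

```python
import os
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
tokenizer.save_pretrained("./xlmr-tok")
print(os.listdir("./xlmr-tok"))
# For xlm-roberta-base this should include sentencepiece.bpe.model, unlike
# roberta-base, which is BPE-based and writes vocab.json/merges.txt instead.
```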
transformers
12,196
closed
Where I can find official pretrained weights of SOP in Albert and NSP in Bert?
Hi Guys, I was checking for the pre-trained weights (the 2-layer classifier) of ```SOP``` in ```Albert``` and ```NSP``` in ```Bert```. It seems like they are initialized randomly every time. Can we have the official weights loaded here, or are they not available from the official models? Can anyone clarify, please? Thanks.
06-16-2021 12:05:56
06-16-2021 12:05:56
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Is there any update?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik - Can you help me here :)?<|||||>Hi @s4sarath, sorry for the delayed response. If the checkpoints on the hub do not satisfy you (I see the SOP/NSP layers are indeed lacking), conversion scripts are available for each model: - [BERT](https://github.com/huggingface/transformers/tree/master/src/transformers/models/bert), see the `convert_*` scripts - [ALBERT](https://github.com/huggingface/transformers/blob/master/src/transformers/models/albert/convert_albert_original_tf_checkpoint_to_pytorch.py) I confirm this successfully exports the full model, including the NSP/SOP weights: #### ALBERT ```bash wget https://storage.googleapis.com/albert_models/albert_base_v2.tar.gz tar -xzf albert_base_v2.tar.gz python convert_albert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=albert_base/model.ckpt-best --albert_config_file=albert_base/albert_config.json --pytorch_dump_path=albert_base/pytorch_model.bin cp albert_base/albert_config.json albert_base/config.json ``` ```python >>> from transformers import TFAlbertForPreTraining >>> model = TFAlbertForPreTraining.from_pretrained("albert_base", from_pt=True) [...] All the weights of TFAlbertForPreTraining were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFAlbertForPreTraining for predictions without further training. ``` #### BERT Same for BERT: ```bash wget https://storage.googleapis.com/bert_models/2018_10_18/uncased_L-12_H-768_A-12.zip unzip uncased_L-12_H-768_A-12.zip python convert_bert_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path=uncased_L-12_H-768_A-12/bert_model.ckpt --bert_config_file=uncased_L-12_H-768_A-12/bert_config.json --pytorch_dump_path=uncased_L-12_H-768_A-12/pytorch_model.bin ``` ```python >>> from transformers import TFBertForPreTraining >>> bert = TFBertForPreTraining.from_pretrained("uncased_L-12_H-768_A-12", from_pt=True) [...[ All the weights of TFBertForPreTraining were initialized from the PyTorch model. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertForPreTraining for predictions without further training. ``` Hope that helps. <|||||>Thanks Lysandre. No worries. I kind of did same hack. But was wondering, isit something that is supposed to be a part of official model loading. Thanks Sarath On Wed, 11 Aug, 2021, 8:12 pm Lysandre Debut, ***@***.***> wrote: > Hi @s4sarath <https://github.com/s4sarath>, sorry for the delayed > response. 
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
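One way to verify that the pretraining heads actually received weights after such a conversion (a sketch; the directory name matches the BERT conversion example above):

```python
from transformers import BertForPreTraining

# Assumes the converted checkpoint directory from the BERT example above.
model, loading_info = BertForPreTraining.from_pretrained(
    "uncased_L-12_H-768_A-12", output_loading_info=True
)
# If the NSP head received its weights, no `cls.seq_relationship.*` keys
# should show up as missing (i.e. randomly initialized) here.
print(loading_info["missing_keys"])
```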
transformers
12,195
closed
Batched pipeline for NER
Hi, Is there a way to run batches with NER Pipeline rather than just one example? Thanks.
06-16-2021 11:49:40
06-16-2021 11:49:40
This has been asked many times before, see #11244. However, the corresponding PR was not merged, see [this comment](https://github.com/huggingface/transformers/pull/11251#pullrequestreview-637488364) for the reason.
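Until batching lands in the pipeline itself, a manual batched forward pass is straightforward. A sketch with an assumed example checkpoint (`dslim/bert-base-NER` is only an illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "dslim/bert-base-NER"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

texts = ["My name is Wolfgang and I live in Berlin", "Hugging Face is based in New York"]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predictions = logits.argmax(dim=-1)

# Map each token of each example to its predicted entity label.
for i, text in enumerate(texts):
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][i])
    labels = [model.config.id2label[p.item()] for p in predictions[i]]
    print(list(zip(tokens, labels)))
```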
transformers
12,194
closed
LayoutXLM not loaded
I am trying to use [layoutxlm model](https://huggingface.co/microsoft/layoutxlm-base), but I get the following error when loading either the tokenizer or the model with respectively `AutoTokenizer.from_pretrained("microsoft/layoutxlm-base")` or `AutoModelForTokenClassification.from_pretrained("microsoft/layoutxlm-base")`. ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-11-381e58ab16b7> in <module> 1 model_name="microsoft/layoutxlm-base" ----> 2 tokenizer = AutoTokenizer.from_pretrained(model_name) /opt/conda/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 400 kwargs["_from_auto"] = True 401 if not isinstance(config, PretrainedConfig): --> 402 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 403 404 use_fast = kwargs.pop("use_fast", True) /opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 430 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 431 if "model_type" in config_dict: --> 432 config_class = CONFIG_MAPPING[config_dict["model_type"]] 433 return config_class.from_dict(config_dict, **kwargs) 434 else: KeyError: 'layoutxlm' ``` Using `transformers 4.6.1`
06-16-2021 10:59:52
06-16-2021 10:59:52
LayoutXLM is not yet supported by the AutoModel API. You can probably plug it into a `LayoutLMTokenizer` and a `LayoutLMForTokenClassification`. <|||||>I had already tried that, but did not work. ``` model_name="microsoft/layoutxlm-base" tokenizer = LayoutLMTokenizer.from_pretrained(model_name) ``` Gives me the error ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-720ed3162350> in <module> 1 model_name="microsoft/layoutxlm-base" ----> 2 tokenizer = LayoutLMTokenizer.from_pretrained(model_name) /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1717 logger.info(f"loading file {file_path} from cache at {resolved_vocab_files[file_id]}") 1718 -> 1719 return cls._from_pretrained( 1720 resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs 1721 ) /opt/conda/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs) 1789 # Instantiate tokenizer. 1790 try: -> 1791 tokenizer = cls(*init_inputs, **init_kwargs) 1792 except OSError: 1793 raise OSError( /opt/conda/lib/python3.8/site-packages/transformers/models/bert/tokenization_bert.py in __init__(self, vocab_file, do_lower_case, do_basic_tokenize, never_split, unk_token, sep_token, pad_token, cls_token, mask_token, tokenize_chinese_chars, strip_accents, **kwargs) 191 ) 192 --> 193 if not os.path.isfile(vocab_file): 194 raise ValueError( 195 f"Can't find a vocabulary file at path '{vocab_file}'. To load the vocabulary from a Google pretrained " /opt/conda/lib/python3.8/genericpath.py in isfile(path) 28 """Test whether a path is a regular file""" 29 try: ---> 30 st = os.stat(path) 31 except (OSError, ValueError): 32 return False TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ``` While, `model = LayoutLMModel.from_pretrained(model_name)` does not throw an error, but it seems not able to correctly initialize weights. ``` You are using a model of type layoutxlm to instantiate a model of type layoutlm. This is not supported for all configurations of models and can yield errors. 
Some weights of the model checkpoint at microsoft/layoutxlm-base were not used when initializing LayoutLMModel: ['layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.running_var', 'layoutlmv2.encoder.layer.7.output.dense.weight', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.weight', ... (hundreds of further 'layoutlmv2.*' embedding, encoder and visual-backbone weights) ..., 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.running_mean']
- This IS expected if you are initializing LayoutLMModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing LayoutLMModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of LayoutLMModel were not initialized from the model checkpoint at microsoft/layoutxlm-base and are newly initialized: ['layoutlm.encoder.layer.4.intermediate.dense.weight', 'layoutlm.encoder.layer.4.intermediate.dense.bias', 'layoutlm.encoder.layer.10.attention.self.query.weight', ... (every 'layoutlm.*' embedding and encoder weight, i.e. the whole model, appears in this list) ...]
```
'layoutlm.embeddings.y_position_embeddings.weight', 'layoutlm.encoder.layer.9.output.LayerNorm.bias', 'layoutlm.encoder.layer.4.attention.output.dense.weight', 'layoutlm.encoder.layer.10.attention.output.dense.weight', 'layoutlm.encoder.layer.1.attention.self.query.weight', 'layoutlm.encoder.layer.8.output.LayerNorm.bias', 'layoutlm.encoder.layer.0.intermediate.dense.weight', 'layoutlm.encoder.layer.4.attention.self.query.weight', 'layoutlm.encoder.layer.8.intermediate.dense.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```<|||||>Hmm ok, I thought LayoutXLM was equivalent to LayoutLM, but apparently it isn't. I guess one would need to add LayoutXLM to HuggingFace Transformers in order to properly load it. Otherwise, you can use the newly released layoutlmft package by the original authors as explained [here](https://github.com/microsoft/unilm/tree/master/layoutxlm).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,193
closed
Cannot import RobertaPreTrainedModel
## Environment info I tried with both transformers 3.5.1 and 4.6.1. ### Who can help Maybe @julien-c since they contributed RoBERTa. ## Information I want to derive my own class from RobertaPreTrainedModel, but I cannot import that class the way I can import e.g. BertPreTrainedModel or AlbertPreTrainedModel. More specifically, ```from transformers import BertPreTrainedModel``` and ```from transformers import AlbertPreTrainedModel``` work, but ```from transformers import RobertaPreTrainedModel``` raises `ImportError: cannot import name 'RobertaPreTrainedModel'`. Is this the intended behavior, or could it be a bug? ## To reproduce Try `from transformers import RobertaPreTrainedModel` ## Expected behavior The RobertaPreTrainedModel class should be importable from the top-level package, just as it is for other model families.
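A minimal workaround sketch while the class is not exported at the top level (the internal module path below is an assumption based on how other models are laid out in recent `transformers` releases):

```python
# Import the base class from the model-specific module instead of the top-level package.
from transformers import RobertaConfig, RobertaModel
from transformers.models.roberta.modeling_roberta import RobertaPreTrainedModel


class MyRobertaWrapper(RobertaPreTrainedModel):
    """Toy example of deriving a custom model from RobertaPreTrainedModel."""

    def __init__(self, config: RobertaConfig):
        super().__init__(config)
        self.roberta = RobertaModel(config)
        self.init_weights()
```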
06-16-2021 10:49:04
06-16-2021 10:49:04
transformers
12,192
closed
Marian tatoeba conversion update
# What does this PR do? The Helsinki-NLP / Tatoeba NMT models have gone through various architectural changes, and the old conversion code fails on them. This commit is something of a rewrite to remedy this, in particular parsing supplied yaml files rather than README.md files. It needs to be looked at by someone on the Huggingface side. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @sshleifer
06-16-2021 08:21:33
06-16-2021 08:21:33
cc @patrickvonplaten who did the conversion<|||||>Hey @Traubert, Thanks a lot for adapting the conversion script! Could you maybe post a link to a Marian Tatoeba model so that I can try it out? <|||||>I'll fix the issues you mentioned, and I think the output needs to be made a bit neater by omitting some things. Currently a lot of technical details are copied from the model's yaml description. Is there a huggingface guideline for what model cards should look like? @patrickvonplaten The converter downloads the models, so you should be able to test like: ```python from convert_marian_tatoeba_to_pytorch import * conv = TatoebaConverter() conv.convert_models(('fin-eng',), dry_run = False) ``` This would result in the converter looking in the metadata from the Tatoeba-Challenge repository, which you are supposed to have available locally, and choosing the best model for that pair. It will then download and convert it, I think in that case this file: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opus+bt-2021-04-30.zip<|||||>Hm. There's still a failing CI test from isort, but I ran that, committed the change, and on my machine `isort --check-only src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py` now reports nothing. Any ideas? @sgugger @patrickvonplaten <|||||>Make sure you have the good versions installed, they are all pinned so `pip install -e .[quality]` in the repo should do the trick.<|||||>> Make sure you have the good versions installed, they are all pinned so `pip install -e .[quality]` in the repo should do the trick. Thanks - also I didn't realise that `make style` was doing something more than plain `isort`, so now I committed another styled-by-make-style version.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for being so unresponsive on this PR @Traubert - do you think it could be possible to open a copy of the PR with a clean git commit history? Think some external git commits got accidentally merged into this PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,191
closed
updated DLC images and sample notebooks
# What does this PR do? This PR updates the SageMaker documentation. It moves the overview to the bottom of the site, since it will grow. It also adds the Vision Transformer example.
06-16-2021 08:17:41
06-16-2021 08:17:41
transformers
12,190
closed
How to figure out which pretrained tokenizers support emojis?
Hi, I am working on a dataset with emojis. I found that the BERT tokenizer doesn't support emojis, and we have to manually add them and train their embeddings (#7648). But the RoBERTa tokenizer seems to handle emojis during tokenization, as the following code shows: ```Python3 tokenizer = AutoTokenizer.from_pretrained('distilroberta-base', use_fast=True, normalization=True) tokenizer.encode('I 🙁 hate ☹️ to 😣 see 😖 this 😫 fail 😩 , 🥺 pls 😢 help 😭 me 😤') ``` outputs: ``` [ 0, 100, 8103, 27, 10172, 4157, 42699, 9253, 12605, 7, 17841, 2469, 192, 17841, 25448, 42, 17841, 4958, 5998, 17841, 15375, 2156, 8103, 8210, 3070, 2968, 29, 17841, 7258, 244, 17841, 12410, 162, 17841, 10470, 2] ``` None of these IDs is the "\<unk\>" token, so all of them should have trained embeddings. Is that right? Also, why are there weird characters in most of the words in the RoBERTa model's vocab, like 'ĸ', 'Ġthis', 'ĠðŁĺ', '«', 'Ġfail', 'ĠðŁĺ', etc.?
06-16-2021 06:55:21
06-16-2021 06:55:21
you can try this: tokenizer.decode(tokenizer.encode('I 🙁 hate ☹️ to 😣 see 😖 this 😫 fail 😩 , 🥺 pls 😢 help 😭 me 😤')) If the tokenizer successfully decodes back to the original emojis, then yes, your tokenizer can encode emojis. In this case you are using the distilroberta tokenizer, which uses byte-level BPE (Radford et al. 2019), hence it can encode emojis. ![Selection_467](https://user-images.githubusercontent.com/42698038/125319165-81b07200-e308-11eb-947f-4169b1d8fa97.png) <|||||>I already found this hack somewhere, but thanks anyway!
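A self-contained version of that round-trip check, as a rough sketch (the model name and test string are just examples):

```python
# Heuristic: a tokenizer "supports" emojis if encoding/decoding round-trips them
# without falling back to the unknown token.
from transformers import AutoTokenizer


def handles_emojis(tokenizer, text):
    ids = tokenizer.encode(text)
    decoded = tokenizer.decode(ids, skip_special_tokens=True)
    no_unk = tokenizer.unk_token_id is None or tokenizer.unk_token_id not in ids
    return no_unk and all(ch in decoded for ch in text if ch != " ")


tok = AutoTokenizer.from_pretrained("distilroberta-base", use_fast=True)
print(handles_emojis(tok, "I 🙁 hate ☹️ this 😫"))  # byte-level BPE tokenizers should pass
```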
transformers
12,189
closed
T5 Generate from Encoder Output
Hi, I am working with T5ForConditionalGeneration for sequence generation. When autoregressively decoding outputs with the _generate()_ function, is there a way to decode a sequence by feeding an intermediate input (such as _encoder_outputs_ or _inputs_embeds_) as opposed to the _input_ids_? I have noticed that the _forward()_ function supports this, where we can pass _encoder_outputs_ or _inputs_embeds_ instead of the _input_ids_. However, I have not yet figured out a way to decode through the following: > \# model is t5 conditional generation > out_sequence = model.generate(encoder_outputs=encoder_outputs, num_beams=args.num_beams, ...) If this feature is not directly available, are there any recommended alternative approaches that allow decoding a sequence directly from an encoder output?
06-16-2021 05:11:33
06-16-2021 05:11:33
from [https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py](url), lines 369-374: ` def prepare_inputs_for_generation(self, input_ids: torch.LongTensor, **kwargs) -> Dict[str, Any]: return {"input_ids": input_ids} ` There may not be a built-in way to pass your own arguments other than "input_ids". <|||||>This might help: https://github.com/huggingface/transformers/pull/10599<|||||>It should be possible to directly pass `encoder_outputs`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
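As a sketch of what the linked PR enables, decoding from precomputed encoder states could look like this (support and exact argument handling may vary with the installed `transformers` version):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: I love pizza.", return_tensors="pt")
encoder_outputs = model.get_encoder()(**inputs)  # run the encoder once

# Reuse the cached encoder states instead of passing input_ids again.
generated = model.generate(
    encoder_outputs=encoder_outputs,
    attention_mask=inputs["attention_mask"],
    num_beams=4,
    max_length=32,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```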
transformers
12,188
closed
TextDatasetForNextSentencePrediction does not seem to contain truncate function unlike LineByLineWithSOPTextDataset
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.1 - Platform: Ubuntu 20.04.2 LTS - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1 (Yes) - Tensorflow version (GPU?): Not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Bert(PreTraining) The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Prepare input.txt like: ``` About this time, when some rain began to fall, Sancho proposed that they should shelter themselves in the fulling-mill, but Don Quixote had conceived such abhorrence for it, on account of what was past, that he would no means set foot within its wall; wherefore, turning to the right-hand, they chanced to fall in with a road different from that in which they had traveled the day before; they had not gone far, when the knight discovered a man riding with something on his head, that glittered like polished gold, and scarce had he descried this phenomenon, when turning to Sancho, “I find,” said he, “that every proverb is strictly true; indeed, all of them are apophthegms dictated by experience herself; more especially, that which says, “shut one door, and another will soon open”: this I mention, because, if last night, fortune shut against us the door we fought to enter, by deceiving us with the fulling-hammers; today another stands wide open, in proffering to use us, another greater and more certain adventure, by which, if I fail to enter, it shall be my own fault, and not imputed to my ignorance of fulling-mills, or the darkness of the night. 
About this time, when some rain began to fall, Sancho proposed that they should shelter themselves in the fulling-mill, but Don Quixote had conceived such abhorrence for it, on account of what was past, that he would no means set foot within its wall; wherefore, turning to the right-hand, they chanced to fall in with a road different from that in which they had traveled the day before; they had not gone far, when the knight discovered a man riding with something on his head, that glittered like polished gold, and scarce had he descried this phenomenon, when turning to Sancho, “I find,” said he, “that every proverb is strictly true; indeed, all of them are apophthegms dictated by experience herself; more especially, that which says, “shut one door, and another will soon open”: this I mention, because, if last night, fortune shut against us the door we fought to enter, by deceiving us with the fulling-hammers; today another stands wide open, in proffering to use us, another greater and more certain adventure, by which, if I fail to enter, it shall be my own fault, and not imputed to my ignorance of fulling-mills, or the darkness of the night. ``` (I think any document where the total number of tokens when using TextDatasetForNextSentencePrediction exceeds 512 will be fine) 2. Run code below: ```python import transformers from transformers.data.datasets import TextDatasetForNextSentencePrediction from transformers.data.data_collator import DataCollatorForLanguageModeling from transformers import BertConfig, BertForPreTraining, Trainer, BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') training_args = transformers.TrainingArguments( output_dir = './bert/', per_device_train_batch_size = 2 ) train_dataset = TextDatasetForNextSentencePrediction( tokenizer = tokenizer, file_path = 'input.txt', overwrite_cache= True, block_size = 512, ) data_collator = DataCollatorForLanguageModeling( tokenizer = tokenizer, mlm = True, ) bert_config = BertConfig( vocab_size = tokenizer.vocab_size, hidden_size = 768, num_attention_heads = 12 ) model = BertForPreTraining(config=bert_config) trainer = Trainer( model = model, args = training_args, data_collator = data_collator, train_dataset = train_dataset, ) trainer.train() ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior No error is expected, but got an error: ``` RuntimeError Traceback (most recent call last) <ipython-input-2-ddc701df65e7> in <module> 32 33 ) ---> 34 trainer.train() ~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 1270 tr_loss += self.training_step(model, inputs) 1271 else: -> 1272 tr_loss += self.training_step(model, inputs) 1273 self.current_flos += float(self.floating_point_ops(inputs)) 1274 ~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/trainer.py in training_step(self, model, inputs) 1732 loss = self.compute_loss(model, inputs) 1733 else: -> 1734 loss = self.compute_loss(model, inputs) 1735 1736 if self.args.n_gpu > 1: ~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 1764 else: 1765 labels = None -> 1766 outputs = model(**inputs) 1767 # Save past state if it exists 1768 # TODO: this needs to be fixed and made cleaner later. ~/j-fin-bert/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, next_sentence_label, output_attentions, output_hidden_states, return_dict) 1067 return_dict = return_dict if return_dict is not None else self.config.use_return_dict 1068 -> 1069 outputs = self.bert( 1070 input_ids, 1071 attention_mask=attention_mask, ~/j-fin-bert/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 962 head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) 963 --> 964 embedding_output = self.embeddings( 965 input_ids=input_ids, 966 position_ids=position_ids, ~/j-fin-bert/venv/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), ~/j-fin-bert/venv/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 205 if self.position_embedding_type == "absolute": 206 position_embeddings = self.position_embeddings(position_ids) --> 207 embeddings += position_embeddings 208 embeddings = self.LayerNorm(embeddings) 209 embeddings = self.dropout(embeddings) RuntimeError: The size of tensor a (555) must match 
the size of tensor b (512) at non-singleton dimension 1 ``` I think this is because TextDatasetForNextSentencePrediction, unlike LineByLineWithSOPTextDataset, does not have a truncation step like the truncate_seq_pair call in the create_examples_from_document function. So I added truncate_seq_pair, as in https://github.com/huggingface/transformers/blob/802ffaff0da0a7d28b0fef85b44de5c66f717a4b/src/transformers/data/datasets/language_modeling.py#L293-L310, to https://github.com/huggingface/transformers/blob/802ffaff0da0a7d28b0fef85b44de5c66f717a4b/src/transformers/data/datasets/language_modeling.py#L491-L492 and then it worked. Should truncate_seq_pair also be added to TextDatasetForNextSentencePrediction?
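For reference, a sketch of the kind of pair-truncation helper being referred to, modeled on the one used by LineByLineWithSOPTextDataset (how it gets wired into TextDatasetForNextSentencePrediction is left to the actual fix):

```python
import random


def truncate_seq_pair(tokens_a, tokens_b, max_num_tokens):
    """Truncate a pair of token lists in place until their total length fits the block size."""
    while len(tokens_a) + len(tokens_b) > max_num_tokens:
        trunc_tokens = tokens_a if len(tokens_a) > len(tokens_b) else tokens_b
        assert len(trunc_tokens) >= 1
        # Truncate from the front or the back at random to avoid a positional bias.
        if random.random() < 0.5:
            del trunc_tokens[0]
        else:
            trunc_tokens.pop()
```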
06-16-2021 01:25:58
06-16-2021 01:25:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,187
closed
Clean push to hub API
# What does this PR do? This PR reworks the push to hub API: instead of cloning into a temporary directory, copying the saved files, and then pushing, it creates and clones the repo first, then saves the object into it, and lastly pushes it to the hub. The general commands do not bring any major breaking change, although the behavior of `object.push_to_hub(repo_name)` and `object.save_pretrained(push_to_hub=True)` changes slightly (see below). That last API is not used much however, since it's new and completely undocumented, so I think it's okay. `push_to_hub` takes a `repo_name_or_path` and will create a local clone of the repo if it does not exist. That local clone is synced with the distant repo. This is a bit different from before, where a temp dir was used for the clone and push; this behavior is still accessible by passing along `temp_dir=True`. ## `push_to_hub` API for models, tokenizers and configs Works like before with a slight change of behavior: ``` model.push_to_hub(repo_name_or_path="my-awesome-model") ``` will push the model to the hub by creating the repo, cloning it if it exists, saving the model inside and pushing. The change from before is that a local folder named "my_awesome_model" will be created if it does not exist, and if it exists it will either: - be put in sync with the distant repo if the distant repo exists - error if it is not a local clone of the distant repo In the same vein ``` model.save_pretrained(my_folder, push_to_hub=True) ``` will use `my_folder` as a working directory, create it if it does not exist, error if it exists and is not a local clone of the distant repo, and do a `git pull` if it exists and is a local clone of the distant repo. In both cases, the previous behavior can be activated by passing along `temp_dir=True`. Side note: this PR adds tests for the `FlaxPreTrainedModel.push_to_hub` method. ## `push_to_hub` API for the Trainer Here there are also slightly breaking changes, in the sense that control over the repo we push to moves from arguments of the `push_to_hub` method to fields in `TrainingArguments`. This is because the repo is now initialized at init, so we need to know the repo name, organization and potential token there. The `Trainer` adds an automatic `.gitignore` to ignore all checkpoint folders, which can be changed by the user (we can add a CLI argument to control that in the future), and the `push_to_hub` method now just triggers a save, writes the model card, then pushes the whole output dir to the distant repo. Another slightly breaking change is that the default for the `logging_dir` (for TensorBoard) changes, so that the logs are inside the output_dir and also pushed to the hub.
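A rough usage sketch of the Trainer-side flow described above (the specific `TrainingArguments` field shown is an assumption based on this description rather than a confirmed final API):

```python
# Illustrative only: pushing with the reworked flow, where the output dir doubles
# as the local clone of the hub repo.
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

args = TrainingArguments(
    output_dir="my-awesome-model",  # local clone of the repo to push to
    push_to_hub=True,               # assumed field enabling the hub integration
)
trainer = Trainer(model=model, args=args)
trainer.push_to_hub()  # saves the model, writes a model card, then pushes the whole output_dir
```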
06-16-2021 00:08:05
06-16-2021 00:08:05
transformers
12,186
closed
[WIP] Flax XLM
# What does this PR do? This PR will add XLM in Flax/Jax <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-15-2021 21:09:03
06-15-2021 21:09:03
Hey @asvskartheek Great to see that you started working on `FlaxXLM`! Feel free to ping me and @patrickvonplaten if you have any questions! Happy to help :)<|||||>Hey @patil-suraj , thanks for offering to help. Is there a general guide or a series of standard steps that one can follow while porting models to Flax on HuggingFace, in HF's own style?<|||||>There is no guide as such yet, but you could look at how other models are implemented in Flax, which should give a good idea of the conversion. Here's [FlaxBert](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_flax_bert.py) and [FlaxGPT2](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_flax_gpt2.py). To start, you could just copy the PyTorch model and replace each module with its Flax equivalent.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
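To make the suggested workflow concrete, here is a minimal illustration of the PyTorch-to-Flax translation pattern (a generic feed-forward block written for this comment, not actual XLM code):

```python
# What a PyTorch Linear -> GELU -> Linear block typically becomes when a model is
# ported module by module to Flax.
import flax.linen as nn
import jax.numpy as jnp


class FlaxFeedForward(nn.Module):
    hidden_size: int
    intermediate_size: int
    dtype: jnp.dtype = jnp.float32

    def setup(self):
        self.lin1 = nn.Dense(self.intermediate_size, dtype=self.dtype)
        self.lin2 = nn.Dense(self.hidden_size, dtype=self.dtype)

    def __call__(self, hidden_states):
        hidden_states = self.lin1(hidden_states)
        hidden_states = nn.gelu(hidden_states)
        return self.lin2(hidden_states)
```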
transformers
12,185
closed
Use yaml to create metadata
# What does this PR do? This PR leverages `pyaml` to avoid writing yaml manually, as suggested by @julien-c .
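For illustration, the underlying idea is roughly the following (shown with plain PyYAML; the PR itself relies on `pyaml`, and the metadata fields are made-up examples):

```python
import yaml

# Build the model-card metadata as a dict and let the YAML library handle formatting,
# instead of concatenating the front-matter strings by hand.
metadata = {
    "language": "en",
    "license": "apache-2.0",
    "tags": ["generated_from_trainer"],
    "metrics": ["accuracy"],
}
front_matter = "---\n" + yaml.dump(metadata, sort_keys=False) + "---\n"
print(front_matter)
```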
06-15-2021 21:08:13
06-15-2021 21:08:13
note that this can ultimately be in `huggingface_hub` rather than here (but it's great to be able to experiment with this here)<|||||>Yes we were talking about it with @LysandreJik, probably for after the upcoming release!
transformers
12,184
closed
Temporarily deactivate torchhub test
# What does this PR do? This PR removes the torch hub test for now, as it seems there is a problem with the torch hub for now. Will investigate more tomorrow if need be.
06-15-2021 20:06:48
06-15-2021 20:06:48
transformers
12,183
closed
Inconsistency between GPTNeo and GPT2 config classes
The config classes for GPTNeo and GPT2 have a bunch of differences that are seemingly unnecessary. This makes it harder for downstream users to write code that depends on accessing these attributes. See below: ![image](https://user-images.githubusercontent.com/54557097/122113739-bbf92300-cddf-11eb-9ac6-a0ea1a30055b.png) It seems that max_position_embeddings, hidden_size, num_layers, num_heads, intermediate_size, resid_dropout, embed_dropout, and attention_dropout should be renamed for consistency with the GPT2 config class. ### Who can help @LysandreJik @patil-suraj
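Until the naming is unified, a downstream shim along these lines can paper over the difference (a sketch; the alias names are taken from the two config classes discussed above):

```python
from transformers import GPT2Config, GPTNeoConfig


def config_value(config, *names):
    """Return the first attribute present on the config among the given aliases."""
    for name in names:
        if hasattr(config, name):
            return getattr(config, name)
    raise AttributeError(f"None of {names} found on {type(config).__name__}")


for cfg in (GPT2Config(), GPTNeoConfig()):
    print(
        type(cfg).__name__,
        config_value(cfg, "n_embd", "hidden_size"),
        config_value(cfg, "n_layer", "num_layers"),
        config_value(cfg, "n_head", "num_heads"),
    )
```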
06-15-2021 19:50:59
06-15-2021 19:50:59
Seconding this. Last month I swapped out GPT-2 for GPT-Neo in a [project](https://github.com/nostalgebraist/nostalgebraist-autoresponder/), and these differences made it more difficult to adapt my existing code.<|||||>Hi @leogao2 and @nostalgebraist, thanks for opening an issue! You're correct that, the way this is currently implemented, it prevents a few use-cases. Namely, this is authorized: ```py from transformers import GPT2Config config = GPT2Config() config.hidden_size ``` But these are not: ```py from transformers import GPT2Config config = GPT2Config() config.hidden_size = 4 # Fails config = GPT2Config(hidden_size=4) # Fails ``` Unfortunately we can't just rename arguments, as this would break both checkpoints on the hub and local checkpoints. We're thinking of a way to enable this with a convention set across configurations for the attributes you mention; this convention would allow getting and setting the attributes it defines, such as the ones you list. Let us explore a bit and we'll come back to you. cc @patil-suraj @patrickvonplaten @sgugger
transformers
12,182
open
KeyError: 'labels' during Distilling Zero Shot Classification
EDIT: I confirmed that this happens with the example script as it is, so no other changes are required to reproduce this. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Using GPU in script?: Yes (NVIDIA P100) - Using distributed or parallel set-up in script?: No ### Who can help Tagging @VictorSanh @sgugger, @patil-suraj (please correct me if I'm wrong) <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Student: `distilbert-base-uncased` Teacher: `roberta-large-mnli` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) * [x] other: distillation of zero shot text classification models (research_projects) I'm simply running the official colab script `Distilling Zero Shot Classification.ipynb`, but get a key error when performing the first epoch of the student training. ## To reproduce Steps to reproduce the behavior: 1. Open the official script https://t.co/JAJ6Eb78vM?amp=1 (you can find this link here as well https://twitter.com/joeddav/status/1363543296166002688?lang=en) 2. Run all the required cells before training 3. Run the cell that runs `transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py` 4. 
Witness the `KeyError: 'labels'` on the first epoch of the student model training Full logs: `2021-06-16 15:33:19.328924: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 06/16/2021 15:33:20 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False 06/16/2021 15:33:20 - INFO - __main__ - Training/evaluation parameters DistillTrainingArguments(output_dir='./distilbert-base-uncased-agnews-student', overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<IntervalStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=32, per_device_eval_batch_size=128, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=1.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_ratio=0.0, warmup_steps=0, logging_dir='runs/Jun16_15-33-20_9d2a3f891a99', logging_strategy=<IntervalStrategy.STEPS: 'steps'>, logging_first_step=False, logging_steps=500, save_strategy=<IntervalStrategy.STEPS: 'steps'>, save_steps=500, save_total_limit=0, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='./distilbert-base-uncased-agnews-student', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name='length', report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, mp_parameters='') 06/16/2021 15:33:20 - INFO - __main__ - Generating predictions from zero-shot teacher model [INFO|configuration_utils.py:517] 2021-06-16 15:33:21,219 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb [INFO|configuration_utils.py:553] 2021-06-16 15:33:21,220 >> Model config RobertaConfig { "_num_labels": 3, "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "initializer_range": 0.02, "intermediate_size": 4096, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 16, "num_hidden_layers": 24, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.6.1", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|modeling_utils.py:1155] 2021-06-16 15:33:21,507 >> loading weights file 
https://huggingface.co/roberta-large-mnli/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/63cbd98723b89863bcd86a8002e823de3004a139513559246690c65521cdc9b9.38ef55c51c84ab2e78e5a0e2ea9c25830fd074df70d2f10071eb9a1bc1586ca0 [WARNING|modeling_utils.py:1331] 2021-06-16 15:33:44,205 >> Some weights of the model checkpoint at roberta-large-mnli were not used when initializing RobertaForSequenceClassification: ['roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [INFO|modeling_utils.py:1348] 2021-06-16 15:33:44,205 >> All the weights of RobertaForSequenceClassification were initialized from the model checkpoint at roberta-large-mnli. If your task is similar to the task the model of the checkpoint was trained on, you can already use RobertaForSequenceClassification for predictions without further training. [INFO|configuration_utils.py:517] 2021-06-16 15:33:47,683 >> loading configuration file https://huggingface.co/roberta-large-mnli/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/fab42bdbd5cb5e6ff7cabeb9bcc12728f56022f50b9644a3079904564f2bc704.ddc5961cccf081d6ca7f4f58ee119c21895aa9b19f0044f01954cd2ff42fefcb [INFO|configuration_utils.py:553] 2021-06-16 15:33:47,684 >> Model config RobertaConfig { "_num_labels": 3, "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "eos_token_id": 2, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 1024, "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "initializer_range": 0.02, "intermediate_size": 4096, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 }, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 16, "num_hidden_layers": 24, "pad_token_id": 1, "position_embedding_type": "absolute", "transformers_version": "4.6.1", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } [INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/64a1d72b2bd05b0aff1a4dd9e7a90a6eea0312b4f914e80b0a923aa8f72219bd.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab [INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/425529714b758f50b6d3f93f8093d859856fd41cf1cec7c8edf2ab44aee632b6.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b [INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/d077eac6b48c43618a441cba6eab600a5cc6383b98e7eada6d1ad4d3f3cc457e.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730 [INFO|tokenization_utils_base.py:1717] 2021-06-16 
15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/added_tokens.json from cache at None [INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/special_tokens_map.json from cache at None [INFO|tokenization_utils_base.py:1717] 2021-06-16 15:33:49,522 >> loading file https://huggingface.co/roberta-large-mnli/resolve/main/tokenizer_config.json from cache at None 100% 15000/15000 [1:15:16<00:00, 3.32it/s] 06/16/2021 16:49:06 - INFO - __main__ - Initializing student model [INFO|file_utils.py:1532] 2021-06-16 16:49:07,106 >> https://huggingface.co/distilbert-base-uncased/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpy7f4tyyh Downloading: 100% 442/442 [00:00<00:00, 348kB/s] [INFO|file_utils.py:1536] 2021-06-16 16:49:07,540 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361 [INFO|file_utils.py:1544] 2021-06-16 16:49:07,540 >> creating metadata file for /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361 [INFO|configuration_utils.py:517] 2021-06-16 16:49:07,540 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361 [INFO|configuration_utils.py:553] 2021-06-16 16:49:07,541 >> Model config DistilBertConfig { "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2", "3": "LABEL_3" }, "initializer_range": 0.02, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2, "LABEL_3": 3 }, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.6.1", "vocab_size": 30522 } [INFO|file_utils.py:1532] 2021-06-16 16:49:07,820 >> https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmptuo3f4g2 Downloading: 100% 268M/268M [00:04<00:00, 62.4MB/s] [INFO|file_utils.py:1536] 2021-06-16 16:49:12,343 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a [INFO|file_utils.py:1544] 2021-06-16 16:49:12,343 >> creating metadata file for /root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a [INFO|modeling_utils.py:1155] 2021-06-16 16:49:12,343 >> loading weights file https://huggingface.co/distilbert-base-uncased/resolve/main/pytorch_model.bin from cache at 
/root/.cache/huggingface/transformers/9c169103d7e5a73936dd2b627e42851bec0831212b677c637033ee4bce9ab5ee.126183e36667471617ae2f0835fab707baa54b731f991507ebbb55ea85adb12a [WARNING|modeling_utils.py:1331] 2021-06-16 16:49:12,787 >> Some weights of the model checkpoint at distilbert-base-uncased were not used when initializing DistilBertForSequenceClassification: ['vocab_layer_norm.bias', 'vocab_transform.weight', 'vocab_transform.bias', 'vocab_layer_norm.weight', 'vocab_projector.bias', 'vocab_projector.weight'] - This IS expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DistilBertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:1342] 2021-06-16 16:49:12,787 >> Some weights of DistilBertForSequenceClassification were not initialized from the model checkpoint at distilbert-base-uncased and are newly initialized: ['pre_classifier.bias', 'classifier.bias', 'pre_classifier.weight', 'classifier.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [INFO|configuration_utils.py:517] 2021-06-16 16:49:13,073 >> loading configuration file https://huggingface.co/distilbert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/23454919702d26495337f3da04d1655c7ee010d5ec9d77bdb9e399e00302c0a1.d423bdf2f58dc8b77d5f5d18028d7ae4a72dcfd8f468e81fe979ada957a8c361 [INFO|configuration_utils.py:553] 2021-06-16 16:49:13,074 >> Model config DistilBertConfig { "activation": "gelu", "architectures": [ "DistilBertForMaskedLM" ], "attention_dropout": 0.1, "dim": 768, "dropout": 0.1, "hidden_dim": 3072, "initializer_range": 0.02, "max_position_embeddings": 512, "model_type": "distilbert", "n_heads": 12, "n_layers": 6, "pad_token_id": 0, "qa_dropout": 0.1, "seq_classif_dropout": 0.2, "sinusoidal_pos_embds": false, "tie_weights_": true, "transformers_version": "4.6.1", "vocab_size": 30522 } [INFO|file_utils.py:1532] 2021-06-16 16:49:13,357 >> https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmps3o1_gw9 Downloading: 100% 232k/232k [00:00<00:00, 1.83MB/s] [INFO|file_utils.py:1536] 2021-06-16 16:49:13,766 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt in cache at /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99 [INFO|file_utils.py:1544] 2021-06-16 16:49:13,766 >> creating metadata file for /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99 [INFO|file_utils.py:1532] 2021-06-16 16:49:14,049 >> https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp1n0mi2iy Downloading: 100% 466k/466k [00:00<00:00, 3.48MB/s] [INFO|file_utils.py:1536] 2021-06-16 16:49:14,616 >> storing 
https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json in cache at /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4 [INFO|file_utils.py:1544] 2021-06-16 16:49:14,616 >> creating metadata file for /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4 [INFO|file_utils.py:1532] 2021-06-16 16:49:15,461 >> https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmperm21jrj Downloading: 100% 28.0/28.0 [00:00<00:00, 22.2kB/s] [INFO|file_utils.py:1536] 2021-06-16 16:49:15,745 >> storing https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json in cache at /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79 [INFO|file_utils.py:1544] 2021-06-16 16:49:15,745 >> creating metadata file for /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79 [INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/0e1bbfda7f63a99bb52e3915dcf10c3c92122b827d92eb2d34ce94ee79ba486c.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99 [INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/75abb59d7a06f4f640158a9bfcde005264e59e8d566781ab1415b139d2e4c603.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4 [INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/added_tokens.json from cache at None [INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/special_tokens_map.json from cache at None [INFO|tokenization_utils_base.py:1717] 2021-06-16 16:49:15,746 >> loading file https://huggingface.co/distilbert-base-uncased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/8c8624b8ac8aa99c60c912161f8332de003484428c47906d7ff7eb7f73eecdbb.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79 100% 120000/120000 [00:32<00:00, 3647.18ex/s] 06/16/2021 16:49:49 - INFO - __main__ - Training student model on teacher predictions [INFO|trainer.py:516] 2021-06-16 16:49:49,272 >> The following columns in the training set don't have a corresponding argument in `DistilBertForSequenceClassification.forward` and have been ignored: text. [INFO|trainer.py:1156] 2021-06-16 16:49:49,285 >> ***** Running training ***** [INFO|trainer.py:1157] 2021-06-16 16:49:49,285 >> Num examples = 120000 [INFO|trainer.py:1158] 2021-06-16 16:49:49,285 >> Num Epochs = 1 [INFO|trainer.py:1159] 2021-06-16 16:49:49,285 >> Instantaneous batch size per device = 32 [INFO|trainer.py:1160] 2021-06-16 16:49:49,285 >> Total train batch size (w. 
parallel, distributed & accumulation) = 32 [INFO|trainer.py:1161] 2021-06-16 16:49:49,285 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1162] 2021-06-16 16:49:49,286 >> Total optimization steps = 3750 0% 0/3750 [00:00<?, ?it/s]Traceback (most recent call last): File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 338, in <module> main() File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 328, in main trainer.train() File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1272, in train tr_loss += self.training_step(model, inputs) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1734, in training_step loss = self.compute_loss(model, inputs) File "transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py", line 119, in compute_loss target_p = inputs["labels"] KeyError: 'labels' 0% 0/3750 [00:00<?, ?it/s] ` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Not throw a `KeyError` <!-- A clear and concise description of what you would expect to happen. -->
06-15-2021 17:36:01
06-15-2021 17:36:01
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I have the same issue. <|||||>This should be re-opened. <|||||>Hello, unfortunately, none of the maintainers have the bandwidth to assist in the resolution of this issue. I'm putting a `Good second issue` label and pinging the original author @joeddav. We're happy to review a PR!<|||||>@MLMarins @erip could you provide more details!<|||||>@sadakmed create a text file with a few lines and run the zero-shot text classification example with arbitrary labels. There is some breakage in the `datasets` API that causes the labels from the teacher to not be propagated.<|||||>I was also encountering this error, and noticed that the call to [datasets.Dataset.map() in line 310](https://github.com/huggingface/transformers/blob/857ab55c01cf7213bc1822933cd2ef2b7552bac4/examples/research_projects/zero-shot-distillation/distill_classifier.py#L310) is the culprit. It drops the `labels` column from the dataset. Try replacing it with the following ``` ds_tokenized = dataset.map(tokenizer, input_columns="text") dataset = Dataset.from_dict( { "text": ds_tokenized[:]["text"], "labels": teacher_soft_preds, # output of get_teacher_predictions() "input_ids": ds_tokenized[:]["input_ids"], "attention_mask": ds_tokenized[:]["attention_mask"], } ) dataset.set_format("torch") ```<|||||>@LysandreJik I've created a PR for this issue, please take a look when you get the chance to.
transformers
12,181
closed
Temporarily deactivate torch-scatter while we wait for new release
Torch 1.9.0 just landed, incompatible with torch-scatter installed with version 1.8.0. While we wait for torch-scatter binaries compatible with 1.9.0 to be released, deactivating the torch-scatter-based tests. cc @patrickvonplaten @sgugger @NielsRogge
06-15-2021 17:01:15
06-15-2021 17:01:15
Requested the binary build: https://github.com/rusty1s/pytorch_scatter/issues/224 <|||||>well, you can also change to `torch==1.8.1` and keep everything else the same.<|||||>That's a better alternative, let me update.<|||||>sorry, I meant `pip install torch==1.8.1` - torch-scatter doesn't have 1.8.1 - it's 1.8.0 but works with either pytorch-1.8.x https://github.com/rusty1s/pytorch_scatter#pytorch-180 i.e. I suggested that we don't yet switch to pt-1.9.0 until all the dependants catch up.<|||||>Ah, thank you for clarifying. We do have quite a bunch of failures on torch 1.9.0 (all related to torch fx it seems): ``` FAILED tests/test_modeling_albert.py::AlbertModelTest::test_torch_fx - File... FAILED tests/test_modeling_albert.py::AlbertModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_bert.py::BertModelTest::test_torch_fx - File "<e... FAILED tests/test_modeling_bert.py::BertModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_electra.py::ElectraModelTest::test_torch_fx - Fi... FAILED tests/test_modeling_electra.py::ElectraModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_distilbert.py::DistilBertModelTest::test_torch_fx FAILED tests/test_modeling_distilbert.py::DistilBertModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_gpt_neo.py::GPTNeoModelTest::test_torch_fx - Fil... FAILED tests/test_modeling_gpt_neo.py::GPTNeoModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_torch_fx - File "<e... FAILED tests/test_modeling_gpt2.py::GPT2ModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_megatron_bert.py::MegatronBertModelTest::test_torch_fx FAILED tests/test_modeling_megatron_bert.py::MegatronBertModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torch_fx FAILED tests/test_modeling_mobilebert.py::MobileBertModelTest::test_torch_fx_output_loss FAILED tests/test_modeling_t5.py::T5ModelTest::test_torch_fx - File "<eval_... FAILED tests/test_modeling_t5.py::T5ModelTest::test_torch_fx_output_loss - ... ``` So agreed to keep the CI on 1.8.1 until we resolve this and can update to 1.9.0. cc @michaelbenayoun <|||||>Yes, the torch fx tests should either be skipped or fixed - @michaelbenayoun already knows about this. The problem was uncovered with 1.9.0-RC. He suggested a fix for pytorch instead, but I don't think it made it into 1.9.0<|||||>Merging since this seems all good I and would really like a green CI :-)<|||||>@LysandreJik, https://pytorch-geometric.com/whl/torch-1.9.0+cu111.html is ready. One way to proceed is to wait for @michaelbenayoun - or skip those tests for now and swtich to `torch==1.9.0` while updating `torch-scatter` to the link above.<|||||>Also trying to ask for pytorch-core support to support "sum", "mean", "max" or "min" scatter reduction functions, so that we could drop the need to depend on `torch-scatter` - https://github.com/pytorch/pytorch/issues/22378#issuecomment-862705586 as it is a bit of an ordeal for being used in just a single model and even then it's optional.<|||||>Oh that would be terrific if we had support directly in PyTorch, thanks for asking!<|||||>One other approach is to provide a slower python-only implementation of the same and fall back to it if `torch-scatter` is not available, and not install the latter on CI.
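To illustrate that last point, here is a minimal sketch of the optional-dependency pattern with a python-only fallback (the function name, the `dim=0` restriction and the signature are simplifications for illustration, not the library's actual code):
```python
import torch

try:
    from torch_scatter import scatter_add  # compiled fast path, if installed
except ImportError:
    def scatter_add(src, index, dim=0, dim_size=None):
        """Slower pure-PyTorch fallback: sum rows of `src` into buckets given by `index` (dim=0 only)."""
        dim_size = int(index.max()) + 1 if dim_size is None else dim_size
        out = src.new_zeros((dim_size,) + src.shape[1:])
        idx = index.view(-1, *([1] * (src.dim() - 1))).expand_as(src)
        return out.scatter_add_(0, idx, src)
```
A complete fallback would also need the "mean", "max" and "min" reductions mentioned above, and CI could then simply not install torch-scatter.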
transformers
12,180
closed
Can't run 124M using transformers
I have downloaded the GPT-2 124M model on my local machine and was able to run the interactive_conditional_samples.py script that was provided with it. But when I try to load 124M using transformers, I get the following error: _OSError: Can't load config for 'models\124M'. Make sure that: - 'models\124M' is a correct model identifier listed on 'https://huggingface.co/models' - or 'models\124M' is the correct path to a directory containing a config.json file_ **My code:** tokenizer = AutoTokenizer.from_pretrained("models\\124M") The 124M folder contains the following JSON file: encoder
06-15-2021 16:29:10
06-15-2021 16:29:10
Do you have a reproducible code example, or a colab notebook? What is your environment as requested in the issue template? Did you try with a forward slash?<|||||>> Do you have a reproducible code example, or a colab notebook? What is your environment as requested in the issue template? Did you try with a forward slash? My Code: import gpt_2_simple as gpt2 from transformers import pipeline, set_seed,GPT2Tokenizer, TFGPT2LMHeadModel from transformers import GPT2LMHeadModel, GPT2Tokenizer,AutoTokenizer, AutoModelWithLMHead from aitextgen import aitextgen import os.path data_folder = os.path.join(os.getcwd()) file_to_open = os.path.join(data_folder, "124M") print(file_to_open) tokenizer = AutoTokenizer.from_pretrained(file_to_open) model = AutoModelWithLMHead.from_pretrained(file_to_open) I have attached image of my directory, files inside 124M and error ![124M](https://user-images.githubusercontent.com/20142735/122244766-90109700-cede-11eb-8710-e252e7eea440.PNG) ![01](https://user-images.githubusercontent.com/20142735/122244479-593a8100-cede-11eb-8d20-fedf9f3fd187.PNG) ![output](https://user-images.githubusercontent.com/20142735/122244472-58a1ea80-cede-11eb-8104-a646f65e864d.PNG) <|||||>I don't know how you obtained your `124M` folder but it doesn't seem to be using one of our libraries?<|||||>Our libraries save models with a `config.json`, `pytorch_model.bin` if PyTorch and `tf_model.h5` if TensorFlow.<|||||>> Our libraries save models with a `config.json`, `pytorch_model.bin` if PyTorch and `tf_model.h5` if TensorFlow. I got it from https://github.com/openai/gpt-2 download_model.py 124M (in cmd i wrote) I was able to run interactive_conditional_samples.py (in src folder)<|||||>Is that model different from the `gpt2` available on our model hub? https://huggingface.co/gpt2 You would load it like so: ``` tokenizer = AutoTokenizer.from_pretrained("gpt2") model = AutoModelWithLMHead.from_pretrained("gpt2") ``` If it is different, then you should use the conversion script to convert it to a HF-style checkpoint: https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/convert_gpt2_original_tf_checkpoint_to_pytorch.py<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
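To make the conversion suggestion above concrete: once the OpenAI checkpoint has been converted, the output folder is a regular HF-style checkpoint (config.json + pytorch_model.bin) and loads like any other model. The folder name below is just a placeholder for wherever the conversion script wrote its output:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Hypothetical output directory of the conversion script; after conversion it
# should contain config.json and pytorch_model.bin.
model = GPT2LMHeadModel.from_pretrained("models/124M-pytorch")
# The OpenAI download has no HF tokenizer files, so pair it with the stock GPT-2 tokenizer.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
```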
transformers
12,179
closed
Tensorflow variant of DataCollatorForLanguageModeling.
Co-authored-by: Dalton Walker <[email protected]> # What does this PR do? We didn't see any support for TensorFlow within the DataCollatorForLanguageModeling data class. Integrating directly with TensorFlow seems useful for TensorFlow users and avoids the need for tensor conversion. This PR adds a TFDataCollatorForLanguageModeling data class that integrates directly with TensorFlow tensors and paves the way for further TFDataCollator conversions. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik Anyone in the community is free to review the PR.
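To make the intent concrete, here is a rough sketch of the core MLM masking step such a TF collator needs, mirroring the 80/10/10 scheme of the existing PyTorch DataCollatorForLanguageModeling (the function name and the omission of special-token handling are simplifications, not the PR's actual code):
```python
import tensorflow as tf

def tf_mask_tokens(inputs, mask_token_id, vocab_size, mlm_probability=0.15):
    """Select ~15% of tokens; of those, 80% -> [MASK], 10% -> random token, 10% left unchanged."""
    labels = tf.identity(inputs)
    masked = tf.random.uniform(tf.shape(inputs)) < mlm_probability
    labels = tf.where(masked, labels, -100 * tf.ones_like(labels))  # -100 is ignored by the loss
    use_mask = tf.logical_and(masked, tf.random.uniform(tf.shape(inputs)) < 0.8)
    inputs = tf.where(use_mask, mask_token_id * tf.ones_like(inputs), inputs)
    use_random = tf.logical_and(tf.logical_and(masked, tf.logical_not(use_mask)),
                                tf.random.uniform(tf.shape(inputs)) < 0.5)
    random_tokens = tf.random.uniform(tf.shape(inputs), maxval=vocab_size, dtype=inputs.dtype)
    inputs = tf.where(use_random, random_tokens, inputs)
    return inputs, labels
```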
06-15-2021 15:00:13
06-15-2021 15:00:13
Hello, it seems there are a lot of changes in your PR. If you think this isn't the case, do you mind closing this and opening a new PR so that we may see the correct diff? Also feel free to ping @Rocketknight1 and @sgugger for review<|||||>Will do! We had some issues with git but we can clean it up and resubmit.
transformers
12,178
closed
Update AutoModel classes in summarization example
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This updates the example for text summarization on the `Summary of tasks` page so that no deprecation warnings are shown. In detail: - Convert use of deprecated `AutoModelWithLMHead` to `AutoModelForSeq2SeqLM` - Add newly required `truncation=True` to `tokenizer.encode` with `max_length` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
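For reference, a sketch of what the updated snippet on that page looks like after the change (the checkpoint name and generation arguments are written from memory and may differ slightly from the final docs):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

ARTICLE = "..."  # any long news article

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")  # previously AutoModelWithLMHead

inputs = tokenizer.encode("summarize: " + ARTICLE, return_tensors="pt", max_length=512, truncation=True)
summary_ids = model.generate(inputs, max_length=150, min_length=40, num_beams=4, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```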
06-15-2021 14:23:04
06-15-2021 14:23:04
transformers
12,177
closed
Exception during hyperparameter search with Ray and transformers library starting from version 4.5.0
I currently face the problem that with recent versions of the transformers library (issue starting at version 4.5.0) the hyperparameter search with ray tune runs into a serialization issue described below. ## Environment info - `transformers` version: 4.5.0 - Platform: Linux-4.19.0-16-amd64-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no - Ray version: 1.4.0 ### Who can help Maybe it is interesting to @richardliaw and @amogkam because they were mentioned as responsible for ray/raytune. ## Information Model I am using (Bert, XLNet ...): distilbert-base-uncased ( model doesn't matter) The problem arises when using: * [ x] my own modified scripts: (give details below) The tasks I am working on is: * [x ] an official GLUE/SQUaD task: (give the name): GLUE mrpc ## To reproduce I have created a small working example which shows the error which (at least) I get:. The code is mainly based on the [blog entry covering ray tune](https://huggingface.co/blog/ray-tune) ```python import os os.environ['TOKENIZERS_PARALLELISM'] = 'false' from datasets import load_dataset, load_metric from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments from ray import tune from ray.util import inspect_serializability model_name = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_name) dataset = load_dataset('glue', 'mrpc') def encode(examples): outputs = tokenizer(examples['sentence1'], examples['sentence2'], truncation=True) return outputs encoded_dataset = dataset.map(encode, batched=True) def model_init(): return AutoModelForSequenceClassification.from_pretrained(model_name, return_dict=True) def compute_metrics(eval_pred): metric = load_metric('glue', 'mrpc') predictions, labels = eval_pred predictions = predictions.argmax(axis=-1) return metric.compute(predictions=predictions, references=labels) training_args = TrainingArguments("test") trainer = Trainer( args=training_args, tokenizer=tokenizer, train_dataset=encoded_dataset["train"], eval_dataset=encoded_dataset["validation"], model_init=model_init, compute_metrics=compute_metrics, ) def search_params(trial): return { #toy example "learning_rate": tune.grid_search([0.000001, 0.00001, 0.0001, 0.001]), } trainer.hyperparameter_search( direction="maximize", backend="ray", hp_space = search_params, n_trials=1, ) ``` This code snippet works with transformers version 4.4.2 and ealier but not on versions 4.5.0 and later. 
The error which appeared is ```python Traceback (most recent call last): File "working_example.py", line 48, in <module> trainer.hyperparameter_search( File "/site-packages/transformers/trainer.py", line 1459, in hyperparameter_search best_run = run_hp_search(self, n_trials, direction, **kwargs) File "/site-packages/transformers/integrations.py", line 231, in run_hp_search_ray analysis = ray.tune.run( File "/site-packages/ray/tune/tune.py", line 297, in run _ray_auto_init() File "/site-packages/ray/tune/tune.py", line 664, in _ray_auto_init ray.init() File "/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper return func(*args, **kwargs) File "/site-packages/ray/worker.py", line 866, in init hook() File "/site-packages/ray/tune/registry.py", line 171, in flush self.references[k] = ray.put(v) File "/site-packages/ray/_private/client_mode_hook.py", line 62, in wrapper return func(*args, **kwargs) File "/site-packages/ray/worker.py", line 1527, in put object_ref = worker.put_object(value) File "/site-packages/ray/worker.py", line 280, in put_object serialized_value = self.get_serialization_context().serialize(value) File "/site-packages/ray/serialization.py", line 326, in serialize return self._serialize_to_msgpack(value) File "/site-packages/ray/serialization.py", line 306, in _serialize_to_msgpack self._serialize_to_pickle5(metadata, python_objects) File "/site-packages/ray/serialization.py", line 266, in _serialize_to_pickle5 raise e File "/site-packages/ray/serialization.py", line 262, in _serialize_to_pickle5 inband = pickle.dumps( File "/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps cp.dump(obj) File "/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump return Pickler.dump(self, obj) TypeError: cannot pickle '_thread.RLock' object ``` Based on this error, I searched for code to check which part is not serializable (because the whole trainer is transferred to each ray trial). I found the [ray serialization page](https://docs.ray.io/en/master/serialization.html#troubleshooting) and executed ```python inspect_serializability(trainer, name="test") ``` The output was: ``` ================================================================================ Checking Serializability of <transformers.trainer.Trainer object at 0x7fce1cbbeee0> ================================================================================ !!! FAIL serialization: cannot pickle '_thread.RLock' object Serializing 'compute_metrics' <function compute_metrics at 0x7fce1cb5b5e0>... Serializing 'model_init' <function model_init at 0x7fce1cb5b550>... Serializing '_gather_and_numpify' <bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>... !!! FAIL serialization: cannot pickle '_thread.RLock' object Serializing '__func__' <function Trainer._gather_and_numpify at 0x7fce1f739940>... WARNING: Did not find non-serializable object in <bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>. This may be an oversight. ================================================================================ Variable: FailTuple(_gather_and_numpify [obj=<bound method Trainer._gather_and_numpify of <transformers.trainer.Trainer object at 0x7fce1cbbeee0>>, parent=<transformers.trainer.Trainer object at 0x7fce1cbbeee0>]) was found to be non-serializable. There may be multiple other undetected variables that were non-serializable. 
Consider either removing the instantiation/imports of these variables or moving the instantiation into the scope of the function/class. If you have any suggestions on how to improve this error message, please reach out to the Ray developers on github.com/ray-project/ray/issues/ ================================================================================ ``` I did not find any major changes between version 4.4.2 and 4.5.0 with regards to integrations.py and trainer.py. I think the first step would be, that someone else reproduce the behaviour if possible (maybe something is also wrong on my side/setup).
06-15-2021 14:02:20
06-15-2021 14:02:20
Hey @sven-h yes this is a known issue, same as https://github.com/huggingface/transformers/issues/11249. From this thread: > If you disable the memory tracker (pass in skip_memory_metrics=True into your TrainingArguments) then you will no longer get the pickling error. In the next transformers release, the Ray Tune integration will automatically disable memory tracking if it's currently being enabled. <|||||>Hi @amogkam thanks for the fast reply and the answer.
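For anyone hitting this before the next release, the workaround described above is a one-line change to the snippet from the issue:
```python
from transformers import TrainingArguments

# Disable the memory tracker so the Trainer stays picklable for Ray Tune.
training_args = TrainingArguments("test", skip_memory_metrics=True)
```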
transformers
12,176
closed
Update conversion of Tatoeba marian models
# What does this PR do? The Helsinki-NLP / Tatoeba NMT models have gone through various architectural changes, and the old conversion code fails on them. This commit is something of a rewrite to remedy this, in particular parsing supplied yaml files rather than README.md files. It needs to be looked at by someone on the Huggingface side. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @sshleifer
06-15-2021 12:25:46
06-15-2021 12:25:46
Oops, this PR depends on merging a pull request to Tatoeba which hasn't happened yet. Closing for now.
transformers
12,175
closed
TPU training is stuck using T5 with PyTorch Lightning
## Environment info - `transformers` version: 4.6.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Using GPU in script?: False - Using distributed or parallel set-up in script?: False ### Who can help @patrickvonplaten, @patil-suraj ## Information I'm fine-tuning the `t5-small` model using PyTorch Lightning on TPU v3 (Google Colab) on the `imdb` dataset. The training is stuck with an 8-core setup and works well with a 1-core setup. It seems super weird, since the `roberta-base` model works just fine using all 8 cores. I've filed a similar issue https://github.com/PyTorchLightning/pytorch-lightning/issues/7984, but it would be great to receive some feedback on whether `t5` models are proven to work on TPUs with the Lightning trainer. ## To reproduce Please use this Google Colab Notebook: https://colab.research.google.com/drive/1FbWfkho3Otfl19y5ybkrK5Jw_tWNqV-M?usp=sharing ## Expected behavior `t5` models should work fine with an 8-core TPU training setup.
06-15-2021 11:50:04
06-15-2021 11:50:04
Hello! We don't have CI running with pytorch lightning so we would recommend opening an issue on their repository. Did you try a TPU training with the `Trainer` that comes with `transformers`? It should work fine on 8 cores.<|||||>Hello @LysandreJik, thanks for replying! Yes, `🤗/Trainer` works perfectly fine with 8-cores, `t5-small` gets fine-tuned in under 10 minutes on TPU-v3. Nevertheless, it's still unclear whether this issue happens due to some lightning trainer internals or T5 model – `roberta-base` works fine with lightning using 8 TPU cores. The latter makes me think that it might be some T5-specific issue, mightn't be? P.S. Lightning Trainer reports, that `lm_head.weight` parameter isn't tied (it seems missing prior to moving the model to the XLA device). Just in case.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,174
closed
Pretrained XLM model with TLM objective generates nonsensical predictions
Hi, I want to use the [`xlm-mlm-tlm-xnli15-1024`](https://huggingface.co/xlm-mlm-tlm-xnli15-1024) pretrained model, which is the XLM model trained with the auxiliary Translation Language Modeling (TLM) objective. I want to give a translation pair to the model, mask some words in one of the sentences and then get the predictions of the model for the masked words. Check the figure for reference. ![](https://pbs.twimg.com/media/DxmfDcyXcAE-YOr.jpg:large) My problem is that the model makes nonsensical predictions, which means that either I am doing something wrong, such as feeding the wrong input, or the model is not loaded properly. Here is a code snippet: ```python import torch from transformers import XLMWithLMHeadModel, XLMTokenizer model_name = "xlm-mlm-tlm-xnli15-1024" tokenizer = XLMTokenizer.from_pretrained(model_name) model = XLMWithLMHeadModel.from_pretrained(model_name) model.eval() src_lang_id = tokenizer.lang2id["en"] # English trg_lang_id = tokenizer.lang2id["el"] # Greek src_text = "I love pasta with tomato sauce!".replace("tomato", tokenizer.mask_token) trg_text = "Μου αρέσουν τα ζυμαρικά με σάλτσα ντομάτας!" print(f"{src_text}->{trg_text}") # get token_ids src_input_ids = torch.tensor([tokenizer.encode(src_text)]) trg_input_ids = torch.tensor([tokenizer.encode(trg_text)]) src_len = src_input_ids.shape[1] trg_len = trg_input_ids.shape[1] # get lang_ids src_langs = torch.tensor([src_lang_id] * src_len).view(1, -1) trg_langs = torch.tensor([trg_lang_id] * trg_len).view(1, -1) # get token_type_ids src_type = torch.tensor([0] * src_len).view(1, -1) trg_type = torch.tensor([1] * trg_len).view(1, -1) input_ids = torch.cat([src_input_ids, trg_input_ids], dim=1) token_type_ids = torch.cat([src_type, trg_type], dim=1) lang_ids = torch.cat([src_langs, trg_langs], dim=1) position_ids = torch.cat([torch.arange(src_len), torch.arange(trg_len)]) # encode and predict result = model(input_ids, langs=lang_ids, position_ids=position_ids.view(1, -1), token_type_ids=token_type_ids) # get predictions for masked token masked_index = torch.where(input_ids == tokenizer.mask_token_id)[1].tolist()[0] result = result[0][:, masked_index].topk(5).indices result = result.tolist()[0] print(f"Predictions:", tokenizer.decode(result)) ``` Console output: ``` I love pasta with <special1> sauce!->Μου αρέσουν τα ζυμαρικά με σάλτσα ντομάτας! Predictions: with the 'i'my ``` I tried omitting some of the arguments to the model, changing the example sentence-pair and the languages, but I always get weird predictions. Am I doing something wrong? Important: I tried downgrading to `transformers==2.9.0` to make this error message go away: ``` Some weights of XLMWithLMHeadModel were not initialized from the model checkpoint at xlm-mlm-tlm-xnli15-1024 and are newly initialized: ['transformer.position_ids'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` However, I noticed that even in that version, the predictions are the same, which means that there is something else going on. I don't want to train the model on another task. I want to use the pretrained model to make predictions in exactly the same way it was pretrained.
06-15-2021 11:49:18
06-15-2021 11:49:18
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,173
closed
Use a released version of optax rather than installing from Git.
(We were using a new API that hadn't been released until a few weeks ago) # What does this PR do? Update the version of Optax we depend on in the Flax examples' requirements.txt to the latest released version. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <-- I installed dependencies via requirements.txt and then ran run_flax_glue.py.
06-15-2021 10:50:58
06-15-2021 10:50:58
transformers
12,172
closed
How can we modify the MM-IMDB model for sequence to sequence generation tasks?
Hi all, thank you so much for the wonderful service. I have some doubts regarding the training details for the MM-IMDB dataset. Are the image encoder's and tokenizer's embeddings fine-tuned during training on the MM-IMDB dataset? If not, can you suggest a way to do it or point me to any material that would help? Is there a way to modify the code so that the model's pre-trained weights can be used for sequence-to-sequence generation tasks instead of classification? Thank you.
06-15-2021 10:31:08
06-15-2021 10:31:08
Hi, could you please ask this question on the [forum](https://discuss.huggingface.co/) rather than here? We like to keep Github issues for bugs/feature requests. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,171
closed
[Flax generate] Add params to generate
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds an optional `params` input to the `generate()` function. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-15-2021 09:56:47
06-15-2021 09:56:47
transformers
12,170
closed
Vision Transformer (ViT) feature vector example (not classification)
# 🚀 Feature request ## Motivation I would like to see an example of using the Vision Transformer just for feature extraction, i.e. getting the feature vector before the classification head of the network, as can be done with TensorFlow Hub: https://www.tensorflow.org/hub/common_signatures/images?hl=en#image_feature_vector
06-15-2021 09:33:36
06-15-2021 09:33:36
In HuggingFace Transformers, models typically output a dictionary. You can access the feature vector by getting the `pooler_output` key of that dictionary (assuming you're using `ViTModel`). It's a tensor of shape `(batch_size, hidden_size)`, so in case you're only providing a single image, and you're using the base-sized model, this will be a tensor of shape `(1, 768)`. Here's an example: ``` from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) feature_vector = outputs.pooler_output ```<|||||>thank you, that is what i was looking for ! thank you<|||||>@NielsRogge is it always `pooler_output` key that contains the feature vector for all image transformers (such as CLIP, DeiT, VisualBERT, DETR)?
transformers
12,169
closed
Allow setting permissions of downloaded models (via envvar)
In our research group we all have user accounts on a server where we each run our own experiments (Ubuntu behind the scenes). By default, everyone downloads `transformers` models to their own home directory. Let's say we have 20 researchers; that might mean we have 20 duplicates of "bert-base-cased" on the server (and of many other models). This is not efficient at all and takes up more disk space than we would like. We have tried creating a 777 directory as TRANSFORMERS_CACHE globally, but that does not work. If I download a model, some of the downloaded files get read/write access only for me as the creator of the file. This means that others cannot use the model (permission denied). Our suggestion or request would be to have an option that also sets the permissions of all downloaded files when downloading a model, preferably adjustable via a (system-wide) environment variable. This would probably need to be added in file_utils.py, similar to other options like "local_files_only". I currently do not have time to work on this myself, but I am open to any feedback of course.
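To illustrate the request, a very rough sketch of what such a hook could look like in the download code (the environment variable name, the helper function and where it would be called from are all made up for illustration; nothing like this exists in file_utils.py today):
```python
import os

def _maybe_relax_permissions(path: str) -> None:
    """Hypothetical hook: apply a user-configured mode to a freshly cached file."""
    mode = os.environ.get("TRANSFORMERS_CACHE_FILE_MODE")  # e.g. "664" or "775"
    if mode is not None:
        os.chmod(path, int(mode, 8))
```
Combined with a group-writable TRANSFORMERS_CACHE directory, this would let every account in the group reuse the same cached models.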
06-15-2021 08:38:06
06-15-2021 08:38:06
Would PR https://github.com/huggingface/transformers/pull/11119 help with your use-case?<|||||>> Would PR #11119 help with your use-case? Indeed, thanks!
transformers
12,168
closed
Special tokens not tokenized properly
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Python version: 3.8.5 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik ## Information Hi, I have recently further pretrained a RoBERTa model with fairseq. I use a custom vocabulary, trained with the tokenizers module. After converting the fairseq model to pytorch, I loaded all my model-related files [here](https://huggingface.co/manueltonneau/twibert-lowercase-50272/tree/main). When loading the tokenizer, I noticed that the special tokens are not tokenized properly. ## To reproduce ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('manueltonneau/twibert-lowercase-50272') tokenizer.tokenize('<mask>') Out[7]: ['<mask>'] tokenizer.tokenize('<hashtag>') Out[8]: ['hashtag'] tokenizer.tokenize('<hashtag>') Out[3]: [0, 23958, 2] ``` ## Expected behavior Since `<hashtag>` is a special token in the vocabulary with ID 7 (see [here](https://huggingface.co/manueltonneau/twibert-lowercase-50272/blob/main/vocab.json)), the last output should be: [0, 7, 2]. `<hashtag>` with the '<>' should also be recognized as a unique token. ## Potential explanation When looking at the files from [a similar model](https://huggingface.co/vinai/bertweet-base), it seems that the vocab is in txt format and they also have the `bpe.codes` file, which I don't have. Could that be the issue? And if so, how do I convert my files to this format? For vocab.txt, I have already found your lengthy explanation [here](https://github.com/huggingface/transformers/issues/1083), thanks for this.
06-15-2021 07:32:25
06-15-2021 07:32:25
Hello! What is your tokenizer? Is it a WordPiece-based tokenizer, or a Byte-level BPE-based tokenizer like the original one from RoBERTa?<|||||>Hi @LysandreJik, thanks for your reply and sorry that I'm just seeing this now. My tokenizer is a byte-level BPE-based tokenizer. <|||||>Hi @LysandreJik, let me know if you have a solution for this or if you need more info, thanks a lot in advance :) <|||||>Hi, How did you add the additional special tokens? So you start from a pre-trained RoBERTa, then added additional special tokens and further pre-trained on a corpus? Did you add these additional special tokens using the tokenizers library? Normally, one can add additional tokens as follows (based on https://github.com/huggingface/tokenizers/issues/247#issuecomment-675458087): ``` special_tokens_dict = {'additional_special_tokens': ['[C1]','[C2]','[C3]','[C4]']} num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) ``` However, printing the following: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('manueltonneau/twibert-lowercase-50272') print(tokenizer.additional_special_tokens) ``` Returns `[]`. So you can solve it by doing: ``` special_tokens_dict = {'additional_special_tokens': ['<hashtag>']} num_added_toks = tokenizer.add_special_tokens(special_tokens_dict) ``` When I then test your example: ``` tokenizer.tokenize('<hashtag>') ``` I get: `['<hashtag>']`. And when doing: ``` tokenizer.convert_tokens_to_ids(tokenizer.tokenize("<hashtag>", add_special_tokens=True)) ``` I get: `[0, 7, 2]`.<|||||>Awesome @NielsRogge, thanks a lot! Will test this and get back to you/close if solved. <|||||>>How did you add the additional special tokens? So you start from a pre-trained RoBERTa, then added additional special tokens and further pre-trained on a corpus? I created a new vocab with the tokenizers module for which I added new special tokens. Here is the code I use below: ``` # Initialize a tokenizer tokenizer = Tokenizer(models.BPE()) # Customize pre-tokenization and decoding tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=True) tokenizer.decoder = decoders.ByteLevel() tokenizer.post_processor = processors.ByteLevel(trim_offsets=True) # And then train trainer = trainers.BpeTrainer(vocab_size=args.vocab_size, min_frequency=2, special_tokens=[ "<s>", "<pad>", "</s>", "<unk>", "<mask>", "@USER", "HTTPURL", "<hashtag>", "</hashtag>" ], show_progress=True) files = [os.path.join(args.corpus_dir, filename) for filename in os.listdir(args.corpus_dir)] i = 0 start_time = time.time() for file in files: print(f'Starting training on {file}') tokenizer.train([file], trainer=trainer) i = i + 1 print(f'{i} files done out of {len(files)} files') print(f'Time elapsed: {time.time() - start_time} seconds') # And Save it output_dir = f'/scratch/mt4493/twitter_labor/twitter-labor-data/data/pretraining/US/vocab_files/{args.vocab_size}/{args.vocab_name}' if not os.path.exists(output_dir): os.makedirs(output_dir) tokenizer.model.save(output_dir) ```<|||||>Works fine, thanks again!
transformers
12,167
closed
ViT for resolution beyond 224x224 support
When the resolution changes, the size of the position embeddings of ViTModel also changes, which makes the ```from_pretrained``` method fail. So, how can I use ViT with a different resolution, such as 64x64?
06-15-2021 07:17:13
06-15-2021 07:17:13
One would need to interpolate the pre-trained position embeddings. You can see how this is done in the original implementation [here](https://github.com/google-research/vision_transformer/blob/00883dd691c63a6830751563748663526e811cee/vit_jax/checkpoint.py#L224). You can find a PyTorch implementation of that [here](https://github.com/yitu-opensource/T2T-ViT/blob/964796c75445aa5163766d1caa20755f67b0da6f/utils.py#L27) (taken from the T2T-ViT implementation), where they show how you can go from 224 to 384. The pre-trained position embeddings are of shape (1, 197, 768) - there are 196 "positions" in an image of 224x224 with a patch size of 16x16 as (224/16)^2 = 196 and we add 1 for the [CLS] token - and suppose you want to fine-tune at resolution of 64x64 with a patch size of 8, then the number of position embeddings is (64/8)^2 + 1 = 65. In that case, the position embeddings during fine-tuning are of shape (1, 65, 768), and you can use that function to map the pre-trained position embeddings from shape (1, 197, 768) to (1, 65, 768).<|||||>Thank you for your reply! Actually, I know how to interpolate the pos embedding. But I don't know how to do it seamlessly with huggingface ViTModel. Is it necessary to modify the internal code?<|||||>When I change the image size of ViTModel, I cannot even load it from a pretrained checkpoint. ```python from transformers import ViTModel model = ViTModel.from_pretrained('vit-base-patch16-224', image_size=64) ``` This raises an error due to the mismatch of position embedding size.<|||||>I think you first need to load the `state_dict` of the original model, like so: ``` from transformers import ViTModel model = ViTModel.from_pretrained('google/vit-base-patch16-224') # load pretrained model state_dict = model.state_dict() ``` Then, initialize a new `ViTModel` with custom `image_size`, update the position embeddings of the `state_dict` and load the new model with that `state_dict`: ``` from transformers import ViTConfig config = ViTConfig.from_pretrained('google/vit-base-patch16-224', image_size=64) # new model with custom image_size model = ViTModel(config=config) # update state_dict new_state_dict = state_dict.copy() old_posemb = new_state_dict['embeddings.position_embeddings'] if model.embeddings.position_embeddings.shape != old_posemb.shape: # need to resize the position embedding by interpolate new_posemb = resize_pos_embed(old_posemb, model.embeddings.position_embeddings) # use PyTorch function linked above new_state_dict['embeddings.position_embeddings'] = new_posemb # equip new model with state_dict model.load_state_dict(new_state_dict) ```<|||||>Wow, you are so smart! That's awesome!<|||||>Thanks NielsRogge for pointing me here, very helpful resource. Just another quick question, where can we specify the patch size we would like ViT to extract from images? For instance, on CIFAR10 32x32 I wouldn't like to use 16x16 patch size, but maybe something like 8x8 or 4x4 would be more appropriate.
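For completeness, here is roughly what the `resize_pos_embed` helper referenced above looks like (adapted from the linked T2T-ViT/timm code; treat it as a sketch rather than the exact function):
```python
import torch
from torch import nn

def resize_pos_embed(posemb, posemb_new):
    """Interpolate pre-trained ViT position embeddings (1, N_old + 1, dim) to the new grid size."""
    cls_tok, grid = posemb[:, :1], posemb[0, 1:]
    gs_old = int(grid.shape[0] ** 0.5)              # e.g. 14 for 224/16
    gs_new = int((posemb_new.shape[1] - 1) ** 0.5)  # e.g. 8 for 64/8
    grid = grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2)
    grid = nn.functional.interpolate(grid, size=(gs_new, gs_new), mode="bilinear", align_corners=False)
    grid = grid.permute(0, 2, 3, 1).reshape(1, gs_new * gs_new, -1)
    return torch.cat([cls_tok, grid], dim=1)
```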
transformers
12,166
closed
[testing] ensure concurrent pytest workers use a unique port for torch.dist
As discussed at https://github.com/huggingface/transformers/issues/12164 currently concurrent tests may try to use the same port when running `-m torch.distributed.launch` and thus fail with `RuntimeError: Address already in use` error. This PR solves this problem by assigning a unique port to each worker when run under `pytest-xdist` with `-n 2` or higher. It also adds 2 helper `testing_utils.py` functions: - `pytest_xdist_worker_id` - `get_torch_dist_unique_port` to accomplish that. Actually I'm not 100% sure that the original failure was caused by this problem, as it could be also caused by some run-away test that still holds the port. If this is the case I will work further on this helper function to actually test that the port it returns is free and will have to think of some extra solutions, because checking that the port is free and binding it is not atomic and there could be a race condition leading to the same problem. But this is an important fix on its own as long as we plan to continue using pytest-xdist Fixes: https://github.com/huggingface/transformers/issues/12164 @LysandreJik, @sgugger
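The gist of the two helpers is simple; a minimal sketch is below (the actual implementation in testing_utils.py may differ in details, and the base port is just torch.distributed.launch's default):
```python
import os

def pytest_xdist_worker_id() -> int:
    """0 when not running under pytest-xdist, otherwise the numeric part of gw0, gw1, ..."""
    worker = os.environ.get("PYTEST_XDIST_WORKER", "gw0")
    return int(worker.replace("gw", ""))

def get_torch_dist_unique_port() -> int:
    """Offset the default master port by the worker id so concurrent workers never collide."""
    return 29500 + pytest_xdist_worker_id()
```
Each distributed test can then pass its own `--master_port` value to `torch.distributed.launch`.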
06-15-2021 04:22:03
06-15-2021 04:22:03
transformers
12,165
closed
Documentation for tiny-gpt2 in transformers/examples/pytorch
# 🚀 Documentation request The **tiny-gpt2** transformer model is great for fast prototyping, but it seems sparsely documented on the Huggingface hub: https://huggingface.co/sshleifer/tiny-gpt2 ## Motivation It would be helpful if users knew basic info about how tiny-gpt2 was trained. Is it the same corpus as the standard gpt2? Was it distilled from a larger model or trained from scratch? Etc. ## Your contribution As I did not train tiny-gpt2, I don't know any info about it.
06-15-2021 01:43:36
06-15-2021 01:43:36
It is randomly initialized and trained for 2 steps. So basically it can only be used for prototyping.
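For anyone who needs the same thing without relying on that checkpoint, an equivalent tiny random GPT-2 can be built locally; the sizes of sshleifer/tiny-gpt2 are not documented, so the values below are just illustrative small numbers:
```python
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

config = GPT2Config(n_embd=8, n_layer=2, n_head=2)  # tiny, arbitrary sizes
model = GPT2LMHeadModel(config)                     # randomly initialized, prototyping only
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # pair it with the standard GPT-2 tokenizer
```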
transformers
12,164
closed
[testing] concurrent dist tests fail when using the same master_port
The failing `tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_ddp` on the multi-gpu slow runner happens because of the `-n 2` concurrency on push (it's `-n 1` on scheduled), so we end up with 2 dist tests running at the same time, both using the same default port used by `torch.distributed.launch`, and the latter test failing with "Address already in Use". I started a discussion at https://github.com/pytorch/pytorch/issues/59978 hoping to expose `init_method=file://` through the CLI, but alas it won't help since the test suite needs to support older pytorch even if it's exposed. Furthermore, pytorch-1.9.1 or perhaps higher will have a different way of using FileStore as a "rendezvous endpoint" in `torch.distributed.run` - a replacement for `torch.distributed.launch`. Meanwhile it was proposed to use TorchElastic, which requires launching a server https://pytorch.org/elastic/0.2.2/quickstart.html before the test suite starts and somehow ensuring it gets killed at the end. This looks very error-prone to me, especially when the test suite fails. But I'm not yet sure how to come up with an algorithm that gives each test client a unique unused port, other than writing yet another server that does the port management on demand. @LysandreJik
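For context, the `init_method=file://` rendezvous mentioned above looks like this at the API level; it avoids reserving a TCP port entirely, but is only usable when the test controls the `init_process_group` call itself (the path below is an arbitrary example):
```python
import torch.distributed as dist

def init_with_file_store(rank: int, world_size: int):
    # Processes rendezvous through a shared file instead of a host:port pair.
    dist.init_process_group(
        backend="gloo",
        init_method="file:///tmp/ddp_rendezvous_store",
        world_size=world_size,
        rank=rank,
    )
```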
06-15-2021 01:12:25
06-15-2021 01:12:25
Would it be helpful to extract the tests that use the same port with a custom decorator, and run them in a separate `run` directive with `-n 1`?<|||||>That is a possibility too, see the simple proposed solution https://github.com/huggingface/transformers/pull/12166 - perhaps to try first.
transformers
12,163
closed
Missing code for predicting custom labels in Bert
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.1 - Platform: MacOS 10.15.7 - Python version: 3.8 - PyTorch version (GPU?): 1.8.1 No GPU - Tensorflow version (GPU?): No - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [X ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X ] my own task or dataset: (give details below) NER custom fine-tuned. ## To reproduce Steps to reproduce the behavior: 1. Create a dataset and load it. 2. Set your features with new labels 3. Load the bert_Base_cased config, transformer, and model 4. Tokenize the data 5. Create a trainer and start it ```python dataset = load_dataset('json', data_files=datasetPath + pathDel + datasetName, split='train') # Dataset column that serves as model's input text_column_name = "tokens" # Dataset column that serves as fine-tuning labels (ner_tags, pos_tags, or chunk_tags in our case) label_column_name = "ner_tags" # Define variables used by tokenize_and_align_labels fn column_names = dataset.column_names # NOT USED (GWC) label_list = features[label_column_name].feature.names label_to_id = {label_list[i]: i for i in range(len(label_list))} # Need to tell the model how many labels it's supposed to predict num_labels = len(label_list) model_name = 'bert-base-cased' config = AutoConfig.from_pretrained(model_name, num_labels=num_labels) tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True, padding=True, truncation=True) # GWC CHANGED added padding=True and truncation=True model = AutoModelForTokenClassification.from_pretrained(model_name, config=config) padding = True def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer( examples[text_column_name], padding=padding, truncation=True, # We use this argument because the texts in our dataset are lists of words (with a label for each word). is_split_into_words=True, ) labels = [] for i, label in enumerate(examples[label_column_name]): word_ids = tokenized_inputs.word_ids(batch_index=i) previous_word_idx = None label_ids = [] for word_idx in word_ids: # Special tokens have a word id that is None. We set the label to -100 so they are automatically # ignored in the loss function. if word_idx is None: label_ids.append(-100) # We set the label for the first token of each word. elif word_idx != previous_word_idx: label_ids.append(label_to_id[label[word_idx]]) # For the other tokens in a word, we set the label to either the current label or -100, depending on # the label_all_tokens flag. else: label_ids.append(label_to_id[label[word_idx]]) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs train_dataset = dataset.map( tokenize_and_align_labels, batched=True, ) trainer = Trainer( model=model, train_dataset=train_dataset, tokenizer=tokenizer ) print('Training dataset') trainer.train() ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I am expecting it to train the model on our custom data. It was failing during training and I found the bug and fixed it. So mostly I am just trying to report the bug. The bug is in transformers/tokenization_utils_base.py at line 2990. In the _pad() method you forgot to add an if statement for labels. More specifically you have a if self.padding_side == "right": and a if self.padding_side == "left": and both of them are missing the nested if for labels. (The have one for token_type_ids & special_tokens_mask) You should add the section for both left and right but here is the change I made for the "right" side: ```python if needs_to_be_padded: difference = max_length - len(required_input) if self.padding_side == "right": if return_attention_mask: encoded_inputs["attention_mask"] = [1] * len(required_input) + [0] * difference if "token_type_ids" in encoded_inputs: encoded_inputs["token_type_ids"] = ( encoded_inputs["token_type_ids"] + [self.pad_token_type_id] * difference ) if "labels" in encoded_inputs: encoded_inputs["labels"] = ( encoded_inputs["labels"] + [-100] * difference ) if "special_tokens_mask" in encoded_inputs: encoded_inputs["special_tokens_mask"] = encoded_inputs["special_tokens_mask"] + [1] * difference encoded_inputs[self.model_input_names[0]] = required_input + [self.pad_token_id] * difference ..... ```
06-14-2021 22:22:07
06-14-2021 22:22:07
Hi, Tokenizers in HuggingFace Transformers don't take care of padding labels (this should be done by the user). You can only provide text to a tokenizer, and it will turn it into `input_ids`, `attention_mask` and `token_type_ids`. The `tokenize_and_align_labels` function will take care of labeling each token. <|||||>Thanks for this note @NielsRogge and sorry for the delay getting back to you. Lots to do here. We will make the change in our code, but it seems like this would be a good feature for the framework, and the code is done. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge can we at least get a better error message for this?<|||||>Hello, how can I find acceptable labels for train_data to fine-tune a pretrained transformer sentiment model?
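A concrete way to do the user-side label padding described above, instead of patching `_pad()`, is to let the token-classification data collator handle it; it pads `labels` with -100 so padded positions are ignored by the loss:
```python
from transformers import AutoTokenizer, DataCollatorForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
data_collator = DataCollatorForTokenClassification(tokenizer, label_pad_token_id=-100)
# Pass data_collator=data_collator to the Trainer and drop padding=True from the tokenizer
# call, so inputs and labels are padded together per batch.
```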
transformers
12,162
closed
Add video links to the documentation
# What does this PR do? This PR leverages some videos of the course and adds them to our documentation.
06-14-2021 20:50:04
06-14-2021 20:50:04
transformers
12,161
closed
consistent nn. and nn.functional: part 5 docs
Continuing https://github.com/huggingface/transformers/pull/12124 this PR takes care of `docs` - had to do a bit of extra filtering to not break the images: ``` # deal with torch.nn perl -pi -e 's|^(\s*)import torch\n|$1from torch import nn\n$1import torch\n|' `grep -Ilr torch.nn docs` find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \; find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \; # deal with F find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \; find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \; find docs -type f -regextype posix-egrep -regex '.*(md|rst)$' -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \; make fixup ``` and one manual tweak. docs are hard to automate rewrites and there is no validation, so had to carefully check each diff. @sgugger
06-14-2021 19:27:41
06-14-2021 19:27:41
transformers
12,160
closed
[Jax Slow Circle CI] Don't close PR
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-14-2021 19:20:53
06-14-2021 19:20:53
Thanks for the tip @stas00 ! I'll push to the PR every 24h to have a background circle ci test - eventually we should think about a better solution here<|||||>You can crontab an empty git push to trigger CI, e.g.: ``` cd transformers-flax-cron git commit --allow-empty -m "Trigger CI" git push ```<|||||>I also need to pull from master regularly - otherwise the tests are always run on the same code no? <|||||>Heh, yes of course! That was a blunder suggestion on my part since after rebasing you will always have something to push and if there is nothing to push then there is nothing to test in any case.
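Putting the two comments above together, a rough sketch of a scheduled trigger (the repository path, schedule and remote names are placeholders, not a tested setup):

```bash
# Hypothetical cron entry: rebase on upstream, then push an empty commit once a day to re-trigger CI
0 3 * * * cd /path/to/transformers-flax-cron && git pull --rebase upstream master && git commit --allow-empty -m "Trigger CI" && git push origin HEAD
```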
transformers
12,159
closed
Can't run QA fine-tune for bert/albert in distributed way
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: HEAD detached at v4.6.1 - Platform: Docker, AWS - Python version: Python 3.8.5 - PyTorch version (GPU?): 1.8.0 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Just run: ``` python -m torch.distributed.launch --nproc_per_node=8 run_qa.py \ --model_name_or_path bert-large-uncased-whole-word-masking \ --dataset_name squad \ --do_train \ --do_eval \ --learning_rate 3e-5 \ --num_train_epochs 1 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir ./new_out \ --max_steps 100 \ --per_device_eval_batch_size=3 \ --per_device_train_batch_size=3 \ --cache_dir . 
``` Got error as below: ``` [INFO|trainer.py:2115] 2021-06-14 19:01:08,718 >> ***** Running Evaluation ***** [INFO|trainer.py:2117] 2021-06-14 19:01:08,718 >> Num examples = 10784 [INFO|trainer.py:2120] 2021-06-14 19:01:08,718 >> Batch size = 3 Traceback (most recent call last): File "run_qa.py", line 622, in <module> Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): Traceback (most recent call last): File "run_qa.py", line 622, in <module> File "run_qa.py", line 622, in <module> File "run_qa.py", line 622, in <module> File "run_qa.py", line 622, in <module> Traceback (most recent call last): File "run_qa.py", line 622, in <module> main() File "run_qa.py", line 581, in main main()main()main() File "run_qa.py", line 581, in main File "run_qa.py", line 581, in main main() File "run_qa.py", line 581, in main File "run_qa.py", line 581, in main Traceback (most recent call last): File "run_qa.py", line 622, in <module> metrics = trainer.evaluate()Traceback (most recent call last): File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate File "run_qa.py", line 622, in <module> metrics = trainer.evaluate()metrics = trainer.evaluate() output = eval_loop( main() File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop File "run_qa.py", line 581, in main metrics = trainer.evaluate() File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate metrics = trainer.evaluate() output = eval_loop( File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop output = eval_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop main() output = eval_loop( File "run_qa.py", line 581, in main File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop output = eval_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop metrics = trainer.evaluate() File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate output = eval_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop main() File "run_qa.py", line 581, in main metrics = trainer.evaluate() File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate output = eval_loop( File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop logits = self._nested_gather(logits) metrics = trainer.evaluate() File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 44, in evaluate logits = self._nested_gather(logits)output = eval_loop( logits = self._nested_gather(logits) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop File 
"/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather logits = self._nested_gather(logits) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather logits = self._nested_gather(logits) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather logits = self._nested_gather(logits) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather logits = self._nested_gather(logits) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather tensors = distributed_concat(tensors) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) tensors = distributed_concat(tensors) tensors = distributed_concat(tensors) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat logits = self._nested_gather(logits) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather tensors = distributed_concat(tensors) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat dist.all_gather(output_tensors, tensor)tensors = distributed_concat(tensors) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor)tensors = distributed_concat(tensors) return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat tensors = distributed_concat(tensors)return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File 
"/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> dist.all_gather(output_tensors, tensor) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather dist.all_gather(output_tensors, tensor) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> dist.all_gather(output_tensors, tensor) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather dist.all_gather(output_tensors, tensor)return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat work = default_pg.allgather([tensor_list], [tensor])return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) RuntimeError File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat : Tensors must be non-overlapping and dense dist.all_gather(output_tensors, tensor) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather dist.all_gather(output_tensors, tensor) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather tensors = distributed_concat(tensors) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in distributed_concat work = default_pg.allgather([tensor_list], [tensor]) work = default_pg.allgather([tensor_list], [tensor])RuntimeError : work = default_pg.allgather([tensor_list], [tensor])Tensors must be non-overlapping and dense RuntimeError : Tensors must be non-overlapping and dense RuntimeError: Tensors must be non-overlapping and densereturn type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 154, in <genexpr> work = default_pg.allgather([tensor_list], [tensor]) return type(tensor)(distributed_concat(t, num_total_examples) for t in tensor) RuntimeError File "/opt/conda/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 156, in distributed_concat : Tensors must be non-overlapping and dense work = default_pg.allgather([tensor_list], [tensor]) RuntimeError: Tensors must be non-overlapping and dense dist.all_gather(output_tensors, tensor) work = default_pg.allgather([tensor_list], [tensor]) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 1862, in all_gather RuntimeError: Tensors must be non-overlapping and dense work = default_pg.allgather([tensor_list], [tensor]) RuntimeError: Tensors must be non-overlapping and dense Killing subprocess 22340 Killing subprocess 22341 Killing 
subprocess 22342 Killing subprocess 22343 Killing subprocess 22344 Killing subprocess 22345 Killing subprocess 22346 Killing subprocess 22347 Traceback (most recent call last): File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 340, in <module> main() File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 326, in main sigkill_handler(signal.SIGTERM, None) # not coming back File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd) subprocess.CalledProcessError: Command '['/opt/conda/bin/python', '-u', 'run_qa.py', '--local_rank=7', '--model_name_or_path', 'bert-large-uncased-whole-word-masking', '--dataset_name', 'squad', '--do_train', '--do_eval', '--learning_rate', '3e-5', '--num_train_epochs', '1', '--max_seq_length', '384', '--doc_stride', '128', '--output_dir', './new_out', '--max_steps', '100', '--per_device_eval_batch_size=3', '--per_device_train_batch_size=3', '--cache_dir', '.']' returned non-zero exit status 1. ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
06-14-2021 19:15:39
06-14-2021 19:15:39
@sgugger @philschmid <|||||>Could you confirm #11872 fixes it?<|||||>> Could you confirm #11872 fixes it? yeah, confirmed, closing issue.
transformers
12,158
closed
Pretraining for TFWav2Vec2
# 🚀 Feature request TFWav2Vec2 needs a pretraining implementation like the PyTorch [version](https://huggingface.co/transformers/master/model_doc/wav2vec2.html#wav2vec2forpretraining) ## Motivation Users of the TensorFlow model will most likely want to be able to do pretraining just like with PyTorch. ## Your contribution I recently added the [TensorFlow model](https://github.com/huggingface/transformers/pull/11617), so I would like to do this one as well.
06-14-2021 19:10:54
06-14-2021 19:10:54
transformers
12,157
closed
Add course banner
# What does this PR do? This PR adds a course banner in the main README, looking like this: ![image](https://user-images.githubusercontent.com/35901082/122060415-90644180-cdbb-11eb-8db3-36823533e6fb.png)
06-14-2021 18:40:49
06-14-2021 18:40:49
Merging so the image is online and I can then adjust the width if necessary.
transformers
12,156
closed
[style] consistent nn. and nn.functional: part 4 `examples`
This concludes the work on https://github.com/huggingface/transformers/issues/11600 with normalizing `examples` using fully automated: ``` # deal with torch.nn perl -pi -e 's|^(\s*)import torch\n|$1from torch import nn\n$1import torch\n|' `grep -Ilr torch.nn examples` find examples -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \; find examples -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \; # deal with F find examples -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \; find examples -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \; find examples -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \; perl -pi -e 's|import torch||' examples/research_projects/pplm/pplm_classification_head.py # leave legacy unmodified as we can't test is easily git checkout examples/legacy make fixup ``` @sgugger
06-14-2021 18:17:22
06-14-2021 18:17:22
> You have changes in two png files here too, which is weird. Whoah! Found some secret steganography embeddings! :) Probably got triggered by `s|(?<!\w)F\.|nn.functional.|g` Thank you for noticing, @sgugger - will fix it up! > Not sure if we really need to apply this to the research projects which are not actively maintained. At least `wav2vec2` is. Do you want me to reset all but `wav2vec2` under research? <|||||>No, it's easier to go for all of them in that case.
transformers
12,155
closed
[style] consistent nn. and nn.functional: part 3 `tests`
Continuing https://github.com/huggingface/transformers/pull/12124 this PR takes care of `tests` - a slight variation of the automated code: ``` # deal with torch.nn perl -pi -e 's|^(\s*)import torch\n|$1from torch import nn\n$1import torch\n|' `grep -Ilr torch.nn tests` find tests -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \; find tests -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \; # deal with F find tests -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \; find tests -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \; find tests -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \; make fixup ``` One concern is for slow tests that would be missed by CI, so let's be on the lookout for the nightly slow run after this PR is merged. @sgugger
06-14-2021 18:09:10
06-14-2021 18:09:10
> It looks like you have changes in two test fixtures, is that intended? Oh, I was studying `git diff` and missed the binary change - thank you for noticing it, @sgugger - fixed.
transformers
12,154
closed
[Flax] Fix flax pt equivalence tests
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Corrects bug introduced in https://github.com/huggingface/transformers/pull/11537/files?file-filters%5B%5D=.py#r651157917 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-14-2021 18:08:07
06-14-2021 18:08:07
transformers
12,153
closed
[style] consistent nn. and nn.functional: part2: templates
Continuing https://github.com/huggingface/transformers/pull/12124 this PR takes care of `templates` - had to do some manual tweaking over automated rewrite here since `make fixup` can't process templates. ``` # deal with torch.nn perl -pi -e 's|^import torch\n|from torch import nn\nimport torch\n|' `grep -Ilr torch.nn templates` find templates -type f -exec perl -X -pi -e 's{(?<!(from |import |[`#/]))torch\.nn\.}{nn.}g' {} \; find templates -type f -exec perl -pi -e 's|import torch\.nn as nn|from torch import nn|g' {} \; # deal with F find templates -type f -exec perl -pi -e 's|from torch.nn import functional as F|from torch import nn|g' {} \; find templates -type f -exec perl -pi -e 's|import torch.nn.functional as F|from torch import nn|g' {} \; find templates -type f -exec perl -pi -e 's|(?<!\w)F\.|nn.functional.|g' {} \; make fixup ``` and some manual corrections to remove duplicated imports. @sgugger
06-14-2021 17:39:47
06-14-2021 17:39:47
transformers
12,152
closed
🤗 The Hugging Face Course is out!
The first part of the Hugging Face Course is finally out! Come learn how the :hugs: Ecosystem works :partying_face: : Transformers, Tokenizers, Datasets, Accelerate, the Model Hub! Share with your friends who want to learn NLP, it's free! Come join us at https://hf.co/course Students following this course will understand how to approach (almost) any NLP problem and benefit from all the past experiences of the community. Come register for the live sessions, ask any questions, and organize study groups on the Hugging Face forums: https://discuss.huggingface.co/c/course/20 ![image](https://user-images.githubusercontent.com/30755778/121926625-22ad0c80-cd0c-11eb-9fd9-930831a0149a.png)
06-14-2021 16:29:27
06-14-2021 16:29:27
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,151
closed
do_normalize set to True by default for WAV2VEC tokenizer
## Environment info - `transformers` version: 4.6.1 - Platform: macOS-11.2.3-x86_64-i386-64bit - Python version: 3.8.2 - PyTorch version (GPU?): 1.8.1 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik @sgugger ## Information Model I am using (Bert, XLNet ...): Wav2Vec The problem arises when using: * [*] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: wav_input_16khz, samplerate = sf.read(AUDIOFILE) tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h") tokenizer_2 = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h",do_normalize=False) features = tokenizer(wav_input_16khz, return_tensors="pt").input_values features_2 = tokenizer_2(wav_input_16khz, return_tensors="pt").input_values features == features_2 Out[1]: tensor([[False, False, False, ..., False, False, False]]) ## Expected behavior As written in the [documentation](https://huggingface.co/transformers/_modules/transformers/models/wav2vec2/feature_extraction_wav2vec2.html#Wav2Vec2FeatureExtractor.__call__) _"do_normalize (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly improve the performance for some models, *e.g.*, `wav2vec2-lv60 <https://huggingface.co/models?search=lv60>`__."_ should be set to False. However, the option seems to be set to True by default during the initialization.
06-14-2021 14:28:53
06-14-2021 14:28:53
cc @patrickvonplaten <|||||>Hey @Lhemamou, the parameter `do_normalize` is overwritten by the model's config: https://huggingface.co/facebook/wav2vec2-base-960h/blob/main/feature_extractor_config.json<|||||>Thanks @patrickvonplaten, it solved the issue ! :) . Nonetheless, in the code from [documentation](https://huggingface.co/transformers/_modules/transformers/models/wav2vec2/feature_extraction_wav2vec2.html#Wav2Vec2FeatureExtractor.__call__), the initialization part of the class Wav2Vec2FeatureExtractor seems to initialize do_normalize to True by default, contrary to what is written in the documentation for the same class function : > def __init__( > self, > feature_size=1, > sampling_rate=16000, > padding_value=0.0, > return_attention_mask=False, > do_normalize=True, > **kwargs > ) > and > > do_normalize (:obj:`bool`, `optional`, defaults to :obj:`False`): > Whether or not to zero-mean unit-variance normalize the input. Normalizing can help to significantly > improve the performance for some models, *e.g.*, `wav2vec2-lv60<|||||>Oh yeah you're right @Lhemamou ! Would you maybe like to open a PR to fix the documentation ? It should state that it defaults to `True` in this case<|||||>sure I will do it when I have free time :)
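For readers landing here later, a minimal sketch of checking the value that is actually used, since `from_pretrained` reads it from the config stored with the checkpoint rather than from the class default:

```python
from transformers import Wav2Vec2FeatureExtractor

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
# Reflects the checkpoint's feature extractor config, not the documented class default
print(extractor.do_normalize)
```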
transformers
12,150
closed
Flax T5
# What does this PR do? This PR will add T5 in Flax/Jax. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @patrickvonplaten
06-14-2021 13:36:01
06-14-2021 13:36:01
Exceptionally merging already given the time constraint of the Flax/JAX community week announcement. @patil-suraj @sgugger, I would be very happy if you could nevertheless take a look after merge so that I can correct suggestions in a follow-up PR before the next transformers release.<|||||>Is a jax/flax byT5 planned? Interested in both byT5 and jax... torn.<|||||>You can already use ByT5 in jax/flax! Check the models [here](https://huggingface.co/models?filter=jax&search=byt5)<|||||>oh, my 🤗<|||||>I'm having trouble finding the Jax model training and architecture definition. Is this just loading a byT5 model into a regular T5 inference scaffolding? My aim is to experiment with the training / masking code.<|||||>Here is a test for FlaxByT5 that could help a bit: https://github.com/huggingface/transformers/blob/332a2458611751e7d9c4d7a21bc454299d50e160/tests/test_modeling_flax_t5.py#L432 Also we have `run_summarization` script in Flax that can be easily tweaked for any seq2seq task. And soon (Monday) we'll have FlaxT5 pretraining as well :-)
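For anyone searching later, a minimal sketch of loading one of those checkpoints with the new Flax classes (the model id is an example; the checkpoint is assumed to ship Flax weights):

```python
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/byt5-small")
model = FlaxT5ForConditionalGeneration.from_pretrained("google/byt5-small")

inputs = tokenizer("hello world", return_tensors="np")
# Simple forward pass; for real seq2seq decoding you would feed proper decoder inputs
outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=inputs["input_ids"])
print(outputs.logits.shape)
```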
transformers
12,149
open
Feature request for encoding more than one pair of texts
# 🚀 Feature request Currently, the tokenizer may only take inputs like [['text_0', 'text_1']]; it would be beneficial to expand this to [['text_0', 'text_1', ..., 'text_n']]. ## Motivation This would open a convenient way to deal with a new set of processing tasks. ## Your contribution Don't have any.
06-14-2021 11:39:25
06-14-2021 11:39:25
transformers
12,148
closed
[Flax] fix error message
# What does this PR do? Fix error message.
06-14-2021 10:31:52
06-14-2021 10:31:52
transformers
12,147
closed
Improve detr
# What does this PR do? Fixes #12105, improves some more docs and removes some unused variables in `modeling_detr.py`.
06-14-2021 10:05:04
06-14-2021 10:05:04
transformers
12,146
closed
[Flax] Add links to google colabs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds links to Flax colabs ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-14-2021 09:59:29
06-14-2021 09:59:29
transformers
12,145
closed
Have dummy processors have a `from_pretrained` method
Fix https://github.com/huggingface/transformers/issues/12100 Before: ```py >>> from transformers import Speech2TextProcessor >>> processor =Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr") ``` ```out Traceback (most recent call last): File "<input>", line 1, in <module> AttributeError: type object 'Speech2TextProcessor' has no attribute 'from_pretrained' ``` After: ```py >>> from transformers import Speech2TextProcessor >>> processor =Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr") ``` ```out Traceback (most recent call last): File "<input>", line 1, in <module> File "/home/lysandre/Workspaces/Python/transformers/src/transformers/utils/dummy_sentencepiece_and_speech_objects.py", line 11, in from_pretrained requires_backends(cls, ["sentencepiece", "speech"]) File "/home/lysandre/Workspaces/Python/transformers/src/transformers/file_utils.py", line 606, in requires_backends raise ImportError("".join([BACKENDS_MAPPING[backend][1].format(name) for backend in backends])) ImportError: Speech2TextProcessor requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment. Speech2TextProcessor requires the torchaudio library but it was not found in your environment. You can install it with pip: `pip install torchaudio` ```
06-14-2021 09:10:35
06-14-2021 09:10:35
transformers
12,144
open
How to train the new wav2vec unsupervised model using hugging face ?
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> How can we train the new wav2vec unsupervised model using Hugging Face? The paper link is: https://ai.facebook.com/research/publications/unsupervised-speech-recognition ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
06-14-2021 04:53:39
06-14-2021 04:53:39
The pretraining of wav2vec2-u is a pretty complex training pipeline. It'll probably still take a bit until we have this merged <|||||>@patrickvonplaten @patil-suraj any updates on this yet ?<|||||>I won't have time in the near future to work on this - feel free to give it a try though. It's a very cool paper :-)<|||||>Hey HF team, I see you have [an example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-pretraining) up of how to perform this pre-training. First of all - thank you very much for this work! I'm trying to use this code to train a wav2vec-style model for music. As indicated was likely in the above link, I was running into some training stability issues. One thing that particularly helped me with this was reducing the codebook size. The wav2vec paper does an ablation study in the number of groups and vectors (`G` and `V`) and found that small codebooks work very well. I have been experimenting with G=8 and V=8 and it seems more likely to produce a stable training run for my dataset. Might be worth looking into for librispeech if you find the time (or if someone else sees this and is struggling). I also had one other question: What was the reasoning behind this initialization choice? https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1055 The mean and variance of the initialized Linear weights after this initialization is very close to the same statistics for the default pytorch initialization (which uses kaiming_uniform init). The difference with your initialization is that it doesn't automatically scale with fan_in and it draws from a normal distribution. I didn't see anything in the paper about either of these details and was just wondering why this was done. Thanks again for this! It's great work!<|||||>Hey @neonbjb, I think the init here was just a copy-paste from what we had for other models. I think fairseq is actually using the default init values for the attention layers: https://github.com/facebookresearch/fairseq/blob/b5a039c292facba9c73f59ff34621ec131d82341/fairseq/modules/multihead_attention.py#L64 . So maybe we should use this as well here. Does `kaiming_uniform_init` work better for you? Definitely open for a PR here to change it<|||||>I don't think the choice between uniform or normal distributions in the init made an appreciable difference, I was just trying to understand the choice. Reducing the size of V (and increasing G) made the biggest difference in stability.<|||||>BTW, if I understood correctly, the Data2Vec guys stated that Data2Vec performs better than Wav2Vec2 mainly because it makes no assumption about the number of sound units a spoken language has (= the number of codebook vectors). This codebook vector is a somewhat arbitrary choice and can vary strongly depending on the language. A big gain from Data2Vec is that there is no such hyper-parameter as a codebook which makes the model generalize better. @alexeib please correct me if I'm wrong here :sweat_smile:
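If it helps others experimenting along the same lines, a hedged sketch of shrinking the quantizer codebook through the config (parameter names as exposed by `Wav2Vec2Config` in recent versions; the values follow the G=8, V=8 setting mentioned above):

```python
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining

config = Wav2Vec2Config(
    num_codevector_groups=8,       # G: number of codebook groups
    num_codevectors_per_group=8,   # V: entries per group
)
model = Wav2Vec2ForPreTraining(config)
```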
transformers
12,143
closed
Output Includes Input
Whenever I am generating text, the input is included in the output. When the input is close to the maximum length, the model barely produces any useful output. # Information When using transformers.pipeline or models loaded with `from_pretrained`, the model only generates the input when the input is long. For example, `generator = transformers.pipeline('text-generation', model='gpt2')` `prompt = "really long text that is 1023 tokens ..."` `output = generator(prompt, max_length=1024, do_sample=True, temperature=0.9)` output in this case would be equal to the input prompt. ## To reproduce [Here is a Colab notebook](https://colab.research.google.com/drive/1JzwSmFGrWY1bU6f-t-mgMug88NsRVllp#scrollTo=OgOhZxQJNseL) with simple examples of the problem. I am looking to generate output from inputs of ~1300 tokens and am running into this issue consistently. Is there a way around this?
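For context, a small sketch of one way to get only the newly generated continuation back from the pipeline, assuming a transformers version whose text-generation pipeline supports `return_full_text`:

```python
import transformers

generator = transformers.pipeline("text-generation", model="gpt2")
prompt = "some long prompt ..."
# return_full_text=False asks the pipeline to strip the prompt from the returned text
output = generator(prompt, max_length=1024, do_sample=True, temperature=0.9, return_full_text=False)
```

Note that `max_length` counts the prompt tokens as well, so a prompt close to the limit leaves almost no room for new tokens either way.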
06-14-2021 01:57:42
06-14-2021 01:57:42
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>Ok, I posted my question [here](https://discuss.huggingface.co/t/output-includes-input/6831). Thank you!
transformers
12,142
closed
CLIP tokenizer inconsistent with OpenAI release
## Environment info - `transformers` version: 4.6.1 - Platform: Linux-5.4.0-52-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No @patil-suraj ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] my own modified scripts: (give details below) * [ ] the official example scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ``` >>> import clip >>> import transformers >>> clip.tokenize('hello world') tensor([[49406, 3306, 1002, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]) >>> tokenizer = transformers.CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32') >>> tokenizer('hello world') {'input_ids': [3306, 220, 1002], 'attention_mask': [1, 1, 1]} ``` The HF CLIPTokenizer seems to add an extra token while dropping the <bos> and <eos> tokens. Am I missing something here? Thanks!
06-13-2021 18:40:54
06-13-2021 18:40:54
The non-fast tokenizer seems to be fine: ``` >>> tokenizer = transformers.CLIPTokenizer.from_pretrained('openai/clip-vit-base-patch32') >>> tokenizer('hello world') {'input_ids': [49406, 3306, 1002, 49407], 'attention_mask': [1, 1, 1, 1]} ```<|||||>To add to this, HF's fast tokenizer seems to add an extra token for every whitespace between words: ``` >>> tokenizer("a photo of a cat")['input_ids'] [320, 220, 1125, 220, 539, 220, 320, 220, 2368] ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, is there any update/eta on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,141
closed
RuntimeError: Could not infer dtype of numpy.int64 on Squad T5
Hello, I tried to run the code for T5 on the SQuAD dataset from https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb I installed the required libraries as: ``` !pip install transformers==2.9.1 !pip install -U nlp !pip install sentencepiece ``` I fixed the XLA error as described in https://stackoverflow.com/questions/67257008/oserror-libmkl-intel-lp64-so-1-cannot-open-shared-object-file-no-such-file-or However, when the training starts, it gives the following error: Exception in thread Thread-17:
06-13-2021 14:48:41
06-13-2021 14:48:41
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I think this issue is still occurring.<|||||>Hi @wanglec, that's an old notebook and has not been updated since, so I don't recommend it anymore. There's a new example in transformers for fine-tuning T5 for QA, [here](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering#fine-tuning-t5-on-squad20). It also uses `Trainer`, so it supports training on TPUs. [Here's](https://github.com/huggingface/transformers/tree/master/examples/pytorch#running-on-tpus) a short guide on how to run these scripts on TPUs.
transformers
12,140
closed
[FLAX] port GPTNeo to Flax
Port the existing GPTNeo Model to FLAX
06-13-2021 14:05:16
06-13-2021 14:05:16
GPTNeo is available in Flax.
transformers
12,139
closed
Add output in a dictionary for TF `generate` method
This PR adds two components to the TF `generate` method: 1. It enables the model to output `attentions`, `hidden_states` and `scores` 2. It enables `return_dict_in_generate` This PR thus narrows the gap between the PyTorch and TF `generate` method implementations. This PR also adds two tests for the dictionary output. Besides, this PR fixes the handling of 2-tuples of attentions for the XLNet model when `target_mapping is not None`. **Reviewers:** @Rocketknight1 @patrickvonplaten @sgugger (anyone else in the community) <hr> Edit: The above-mentioned features are not implemented for the `generate` method of the `TFRagSequenceForGeneration` model.
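To illustrate, a hedged usage sketch; the argument names mirror the existing PyTorch `generate` API that this PR brings to TF, and the model id is just an example:

```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = TFGPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("Hello, my dog is", return_tensors="tf").input_ids
outputs = model.generate(
    input_ids,
    max_length=20,
    return_dict_in_generate=True,
    output_scores=True,
    output_attentions=True,
    output_hidden_states=True,
)
# Generated ids live under .sequences instead of being returned directly
print(outputs.sequences.shape)
```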
06-13-2021 11:43:44
06-13-2021 11:43:44
Great job @stancld !
transformers
12,138
closed
Using checkpoints in gpt neo xl
Hi, I downloaded the GPT-Neo XL pretrained model from theeye.eye on my PC. It downloaded various checkpoints. How do I use them? ... Because in order to load and use the model I'd need encoder.json, pytorch.bin, etc.
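Not an answer to the original checkpoint-format question, but for reference, a sketch of the more common route of loading the already-converted weights from the Hub (the model id is an example; pick the size that matches your checkpoint):

```python
from transformers import GPT2Tokenizer, GPTNeoForCausalLM

tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
```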
06-13-2021 10:21:32
06-13-2021 10:21:32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,137
closed
wav2vec2 not converging when finetuning
## Environment info - `transformers` version: 4.4.0 - Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten ## Information Model I am using: wav2vec2 The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce I have a dataset of single English word, 1 second long audio files sampled at 16kHz. I wanted to use wav2vec2 for speech recognition instead of just doing audio classification because I wanted the model to be able to generalize to longer audio samples with more words. I followed [the official wav2vec2 guide](https://huggingface.co/blog/fine-tune-wav2vec2-english) almost exactly (the only difference was the dataset used, but I made sure the dataset format and vocab list format was identical as well) but the model does not seem to be converging. The loss would decrease to approx. 3 and stay around there. Checking the predictions made during evaluation, I realized that the model just kept outputting the padding token regardless of the input. Other issues with similar behaviour are #10884 and #10983. I have tried suggestions there such as increasing the learning rate with no success. ## Expected behavior The model should show signs of convergence, such as slowly starting to output sensible prediction strings. Any help is greatly appreciated!
06-13-2021 10:16:57
06-13-2021 10:16:57
Hey @meeps123, We try to keep the github issues for code related bugs. For such questions, could you please the [forum](https://discuss.huggingface.co/) instead? :-) Feel free to tag me there! Also could you attach a google colab so that I can take a look at your training script? It is very difficult to draw any conclusions just from reading the text. Cheers, Patrick<|||||>Hi @patrickvonplaten, Sure thing! I have opened a topic [here](https://discuss.huggingface.co/t/wav2vec2-not-converging-when-finetuning/6773). The Colab notebook is linked there. Thank you for the assistance!
transformers
12,136
closed
Fix t5 error message
# What does this PR do? Change `inputs` to `input_ids` in the error message. ```diff - f"You cannot specify both {err_msg_prefix}inputs and {err_msg_prefix}inputs_embeds at the same time" + f"You cannot specify both {err_msg_prefix}input_ids and {err_msg_prefix}inputs_embeds at the same time" ``` - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
06-13-2021 09:00:41
06-13-2021 09:00:41
transformers
12,135
closed
[lm examples] Replicate --config_overrides addition to other LM examples
# What does this PR do? This PR replays the new feature `--config_overrides` for other scripts under `examples/pytorch/language-modeling/` which was added by https://github.com/huggingface/transformers/pull/11798/ <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> Fixes: https://github.com/huggingface/transformers/issues/11875 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00
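A hypothetical invocation for illustration (flag values are examples; as the discussion below notes, `--config_overrides` is meant for training from scratch and cannot be combined with `--model_name_or_path`):

```bash
python run_mlm.py \
  --model_type bert \
  --tokenizer_name bert-base-uncased \
  --config_overrides "hidden_size=256,num_hidden_layers=4,num_attention_heads=4" \
  --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
  --do_train --output_dir /tmp/test-mlm
```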
06-13-2021 04:36:41
06-13-2021 04:36:41
I don't think this change applies to `run_clm_no_trainer.py` and `run_mlm_no_trainer.py`, since `model_name_or_path` is a required argument there and we can't have both `model_name_or_path` and `config_overrides` at the same time.
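For readers unfamiliar with the flag being replicated, an illustrative invocation of `--config_overrides` as it would look once this PR is in; the dataset and the override values are arbitrary:

```bash
# Train a model from scratch while overriding a few config defaults from the CLI.
python examples/pytorch/language-modeling/run_mlm.py \
    --model_type bert \
    --tokenizer_name bert-base-uncased \
    --config_overrides "hidden_size=512,num_hidden_layers=8" \
    --dataset_name wikitext \
    --dataset_config_name wikitext-2-raw-v1 \
    --do_train \
    --output_dir /tmp/test-mlm
```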
transformers
12,134
closed
Ray Tune Integration Updates
# What does this PR do? - Automatically disables memory tracker if enabled since the memory tracker is not serializable - Fixes the Ray Tune integration test - Adds a new test for Ray Client API - Adds integration tests back to the scheduled Github Actions pipeline Closes #11249, https://github.com/huggingface/transformers/issues/12177 @LysandreJik @richardliaw
06-13-2021 03:13:27
06-13-2021 03:13:27
thanks @sgugger for the fast review! anything blocking to get this merged :) ?<|||||>This is good for me. You had standing questions for Lysandre so not sure it was ready to be merged, but I will do so if you tell me everything is okay on your side :-)<|||||>@sgugger yep this is ready to merge!<|||||>Thanks again!
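For readers wondering where these integration points are exercised, a rough sketch of the Trainer + Ray Tune flow; the datasets, search space and trial count are placeholders:

```python
from ray import tune
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def model_init():
    # A fresh model for every trial so Ray Tune can evaluate configurations independently.
    return AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

trainer = Trainer(
    args=TrainingArguments(output_dir="ray_results", evaluation_strategy="epoch"),
    model_init=model_init,
    train_dataset=train_dataset,  # assumed to be tokenized datasets prepared elsewhere
    eval_dataset=eval_dataset,
)

# backend="ray" routes the search through Ray Tune; the memory tracker mentioned in
# this PR is what previously broke serialization of the Trainer for remote trials.
best_run = trainer.hyperparameter_search(
    backend="ray",
    n_trials=4,
    hp_space=lambda trial: {"learning_rate": tune.loguniform(1e-5, 1e-3)},
)
print(best_run.hyperparameters)
```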
transformers
12,133
closed
Adding fastseq support to more recent version of HF transformers
# 🚀 Feature request Would it be possible to integrate [fastseq](https://github.com/microsoft/fastseq) with later versions of the Hugging Face Transformers models? ## Motivation fastseq is a library that speeds up text generation with Transformer models. They claim pretty large speedups (3-8x) for various Transformer architectures (GPT2, Bart, etc.). The only caveat is that they only support an older version of HF transformers (3.0.2). Has anyone already looked into making it compatible with the latest API of HuggingFace models? ## Your contribution I am willing to discuss and can contribute if no one has planned to do so already.
06-12-2021 20:53:10
06-12-2021 20:53:10
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,132
closed
Use text_column_name variable instead of "text"
`text_column_name` was already defined above where I made the changes and it was also used below where I made changes. This is a very minor change. If a dataset does not use "text" as the column name, then the `tokenize_function` will now use whatever column is assigned to `text_column_name`. `text_column_name` is just the first column name if "text" is not a column name. It makes the function a little more robust, though I would assume that 90% + of datasets use "text" anyway. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger, @patil-suraj
06-12-2021 17:23:09
06-12-2021 17:23:09
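A rough sketch of the pattern the PR applies, in run_mlm.py-style usage; `tokenizer` and `raw_datasets` are assumed to have been created earlier in the script:

```python
# Fall back to the first column when the dataset has no "text" column.
column_names = raw_datasets["train"].column_names
text_column_name = "text" if "text" in column_names else column_names[0]

def tokenize_function(examples):
    # Uses the resolved column name instead of hard-coding "text".
    return tokenizer(examples[text_column_name], return_special_tokens_mask=True)

tokenized_datasets = raw_datasets.map(
    tokenize_function,
    batched=True,
    remove_columns=column_names,
)
```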
transformers
12,131
closed
[Flax] Add Beam Search
# What does this PR do? This adds beam search generation for Flax. Aggressive integration tests for Bart-large-cnn are added. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
06-12-2021 14:39:26
06-12-2021 14:39:26
Circle CI error seems unrelated: ```OSError: /home/circleci/.local/lib/python3.7/site-packages/torch_scatter/_scatter_cpu.so: undefined symbol: _ZNK2at6Tensor6deviceEv```<|||||>Rebase from https://github.com/huggingface/transformers/pull/12181
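To illustrate what the feature enables on the user side — the checkpoint and generation settings below are just an example, not taken from the PR's tests:

```python
from transformers import BartTokenizer, FlaxBartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer("The tower is 324 metres tall ...", max_length=512, truncation=True, return_tensors="np")

# num_beams > 1 selects the beam-search path added by this PR.
summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=60).sequences
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```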
transformers
12,130
closed
Fix for making student ProphetNet for Seq2Seq Distillation
# What does this PR do? Enables making a student model of ProphetNet for seq2seq distillation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger @LysandreJik
06-12-2021 08:46:48
06-12-2021 08:46:48
We're not actively maintaining those examples, so we'd need approval from the original author (@sshleifer) before merging.<|||||>LGTM!<|||||>Thank you both!
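Not the research script's actual code, but a rough illustration of what "making a student" means here: initializing a smaller ProphetNet and copying alternating decoder layers from the teacher. The checkpoint name, layer counts and attribute paths are assumptions made for the sketch:

```python
from transformers import ProphetNetConfig, ProphetNetForConditionalGeneration

teacher = ProphetNetForConditionalGeneration.from_pretrained("microsoft/prophetnet-large-uncased")

# Hypothetical student with half the teacher's decoder depth.
student_config = ProphetNetConfig.from_pretrained(
    "microsoft/prophetnet-large-uncased", num_decoder_layers=6
)
student = ProphetNetForConditionalGeneration(student_config)

# Copy every other decoder layer from the teacher into the student.
teacher_layers = teacher.prophetnet.decoder.layers
student_layers = student.prophetnet.decoder.layers
for student_idx, teacher_idx in enumerate(range(0, len(teacher_layers), 2)):
    student_layers[student_idx].load_state_dict(teacher_layers[teacher_idx].state_dict())
```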
transformers
12,129
closed
TypeError when trying to load pretrained ALBERT model in BertTokenizer
## Environment info - `transformers` version: 4.6.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Model I am using (Bert, XLNet ...): BertTokenizer The problem arises when using: * [ *] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [* ] my own task or dataset: (give details below) Trying to tokenize a dataset of tweets ## To reproduce Steps to reproduce the behavior: from transformers import BertTokenizer tokenizerr = BertTokenizer.from_pretrained("albert-base-v2") ->> The error message is -> TypeError Traceback (most recent call last) <ipython-input-11-f632f8d4de7e> in <module>() 1 from transformers import BertTokenizer ----> 2 tokenizerr = BertTokenizer.from_pretrained("albert-base-v2") 3 frames /usr/lib/python3.7/genericpath.py in isfile(path) 28 """Test whether a path is a regular file""" 29 try: ---> 30 st = os.stat(path) 31 except OSError: 32 return False TypeError: stat: path should be string, bytes, os.PathLike or integer, not NoneType ## Expected behavior The code should load the pre-trained BertTokenizer model for the Albert-base-v2 model. The same thing happened with Albert-base-v1
06-12-2021 08:10:33
06-12-2021 08:10:33
Hello! Why don't you try to load the ALBERT tokenizer in an ALBERT tokenizer? ```py from transformers import AlbertTokenizer tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2") ```<|||||>Hey! I was not aware of its existence to be honest, but shouldn't loading it in the BertTokenizer work?<|||||>ALBERT and BERT are different models, and the ALBERT tokenizer isn't related to BERT's tokenizer at all. They're not based on the same algorithms: BERT's tokenizer uses WordPiece, while ALBERT's uses Unigram. If you're looking for a tokenizer to encompass all other tokenizers, take a look at the [`Auto*` classes](https://huggingface.co/transformers/model_doc/auto.html): ```py from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("albert-base-v2") ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,128
closed
got multiple values for argument 'input_shape'
## Environment info - `transformers` version: 4.6.1 - Platform: Linux-4.14.232-176.381.amzn2.x86_64-x86_64-with-glibc2.2.5 - Python version: 3.7.9 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Using GPU in script?: True - Using distributed or parallel set-up in script?: No ### Who can help @TevenLeScao, @Patrickvonplaten ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: 1. bash transformers/examples/research_projects/performer/sanity_script.sh [05:45:53] - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False [05:45:53] - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=experiments, overwrite_output_dir=True, do_train=True, do_eval=False, do_predict=False, evaluation_strategy=IntervalStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=0.0005, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=100, logging_dir=runs/Jun12_05-45-53_ip-10-228-58-93.int.klarna.net, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=[], dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=experiments, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, length_column_name=length, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, use_legacy_prediction_loop=False, push_to_hub=False, resume_from_checkpoint=None, _n_gpu=1, mp_parameters=) [05:45:53] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443 [05:45:53] - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0 [05:45:53] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443 [05:45:54] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0 [05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443 [05:45:54] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/dataset_infos.json HTTP/1.1" 200 0 [05:45:54] - WARNING - datasets.builder - Reusing dataset wikipedia (/home/silvano.garnerone/.cache/huggingface/datasets/wikipedia/20200501.simple/1.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1) [05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection 
(1): s3.amazonaws.com:443 [05:45:54] - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0 [05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443 [05:45:54] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0 [05:45:54] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443 [05:45:55] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/dataset_infos.json HTTP/1.1" 200 0 [05:45:55] - WARNING - datasets.builder - Reusing dataset wikipedia (/home/silvano.garnerone/.cache/huggingface/datasets/wikipedia/20200501.simple/1.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1) [05:45:55] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): s3.amazonaws.com:443 [05:45:55] - DEBUG - urllib3.connectionpool - https://s3.amazonaws.com:443 "HEAD /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0 [05:45:55] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443 [05:45:55] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/wikipedia.py HTTP/1.1" 200 0 [05:45:55] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): raw.githubusercontent.com:443 [05:45:55] - DEBUG - urllib3.connectionpool - https://raw.githubusercontent.com:443 "HEAD /huggingface/datasets/1.8.0/datasets/wikipedia/dataset_infos.json HTTP/1.1" 200 0 [05:45:55] - WARNING - datasets.builder - Reusing dataset wikipedia (/home/silvano.garnerone/.cache/huggingface/datasets/wikipedia/20200501.simple/1.0.0/2fe8db1405aef67dff9fcc51e133e1f9c5b0106f9d9e9638188176d278fd5ff1) [05:45:55] - INFO - absl - Starting the local TPU driver. [05:45:55] - INFO - absl - Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local:// [05:45:55] - INFO - absl - Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available. 
[05:45:56] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443 [05:45:56] - DEBUG - urllib3.connectionpool - https://huggingface.co:443 "HEAD /bert-base-cased/resolve/main/config.json HTTP/1.1" 200 0 [05:45:56] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443 [05:45:56] - DEBUG - urllib3.connectionpool - https://huggingface.co:443 "HEAD /bert-base-cased/resolve/main/config.json HTTP/1.1" 200 0 [05:45:56] - DEBUG - urllib3.connectionpool - Starting new HTTPS connection (1): huggingface.co:443 [05:45:57] - DEBUG - urllib3.connectionpool - https://huggingface.co:443 "HEAD /bert-base-cased/resolve/main/flax_model.msgpack HTTP/1.1" 302 0 Traceback (most recent call last): File "run_mlm_performer.py", line 543, in <module> dropout_rate=0.1, File "/home/silvano.garnerone/.local/lib/python3.7/site-packages/transformers/modeling_flax_utils.py", line 326, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/home/silvano.garnerone/performer/modeling_flax_performer.py", line 482, in __init__ super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) #input_shape is already present in config TypeError: __init__() got multiple values for argument 'input_shape' ## Expected behavior The script to run without error
06-12-2021 05:50:58
06-12-2021 05:50:58
Hi, It seems to me that `input_shape` ends up being passed twice inside `super().__init__(...)`, probably both via `config` and via the explicit `input_shape` argument. Thanks for helping!<|||||>Hey @garner1, Would you like to open a PR to fix it in `research_projects/performer/`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
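The error itself is ordinary Python behaviour; a tiny standalone illustration (unrelated to the actual Flax classes) of how an argument that arrives both positionally and as a keyword produces this exact TypeError:

```python
class Base:
    def __init__(self, config, module, input_shape=(1, 1), seed=0):
        self.config, self.module, self.input_shape, self.seed = config, module, input_shape, seed

class Child(Base):
    def __init__(self, config, *args, input_shape=(1, 1), seed=0, **kwargs):
        module = object()
        # If input_shape already arrived inside *args (a caller passing it positionally),
        # forwarding it again as a keyword duplicates the argument in super().__init__.
        super().__init__(config, module, *args, input_shape=input_shape, seed=seed)

try:
    Child("cfg", (1, 128))
except TypeError as err:
    print(err)  # __init__() got multiple values for argument 'input_shape'
```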
transformers
12,127
closed
Multi-GPU training has literally no GPU-Utilization (0%)
I know that multi-GPU training is handled by the trainer class automatically through the `CUDA_VISIBLE_DEVICES=...` flag in transformers. But I'm having a weird problem: after setting `CUDA_VISIBLE_DEVICES=0,1,2`, 3 GPUs are being used and `nvidia-smi` outputs the following:

```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00000000:00:17.0 Off |                    0 |
| N/A   76C    P0   293W / 300W |  13758MiB / 16160MiB |     94%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00000000:00:18.0 Off |                    0 |
| N/A   43C    P0    72W / 300W |   4770MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00000000:00:19.0 Off |                    0 |
| N/A   43C    P0    73W / 300W |   4770MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
```

I'm running inference with the pegasus model:

```
CUDA_VISIBLE_DEVICES=0,1,2 python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path /home/code-base/user_space/saved_models/pytorch/reddit_tifu/ \
--do_predict \
--train_file $DS_BASE_DIR/train.json \
--validation_file $DS_BASE_DIR/validation.json \
--test_file $DS_BASE_DIR/test.json \
--output_dir /home/code-base/user_space/saved_models/pegasus/ \
--per_device_train_batch_size=3 \
--per_device_eval_batch_size=3 \
--overwrite_output_dir \
--predict_with_generate \
--text_column text \
--summary_column summary \
--num_beams 5
```

The strange thing to me is that the GPU-Util of GPU-1 and GPU-2 is 0%, even though part of their memory is filled; this is not the case for GPU-0. I'm now unsure whether I'm using the correct way of doing multi-GPU training. Any advice or hint would be appreciated!

## Environment info
- `transformers` version: 4.7.0 dev
- Platform: Ubuntu 18.04
- Python version: 3.8
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes

### Who can help
@sgugger
06-12-2021 05:00:10
06-12-2021 05:00:10
Solved by #11045. I had to use distributed training to this end. Closing...
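For anyone landing here with the same symptom: the reporter resolved it by switching to distributed training, which with the Trainer typically means launching one process per GPU. A sketch of such a launch (paths are placeholders, the other arguments mirror the command in the issue):

```bash
# One process per GPU (3 GPUs here); the Trainer picks up the local rank automatically.
python -m torch.distributed.launch --nproc_per_node=3 \
    examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path /path/to/pegasus_checkpoint \
    --do_predict \
    --test_file $DS_BASE_DIR/test.json \
    --output_dir /path/to/output \
    --per_device_eval_batch_size 3 \
    --predict_with_generate \
    --text_column text \
    --summary_column summary \
    --num_beams 5
```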
transformers
12,126
open
[Performance] Tracking open Issues and PRs (pytorch transformers)
Let's use this Issue to track performance issues and enhancement requests, so it's easier to prioritize the work. **This is for pytorch `transformers`**

Also I will label it as a `Good Difficult Issue` in case someone is ready for a challenging but rewarding experience of figuring things out. If you do want to take the challenge, comment in the corresponding Issue/PR that resonates with you so others would know you're working on it.

If I missed any other relevant open performance-related Issues/PRs that need attention please comment below.

## Regression:
- [ ] https://github.com/huggingface/transformers/pull/11218 Regression after Bart-like refactoring - need to compare the original Bart refactoring PR since most likely the regression happened there.
- [ ]

## Odd slowness:
- [ ] https://github.com/huggingface/transformers/issues/10816 figuring out why eval with --fp16_full_eval is 25% slower
- [ ]

## Fused kernels possibilities:
- [ ] https://github.com/huggingface/transformers/issues/11368 Megatron fused CUDA kernels to improve Hugging Face model classes' scalability
- [ ] research pytorch kernels?
- [ ] I know Deepspeed has various kernels that we might be able to use

## Faster / leaner startup / module loading
- [ ] https://github.com/huggingface/transformers/issues/12274 - skip storage allocation which gets dropped for pretrained weights

## Faster optimizers
- [ ] https://github.com/huggingface/transformers/issues/12084 - a proposal to port `MemoryEfficientFP16Optimizer` from fairseq
- [ ] https://github.com/huggingface/transformers/issues/9965 - `torch.optim._multi_tensor` faster optimizers - having some bottleneck in the test script - need to profile

## Scalability
- [ ] https://github.com/huggingface/transformers/issues/10321 Tensor Parallelism

## Deepspeed-specific features
- [ ] https://github.com/huggingface/transformers/issues/9606 a list of features that can be integrated
- [ ] https://github.com/huggingface/transformers/issues/12273 - make `from_pretrained` loading faster

## Tests
- [ ] No issue yet, but we really need to add performance regression tests
06-12-2021 03:45:57
06-12-2021 03:45:57
@stas00 If I want to work on this issue, should I pick one of the linked issues and track its performance? Can you also tell me how I can track the performance? Can you give me some guidance?<|||||>Hi @JuheonChu, this is not an Issue to work on. As the title says, this is a collection of pointers to track other Issues. It's dated, but many issues that it links to are still valid. So you can click on the issue that resonates with you and discuss the details there - not here. I hope this addresses your question.