Columns: repo (string, 1 class) | number (int64, range 1-25.3k) | state (string, 2 classes) | title (string, length 1-487) | body (string, length 0-234k) | created_at (string, length 19) | closed_at (string, length 19) | comments (string, length 0-293k)
transformers
12,425
closed
Loading custom model
I changed the definition of a token classification model and added another output head. Now when I try to load the model using AutoModelForTokenClassification, it does not load the weights of the modified final layer that I added. Is there another class I can use to load this custom model?
06-30-2021 01:10:14
06-30-2021 01:10:14
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Same. I ran into several problems. I figured that I would need to call something like `CustomModel.from_pretrained("PATH_TO_CHECKPOINT")` to load the model. But that did not work for me either. If I find a solution, I will give an update.
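A minimal sketch of the pattern discussed in this thread, assuming the extra head lives in a user-defined subclass (the class name, attribute name, and checkpoint path below are hypothetical, not the reporter's code):

```python
import torch.nn as nn
from transformers import BertForTokenClassification

class BertWithExtraHead(BertForTokenClassification):
    """Hypothetical custom model: the standard token-classification model plus one extra output head."""

    def __init__(self, config):
        super().__init__(config)
        # extra head added on top of the usual token-classification head
        self.extra_classifier = nn.Linear(config.hidden_size, config.num_labels)

# save_pretrained() on an instance of this class stores the extra head's weights too;
# reloading through the same class (rather than AutoModelForTokenClassification)
# restores them instead of silently dropping them.
model = BertWithExtraHead.from_pretrained("path/to/custom-checkpoint")
```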
transformers
12,424
closed
Fix default bool in argparser
# What does this PR do? As outlined in #12423, in a dataclass with no default for a bool parameter, the bool ended up defaulting to `True` when not passed, which is the opposite of the intended behavior. Fixes #12423
06-29-2021 21:58:34
06-29-2021 21:58:34
Do you want to pull in the test snippet from the issue to make sure it doesn't happen again? That test doesn't actually parse any arguments at the moment, so it relies on the argparser being configured right in the test, which is more error prone than just parsing simple things and checking them.
transformers
12,423
closed
HfArgumentParser defaults booleans to on
## Environment info - `transformers` version: 4.9.0.dev0 (initially discovered on 4.8.1) - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.9.5 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed ### Who can help @sgugger ## Information HfArgumentParser when used on a dataclass with a bool field with no default turns the bool on unless it's supplied with `--<field_name> False` or similar "false-y" value. I would expect that as the field has no default it should be false unless `--<field_name>` or `--<field_name> True` is supplied. This is a behaviour change from `v3.0.0` where the booleans are parsed correctly as we're looking at upgrading and this issue hit us. ## To reproduce Steps to reproduce the behavior: 1. Define a dataclass with a boolean field 2. Supply a list of arguments which does not include that field name 3. The field is turned on. Appending this snippet to the bottom of [`test_basic` at line 110](https://github.com/huggingface/transformers/blob/master/tests/test_hf_argparser.py#L110) in `test_hf_argparser.py` fails the test. ```python args = ["--foo", "1", "--baz", "quux", "--bar", "0.5"] example, = parser.parse_args_into_dataclasses(args, look_for_args_file=False) self.assertFalse(example.flag) ``` Extending `args` with `["--flag","False"]` recovers the expected behaviour. ## Expected behavior The boolean should be set to false if the argument is not passed in.
06-29-2021 21:43:04
06-29-2021 21:43:04
If you agree this is a bug in the parsing logic then we'd be happy to fix it and send a PR.<|||||>Yes, it does look like a bug. Fix is quite simple, I can make a PR.<|||||>Ok thanks. That saves me a job later this week.<|||||>If you want to have a look at the PR mentioned above and check it does give the expected behavior, that would be great!<|||||>Ok, I'll check it tomorrow against our internal use case to make sure it fixes that too.
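A self-contained version of the reproduction, assuming a dataclass shaped like the one implied by the argument list in the report (the field types are inferred, not copied from the test file):

```python
from dataclasses import dataclass
from transformers import HfArgumentParser

@dataclass
class Example:
    foo: int
    bar: float
    baz: str
    flag: bool  # no default: expected to be False when --flag is not supplied

parser = HfArgumentParser(Example)
args = ["--foo", "1", "--baz", "quux", "--bar", "0.5"]
(example,) = parser.parse_args_into_dataclasses(args, look_for_args_file=False)
print(example.flag)  # the bug reported here made this print True instead of False
```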
transformers
12,422
closed
[modelcard] fix
This PR fixes an incorrect attribute - probably some tests are needed? @sgugger
06-29-2021 21:20:10
06-29-2021 21:20:10
transformers
12,421
closed
Add option to save on each training node
# What does this PR do? There is currently a problem when using `load_best_model_at_end=True` for training with multiple nodes: the model is only saved on the main process (so on the machine with rank 0), and machines with other ranks can't see the saved model (unless the system uses some kind of shared storage). This PR adds a flag to enable saving on each node for that situation, and avoids the hard failure when the model to reload is not found.
06-29-2021 19:03:25
06-29-2021 19:03:25
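The flag described above shipped as `save_on_each_node` in `TrainingArguments`; a minimal sketch of the multi-node situation it targets (the values are illustrative, not taken from the PR):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",   # load_best_model_at_end needs matching eval/save strategies
    save_strategy="steps",
    load_best_model_at_end=True,
    save_on_each_node=True,        # save checkpoints on every node, not only on global rank 0
)
```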
transformers
12,420
closed
Easily train a new fast tokenizer from a given one - tackle the special tokens format (str or AddedToken)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> This PR is a sub-PR of the feature developed in PR #12361. In the `train_new_from_iterator` method, the user can indicate that he wants to change the wording of a special token with the `special_tokens_map` argument. In terms of behavior, we expect the resulting tokenizer to have special tokens that behave like the special tokens that were in the initial tokenizer. In other words, if in the initial token the special token linked to the `mask_token` was an `AddedToken` with `lstrip=True` then this parameter must be kept in the new trained tokenizer even if the user in the `special_tokens_map` argument indicates that the wording changes. For example from `[MASK]` to `<mask>`. This PR proposes this behavior and tests it
06-29-2021 17:13:26
06-29-2021 17:13:26
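A rough sketch of the behavior this PR tests, assuming the original mask token was defined as an `AddedToken` with `lstrip=True` (the checkpoint, corpus, and vocabulary size below are placeholders):

```python
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # mask token is "[MASK]"
corpus = (line for line in ["some training text", "more training text"])

new_tokenizer = old_tokenizer.train_new_from_iterator(
    corpus,
    vocab_size=1000,
    special_tokens_map={"mask_token": "<mask>"},  # change the wording from [MASK] to <mask>
)
# Per this PR, the renamed "<mask>" token is expected to keep the original token's
# AddedToken options (e.g. lstrip=True) rather than being reset to defaults.
print(new_tokenizer.mask_token)
```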
transformers
12,419
closed
[JAX/Flax readme] add philosophy doc
# What does this PR do? Adds a section about Flax's design philosophy in Transformers.
06-29-2021 17:07:47
06-29-2021 17:07:47
Thanks a lot Patrick for the awesome images :)
transformers
12,418
closed
DeepSpeed gets stuck when training
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.1 - Platform: Linux-4.15.0-140-generic-x86_64-with-debian-buster-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: single gpu ### Who can help @stas00 <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Trying to replicate [this](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo_xl_deepspeed.py), I am using a 125M GPT Neo model and fine-tune it with using the Trainer. Training arguments include a DeepSpeed option. The Trainer gets stuck with: ``` [2021-06-29 14:29:44,747] [INFO] [logging.py:60:log_dist] [Rank 0] DeepSpeed info: version=0.4.1, git-hash=unknown, git-branch=unknown [2021-06-29 14:29:44,757] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1 ``` ds_report gives: ``` -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- cpu_adam ............... [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] sparse_attn ............ [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] [WARNING] async_io requires the libraries: ['libaio-dev'] but are missing. Can be fixed by: `apt install libaio-dev`. async_io ............... [NO] ....... [NO] transformer_inference .. [NO] ....... [OKAY] utils .................. 
[NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch'] torch version .................... 1.9.0 torch cuda version ............... 11.1 nvcc version ..................... 10.1 deepspeed install path ........... ['/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed'] deepspeed info ................... 0.4.1, unknown, unknown deepspeed wheel compiled w. ...... torch 1.9, cuda 11.1 ``` Is there a way to debug this? <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## To Replicate I modified the [original code](https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/blob/main/gpt_neo_xl_deepspeed.py) slightly to remove the errors: ```python training_args = tr.TrainingArguments(output_dir=save_dir, num_train_epochs=5, logging_steps=300, save_steps=300, per_device_train_batch_size=1, per_device_eval_batch_size=1,warmup_steps=50, learning_rate=0.001,adam_epsilon=1e-06,fp16=True, weight_decay=0.01, logging_dir=f'{save_dir}/logs', deepspeed='./ds_config.json') ``` and ds_config.json is now: ```json { "fp16": { "enabled": true, "min_loss_scale": 1, "opt_level": "O3" }, "zero_optimization": { "stage": 3, "cpu_offload": true, "cpu_offload_params" : true, "contiguous_gradients": true, "overlap_comm": true }, "optimizer": { "type": "AdamW", "params": { "lr": 0.001, "betas": [ 0.9, 0.999 ], "eps": 1e-6 } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 0.001, "warmup_num_steps": 50 } }, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "steps_per_print":1 } ```
06-29-2021 15:56:13
06-29-2021 15:56:13
I added your changes to the original and I am not able to reproduce the hanging with "EleutherAI/gpt-neo-2.7B" as it is in the original. I'm on transformers master, but I don't think it makes any difference. If you want me to try anything else please fork https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/, apply whatever changes you need and share the link to your fork. To debug hanging do: ``` pip install py-spy sudo py-spy dump --PID pid_of_the_hanging_process ``` and share the backtraces. Unrelated, if you could make a PR to https://github.com/dredwardhyde/gpt-neo-fine-tuning-example/ with the new ds_config.json it'd help others.<|||||>Thanks @stas00, I installed `transformers `with `pip`. Created a simple example and packed everything into a repo along with all the requirements. Attaching the link to the repo here [https://github.com/SamsTheGreatest/gpt-neo-with-deepspeed.git](https://github.com/SamsTheGreatest/gpt-neo-with-deepspeed.git). I have put other relevant info is in the README. Hopefully, it will help to shine some light on this. Unfortunately, I don't have sudo access. Maybe there is another way to backtrace it? If I could have interrupted the kernel in Jupiter, it would show me some traceback, however in this case, when I start the `Trainer`, I can't even interrupt the kernel anymore.<|||||>That's a wonderful way to do it, @SamsTheGreatest - thank you! OK, so I run your fork and it's running just fine. i.e. it started training - I didn't wait for it to finish. wrt, debug 1. try `py-spy` w/o `sudo` if your system has ptrace set to 0 ``` cat /proc/sys/kernel/yama/ptrace_scope ``` you don't need `sudo` to attach to the process. 2. if it's >0, then used `faulthandler` add this to your code: ``` import faulthandler faulthandler.dump_traceback_later(20, repeat=True) ``` and when you run it, it will dump the bt for each thread every 20 sec. (I haven't tried it in the notebook, but it should probably work just fine) <|||||>Thanks @stas00, that's very detailed! `cat /proc/sys/kernel/yama/ptrace_scope` yields `1` so ill do it with `faulthandler`. Accidentally found out that when removing DeepSpeed option from trainer, it still gets stuck. Removing ``` # os.environ['MASTER_ADDR'] = 'localhost' # os.environ['MASTER_PORT'] = '9994' # os.environ['RANK'] = "0" # os.environ['LOCAL_RANK'] = "0" # os.environ['WORLD_SIZE'] = "1" ``` starts training as expected again. I also tried letting the settings to be discovered via `mpi4py`, as you wrote in the original post, it says `mpi4py` needs to be installed (can't install as I need `sudo` .....again). Could it be all due to the fact that I'm running things not on my own machine directly but using `kubeflow` notebook server? I have dumped the traceback files from all 3 experiments into the same repo. `FP16` is on during all of them. `No settings` means that `os.environ` is commented out. I have also labeled the start of training with `\n\nNow training\n\n`. Thanks again<|||||>You don't need `sudo` to install `mpi4py` - this is just `pip install mpi4py` Perhaps you're the first one to run deepspeed on kubeflow, by looking at the traces seems like it has some distributed issues there Thank you for making the traces. It seems to be stuck at: ``` File "/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1080 in broadcast ``` It might be something specific to the their jupyter setup? If I understand correctly kubeflow is notebook only, right? Can you run deepspeed from the command line? 
e.g. as in this example? https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb All the `os.environ` code is that we are emulating a distributed launcher in the notebook. (instead of runing `torch.distributed.launch` or the `deepspeed` launcher.) Also try a different port? A different address? Perhaps `127.0.0.1` or find its IP address? It's very possible that the distributed network gets stuck because of either of these 2 as it can't network. Deepspeed requires a fully distributed setup even with just one gpu, since it wasn't really designed for that kind of situation in mind (But perhaps it could).<|||||>Hi @stas00, Sorry for the long wait. Tried other IP, but all yield Permission errors and such.. The correct IP seems to be localhost or **IP of the Kubernetes Pod**. This are the only options I have tried that don't yield errors, however the script still hangs at the same spot. [The notebook you referenced](https://github.com/stas00/porting/blob/master/transformers/deepspeed/DeepSpeed_on_colab_CLI.ipynb ), hangs at the same spot unfortunately. ```python Downloading: 5.40kB [00:00, 3.13MB/s] Using amp fp16 backend [2021-07-05 08:20:38,917] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.4.2, git-hash=unknown, git-branch=unknown [2021-07-05 08:20:43,129] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1 ^CKilling subprocess 4452 Main process received SIGINT, exiting Traceback (most recent call last): File "/home/jovyan/anaconda3/envs/esemala/bin/deepspeed", line 6, in <module> main() File "/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/launcher/runner.py", line 362, in main result.wait() File "/home/jovyan/anaconda3/envs/esemala/lib/python3.7/subprocess.py", line 1019, in wait return self._wait(timeout=timeout) File "/home/jovyan/anaconda3/envs/esemala/lib/python3.7/subprocess.py", line 1653, in _wait (pid, sts) = self._try_wait(0) File "/home/jovyan/anaconda3/envs/esemala/lib/python3.7/subprocess.py", line 1611, in _try_wait (pid, sts) = os.waitpid(self.pid, wait_flags) KeyboardInterrupt (esemala) tf-docker ~/transformers > ``` (Had to keyboard-interrupt it) I have installed transformers and deepspeed as suggested in the notebook. PS: quick suggestion: in the last cell, when running the example, one might consider changing `rm -r output_dir` to `rm -rf output_dir` so that we don't get an error if the directory does not exist. Could we investigate this a little further? Maybe there is something wrong with the mismatch of cuda and cuda-toolkit installed? `nvcc -V` yields `10.1`, however the latest pytorch is installed as for `11.1`. Trying to follow [this tutorial](https://github.com/mallorbc/GPT_Neo_fine-tuning_notebook/blob/main/GPT_Neo_Fine-tune.ipynb) ,now, instead of installing OPs for Deepspeed just in time, I treid `DS_BUILD_OPS=1 pip install .`, however it says ```python Exception: Installed CUDA version 10.1 does not match the version torch was compiled with 11.1, unable to compile cuda/cpp extensions without a matching cuda version. ```<|||||>So the issue in this one is in launching a pytorch subprocess here. Is there a way I could have a direct access to the same environment? > PS: quick suggestion: in the last cell, when running the example, one might consider changing rm -r output_dir to rm -rf output_dir so that we don't get an error if the directory does not exist. That's a great suggestion, @SamsTheGreatest - done! 
> Exception: Installed CUDA version 10.1 does not match the version torch was compiled with 11.1, unable to compile cuda/cpp extensions without a matching cuda version. You need to install pytorch built with cuda 10 for that. As of this writing this is done with: ``` pip install torch torchvision torchaudio ``` Normally find the right command here: https://pytorch.org/get-started/locally/ DS will handle minor version mismatch no problem.<|||||>@stas00, Unfortunately, I am not authorized to do that.. but I can provide you with the exact docker image I am using. Here is a link: [https://github.com/kubeflow/kubeflow/tree/v1.2.0/components/tensorflow-notebook-image](https://github.com/kubeflow/kubeflow/tree/v1.2.0/components/tensorflow-notebook-image) I tried installing torch for 10.1, process still hangs at > File "/home/jovyan/anaconda3/envs/esemala/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1080 in broadcast just as before. Now, I had to rebuild the docker container as `sudo` password wasn't set. I am now root, so I installed `conda 11.1.1` for linux. All versions are now matching and I managed to build all OPs for deepspeed except `async_io` (I assume I don't need it atm..) using `DS_BUILD_OPS=1 pip install .`. So.. now ds_report shows that all OPs are installed and all cuda versions are matching. Still hangs at the same spot... Reading though some issues, could it be that its due to the `nccl` usage? Is there a trivial way to set backend to `gloo` within the notebook I shared with you @stas00?<|||||>I'm not succeeding at building that Docker image. If I use `build_image.sh` it hangs, if I try to `docker build .` it fails with some deps missing. Do you have a ready docker image I could pull? Since kubeflow is run in a docker image most likely the issue has something to do with its setup/configuration. > Reading though some issues, could it be that its due to the nccl usage? Is there a trivial way to set backend to gloo within the notebook I shared with you @stas00? It's very possible. I haven't run into this myself, so I trust your research. gloo doesn't provide the same functionality as nccl, but it looks that Deepspeed docs say it should work. OK, what if you do: `deepspeed.init_distributed("gloo")` here? instead of `deepspeed.init_distributed()` https://github.com/huggingface/transformers/blob/d7e156bd1ae2467e9ea1dbc44f31da0ed2296aee/src/transformers/training_args.py#L812 I found this issue https://github.com/microsoft/DeepSpeed/issues/1030 where a user was able to use the gloo backend with Deepspeed. <|||||>@stas00 consulted internally again and tried using "gloo" as you specified. Colleagues said they could not manage to run `nccl` on `kubeflow` either. Basically cloned the transformers repo and changed the training_args as you specified. Changed model for trainer like so too: ```python trainer = tr.Trainer(model=model.requires_grad_(False), args=training_args, ..... 
``` **Now, with `gloo` code runs a little further!!** ```python [2021-07-08 15:28:56,767] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.4.3+c9fee82, git-hash=c9fee82, git-branch=master [2021-07-08 15:28:56,775] [INFO] [utils.py:13:_initialize_parameter_parallel_groups] data_parallel_size: 1, parameter_parallel_size: 1 [2021-07-08 15:28:56,891] [INFO] [engine.py:177:__init__] DeepSpeed Flops Profiler Enabled: False --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-18-ccb66750b859> in <module> 10 # Start training process! 11 ---> 12 trainer.train() 13 trainer.save_model(save_dir) 14 tokenizer.save_pretrained(save_dir+'/tokenizer/') ~/anaconda3/envs/esemala/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1122 if args.deepspeed: 1123 deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( -> 1124 self, num_training_steps=max_steps, resume_from_checkpoint=resume_from_checkpoint 1125 ) 1126 self.model = deepspeed_engine.module ~/anaconda3/envs/esemala/lib/python3.7/site-packages/transformers/deepspeed.py in deepspeed_init(trainer, num_training_steps, resume_from_checkpoint) 369 config_params=config, 370 optimizer=optimizer, --> 371 lr_scheduler=lr_scheduler, 372 ) 373 ~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/__init__.py in initialize(args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params) 134 collate_fn=collate_fn, 135 config=config, --> 136 config_params=config_params) 137 else: 138 assert mpu is None, "mpu must be None with pipeline parallelism" ~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/runtime/engine.py in __init__(self, args, model, optimizer, model_parameters, training_data, lr_scheduler, mpu, dist_init_required, collate_fn, config, config_params, dont_change_device) 189 self.lr_scheduler = None 190 if model_parameters or optimizer: --> 191 self._configure_optimizer(optimizer, model_parameters) 192 self._configure_lr_scheduler(lr_scheduler) 193 self._report_progress(0) ~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/runtime/engine.py in _configure_optimizer(self, client_optimizer, model_parameters) 701 logger.info('Using client Optimizer as basic optimizer') 702 else: --> 703 basic_optimizer = self._configure_basic_optimizer(model_parameters) 704 if self.global_rank == 0: 705 logger.info( ~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/runtime/engine.py in _configure_basic_optimizer(self, model_parameters) 772 optimizer = DeepSpeedCPUAdam(model_parameters, 773 **optimizer_parameters, --> 774 adamw_mode=effective_adam_w_mode) 775 else: 776 from deepspeed.ops.adam import FusedAdam ~/anaconda3/envs/esemala/lib/python3.7/site-packages/deepspeed/ops/adam/cpu_adam.py in __init__(self, model_params, lr, bias_correction, betas, eps, weight_decay, amsgrad, adamw_mode) 72 bias_correction=bias_correction, 73 amsgrad=amsgrad) ---> 74 super(DeepSpeedCPUAdam, self).__init__(model_params, default_args) 75 76 self.opt_id = DeepSpeedCPUAdam.optimizer_id ~/anaconda3/envs/esemala/lib/python3.7/site-packages/torch/optim/optimizer.py in __init__(self, params, defaults) 47 param_groups = list(params) 48 if len(param_groups) == 0: ---> 49 raise ValueError("optimizer got an empty parameter list") 50 if not isinstance(param_groups[0], dict): 51 param_groups = [{'params': 
param_groups}] ValueError: optimizer got an empty parameter list ``` Trying to battle this value error now, is it because `AdamW` was used and now its `DeepSpeedCPUAdam`? Shall I be concerned that CPU is being used? We are using multi-node with single GPU in each cluster, so those issue could be arising from such architecture, but I'm not sure. I will respond on your request for the Docker image a little later once I get it sorted out. Thanks again <|||||>Now, concerning the Docker image. We used the same docker image as one I shared, but at the end used `USER root` instead of `jovyan`. also used those commands for this. Sorry I didn't share this earlier, was not the one involved with images... ```python python build_image.py --tf_version=1.15.2 --platform=gpu tf_notebook pip install --upgrade pip python3 -m pip install -r tensorflow-notebook-image/requirements.txt ``` If it helps I will try building the image and pushing it to docker hub myself, with all necessary requirements (on top of what I gave you I just installed necessary version of torch, compatible with cuda 10.1, huggingface transformers and deepspeed). But I would likely need some time for this...till next week or so<|||||>@SamsTheGreatest, glad to see you made some progress! Not sure why you needed to turn gradients off - that surely won't work as the optimizer now has no params to optimize, which is probably the reason why you had that most recent failure. ------------------- As we are progressing with the diagnosis of OP, it's becoming clear now that this issue has little to do with `transformers` (other than having a hardcoded `nccl` backend) and we should probably try to sort it out on the DeepSpeed Issues-side of things. Once sorted out we can then adjust the HF Trainer to do the right thing as `deepspeed` needs it. Could you please open a new issue at https://github.com/microsoft/DeepSpeed/issues and I suppose the topic should be something along the lines of: using deepspeed in env where nccl doesn't work And then specific sub-issues: 1. make deepspeed work on kubeflow - `nccl`-backend hangs - your OP report 2. make deepspeed work with the 'gloo' backend - your last gloo-specific report https://github.com/huggingface/transformers/issues/12418#issuecomment-876545975 or perhaps these should be 2 separate issues? I trust your judgment. And from there let's see what the Deepspeed developers need, i.e. whether they will want the image or they already know what to do.<|||||>Thanks, @stas00! Yes it seems reasonable, I will reply shortly to this in a little more detail. Also, discovered one more thing. Remember I mentioned this, > Accidentally found out that when removing DeepSpeed option from trainer, it still gets stuck. When trying the same but also changing `nccl` to `gloo` in `training_args.py`, gets everything unstuck aswell! ```python torch.distributed.init_process_group(backend="gloo") device = torch.device("cuda", self.local_rank) self._n_gpu = 1 ``` Could we conclude that for some reason `nccl` doesn't work on with the current hardware setup? Could there be a particular reason for that?<|||||>Great to know that this is not deepspeed specific then - thank you for the experiments, @SamsTheGreatest I'd say make a short repro script like: ``` echo 'import torch; torch.distributed.init_process_group(backend="nccl")' > run python -m torch.distributed.launch --nproc_per_node=2 run ``` and if it hangs file an issue at pytorch? Hopefully someone on their team has dealt with kubeflow. 
It probably has to do with how it builds Docker with regards to pytorch and cuda tools, or the interface to the gpu cards. For example, what happens if you install the normal pytorch on that kubeflow instance after it was built? That would tests whether the issue is with how the pre-built pytorch was created while building the kubeflow image. <|||||>@stas00 > Not sure why you needed to turn gradients off - that surely won't work as the optimizer now has no params to optimize, which is probably the reason why you had that most recent failure. yes turning on gradients doesn't make any sense. I was attempting to battle the issue with using 'gloo' backend that you referred to... not sure how to fix it https://github.com/microsoft/DeepSpeed/issues/1030 <|||||>Also, have a look at when to use which backend notes here: https://pytorch.org/docs/stable/distributed.html Scroll down to "Which backend to use?" Do any of these ring a bell? -------------- And also these may aid debugging the NCCL issues: ``` export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=ALL ``` Finally, you can attach to a hanging process with `strace` (or start it under strace) and see where it is hanging on the libc-level.<|||||>> @stas00 > > > Not sure why you needed to turn gradients off - that surely won't work as the optimizer now has no params to optimize, which is probably the reason why you had that most recent failure. > > yes turning on gradients doesn't make any sense. I was attempting to battle the issue with using 'gloo' backend that you referred to... not sure how to fix it [microsoft/DeepSpeed#1030](https://github.com/microsoft/DeepSpeed/issues/1030) Open a new issue there?<|||||>@SamsTheGreatest trying to get caught up on this thread but are you able to run NCCL without deepspeed? Even if we can get the gloo backend working I suspect the performance would not be ideal. Can you try a simple all-reduce test in your environment using NCCL? We often run this gist on our systems to test basic NCCL functionality: https://gist.github.com/jeffra/b5e80466b4c86be00ea3b6f130fb7a36<|||||>> Can you try a simple all-reduce test in your environment using NCCL? So a simple test could be something like: ``` # test.py import torch.distributed as dist import argparse import torch parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int) args = parser.parse_args() torch.cuda.set_device(args.local_rank) device = torch.device("cuda", local_rank) dist.init_process_group("nccl") dist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM) ``` ``` # to run python -m torch.distributed.launch --nproc_per_node=2 test.py ``` adjust the number of gpus above - probably just 1 in your case. You have only 1 gpu, correct? **Edit** I see you reported earlier 1 gpu per node, > We are using multi-node with single GPU in each cluster, so those issue could be arising from such architecture, but I'm not sure. so then you need to adapt the above to include the `--nnode=` as well.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I'm having the same issue when trying to reproduce the Academic-Budget-Bert code. 
I've run the provided test.py code and encountered the same behavior ``` # test.py import torch.distributed as dist import argparse import torch parser = argparse.ArgumentParser() parser.add_argument("--local_rank", type=int) args = parser.parse_args() torch.cuda.set_device(args.local_rank) device = torch.device("cuda", args.local_rank) dist.init_process_group("nccl") dist.all_reduce(torch.ones(1).to(device), op=dist.ReduceOp.SUM) ``` _________________________________________________ ``` @^CTraceback (most recent call last): File "/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/site-packages/torch/distributed/launch.py", line 260, in <module> main() File "/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/site-packages/torch/distributed/launch.py", line 253, in main process.wait() File "/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/subprocess.py", line 1189, in wait return self._wait(timeout=timeout) File "/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/subprocess.py", line 1917, in _wait (pid, sts) = self._try_wait(0) File "/home/ROCQ/alpage/seddah/src/miniconda3/envs/budgetBERT/lib/python3.9/subprocess.py", line 1875, in _try_wait (pid, sts) = os.waitpid(self.pid, wait_flags) KeyboardInterrupt ``` So, if anyone has a workaround, that would be great. Best, Djamé <|||||>I think you could try this solution: `rm -rf ~/.cache/torch_extensions/` ref: https://github.com/huggingface/transformers/issues/12715<|||||>Is this already solved? I also have this problem when training inside pod.<|||||>Creating a new pod has solved this issue for me a couple of times.
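Since the multi-node adaptation is left implicit above, here is a sketch of how the commented-out `os.environ` block from this thread might be extended for the two-node, one-GPU-per-node layout mentioned later (the address and port are placeholders, not values from this report):

```python
import os

os.environ["MASTER_ADDR"] = "10.0.0.1"  # placeholder: IP of the rank-0 pod
os.environ["MASTER_PORT"] = "9994"
os.environ["WORLD_SIZE"] = "2"          # two nodes with one GPU each
os.environ["RANK"] = "0"                # 0 on the first node, 1 on the second
os.environ["LOCAL_RANK"] = "0"          # single GPU per node
```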
transformers
12,417
closed
Raises an error when BertTokenizer is initialized from BertJapaneseTokenizer
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12416 This PR makes `BertTokenizer` raise an error if it is initialized from `BertJapaneseTokenizer` pretrained tokenizer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-29-2021 15:08:20
06-29-2021 15:08:20
transformers
12,416
closed
BertTokenizer with BertJapaneseTokenizer pretrained model generates unintended tokenization.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0.dev0 - Platform: Windows-10-10.0.19043-SP0 - Python version: 3.9.5 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information `BertTokenizer` with `BertJapaneseTokenizer` pretrained model generates unintended tokenization without any caution. ## To reproduce Steps to reproduce the behavior: Run ```python EXAMPLE_BERT_JAPANESE_ID = "cl-tohoku/bert-base-japanese" tokenizer = BertTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID) print(tokenizer.tokenize("今日はいい天気ですね")) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior ```python not_correct = BertTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID) correct = BertJapaneseTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID) print(not_correct.tokenize("今日はいい天気ですね")) print(correct.tokenize("今日はいい天気ですね")) ``` Because the two tokenizers were made from the same pretrained model, the output should have been ``` ['今日', 'は', 'いい', '天気', 'です', 'ね'] ['今日', 'は', 'いい', '天気', 'です', 'ね'] ``` or `BertTokenizer.from_pretrained(EXAMPLE_BERT_JAPANESE_ID)` should have raised an error. However, the actual result was ``` ['今', '日', 'はい', '##い', '天', '気', 'です', '##ね'] ['今日', 'は', 'いい', '天気', 'です', 'ね'] ``` and no error or warning raised. <!-- A clear and concise description of what you would expect to happen. -->
06-29-2021 14:57:00
06-29-2021 14:57:00
I think you're raising a good issue that tokenizers will not tell you when they're instantiated from a checkpoint that doesn't have the same tokenizer architecture. However, I think this should be resolved for all tokenizers rather than for a single one, probably by checking the `tokenizer_class` inside the `config.json` and `tokenizer_config.json`. cc @SaulLu @sgugger <|||||>> However, I think this should be resolved for all tokenizers rather than for a single one, probably by checking the `tokenizer_class` inside the `config.json` and `tokenizer_config.json`. I agree. `AutoTokenizer` can choose a tokenizer automatically, so until it is solved, I think that recommending a user uses `AutoTokenizer` is a better way to prevent the silent error. <|||||>Thank you very much for reporting this problem @europeanplaice :+1:. Indeed, I share your opinion, it would be better if a warning was logged if ever the class tokenizer used to load a pretrained tokenizer is not the same type. I also agree with @LysandreJik, it should be possible to find this information in `config.json` and/or` tokenizer_config.json` and this would allow to have a logged warning for all types of tokenizers. @europeanplaice, would you like to work on this? If you want, I could of course help you. Or if you don't have the time/want to, I can take over your PR in the next days and adapt it to this new approach. What do you think? :blush: <|||||>@SaulLu Thank you for your offer. I want to try to tackle this problem. I plan to add something like below https://github.com/huggingface/transformers/blob/122d7dc34fd0e397a08b8a584a632fc57d3fd5d0/src/transformers/models/auto/tokenization_auto.py#L527-L551 to `from_pretrained` in `PreTrainedTokenizerBase (tokenization_utils_base.py)` to make sure that we can check whether a user is trying to use different tokenizers between `cls` and `config.json or tokenizer_config.json`'s class before a tokenizer returns. If this detected conflicts between them, a warning would be logged, or an error would occur. I want my PR to be in line with your overall plan, so I hope to get your opinion about this comment. <|||||>Thank you very much for offering to take care of this issue! From my point of view, what you described above sounds really great! :+1: <|||||>I opened a new pull request about this issue. However, there is a point I couldn't overcome. If `config.json` and/or `tokenizer_config.json` don't have information about the tokenizer's class, it's impossible to specify which model is correct. In `AutoTokenizer`, it seems that TOKENIZER_MAPPING is used in this pattern, so I first intended to import `AutoTokenizer` in tokenization_utils_base.py, but it was a circular import. 😂
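A rough illustration of the check discussed above (a purely hypothetical helper; the real implementation belongs in `from_pretrained` and has to handle hub downloads and missing keys):

```python
import json
from transformers import BertTokenizer

def check_tokenizer_class(local_dir, cls):
    # Compare the class recorded in tokenizer_config.json with the class being used.
    with open(f"{local_dir}/tokenizer_config.json") as f:
        declared = json.load(f).get("tokenizer_class")
    if declared is not None and declared != cls.__name__:
        print(f"Warning: checkpoint declares {declared}, but {cls.__name__} was requested")

check_tokenizer_class("path/to/cl-tohoku-bert-base-japanese", BertTokenizer)
```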
transformers
12,415
closed
Added talks
Added info on talks and speakers from our doc, will add remaining speaker info tomorrow (there's still a bit to be confirmed).
06-29-2021 14:46:04
06-29-2021 14:46:04
Awesome looks great!
transformers
12,414
closed
Streaming mode in training examples
# 🚀 Feature request Version 1.8.0 of `datasets` introduced [streaming mode](https://huggingface.co/docs/datasets/master/dataset_streaming.html). It would be very useful to include this mode in the training examples, either with a parameter `--stream_dataset` or an ad-hoc training script if radical changes are required to make it work. ## Motivation Besides showcasing the potential of the new streaming feature, this would be very useful in the context of pre-training scripts (e.g., the new [`run_t5_mlm_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py)) where large datasets like OSCAR and C4 are commonly leveraged. It would be especially useful to have it for the Flax Community Event! ## Your contribution I can submit a PR and work to integrate the feature in some of the Flax examples. Would love to hear the opinion of @patrickvonplaten and @patil-suraj on whether this is relatively easy to pull off!
06-29-2021 14:33:37
06-29-2021 14:33:37
Hey @gsarti, That's a great issue! We will provide at least one streaming example until Friday :-)<|||||>For future watchers, @patrickvonplaten is working on this in #12470! Thanks!<|||||>Note that it's actually `datasets` 1.9.0 that will probably be released on Monday that will feature Streaming. For now it's still available on the `master` branch only !
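For reference, a minimal sketch of what streaming looks like on the `datasets` side (the dataset name is only an example; integrating it into the training scripts is what this issue asks for):

```python
from itertools import islice
from datasets import load_dataset

# streaming=True returns an iterable dataset that is read on the fly,
# so corpora like OSCAR or C4 never have to be fully downloaded to disk.
dataset = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
for example in islice(dataset, 3):
    print(example["text"][:80])
```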
transformers
12,413
closed
Benchmark Colab does not work
https://colab.research.google.com/github/huggingface/notebooks/blob/master/transformers_doc/tensorflow/benchmarks.ipynb (https://huggingface.co/transformers/benchmarks.html?highlight=benchmark) Running the above Colab on a CPU crashes with: ``` 1 / 1 Process killed. Error in Process Process killed. Error in Process --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-4-df0caab4d791> in <module>() ----> 1 results = benchmark.run() 2 print(results) /usr/local/lib/python3.7/dist-packages/transformers/benchmark/benchmark_utils.py in run(self) 705 if self.args.inference: 706 if self.args.memory: --> 707 memory, inference_summary = self.inference_memory(model_name, batch_size, sequence_length) 708 inference_result_memory[model_name]["result"][batch_size][sequence_length] = memory 709 if self.args.speed: ValueError: too many values to unpack (expected 2) ```
06-29-2021 13:49:24
06-29-2021 13:49:24
This issue is related directly to Google Colab environment. I ran same code on my local machine (CPU only) and whole benchmark has passed. ```{python} from transformers import TensorFlowBenchmark, TensorFlowBenchmarkArguments args = TensorFlowBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512]) benchmark = TensorFlowBenchmark(args) results = benchmark.run() print(results) ``` ``` ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- bert-base-uncased 8 8 0.139 bert-base-uncased 8 32 0.369 bert-base-uncased 8 128 1.319 bert-base-uncased 8 512 5.523 -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- bert-base-uncased 8 8 1089 bert-base-uncased 8 32 1212 bert-base-uncased 8 128 1535 bert-base-uncased 8 512 1956 -------------------------------------------------------------------------------- ```<|||||>It seems that colab is killing your process, which may be due to a lack of resources. Does it still happen if you use a tiny model on small batch sizes & sequence lengths?<|||||>I have found out that there is a problem with the multiprocess benchmark on Colab. I added **multi_process = False** in args and the benchmark has passed. ``` args = TensorFlowBenchmarkArguments(models=["bert-base-uncased"], batch_sizes=[8], sequence_lengths=[8, 32, 128, 512], multi_process = False) ``` The default value for the multi_process flag is True. I don't know why Colab is killing those processes. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,412
closed
fix ids_to_tokens naming error in tokenizer of deberta v2
# What does this PR do? "ids_to_tokens" is named as "id_to_tokens" in tokenizer of deberta v2, which may cause an exception when "convert_ids_to_tokens" is called. So fix ids_to_tokens naming error in tokenizer of deberta v2. ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-29-2021 11:55:12
06-29-2021 11:55:12
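The call path the fix touches, as a small sketch (the checkpoint name is only an example):

```python
from transformers import DebertaV2Tokenizer

tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
ids = tokenizer.encode("Hello world")
# convert_ids_to_tokens goes through the tokenizer's id-to-token mapping internally;
# the misnamed attribute described above made this round trip raise an exception.
print(tokenizer.convert_ids_to_tokens(ids))
```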
transformers
12,411
open
🌟 New model addition: FNet
# 🌟 New model addition: FNet FNet is a highly efficient Transformer-like encoder architecture, wherein the self-attention sublayers have been wholly replaced by standard, unparameterized Fourier Transforms. I would like to help adding this! ## Open source status * [x] the model implementation is available: https://github.com/google-research/google-research/tree/master/f_net * [x] the model weights are available: https://github.com/google-research/google-research/tree/master/f_net * [x] who are the authors: (@ilyaeck @santiontanon) (Not sure, googled the authors' name + github, sorry if it's incorrect)
06-29-2021 11:53:40
06-29-2021 11:53:40
Somebody is already working on this, see #12335 <|||||>Thanks @NielsRogge , weird that I didn't see it when I searched.<|||||>@cccntu I believe what you want for the JAX/Flax community week is a Flax model. It seems unlikely that I will finish the PR in the next week. Maybe, you can start working on the Flax model parallely? Or, we can discuss over slack and then try to finish both. @patil-suraj @patrickvonplaten wdyt? Is it easier to go from PyTorch to Flax? Or it doesn't matter at all? In case PT is needed, I am willing to spend my time next week on this and try to finish it. <|||||>@gchhablani Yes! I would love to add the Flax part. @patil-suraj @patrickvonplaten I have a few questions before I proceed: * There is no license in the original repo, should I email the authors for permission for code and weights? * How much of the original model code should I modify, other than wrapping it in huggingface/transformers classes? Should we refactor it for better weight alignment with pytorch code e.t.c? Thanks! <|||||>Great @cccntu! Let's discuss over Slack.
transformers
12,410
open
New Model: Charformer: Fast Character Transformers via Gradient-based Subword Tokenization
# 🌟 New model addition ## Model description arXiv = https://arxiv.org/pdf/2106.12672.pdf (pre-print; under review) In this paper, they introduce a soft gradient-based subword tokenization module (GBST) that automatically learns latent subword representations from characters in a data-driven fashion. More importantly, they introduce Charformer, a deep Transformer model that integrates GBST and operates on the byte level. > Via extensive experiments on English GLUE, multilingual, and noisy text datasets, we show that Charformer outperforms a series of competitive byte-level baselines while generally performing on par and sometimes outperforming subword-based models. Additionally, Charformer is fast, improving the speed of both vanilla byte-level and subword-level Transformers by 28%-100% while maintaining competitive quality. We believe this work paves the way for highly performant token-free models that are trained completely end-to-end. ## Open source status * [ ] the model implementation is available: [Implementation and weights](https://github.com/google-research/google-research/charformer) * [ ] the model weights are available: [to be released soon here](https://github.com/google-research/google-research/charformer) * [x] who are the authors: Yi Tay, Vinh Q. Tran, Sebastian Ruder*, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, Donald Metzler (Google and DeepMind-->`*`)
06-29-2021 11:15:45
06-29-2021 11:15:45
Code is out now: https://github.com/google-research/google-research/tree/master/charformer (please note the different url - compared to paper url)<|||||>An unofficial PyTorch implementation for Charformer https://github.com/lucidrains/charformer-pytorch<|||||>Thanks for the great work! Will charformer be supported in the near future? <|||||>Still not supported yet
transformers
12,409
closed
[Flax] Example scripts - correct weight decay
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR corrects the weight decay in most flax examples. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-29-2021 08:36:42
06-29-2021 08:36:42
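For readers unfamiliar with how weight decay is wired up in the Flax example scripts: they typically use optax's `adamw` with a `mask` so that biases and LayerNorm scales are excluded from decay. The snippet below is a minimal sketch of that pattern with placeholder hyperparameters; it is not the exact change made by this PR.

```python
import optax
from flax import traverse_util


def decay_mask_fn(params):
    # True for parameters that should receive weight decay; biases and
    # LayerNorm scales are conventionally excluded.
    flat = traverse_util.flatten_dict(params)
    mask = {path: path[-1] not in ("bias", "scale") for path in flat}
    return traverse_util.unflatten_dict(mask)


optimizer = optax.adamw(
    learning_rate=3e-4,  # placeholder values
    b1=0.9,
    b2=0.98,
    weight_decay=0.01,
    mask=decay_mask_fn,
)
```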
transformers
12,408
closed
Wrong logical operation
06-29-2021 05:49:49
06-29-2021 05:49:49
If it were an 'or' operation, then with `raw_datasets = ["validation"]` the second part of the statement would return `True`, which would raise an error
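The exact statement under discussion is not quoted in this thread, so the snippet below only illustrates the point being made: with `raw_datasets = ["validation"]`, an `or` between the two membership checks behaves differently from an `and`.

```python
raw_datasets = ["validation"]

missing_train = "train" not in raw_datasets             # True
missing_validation = "validation" not in raw_datasets   # False

# With `and`, the error path is only taken when *both* splits are missing:
print(missing_train and missing_validation)  # False -> no error raised
# With `or`, the missing "train" split alone would trigger the error path,
# even though a validation split exists -- the problem described above:
print(missing_train or missing_validation)   # True -> error raised
```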
transformers
12,407
closed
Validation split added: custom data files @sgugger, @patil-suraj
# What does this PR do? Adds a validation split when no validation file is supplied and custom data files are loaded in the TensorFlow run_mlm.py example. Fixes issue #12406. The TensorFlow language-modeling docs are updated accordingly.
06-29-2021 04:48:38
06-29-2021 04:48:38
@sgugger @patil-suraj <|||||>@sgugger @patil-suraj Made the necessary changes.. please see and comment.<|||||>Hopefully done now!<|||||>All good now, thanks!
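For reference, the usual way this is handled in the example scripts is to carve the validation set out of the training file with 🤗 Datasets' percentage-slicing syntax. The snippet below is a rough sketch of that pattern (file name and percentage are placeholders), not the literal code added by this PR.

```python
from datasets import load_dataset

data_files = {"train": "customdata.txt"}  # placeholder file name
validation_split_percentage = 5           # placeholder value

raw_datasets = {
    "train": load_dataset(
        "text", data_files=data_files, split=f"train[{validation_split_percentage}%:]"
    ),
    "validation": load_dataset(
        "text", data_files=data_files, split=f"train[:{validation_split_percentage}%]"
    ),
}
```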
transformers
12,406
closed
MLM training fails with no validation file
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ## Information Model I am using (Bert, XLNet ...): distilbert-base-cased The problem arises when using: the official example scripts: (give details below) The tasks I am working on is: MLM finetuning ## To reproduce Steps to reproduce the behavior: 1. Just run the tensorflow examples 2. python3 ./transformers/examples/tensorflow/language-modeling/run_mlm.py\ --model_name_or_path distilbert-base-cased \ --output_dir ./g \ --train_file "customdata.txt" \ 3. The model fails with error message that no validation file is there. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior it should use the validation split percentage parameter to divide the training set into training and eval samples.
06-29-2021 04:35:29
06-29-2021 04:35:29
Log: Grouping texts in chunks of 512: 100% 87/87 [00:26<00:00, 3.25ba/s] Traceback (most recent call last): File "./transformers/examples/tensorflow/language-modeling/run_mlm.py", line 604, in <module> main() File "./transformers/examples/tensorflow/language-modeling/run_mlm.py", line 493, in main eval_dataset = tokenized_datasets["validation"] File "/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py", line 37, in __getitem__ return super().__getitem__(k) KeyError: 'validation'<|||||>Fixed.
transformers
12,405
closed
Fix for the issue of device-id getting hardcoded for token_type_ids during tracing for iBert
# What does this PR do? This PR is part of a series of PRs that follows PR #11252 and applies similar changes to Flaubert. Fixes # (issue) issue #5664 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik
06-29-2021 00:12:49
06-29-2021 00:12:49
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry for taking so long to get back to it, this one really fell through the cracks. Would you mind implementing a test for this like it was done with other models, for example in https://github.com/huggingface/transformers/pull/13350? Thanks a lot!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
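The underlying pattern (from PR #11252) is to keep `token_type_ids` as a registered, non-persistent buffer and slice/expand it, so a traced graph never bakes in a concrete device for a freshly created zeros tensor. The class below is a stripped-down illustration with made-up names, not the actual iBert code.

```python
import torch
import torch.nn as nn


class EmbeddingsWithBufferedTypeIds(nn.Module):
    def __init__(self, max_position_embeddings: int = 512):
        super().__init__()
        self.register_buffer(
            "token_type_ids",
            torch.zeros((1, max_position_embeddings), dtype=torch.long),
            persistent=False,
        )

    def resolve_token_type_ids(self, input_shape, token_type_ids=None):
        if token_type_ids is not None:
            return token_type_ids
        batch_size, seq_length = input_shape
        # Slicing + expanding the buffer keeps whatever device the module lives on,
        # instead of tracing a hard-coded `torch.zeros(..., device="cuda:0")`.
        return self.token_type_ids[:, :seq_length].expand(batch_size, seq_length)


module = EmbeddingsWithBufferedTypeIds()
print(module.resolve_token_type_ids((2, 16)).shape)  # torch.Size([2, 16])
```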
transformers
12,404
closed
[FLAX] Core dump using example code
## Environment info - `transformers` version: 4.8.1 - `flax` version: 0.3.4 - `python` version: 3.8.5 ## Who can help @patrickvonplaten ## Models: FLAX - RoBERTa MLM ## Information Following the official guides for creating VMs and TPUs: https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm Following this guide for training RoBERTa on the Norwegian OSCAR training set. https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling I am unable to run the run_mlm_flax.py without getting a core dump. The same happens on the run_clm_flax.py script. ## Error message ``` tcmalloc: large alloc 435677134848 bytes == (nil) @ 0x7f61ae7be680 0x7f61ae7deff4 0x7f61ae2d5309 0x7f61ae2d6fb9 0x7f61ae2d7056 0x7f5e637fd659 0x7f5e59233a09 0x7f61ae9b2b8a 0x7f61ae9b2c91 0x7f61ae711915 0x7f61ae9b70bf 0x7f61ae7118b8 0x7f61ae9b65fa 0x7f61ae58634c 0x7f61ae7118b8 0x7f61ae711983 0x7f61ae586b59 0x7f61ae5863da 0x67299f 0x682dcb 0x684321 0x5c3cb0 0x5f257d 0x56fcb6 0x56822a 0x5f6033 0x56ef97 0x5f5e56 0x56a136 0x5f5e56 0x569f5e terminate called after throwing an instance of 'std::bad_alloc' what(): std::bad_alloc https://symbolize.stripped_domain/r/?trace=7f61ae5f418b,7f61ae5f420f&map= *** SIGABRT received by PID 8576 (TID 8576) on cpu 95 from PID 8576; stack trace: *** PC: @ 0x7f61ae5f418b (unknown) raise @ 0x7f5f7fb581e0 976 (unknown) @ 0x7f61ae5f4210 (unknown) (unknown) https://symbolize.stripped_domain/r/?trace=7f61ae5f418b,7f5f7fb581df,7f61ae5f420f&map=ca1b7ab241ee28147b3d590cadb5dc1b:7f5f72e59000-7f5f7fe8bb20 E0628 20:40:48.745220 8576 coredump_hook.cc:292] RAW: Remote crash data gathering hook invoked. E0628 20:40:48.745291 8576 coredump_hook.cc:384] RAW: Skipping coredump since rlimit was 0 at process start. E0628 20:40:48.745305 8576 client.cc:222] RAW: Coroner client retries enabled (b/136286901), will retry for up to 30 sec. E0628 20:40:48.745322 8576 coredump_hook.cc:447] RAW: Sending fingerprint to remote end. E0628 20:40:48.745346 8576 coredump_socket.cc:124] RAW: Stat failed errno=2 on socket /var/google/services/logmanagerd/remote_coredump.socket E0628 20:40:48.745362 8576 coredump_hook.cc:451] RAW: Cannot send fingerprint to Coroner: [NOT_FOUND] Missing crash reporting socket. Is the listener running? E0628 20:40:48.745366 8576 coredump_hook.cc:525] RAW: Discarding core. E0628 20:40:48.749975 8576 process_state.cc:771] RAW: Raising signal 6 with default behavior Aborted (core dumped) ``` ## To reproduce Follow the guide.
06-28-2021 21:06:46
06-28-2021 21:06:46
I got the same error, I temporarily fixed it with this [patch](https://katb.in/meow2590). <|||||>Thanks a lot. I was actually also able solve this issue by using flax from the git. git clone https://github.com/google/flax.git pip install --user -e flax <|||||>Thanks, I will try it too<|||||>Hey @Wikidepia, Thanks a lot for your error report. I'm not really sure what is causing the error here. Are you using a TPU VMv3-8? It's important to make sure that jax/flax is installed correctly as explained here: https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm<|||||>Also could you maybe follow the guide here: https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries to make sure everything is correctly installed? The code should work as is - please let me know if you continue getting a core dump error.<|||||>Exactly. That turned out to be the issue. Was a bit confused because installing flax with pip install gives me flax version 0.3.4. Installing from git still gives version 0.3.4, but now it works. Thanks a lot.<|||||>> Hey @Wikidepia, > > Thanks a lot for your error report. I'm not really sure what is causing the error here. Are you using a TPU VMv3-8? It's important to make sure that jax/flax is installed correctly as explained here: https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm Yes I'm using TPU VM, I might need to update transformers or just create new TPU Thanks for your help :D <|||||>Also pinging @avital @marcvanzee here in case they have seen something similar before - I don't think it's necessary to clone the flax repo to make it work on TPU VM no? Think one can just `pip install ...` it<|||||>I see you have updated the guide with additional info. Getting the TPU VMs up and running was a bit back and forth. I will do a fresh install of this in a couple of days, and check if I can reproduce this. Thanks a lot @patrickvonplaten for your response.<|||||>@patrickvonplaten. I have now reinstalled from scratch following the guides, including this https://cloud.google.com/tpu/docs/jax-quickstart-tpu-vm. The error I got was not caused by installing Flax from pip instead from source. My mistake. The core dump was caused by not running "pip install --upgrade clu". Sorry for the confusion. <|||||>No worries, thanks for documenting everything here! I'm sure it'll be helpful for others :-)<|||||>Hey @patrickvonplaten , I'm in the TRC program so I could also test some of scripts for TPU VM (before the projects starts on July, 7).<|||||>@stefan-it this would be great, actually. Could you check whether you can set up the libraries correctly according to: unshuffled_deduplicated_no and then run these steps: https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling#masked-language-modeling where instead of running it on **norwegian (no)** (which would take to long) could you run it on **Alemannic (als)**? So simply replacing all occurrences of `unshuffled_deduplicated_no` with `unshuffled_deduplicated_als` ?<|||||>Hi @patrickvonplaten , I created a virtual environment (`venv') and followed the installation instructions from [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects#how-to-install-relevant-libraries). Here are my obserations: After installing `jax` there's a strange wheel output: ```bash Building wheels for collected packages: jax Building wheel for jax (setup.py) ... 
error ERROR: Command errored out with exit status 1: command: /home/stefan/dev/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-gl33kipc/jax/setup.py'"'"'; __file__='"'"'/tmp/pip-install-gl33kipc/ja x/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-itpzq66t cwd: /tmp/pip-install-gl33kipc/jax/ Complete output (6 lines): usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...] or: setup.py --help [cmd1 cmd2 ...] or: setup.py --help-commands or: setup.py cmd --help error: invalid command 'bdist_wheel' ---------------------------------------- ERROR: Failed building wheel for jax Running setup.py clean for jax Failed to build jax Installing collected packages: six, absl-py, numpy, opt-einsum, scipy, flatbuffers, jaxlib, libtpu-nightly, jax Running setup.py install for jax ... done ``` However, `jax` is installed but: ``` In [1]: import jax /home/stefan/dev/lib/python3.8/site-packages/jax/__init__.py:27: UserWarning: cloud_tpu_init failed: ModuleNotFoundError("No module named 'requests'") This a JAX bug; please report an issue at https://github.com/google/jax/issues _warn(f"cloud_tpu_init failed: {repr(exc)}\n This a JAX bug; please report " ``` So there's something wrong with dependency management from `jax`, I manually installed `requests` and it is working. Then I could run the tokenizer script, which was perfectly working. For the `run_mlm_flax.py` I just found this message: ```bash Traceback (most recent call last): File "./run_mlm_flax.py", line 319, in <module> f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" File "/home/stefan/transformers/src/transformers/file_utils.py", line 1641, in wrapper raise ImportError(f"Method `{func.__name__}` requires PyTorch.") ImportError: Method `device` requires PyTorch. ``` Ok, I did not install PyTorch, and this method is only used in a `logger.info` command, maybe we can write a small logic around it to not use a PyTorch-specific function. I did comment it out, so training could start. For Alemannic, the following error is thrown after first epoch: ```bash [18:40:31] - INFO - absl - Starting the local TPU driver. [18:40:31] - INFO - absl - Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local:// [18:40:31] - INFO - absl - Unable to initialize backend 'gpu': Not found: Could not find registered platform with name: "cuda". Available platform names are: TPU Interpreter Host [18:40:38] - INFO - absl - A polynomial schedule was set with a non-positive `transition_steps` value; this results in a constant schedule with value `init_value`. /home/stefan/dev/lib/python3.8/site-packages/jax/lib/xla_bridge.py:382: UserWarning: jax.host_count has been renamed to jax.process_count. This alias will eventually be removed; please upd ate your code. warnings.warn( /home/stefan/dev/lib/python3.8/site-packages/jax/lib/xla_bridge.py:369: UserWarning: jax.host_id has been renamed to jax.process_index. This alias will eventually be removed; please update your code. warnings.warn( Epoch ... (1/18): 0%| | 0/18 [00:00<?, ?it/s] Training...: 0%| | 0/4 [00:00<?, ?it/s] Training...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [01:13<00:00, 18.38s/it] Epoch... 
(1/18 | Loss: [11.003486 11.003486 11.003486 11.003486 11.003486 11.003486 11.003486 11.003486], Learning Rate: [9.0000685e-07 9.0000685e-07 9.0000685e-07 9.0000685e-07 9.0000685e-07 9.0000685e-07 9.0000685e-07 9.0000685e-07]) Epoch ... (1/18): 0%| | 0/18 [01:15<?, ?it/s] Traceback (most recent call last): File "/home/stefan/dev/lib/python3.8/site-packages/numpy/lib/shape_base.py", line 867, in split len(indices_or_sections) TypeError: object of type 'int' has no len() During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./run_mlm_flax.py", line 622, in <module> eval_batch_idx = generate_batch_splits(eval_samples_idx, eval_batch_size) File "./run_mlm_flax.py", line 268, in generate_batch_splits batch_idx = np.split(samples_idx, sections_split) File "<__array_function__ internals>", line 5, in split File "/home/stefan/dev/lib/python3.8/site-packages/numpy/lib/shape_base.py", line 871, in split if N % sections: ZeroDivisionError: integer division or modulo by zero ``` Maybe the training corpus is just too small. I'm currently training a model for Amharic, and training is running (2 epochs and 1 evaluation phase) :)<|||||>Just a question - maybe it is documented already, but how should we deal with the limited hard disk space? I've seen discussions on the Alpha VM TPU Google channel, where it was suggested to use [gcsfuse](https://github.com/GoogleCloudPlatform/gcsfuse), but I've just seen your thread in the Google channel right now, so let's wait :) <|||||>@stefan-it did you install JAX as follows: ``` pip install "jax[tpu]>=0.2.16" -f https://storage.googleapis.com/jax-releases/libtpu_releases.html ``` or just via installing `transformers` from source? <|||||>Regarding limited disk space - we are currenty working on a solution :-)<|||||>@patrickvonplaten yeah, I was using the `pip install` command as above (so before installing transformers library). It seems that disk can be attached during creating of the VM, but not after creation, unfortunately, example [here](https://cloud.google.com/sdk/gcloud/reference/alpha/compute/tpus/tpu-vm/create#--data-disk).<|||||>I also got this core dump on TPUv3-8 and TPUv2-8 VMs. I'll try some of the proposed fixes tomorrow and post an update. @patil-suraj <|||||>Alemanic is also a super small dataset so if your batch size is too large it might actually be bigger than the number of examples in the eval set<|||||>The alemanic `als` script really is just a dummy dataset and should be run with a small batch size (2 per device) for testing<|||||>Hi @stefan-it, for limited disk space I found a work around which don't need using any gsutil. Since TPU-VM has huge RAM (335gb), I mount part of it as a partition, and set HF_HOME to this mount partition **before** running any tokenizers (which cache the preprocessed dataset into $HF_HOME). For more specific: ```bash mkdir $HOME/hfcache sudo mount -t tmpfs -o size=128000m tmpfs $HOME/hfcache # mount 125Gb RAM as disk export HF_HOME=/home/lethanh/hfcache ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,403
closed
[Deepspeed][initialization] pegasus: unable to load/init the weights
## Environment info - `transformers` version: 4.9.0.dev0 - Platform: Ubuntu - Python version: 3.8 - PyTorch version (GPU?): Y - Using GPU in script?: Y - Using distributed or parallel set-up in script?: Y _- Deepspeed version: deepspeed 0.4.1 (installed with pip)_ @stas00, ## Information I'm trying to fine-tuned pegasus-large model using deepspeed with multi-gpu. It seems that deepspeed is unable to initialize the weights in the beginning. While, I removed deepspeed and weights seem to be properly initialized. I'm hesitating if this is a bug with deepspeed library. Details are given below. The command: ``` deepspeed --num_gpus=8 examples/pytorch/summarization/run_summarization.py \ --model_name_or_path google/pegasus-large \ --do_train \ --do_eval \ --do_predict \ --output_dir /home/code-base/user_space/saved_models/pegasus/reddit-xsum-1024-tuned/ \ --per_device_train_batch_size=2 \ --per_device_eval_batch_size=4 \ --learning_rate 3e-5 \ --weight_decay 0.01 \ --adam_beta2 0.98 \ --num_train_epochs 10 \ --overwrite_output_dir \ --predict_with_generate \ --evaluation_strategy steps --eval_steps 1000 --save_steps 1000 --warmup_steps 10000 \ --text_column document \ --summary_column summary \ --train_file $DS_BASE_DIR_P/train.json \ --validation_file $DS_BASE_DIR_P/validation.json \ --test_file $DS_BASE_DIR_P/test.json \ --deepspeed ds_config.json ``` Error message: ``` ... Traceback (most recent call last): File "examples/pytorch/summarization/run_summarization.py", line 617, in <module> main() File "examples/pytorch/summarization/run_summarization.py", line 355, in main model = AutoModelForSeq2SeqLM.from_pretrained( File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/auto/auto_factory.py", line 395, in from_pretrained return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs) File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/modeling_utils.py", line 1176, in from_pretrained model = cls(config, *model_args, **model_kwargs) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper f(module, *args, **kwargs) File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 1209, in __init__ self.model = PegasusModel(config) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper f(module, *args, **kwargs) File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 1082, in __init__ self.encoder = PegasusEncoder(config, self.shared) File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper f(module, *args, **kwargs) File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 652, in __init__ self.embed_positions = PegasusSinusoidalPositionalEmbedding( File "/opt/conda/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 226, in wrapper f(module, *args, **kwargs) File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 
114, in __init__ self.weight = self._init_weight(self.weight) File "/trainman-mount/trainman-k8s-storage-5ddccee4-32ad-4e32-ba2d-1d06b71f80b0/packages/transformers/src/transformers/models/pegasus/modeling_pegasus.py", line 122, in _init_weight n_pos, dim = out.shape ValueError: not enough values to unpack (expected 2, got 1) Killing subprocess 3351 Killing subprocess 3352 Killing subprocess 3353 Killing subprocess 3354 Killing subprocess 3355 Killing subprocess 3356 Killing subprocess 3357 Killing subprocess 3358 ... ``` - `ds_config.json` is Zero3 copied from the repository. - I checked `self.out`: with `deepspeed` its shape is `[1]` and only contains a 1-d tensor with value 1. However, in single-gpu env, the shape is `[1024, 1024]` which contains floating numbers (i.e., much like embeddings). The problem arises when using: * [ x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ x] my own task or dataset: (give details below) --reddit_tifu_long ## To reproduce Steps to reproduce the behavior: 1. Running the above command with deepspeed.
06-28-2021 19:01:55
06-28-2021 19:01:55
Thank you for the report, @sajastu Could you please adjust the command line in your report so that it uses some small public dataset and not custom files which we don't have? Then I will sort it out. Thank you. <|||||>Sure thing! @stas00 Please let me modify the script, and then test so that it runs flawlessly. I'll give you an update shortly!<|||||>I was able to reproduce the problem with: ``` export BS=16; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 \ examples/pytorch/summarization/run_summarization.py --model_name_or_path \ google/pegasus-cnn_dailymail --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing \ 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 \ --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size \ $BS --predict_with_generate --sortish_sampler --dataset_name cnn_dailymail --dataset_config "3.0.0" \ --val_max_target_length 128 --warmup_steps 50 --max_train_samples 50 --max_eval_samples 50 \ --deepspeed tests/deepspeed/ds_config_zero3.json ``` So nothing else needs to be done by your side. <|||||>so the quick fix is: ``` --- a/src/transformers/models/pegasus/modeling_pegasus.py +++ b/src/transformers/models/pegasus/modeling_pegasus.py @@ -26,6 +26,7 @@ from torch import nn from torch.nn import CrossEntropyLoss from ...activations import ACT2FN +from ...deepspeed import is_deepspeed_zero3_enabled from ...file_utils import ( add_end_docstrings, add_start_docstrings, @@ -109,7 +110,13 @@ class PegasusSinusoidalPositionalEmbedding(nn.Embedding): def __init__(self, num_positions: int, embedding_dim: int, padding_idx: Optional[int] = None): super().__init__(num_positions, embedding_dim) - self.weight = self._init_weight(self.weight) + if is_deepspeed_zero3_enabled(): + import deepspeed + with deepspeed.zero.GatheredParameters(self.weight, modifier_rank=0): + self.weight = self._init_weight(self.weight) + else: + self.weight = self._init_weight(self.weight) + @staticmethod def _init_weight(out: nn.Parameter): ``` Let me know if you can handle the diff. I will work on a normal PR and test. Ideally should think of something that requires less code changes, but it will do the right thing for now.<|||||>@stas00 Thanks. It works perfectly now! <|||||>thank you for validating that it works for you. I'm trying to have this solved on the deepspeed side, so that all our models will work w/o needing to change each one of them separately. so I will keep you posted on the progress.<|||||>If you want to try the fix on the deepspeed side, instead of the workaround on transformers side, you can try this branch: https://github.com/microsoft/DeepSpeed/pull/1202 <|||||>https://github.com/microsoft/DeepSpeed/pull/1202 has been merged, so if you use the master version of deepspeed, you no longer need the workaround I shared with you. I will close this, but if you still encounter any problems please feel free to re-open.
transformers
12,402
closed
[Flax][WIP] added Flax Pegasus Models
# What does this PR do? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patil-suraj @patrickvonplaten
06-28-2021 18:52:05
06-28-2021 18:52:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @bhadreshpsavani thanks for the PR, let us know if you want to continue working on this :) <|||||>Sure @patil-suraj, I want to continue on it but I will need some help from you I think! Shall I contact you slack If I need any guidance from you? I will update this PR today with latest changes!<|||||>Sounds good, happy to help :)<|||||>Given that we have a cookie-cutter example now, it might be worth actually starting from scratch using the cookie-cutter that is based on BART - will probably be more efficient<|||||>Hi @patrickvonplaten I will start from scratch! That would be better. I create another PR, that will be fine right?<|||||>This one is being handled by above mentioned PR
transformers
12,401
closed
[Deepspeed] match the trainer log level
This PR sets the trainer log level for Deepspeed, so the whole application runs on the same log level. Once https://github.com/microsoft/DeepSpeed/pull/1190 is merged running the trainer with `--log_level error --log_level_replica error` with deepspeed is absolutely silent, just gives you the training results. well, minus the lame `tensorflow` info logs who refuses to be respectful of the ecosphere. and pt-1.9.0's distributed + launch which too has the default log level wrong, but it will be fixed in 1.9.1 @sgugger
06-28-2021 18:29:27
06-28-2021 18:29:27
transformers
12,400
closed
[WIP] train tokenizer like test
Suggestion to test the special tokens mapping into the common tests
06-28-2021 16:53:10
06-28-2021 16:53:10
transformers
12,399
closed
Reference postprocess_qa_predictions score method
To the best of my knowledge literature on question answering computes the probability of a span by first computing the probability for a token to be the start token or the end token independently and then multiplying those probabilities. In contrast, the [postprocess_qa_predictions](https://github.com/huggingface/transformers/blob/57461ac0b4e4f7349c2437fcf8d4115014d6ceda/examples/pytorch/question-answering/utils_qa.py#L31) function computes the probability for an answer by first summing both logits, ranking them, and applying the softmax over the top n_best scores. Is there any reference in the literature supporting this?
06-28-2021 15:57:05
06-28-2021 15:57:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
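One way to see why the sum-of-logits ranking is not in conflict with the "product of independent probabilities" formulation: softmax only divides `exp(logit)` by a normalizer that is identical for every candidate span, so ranking by `start_logit + end_logit` selects exactly the same top-n spans as ranking by `p_start * p_end`, and the final softmax over the n_best sums is just a renormalization within that set. A small numerical check with made-up logits:

```python
import numpy as np

start_logits = np.array([2.0, 0.5, -1.0])
end_logits = np.array([1.0, 3.0, 0.0])

p_start = np.exp(start_logits) / np.exp(start_logits).sum()
p_end = np.exp(end_logits) / np.exp(end_logits).sum()
prob_product = np.outer(p_start, p_end)                   # literature-style span score
logit_sum = start_logits[:, None] + end_logits[None, :]   # utils_qa.py-style span score

# exp(start + end) / (Z_start * Z_end) == p_start * p_end, and the normalizer is the
# same for every (i, j), so both scores induce the same ordering of candidate spans.
assert (np.argsort(prob_product, axis=None) == np.argsort(logit_sum, axis=None)).all()
```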
transformers
12,398
closed
[Flax community event] Add more description to readme
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds information and tips for the team work. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-28-2021 15:17:02
06-28-2021 15:17:02
Thanks a lot for the feedback!
transformers
12,397
closed
[RoFormer] Fix some issues
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> - add RoFormerTokenizerFast into AutoTokenizer - fix typo in roformer docs - Fix #12000 and make onnx export happy - update RoFormerConfig embedding_size - use jieba not rjieba and then we can enjoy "Hosted inference API in huggingface.co" - fix #12244 and make test_alignement passed - update roformer ARCHIVE_MAP ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patil-suraj <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-28-2021 14:52:46
06-28-2021 14:52:46
Ok thanks, I just will let Lysandre review the `try:except` block, that's the only thing remaining.<|||||>> This looks good, but why did you change `rjieba` to `jieba`? Is the latter better? @LysandreJik I want to use `Hosted inference API` in https://huggingface.co. ![image](https://user-images.githubusercontent.com/50394665/124454105-ac577380-ddba-11eb-9d0c-bbbfe0a1a700.png) I found `CpmTokenizer` and `XLMTokenizer` use jieba. https://github.com/huggingface/transformers/blob/fb41f9f50c37aba0eced055323ba17e4203f7d57/src/transformers/models/cpm/tokenization_cpm.py#L31 https://github.com/huggingface/transformers/blob/b24ead87e1be6bce17e4ec5c953b6d028e4b3af7/src/transformers/models/xlm/tokenization_xlm.py#L530<|||||>cc @Narsil <|||||>@JunnYu Should work now: https://huggingface.co/junnyu/roformer_chinese_base?text=%E7%94%9F%E6%B4%BB%E7%9A%84%E7%9C%9F%E8%B0%9B%E6%98%AF+%5BMASK%5D%E3%80%82 We do update the API regularly with dependencies, `rjieba`was added pretty recently. Cheers ! <|||||>@Narsil thank you!<|||||>@LysandreJik (1) Now I use `rust jieba` . (2) This pr https://github.com/huggingface/transformers/pull/12361 add `test_training_new_tokenizer` and `test_training_new_tokenizer_with_special_tokens_change`. I found `test_tokenization_roformer.py` can't pass these two tests. Because `roformer tokenizer` has a custom PreTokenizer. https://github.com/huggingface/transformers/blob/e2c1dd09667af5a535689c371b4658c36681131f/src/transformers/convert_slow_tokenizer.py#L318
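Assuming this PR's `AutoTokenizer` mapping and the `rjieba` dependency are in place, a minimal end-to-end check of the hosted checkpoint discussed above could look like the following (predictions will vary):

```python
# pip install rjieba
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="junnyu/roformer_chinese_base")
print(fill_mask("生活的真谛是[MASK]。"))
```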
transformers
12,396
closed
getting error with BertForMaskedLM
config = BertConfig.from_pretrained("bert-base-uncased") self.bert = BertModel.from_pretrained("bert-base-uncased") sequence_output = self.bert( inputs_embeds=input_embeddings, position_ids=position_ids, token_type_ids=token_type_ids, attention_mask=attention_mask, )[0][:, max_ids:, :] self.x1 = BertForMaskedLM(config) x2 = self.x1(sequence_output) While running above code , getting below error at last line (x2 = self.x1(sequence_output). Unable to relate the error with code sequence , and why is this error coming. Is there any issue with BertForMaskedLM RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "<ipython-input-9-d980e37a9621>", line 90, in forward x_scores = self.x_head(sequence_output).to(self.device) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1185, in forward return_dict=return_dict, File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 862, in forward input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 198, in forward inputs_embeds = self.word_embeddings(input_ids) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 160, in forward self.norm_type, self.scale_grad_by_freq, self.sparse) File "/home/gems/pytorch-env_py3.7/lib/python3.7/site-packages/torch/nn/functional.py", line 2043, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)
06-28-2021 14:46:35
06-28-2021 14:46:35
Hello, do you have a reproducible code example that showcases what's in your variables such as `input_embeddings`? Thank you!<|||||>Did not mention parameters like input_embeddings, however input looks like below: tensor([[[-0.0129, -0.0099, 0.2570, ..., -0.1841, 0.3609, 0.2851], [-0.1268, 0.0133, 0.2927, ..., -0.1167, 0.4605, -0.1288], [-0.9993, 0.4705, 0.5119, ..., -0.9274, 0.5529, -0.5890], ..., [-0.4415, 0.0786, 0.3132, ..., -0.2137, -0.0387, 0.2496], [ 0.2243, 0.2535, 0.2158, ..., -0.0974, -0.1830, 0.1292], [ 0.0239, -0.2080, 0.4332, ..., -0.2069, 0.0078, 0.2262]], [[-0.2665, 0.1647, 0.4427, ..., 0.0847, 0.4180, 0.7866], [ 0.1999, -0.3408, 0.4952, ..., -0.3468, 0.4271, -0.5220], [-0.6198, 0.1422, 0.5547, ..., -0.1745, -0.0165, -0.4338], ..., [-0.0902, 0.1044, 0.2038, ..., -0.0335, 0.4127, 0.2904], [-0.0747, 0.0279, 0.2409, ..., -0.0989, 0.0915, 0.0109], [ 0.2910, 0.1765, 0.3457, ..., 0.0559, 0.0067, -0.0191]], [[-0.0183, -0.0937, 0.6092, ..., -0.4594, 0.2707, 0.1108], [ 0.5192, -0.0532, 0.4865, ..., 0.1216, 0.0658, 0.5460], [-0.0984, -0.1430, 0.3035, ..., -0.0563, 0.3445, -0.2272], ..., [ 0.1298, -0.1624, 0.1905, ..., 0.0979, -0.0197, -0.3143], [-0.3790, 0.0682, 0.0601, ..., 0.0266, -0.1095, -0.2442], [-0.0352, -0.0526, 0.1690, ..., 0.0723, 0.1064, -0.2718]], ..., [[-0.3095, -0.3042, 0.2681, ..., -0.1081, -0.0650, 0.3146], [ 0.3054, 0.0550, 0.1716, ..., -0.1492, -0.0201, -0.1543], [-0.4458, 0.0661, 0.2862, ..., -0.2693, 0.3367, 0.0015], ..., [-0.3311, -0.0961, 0.2018, ..., 0.0840, -0.1578, 0.3397], [-0.0362, 0.0713, 0.4921, ..., 0.0881, 0.0501, -0.1048], [-0.0793, -0.1054, 0.1489, ..., -0.0762, -0.0039, -0.0471]], [[ 0.2076, -0.4345, 0.0533, ..., -0.0296, -0.1365, -0.1304], [ 0.5159, 0.3230, 0.6001, ..., -0.4266, -0.1751, -0.6830], [ 0.2633, -0.0747, 0.6887, ..., -0.5294, 0.4353, -0.3712], ..., [ 0.3373, 0.2944, 0.3050, ..., -0.0972, -0.1798, -0.2998], [-0.3282, 0.1189, 0.3962, ..., -0.2579, -0.2661, -0.0275], [-0.0706, 0.0654, 0.6177, ..., -0.1825, 0.0214, -0.1656]], [[ 0.0669, -0.3896, -0.0204, ..., -0.2962, -0.3721, -0.0138], [ 0.4778, 0.1336, 0.5360, ..., -0.0931, -0.3350, -0.3153], [ 0.3600, -0.2580, 0.1261, ..., 0.0296, -0.0979, 0.1038], ..., [ 0.0821, 0.0034, 0.2967, ..., -0.1719, -0.2646, -0.1868], [-0.0868, 0.4321, 0.0466, ..., 0.2056, -0.4406, -0.1953], [-0.1131, -0.1266, 0.1438, ..., -0.3065, -0.2185, 0.1069]]], device='cuda:1', grad_fn=<SliceBackward>) Will check how to put reproducible code example<|||||>weirdly it seems that your input embeddings are treated as input IDs, which should not happen. Can you let me know the result of `transforemrs-cli env` in your environment?<|||||>I get below result for transformers-cli env : Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.1.1 - Platform: Linux-4.15.0-107-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.4 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> <|||||>Ah, does you error come from `x2 = self.x1(sequence_output)` ? What are you trying to do here, are you passing the BERT output as input IDs to a BERT Masked LM model?<|||||>Changed this line x2 = self.x1(sequence_output) to x2 = self.x1(inputs_embeds=sequence_output) as suggested above. 
It works now , Thankyou<|||||>Can I change this code ```py self.bert = BertModel.from_pretrained("bert-base-uncased") sequence_output = self.bert( inputs_embeds=input_embeddings, position_ids=position_ids, token_type_ids=token_type_ids, attention_mask=attention_mask, )[0][:, max_ids:, :] ``` as below: ```py self.bert = BertModel.from_pretrained("bert-base-uncased") sequence_output = self.bert( inputs_embeds= position_embeddings, #position_ids=position_ids, token_type_ids=token_type_ids, attention_mask=attention_mask, )[0][:, max_ids:, :] ``` ie assign position_embeddings to inputs_embeds or like this `input_embeds = input_embeddings + position_embeddings`<|||||>No, the inputs embedding are only for the input IDs. Position embeddings will be added to that variable in the embedding layer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
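The root cause in this thread generalizes: `BertForMaskedLM.forward` takes `input_ids` as its first positional argument, so a float tensor passed positionally is routed into the embedding lookup and triggers the Long/Int dtype error. A minimal sketch of the failure and the fix (random tensors, default config):

```python
import torch
from transformers import BertConfig, BertForMaskedLM

config = BertConfig()
mlm = BertForMaskedLM(config)

hidden_states = torch.randn(2, 10, config.hidden_size)  # float features, not token ids

# mlm(hidden_states)                        # fails: treated as `input_ids` for the embedding lookup
outputs = mlm(inputs_embeds=hidden_states)  # works: bypasses the id -> embedding lookup
print(outputs.logits.shape)                 # (2, 10, vocab_size)
```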
transformers
12,395
closed
Minor fixes in original RAG training script
# What does this PR do? 1. Applies a minor fix to the original RAG fine-tuning script so it can train on distributed GPU architectures (multiple nodes). 2. Corrects a typo in callbacks_rag.py Who can review? @lhoestq @patrickvonplaten
06-28-2021 14:18:04
06-28-2021 14:18:04
transformers
12,394
closed
Remove the need for `einsum` in Albert's attention computation
This change makes the model easier to optimize for export backends such as ONNX and/or TensorRT.
06-28-2021 13:40:29
06-28-2021 13:40:29
We did some with Lysandre, but I can rerun some more checks before we merge, just to be sure we are not breaking anything 👍🏻
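For anyone reviewing the numerical side of this change: an einsum of the `bfnd,ndh->bfh` form (contracting the per-head dimensions, as in Albert's attention output projection) is just a matmul over the flattened head dimensions, so the replacement should be equivalent up to float accumulation order. A quick check with arbitrary sizes:

```python
import torch

b, f, n, d, h = 2, 5, 4, 16, 64
context = torch.randn(b, f, n, d)  # per-head context vectors
w = torch.randn(n, d, h)           # output projection weight

einsum_out = torch.einsum("bfnd,ndh->bfh", context, w)
matmul_out = context.reshape(b, f, n * d) @ w.reshape(n * d, h)

assert torch.allclose(einsum_out, matmul_out, atol=1e-5)
```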
transformers
12,393
closed
[example/flax] add summarization readme
# What does this PR do? This PR adds readme with instructions for the summarization example.
06-28-2021 11:51:42
06-28-2021 11:51:42
Looks good, maybe also add a `requirements.txt` file?
transformers
12,392
open
GLM Model implementation [WIP]
#11377 Started implementation of GLM model. @patil-suraj ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
06-28-2021 11:03:59
06-28-2021 11:03:59
I can review this! Ping me when you are ready.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @spatil6 :) <|||||>> Gently pinging @spatil6 :) Yeah, I'm on it. I will have an update by next week.
transformers
12,391
closed
[Flax] Adapt flax examples to include `push_to_hub`
# What does this PR do? This PR adapts all Flax examples to automatically push trained checkpoints to the hub
06-28-2021 11:01:43
06-28-2021 11:01:43
@patil-suraj, I think we should also add a `README.md` for the summarization example (this will probs be used a lot during the sprint).
transformers
12,390
closed
`fill-mask` pipeline provides `<mask>` token among predictions
## Environment info - `transformers` version: 4.8.1 - Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: see below - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik @sgugger ## Information Model I am using (Bert, XLNet ...): RoBERTa The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce ``` from transformers import RobertaTokenizerFast, RobertaForMaskedLM, pipeline tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base") model = RobertaForMaskedLM.from_pretrained("roberta-base") fill_mask = pipeline( "fill-mask", model=model, tokenizer=tokenizer, ) resp = fill_mask("My <mask> is Roberto.", top_k=len(tokenizer.get_vocab())) [x for x in resp if x['token'] == tokenizer.mask_token_id] ``` ## Expected behavior Because the job of the `fill-mask` pipeline is to fill the `<mask>` special token, the expectation is that `<mask>` itself is not part of the possible predictions.
06-28-2021 09:36:10
06-28-2021 09:36:10
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
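Until/unless the pipeline filters special tokens itself, a user-side workaround is to drop any prediction whose token id is a special token. A sketch reusing the setup from the issue (smaller `top_k` for brevity):

```python
from transformers import RobertaTokenizerFast, RobertaForMaskedLM, pipeline

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

resp = fill_mask("My <mask> is Roberto.", top_k=50)
# `token` in each prediction dict is the token id; drop <mask>, <s>, </s>, <pad>, <unk>.
filtered = [x for x in resp if x["token"] not in tokenizer.all_special_ids]
```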
transformers
12,389
closed
GPT2-large for sequence classification default num_labels differs from the default for GPT2-small and GPT2-medium
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0 - Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help Models: - gpt2: @patrickvonplaten, @LysandreJik ## Information When creating an `AutoModelForSequenceClassification` using `from_pretrained` if you pass in `gpt2` as the model name then you receive a classifier with two targets (`model.config.num_labels` = 2). If you instead pass in `gpt2-large` as the model name then you receive a regressor with one target (`model.config.num_labels` = 1). Model I am using: GPT-2 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: The Stanford Sentiment Treebank * [ ] my own task or dataset: (give details below) (I found this issue when working on sst2 but it is not particularly relevant to the issue). ## To reproduce Steps to reproduce the behavior: 1. Run this code: ```python from transformers import AutoModelForSequenceClassification gpt2_small_features = AutoModelForSequenceClassification.from_pretrained("gpt2").score.out_features gpt2_large_features = AutoModelForSequenceClassification.from_pretrained("gpt2-large").score.out_features print([gpt2_small_features, gpt2_large_features]) ``` This prints `[2, 1]`. ## Expected behavior `num_labels` should have a consistent default across different versions of gpt2. The source code for PretrainedConfig suggests that this should be 2.
06-28-2021 09:19:32
06-28-2021 09:19:32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-config.json this still has `_num_labels` of 1 where https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json lacks the entry and so inherits the default value.<|||||>I would argue that you should always manually specify the number of labels that you wish for when loading a pretrained model with no sequence classification head - the `gpt2-large` configuration shouldn't have a default number of labels set to 1, however.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-large-config.json this still has `_num_labels` of 1 where https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-config.json lacks the entry and so inherits the default value.<|||||>This is fixed for both `gpt2-large` and `gpt2-xl`
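Independently of fixing the hosted config, the inconsistency can be sidestepped by always passing `num_labels` explicitly when loading a model whose classification head is newly initialized, e.g.:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("gpt2-large", num_labels=2)
assert model.score.out_features == 2  # no longer depends on `_num_labels` in the remote config
```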
transformers
12,388
closed
Onnx export v2 fixes
Tiny fixes. To see just the fixes, check the first commit's changes. The second commit addresses code quality issues. Remaining TODOs before merging: - Add a test for all supported architectures - Write a small Usage doc
06-28-2021 07:26:07
06-28-2021 07:26:07
transformers
12,387
closed
Cannot correctly fine-tune Bert for generation
# 📚 Migration ## Information <!-- Important information --> Model I am using Bert for generation (a Seq2Seq style): The problem arises when using: * The official example scripts from : [transformers/examples/legacy/seq2seq/finetune_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/finetune_trainer.py) ``` config = AutoConfig.from_pretrained(model_args.config_name) tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_args.model_name_or_path) ``` The above is the official code to fine-tune a text generation model such as BART (i.e., just specify model_name_or_path as facebook/BART-base). **I am trying to use the BERTGEN instead of using BART of T5.** So, I modified the above code into the following one. * My own modified scripts: ``` config = BertConfig.from_pretrained("bert-large-uncased") tokenizer = BertTokenizer.from_pretrained("bert-large-uncased") encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102) decoder = BertGenerationDecoder.from_pretrained("bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102) model = EncoderDecoderModel(encoder=encoder, decoder=decoder) ``` The tasks I am working on is: * An official task: Gigaword, CNN/DailyMail ## Details Since the examples in the github repo do not contain `Bert for generation`, so I adopt the code from the documents [here](https://huggingface.co/transformers/model_doc/bertgeneration.html). The only modification has been shown as aboves. However, the model performance cannot reach the reported performance in their [paper](https://arxiv.org/pdf/1907.12461.pdf), even has been left by a large margin. I felt the model has not been correctly trained. **I am wondering whether there is an example code for using Bert for generation. Thank you very much for your help in advance.** ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.7.0 - Python version: 3.6 - PyTorch version (GPU?): 1.8 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): 4.7.0
06-28-2021 04:51:41
06-28-2021 04:51:41
Hi Stas, Suraj, Sylvain (@stas00 @patil-suraj @sgugger), Would you please to give some helps on using the `finetune_trainer.py` [file](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/finetune_trainer.py) for BertGen? Thank you very much! <|||||>It looks like @patrickvonplaten ported this model, https://huggingface.co/transformers/model_doc/bertgeneration.html so he is probably the best person to ask. <|||||>Yes, I followed that document and changed the example script [transformers/examples/legacy/seq2seq/finetune_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/legacy/seq2seq/finetune_trainer.py). However, it did not work.<|||||>@patrickvonplaten Hi Patrick, do you have any script for training BertGen model?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey@wyu97, Could you try to follow this blog post and the accompanying google colab to fine-tune a Beer for generation Seq2Seq model? https://huggingface.co/blog/warm-starting-encoder-decoder#warm-starting-encoder-decoder-models-with-%F0%9F%A4%97transformers-practice<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,386
open
A model or a config like 'transformer_iwslt_de_en' for machine translation
# 🌟 New model addition Does huggingface have some models like `transformer_iwslt_de_en` or `transformer_wmt_en_de` in fairseq for machine translation? I plan to write a model for machine translation on huggingface. It would be great to be able to compare directly with the baseline model on huggingface. @patil-suraj
06-28-2021 04:19:32
06-28-2021 04:19:32
I am also eager for a transformer base model to train from scratch with HuggingFace <|||||>In huggingface transformers it's called [FSMT](https://huggingface.co/docs/transformers/model_doc/fsmt), short for FairSeq Machine Translation.<|||||>I have the same need, looking for a transformer base model. Will try FSMT. Thanks!
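For anyone looking for a ready-made Transformer-base translation baseline in this library, the FSMT port mentioned above can be used directly. A minimal sketch follows; the checkpoint name is assumed to be one of the published facebook/wmt19-* models, so swap in the language pair you actually need.

```python
from transformers import FSMTForConditionalGeneration, FSMTTokenizer

mname = "facebook/wmt19-en-de"  # assumed pair; de-en, en-ru, ru-en also exist on the Hub
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)

inputs = tokenizer("Machine learning is great, isn't it?", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```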
transformers
12,385
closed
Rework LongFormer to make it compatible with ONNX
- [x] Remove the implicit `bool` -> `int` conversion when padding attention_mask with the `False` value.
- [x] Remove calls to `einsum` where `matmul` + `transpose` can be used instead (makes optimizations easier for ONNX)
- [ ] Rework the diagonal matrix computation to avoid ScatterND with negative steps / indices
- [ ] Use `torch.div(a, b, rounding_mode='trunc')` instead of `floor_divide`
- [ ] Check the [function](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py#L783) that declares a `torch.Tensor` return type but doesn't return anything
06-27-2021 22:41:29
06-27-2021 22:41:29
I'm fine with this rework as long as all the slow tests are passing :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey github actions, I need to find some time to continue my work here ! :D
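To make the first two checklist items concrete, here is an illustrative sketch (not the actual Longformer code; shapes and values are placeholders, and the `rounding_mode` argument requires PyTorch >= 1.8) of the two rewrites that tend to export more cleanly to ONNX:

```python
import torch

q = torch.randn(2, 8, 64)
k = torch.randn(2, 8, 64)

# einsum version vs. matmul + transpose version (numerically identical)
scores_einsum = torch.einsum("bqd,bkd->bqk", q, k)
scores_matmul = torch.matmul(q, k.transpose(1, 2))
assert torch.allclose(scores_einsum, scores_matmul)

# floor_divide vs. the explicit rounding mode (same result for non-negative operands)
idx = torch.tensor([7, 9, 12])
chunks_old = torch.floor_divide(idx, 4)
chunks_new = torch.div(idx, 4, rounding_mode="trunc")
assert torch.equal(chunks_old, chunks_new)
```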
transformers
12,384
open
Request: New LM Adapted checkpoints for T5
# 🌟 New LM Adapted checkpoints for T5 ## Description Google released a new set of checkpoints for T5 v1.1. here: https://github.com/google-research/text-to-text-transfer-transformer/blob/master/released_checkpoints.md#t511 Especially interesting for most people will be the checkpoints below, as finetuning T5 with a prompt or using T5 for conditional text generation is quite common and these checkpoints promise better performance. The default T5 v1.1 checkpoints have never seen sequences without sentinel tokens. ### LM-Adapted: t5.1.1.lm100k (copied from the readme) These "LM adapted" models are initialized from t5.1.1 (above) and train for an additional 100K steps on the LM objective discussed in the [T5 paper][paper]. This adaptation improves the ability of the model to be used for [prompt tuning](https://arxiv.org/abs/2104.08691). * **t5.1.1.lm100k.small** (~77 million parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.small](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.small/) * **t5.1.1.lm100k.base** (~250 million parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.base](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.base/) * **t5.1.1.lm100k.large** (~800 million parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.large](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.large/) * **t5.1.1.lm100k.xl** (~3 billion parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.xl](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.xl/) * **t5.1.1.lm100k.xxl** (~11 billion parameters): [gs://t5-data/pretrained_models/t5.1.1.lm100k.xxl](https://console.cloud.google.com/storage/browser/t5-data/pretrained_models/t5.1.1.lm100k.xxl/) ## Open source status * [x] the model implementation is available: t5 v1.1. with geglu * [x] the model weights are available: see links above * [x] who are the authors: Brian Lester, Rami Al-Rfou, Noah Constant
06-27-2021 14:16:37
06-27-2021 14:16:37
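A loading sketch for whoever picks this up: once the LM-adapted weights are converted to the transformers format (or mirrored on the Hub; the `google/t5-base-lm-adapt` identifier below is an assumption, so check the Hub for the exact name), they should behave like any other T5 v1.1 checkpoint.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

name = "google/t5-base-lm-adapt"  # assumed Hub id for t5.1.1.lm100k.base
tokenizer = T5Tokenizer.from_pretrained(name)
model = T5ForConditionalGeneration.from_pretrained(name)

# Thanks to the LM adaptation, a plain prefix prompt (no sentinel tokens) is reasonable:
inputs = tokenizer("Translate English to German: The house is wonderful.", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```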
transformers
12,383
closed
Size of tensors not matching even though using tweets (all same length)
# What I'm doing Trying to estimate the emotion of tweets using https://huggingface.co/cardiffnlp/twitter-roberta-base-emotion # The error `RuntimeError: The expanded size of the tensor (601) must match the existing size (514) at non-singleton dimension 1. Target sizes: [1, 601]. Tensor sizes: [1, 514]` Does anyone know what the problem might be? I tried `truncating=True` as well # Code and data to reproduce ### View only https://deepnote.com/project/Code-to-reproduce-error-RuntimeError-The-expanded-size-of-the-tensor-601-must-match-the-existing-size-514-at-non-singleton-dimension-1-ZfBSsqbKQ7-XWrir593tKQ/%2Fnotebook.ipynb ### Interactive (can run and/or make changes) https://deepnote.com/project/Interactive-Code-to-reproduce-error-RuntimeError-The-expanded-size-of-the-tensor-601-must-match-the-existing-size-514-at-non-singleton-dimension-1-Duplicate-qJTy9jxRTPWhXhwytQjU4Q/%2Fnotebook.ipynb ### Environment "Deepnote projects run in containers on Debian Buster with Python 3.7"
06-27-2021 14:11:15
06-27-2021 14:11:15
Can you try changing this line: ``` encoded_input = tokenizer(text, return_tensors='pt') ``` to ``` encoded_input = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=512) ``` ? It will pad/truncate all sequences to 512 tokens. Feel free to adapt to the maximum size within the batch or to a smaller max length (512 is the maximum sequence length for that model)<|||||>That worked! Thank you @LysandreJik, much appreciated 😊 Btw how did you get that the max sequence length for that model is 512? I thought it was 514 based on the error message. I checked some tweets that were close to the character limit and their `num_tokens` (in encoded_input) was ~60. This is far less than the error message. Do you know why/how there could be such a difference? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
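On the follow-up question about where the 512 comes from: the limit is stored on the tokenizer/config rather than in the error message. A quick sketch to check it (note that some checkpoints leave `model_max_length` unset, in which case it shows up as a huge sentinel value):

```python
from transformers import AutoConfig, AutoTokenizer

name = "cardiffnlp/twitter-roberta-base-emotion"
tokenizer = AutoTokenizer.from_pretrained(name)
config = AutoConfig.from_pretrained(name)

print(tokenizer.model_max_length)       # usable max length, if the checkpoint sets it
print(config.max_position_embeddings)   # 514 for RoBERTa: 512 usable positions + 2 reserved offsets
```

As for the 601 in the original error, it just means one particular input tokenized to 601 tokens (for instance several tweets concatenated into a single string), not that a single short tweet produced that many.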
transformers
12,382
closed
About an error when retraining an XLM model
When I train an XLM model with the following code, an error occurs saying that 'label' is an unexpected argument for forward. Is there any way to solve it?

    # -*- coding: UTF-8 -*-
    from transformers import XLMTokenizer, XLMModel, Trainer
    from datasets import load_dataset, Dataset
    from transformers import LineByLineTextDataset, TrainingArguments
    from transformers.data.data_collator import DataCollatorForLanguageModeling

    model = XLMModel.from_pretrained('xlm-mlm-tlm-xnli15-1024')
    tokenizer = XLMTokenizer.from_pretrained('xlm-mlm-tlm-xnli15-1024')

    # train_datasets = load_dataset('text', data_files={'train': './tmp/xxx.train.txt', 'valitation': './tmp/all_val_data.txt'})
    # Map the freshly loaded datasets through the tokenizer to get input_ids, i.e. what is actually fed to the model.
    # def tokenize_function(examples):
    #     # Remove empty lines
    #     examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    #     return tokenizer(
    #         examples["text"],
    #         padding="max_length",  # pad
    #         truncation=True,  # truncate
    #         max_length=256,  # set the sentence length
    #         # We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
    #         # receives the `special_tokens_mask`.
    #         return_special_tokens_mask=True,
    #     )
    # Get the training and validation sets
    # train_dataset = tokenized_datasets["train"]
    # eval_dataset = tokenized_datasets["validation"]

    model.resize_token_embeddings(len(tokenizer))
    train_dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path='', block_size=512)
    datacollector = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
    # train_method = Trainer(model=model, data_collator=datacollector, train_dataset=train_dataset)
    training_args = TrainingArguments(output_dir='./outputs/',
                                      overwrite_output_dir=True,
                                      num_train_epochs=20,
                                      learning_rate=6e-5,
                                      per_device_train_batch_size=128,
                                      save_total_limit=10)  # save_steps=10000
    trainer = Trainer(
        model=model,
        args=training_args,
        data_collator=datacollector,
        train_dataset=train_dataset)
    trainer.train()
    trainer.save_model('./outputs/')
06-27-2021 08:45:09
06-27-2021 08:45:09
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
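The `labels` error above is most likely because the bare `XLMModel` has no language-modeling head, so its `forward()` does not accept the `labels` that `DataCollatorForLanguageModeling` produces. A hedged sketch of the probable fix (only the model class changes; the rest of the script above stays as is):

```python
from transformers import XLMTokenizer, XLMWithLMHeadModel

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
# The LM-head variant accepts `labels` and computes the masked-LM loss for the Trainer.
model = XLMWithLMHeadModel.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model.resize_token_embeddings(len(tokenizer))
# ... then build train_dataset, the data collator and TrainingArguments exactly as in
# the snippet above and pass this `model` to Trainer.
```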
transformers
12,381
open
A fast tokenizer for BertJapaneseTokenizer
We would like a fast tokenizer for BertJapaneseTokenizer. This is because the current token classification model (run_ner.py) requires using the fast tokenizer but BertJapaneseTokenizer does not have it. Because of this, we cannot do token classification for Japanese using cl-tohoku's BERT models.
06-27-2021 08:29:37
06-27-2021 08:29:37
Hi @dkawahara I've just written tentative `BertJapaneseTokenizerFast`: ``` from transformers import BertJapaneseTokenizer class BertJapaneseTokenizerFast(BertJapaneseTokenizer): def __call__(self,text,text_pair=None,return_offsets_mapping=False,**kwargs): v=super().__call__(text=text,text_pair=text_pair,return_offsets_mapping=False,**kwargs) if return_offsets_mapping: import tokenizations if type(text)==str: z=zip([v["input_ids"]],[text],[text_pair] if text_pair else [""]) else: z=zip(v["input_ids"],text,text_pair if text_pair else [""]*len(text)) w=[] for a,b,c in z: a2b,b2a=tokenizations.get_alignments(self.convert_ids_to_tokens(a),b+c) x=[] for i,t in enumerate(a2b): if t==[]: s=(0,0) if a[i]==self.unk_token_id: j=[[-1]]+[t for t in a2b[0:i] if t>[]] k=[t for t in a2b[i+1:] if t>[]]+[[len(b+c)]] s=(j[-1][-1]+1,k[0][0]) elif t[-1]<len(b): s=(t[0],t[-1]+1) else: s=(t[0]-len(b),t[-1]-len(b)+1) x.append(s) w.append(list(x)) v["offset_mapping"]=w[0] if type(text)==str else w return v ``` But it requires [pytokenizations](https://github.com/explosion/tokenizations) module, and in fact it's not fast. See detail in [my diary](https://srad.jp/~yasuoka/journal/651897/) written in Japanese, and in next I will try to implement `BatchEncoding.encodings`<|||||>@KoichiYasuoka Thank you very much for providing this work around. Without "return_offsets_mapping" option, it was always a pain in Japanese token classification tasks. I would like to point out a little bug when processing text containing consecutive [UNK] tokens. e.g., ``` text = "𠮟られても平気なの☺ ☺☺" tokenizer=BertJapaneseTokenizerFast.from_pretrained("cl-tohoku/bert-base-japanese") d=tokenizer(text,return_offsets_mapping=True) for offset in d['offset_mapping']: print((offset[0], offset[1]), text[offset[0]:offset[1]]) ``` would print out results like below ``` (0, 0) (0, 1) 𠮟 (1, 3) られ (3, 4) て (4, 5) も (5, 6) 平 (6, 7) 気 (7, 8) な (8, 9) の (9, 13) ☺ ☺☺ (9, 13) ☺ ☺☺ (0, 0) ``` I still can't figure out any solutions to improve the mapping approach for each [UNK] token. I am just wondering if you have any ideas on this issue. Many thanks. <|||||>Hi @Ezekiel25c17 I've just written `BertMecabTokenizerFast`: ``` from transformers import BertTokenizerFast from transformers.models.bert_japanese.tokenization_bert_japanese import MecabTokenizer class MecabPreTokenizer(MecabTokenizer): def mecab_split(self,i,normalized_string): t=str(normalized_string) z=[] e=0 for c in self.tokenize(t): s=t.find(c,e) if s<0: z.append((0,0)) else: e=s+len(c) z.append((s,e)) return [normalized_string[s:e] for s,e in z if e>0] def pre_tokenize(self,pretok): pretok.split(self.mecab_split) class BertMecabTokenizerFast(BertTokenizerFast): def __init__(self,vocab_file,**kwargs): from tokenizers.pre_tokenizers import PreTokenizer,BertPreTokenizer,Sequence super().__init__(vocab_file=vocab_file,**kwargs) d=kwargs["mecab_kwargs"] if "mecab_kwargs" in kwargs else {"mecab_dic":"ipadic"} self._tokenizer.pre_tokenizer=Sequence([PreTokenizer.custom(MecabPreTokenizer(**d)),BertPreTokenizer()]) ``` derived from `MecabPreTokenizer` of [deberta-base-japanese-juman-ud-goeswith](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-juman-ud-goeswith/blob/main/ud.py). Does it work well?<|||||>Hi @KoichiYasuoka Thank you for responding so quickly. 
This worked as a charm!<|||||>I've re-written `BertMecabTokenizer` to disable `do_lower_case` and `tokenize_chinese_chars`: ``` from transformers import BertTokenizerFast from transformers.models.bert_japanese.tokenization_bert_japanese import MecabTokenizer class MecabPreTokenizer(MecabTokenizer): def mecab_split(self,i,normalized_string): t=str(normalized_string) e=0 z=[] for c in self.tokenize(t): s=t.find(c,e) e=e if s<0 else s+len(c) z.append((0,0) if s<0 else (s,e)) return [normalized_string[s:e] for s,e in z if e>0] def pre_tokenize(self,pretok): pretok.split(self.mecab_split) class BertMecabTokenizerFast(BertTokenizerFast): def __init__(self,vocab_file,do_lower_case=False,tokenize_chinese_chars=False,**kwargs): from tokenizers.pre_tokenizers import PreTokenizer,BertPreTokenizer,Sequence super().__init__(vocab_file=vocab_file,do_lower_case=do_lower_case,tokenize_chinese_chars=tokenize_chinese_chars,**kwargs) d=kwargs["mecab_kwargs"] if "mecab_kwargs" in kwargs else {"mecab_dic":"ipadic"} self._tokenizer.pre_tokenizer=Sequence([PreTokenizer.custom(MecabPreTokenizer(**d)),BertPreTokenizer()]) ``` and now `BertMecabTokenizerFast` tokenizes "平気" into "平" and "##気". See detail in [my diary](https://srad.jp/~yasuoka/journal/660181/) written in Japanese.
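For context on why `run_ner.py` insists on a fast tokenizer in the first place: it aligns word-level NER tags to subword tokens via `word_ids()`, which only fast tokenizers expose. A minimal alignment sketch with a generic fast tokenizer (`bert-base-cased` is used only as a stand-in, since the cl-tohoku checkpoints lack a fast variant out of the box; the words and tags are made up):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
words = ["Huggingface", "is", "in", "New", "York"]
tags = ["B-ORG", "O", "O", "B-LOC", "I-LOC"]

enc = tokenizer(words, is_split_into_words=True)
# Special tokens get word_id None; run_ner.py maps those positions to -100 so the loss ignores them.
aligned = [(-100 if wid is None else tags[wid]) for wid in enc.word_ids()]
print(list(zip(tokenizer.convert_ids_to_tokens(enc["input_ids"]), aligned)))
```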
transformers
12,380
closed
Module version identification problem
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.1 - Platform: Windows 10 x64 - Python version: 3.7.3 - PyTorch version (GPU?): 1.7.1 - Tensorflow version (GPU?): 2.0.0 - Using GPU in script?: GTX1060 ## Module version identification problem caused by "importlib_metadata" ![image](https://user-images.githubusercontent.com/41822468/123531202-1d50b880-d735-11eb-9d15-7223c587148e.png)
06-27-2021 02:54:55
06-27-2021 02:54:55
Hello! What command did you run to get the first error?<|||||>I just executed `python xxx.py` from the command, which references this package, and the specific project is [this](https://github.com/yangjianxin1/GPT2-chitchat) ![image](https://user-images.githubusercontent.com/41822468/123634048-88d97980-d84c-11eb-97ab-c5f8b40f58ef.png) Detailed local error message (which contains some of my debugging data) ```python Traceback (most recent call last): File "interact.py", line 1, in <module> import transformers File "C:\Users\gaowanliang\Miniconda3\lib\site-packages\transformers\__init__.py", line 43, in <module> from . import dependency_versions_check File "C:\Users\gaowanliang\Miniconda3\lib\site-packages\transformers\dependency_versions_check.py", line 41, in <module> require_version_core(deps[pkg]) File "C:\Users\gaowanliang\Miniconda3\lib\site-packages\transformers\utils\versions.py", line 125, in require_version_core return require_version(requirement, hint) File "C:\Users\gaowanliang\Miniconda3\lib\site-packages\transformers\utils\versions.py", line 119, in require_version _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) File "C:\Users\gaowanliang\Miniconda3\lib\site-packages\transformers\utils\versions.py", line 50, in _compare_versions f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}" ImportError: tqdm>=4.27 is required for a normal functioning of this module, but found tqdm==4.26.0. Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git master ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I have a similar issue after upgrading to transformers 4.9.1: ``` >>> import transformers >>> transformers.__version__ '4.9.1' >>> from transformers.utils.versions import require_version >>> require_version("torch>=1.5.0") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/utils/versions.py", line 114, in require_version _compare_versions(op, got_ver, want_ver, requirement, pkg, hint) File "/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/utils/versions.py", line 50, in _compare_versions f"{requirement} is required for a normal functioning of this module, but found {pkg}=={got_ver}.{hint}" ImportError: torch>=1.5.0 is required for a normal functioning of this module, but found torch==1.2.0. >>> import torch >>> torch.__version__ '1.7.1' ``` On an Ubuntu 20.04 system with Python 3.7.6 using miniconda. Not sure why the wrong torch version is detected. The issue happens when I want to train something with the AdamW optimizer: File "/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/optimization.py", line 300<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
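A quick diagnostic sketch for both reports above: transformers' version check reads the installed package metadata (via importlib_metadata), not the imported module, so a stale `*.dist-info` / `*.egg-info` folder left behind by an old install can make the two disagree.

```python
# Compare what the metadata claims with what the modules actually are.
import importlib_metadata  # or `from importlib import metadata as importlib_metadata` on Python 3.8+
import tqdm
import torch

print("metadata says:", importlib_metadata.version("tqdm"), importlib_metadata.version("torch"))
print("modules say:  ", tqdm.__version__, torch.__version__)
# If these differ, `pip uninstall <pkg>` (repeat until nothing is left) and reinstall,
# or manually remove the leftover dist-info/egg-info directory from site-packages.
```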
transformers
12,379
closed
Tracking variables other than loss during training
# 🚀 Feature request Allow tracking other variables during training with the [trainer](https://huggingface.co/transformers/main_classes/trainer.html). <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation Often during training one wants to track variables other than just the loss. For example, the loss may consist of two different components and the user may want to track the two separately. As of now, the trainer only tracks the loss. It would be great if a user could simply pass the list of keys of auxiliary losses that they want to track. <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution I am happy to discuss and contribute code for this. <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> @sgugger
06-27-2021 00:38:27
06-27-2021 00:38:27
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
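Until something like this lands in the library, one workable pattern is to override `compute_loss` in a `Trainer` subclass and push the extra terms through `self.log()`. Sketch below; the `mlm_loss` / `kl_loss` attribute names are assumptions standing in for whatever auxiliary components your model's output object actually exposes.

```python
import torch
from transformers import Trainer

class MultiLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        outputs = model(**inputs)
        loss = outputs.loss
        # Collect hypothetical auxiliary terms attached to the model output, if present.
        extras = {}
        for name in ("mlm_loss", "kl_loss"):
            value = getattr(outputs, name, None)
            if value is not None:
                extras[name] = value.detach().item() if torch.is_tensor(value) else float(value)
        if extras and self.state.global_step % self.args.logging_steps == 0:
            self.log(extras)  # shows up alongside the main loss in console/TensorBoard/W&B
        return (loss, outputs) if return_outputs else loss
```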
transformers
12,378
closed
TypeError: new(): invalid data type 'numpy.str_'
Facing the below error while running ``` # Setting up training trainer = Seq2SeqTrainer( model=model, args=args, train_dataset=tokenized_datasets['train'], eval_dataset=tokenized_datasets['validation'], ) ``` on Kaggle Notebooks; while the same code runs fine in Colab Notebooks. Below is the error log. > ``` > --------------------------------------------------------------------------- > TypeError Traceback (most recent call last) > <ipython-input-23-e9826d90c0df> in <module> > 1 # This will take around 20-25 minutes > ----> 2 trainer.train() > > /opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) > 1032 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control) > 1033 > -> 1034 for step, inputs in enumerate(epoch_iterator): > 1035 > 1036 # Skip past any already trained steps if resuming training > > /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self) > 433 if self._sampler_iter is None: > 434 self._reset() > --> 435 data = self._next_data() > 436 self._num_yielded += 1 > 437 if self._dataset_kind == _DatasetKind.Iterable and \ > > /opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self) > 473 def _next_data(self): > 474 index = self._next_index() # may raise StopIteration > --> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration > 476 if self._pin_memory: > 477 data = _utils.pin_memory.pin_memory(data) > > /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index) > 42 def fetch(self, possibly_batched_index): > 43 if self.auto_collation: > ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] > 45 else: > 46 data = self.dataset[possibly_batched_index] > > /opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0) > 42 def fetch(self, possibly_batched_index): > 43 if self.auto_collation: > ---> 44 data = [self.dataset[idx] for idx in possibly_batched_index] > 45 else: > 46 data = self.dataset[possibly_batched_index] > > /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in __getitem__(self, key) > 1482 format_columns=self._format_columns, > 1483 output_all_columns=self._output_all_columns, > -> 1484 format_kwargs=self._format_kwargs, > 1485 ) > 1486 > > /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs) > 1471 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) > 1472 formatted_output = format_table( > -> 1473 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns > 1474 ) > 1475 return formatted_output > > /opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in format_table(table, key, formatter, format_columns, output_all_columns) > 417 else: > 418 pa_table_to_format = pa_table.drop(col for col in pa_table.column_names if col not in format_columns) > --> 419 formatted_output = formatter(pa_table_to_format, query_type=query_type) > 420 if output_all_columns: > 421 if isinstance(formatted_output, MutableMapping): > > /opt/conda/lib/python3.7/site-packages/datasets/formatting/formatting.py in __call__(self, pa_table, query_type) > 189 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: > 190 if query_type == "row": > --> 191 
return self.format_row(pa_table) > 192 elif query_type == "column": > 193 return self.format_column(pa_table) > > /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in format_row(self, pa_table) > 57 def format_row(self, pa_table: pa.Table) -> dict: > 58 row = self.numpy_arrow_extractor().extract_row(pa_table) > ---> 59 return self.recursive_tensorize(row) > 60 > 61 def format_column(self, pa_table: pa.Table) -> "torch.Tensor": > > /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in recursive_tensorize(self, data_struct) > 53 > 54 def recursive_tensorize(self, data_struct: dict): > ---> 55 return map_nested(self._recursive_tensorize, data_struct, map_list=False) > 56 > 57 def format_row(self, pa_table: pa.Table) -> dict: > > /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types) > 202 if num_proc <= 1 or len(iterable) <= num_proc: > 203 mapped = [ > --> 204 _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) > 205 ] > 206 else: > > /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) > 202 if num_proc <= 1 or len(iterable) <= num_proc: > 203 mapped = [ > --> 204 _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) > 205 ] > 206 else: > > /opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) > 140 # Singleton first to spare some computation > 141 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): > --> 142 return function(data_struct) > 143 > 144 # Reduce logging to keep things readable in multiprocessing with tqdm > > /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in _recursive_tensorize(self, data_struct) > 50 if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects > 51 return [self.recursive_tensorize(substruct) for substruct in data_struct] > ---> 52 return self._tensorize(data_struct) > 53 > 54 def recursive_tensorize(self, data_struct: dict): > > /opt/conda/lib/python3.7/site-packages/datasets/formatting/torch_formatter.py in _tensorize(self, value) > 42 default_dtype = {"dtype": torch.float32} > 43 > ---> 44 return torch.tensor(value, **{**default_dtype, **self.torch_tensor_kwargs}) > 45 > 46 def _recursive_tensorize(self, data_struct: dict): > > TypeError: new(): invalid data type 'numpy.str_' > ```
06-26-2021 21:45:38
06-26-2021 21:45:38
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @pn12 - the issue is likely that you have columns that are strings (not encoded). Make sure to drop all columns that aren't encoded and pass the new object to the model.
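To make the suggested fix concrete, a short sketch (the `tokenized_datasets` name and the column list are assumptions based on the notebook above; inspect `.column_names` to see what your split actually contains):

```python
# Keep only the tensorizable columns so the torch formatter never sees raw strings.
keep = ["input_ids", "attention_mask", "labels"]
drop = [c for c in tokenized_datasets["train"].column_names if c not in keep]
tokenized_datasets = tokenized_datasets.remove_columns(drop)
# Equivalently: tokenized_datasets.set_format(type="torch", columns=keep)
```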
transformers
12,377
closed
Converting a wav2vec2 model from fairseq to Hugging Face
# 📚 Migration Hi, I trained a wav2vec2 model in fairseq on my own dataset. Now I need to fine-tune this pretrained fairseq wav2vec2 model. To train the Hugging Face model, it asks for the following files: 1. config.json 2. preprocessor_config.json 3. pytorch_model.bin 4. special_tokens_map.json 5. tokenizer_config.json 6. vocab.json To get these files I used the following commands for the wav2vec2 base model:

    cp transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py .
    wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/wav2vec_small_960h.pt -O ./wav2vec_small_960h.pt
    mkdir dict
    wget https://dl.fbaipublicfiles.com/fairseq/wav2vec/dict.ltr.txt
    mkdir outputs
    python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./wav2vec_small_960h.pt --dict_path dict.ltr.txt

These commands work fine for the wav2vec_small_960h.pt model and generate config.json, preprocessor_config.json, pytorch_model.bin, special_tokens_map.json, tokenizer_config.json and vocab.json. But I trained my model using the following command:

    fairseq-hydra-train \
      task.data=/path/to/data \
      --config-dir /path/to/fairseq-py/examples/wav2vec/config/pretraining \
      --config-name wav2vec2_base_librispeech

which gave me checkpoint_mine.pt. Then I ran:

    python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path ./outputs --checkpoint_path ./checkpoint_mine.pt --dict_path dict.ltr.txt

and I'm getting the following error:

    File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 255, in <module>
      convert_wav2vec2_checkpoint(
    File "env/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
      return func(*args, **kwargs)
    File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 240, in convert_wav2vec2_checkpoint
      recursively_load_weights(model, hf_wav2vec, not is_finetuned)
    File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 134, in recursively_load_weights
      set_recursively(hf_model, mapped_key, value, name, weight_type)
    File "convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 71, in set_recursively
      hf_pointer = getattr(hf_pointer, attribute)
    File "env/lib/python3.8/site-packages/torch/nn/modules/module.py", line 947, in __getattr__
      raise AttributeError("'{}' object has no attribute '{}'".format(

Could anyone suggest what I need to change, or explain why wav2vec_small_960h.pt and checkpoint_mine.pt differ? I saw previous discussions but didn't find a proper solution. Thanks in advance.
06-26-2021 16:46:30
06-26-2021 16:46:30
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
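One likely cause, offered as an assumption rather than a confirmed diagnosis: the checkpoint produced by `fairseq-hydra-train` pretraining has no fine-tuned CTC head, so the converter has to be told it is converting a *pretrained* (not fine-tuned) model; check `python convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --help` for the switch that controls this (e.g. a `--not_finetuned` flag in recent versions), in which case no `--dict_path` is needed. The CTC head and tokenizer are then added on the transformers side, roughly like this sketch (paths and vocab file are placeholders):

```python
from transformers import (Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor,
                          Wav2Vec2ForCTC, Wav2Vec2Processor)

converted_dir = "./outputs"  # output folder of the conversion script
tokenizer = Wav2Vec2CTCTokenizer("vocab.json", unk_token="[UNK]", pad_token="[PAD]",
                                 word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000,
                                             padding_value=0.0, do_normalize=True,
                                             return_attention_mask=False)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Loading the converted pretrained encoder into the CTC class adds a freshly
# initialized lm_head sized to your vocabulary; that head is what fine-tuning trains.
model = Wav2Vec2ForCTC.from_pretrained(
    converted_dir,
    vocab_size=len(tokenizer),
    pad_token_id=tokenizer.pad_token_id,
    ctc_loss_reduction="mean",
)
```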
transformers
12,376
closed
Issue in layer-drop implementation in TensorFlow models in graph mode
## Environment info - `transformers` version: 4.8.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @Rocketknight1 ## Information Model I am using: TFBartForConditionalGeneration The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python from transformers import TFBartForConditionalGeneration, BartConfig # keeping layerdrop to be very high value for demonstration error model = TFBartForConditionalGeneration(BartConfig(encoder_layerdrop=0.5)) import tensorflow as tf import numpy as np array = np.random.randint(1, 300, size=(4, 256)) dataset = tf.constant(array, dtype=tf.int32) # following cell works perfectly when `tf.function(...)` is removed @tf.function def train_step(tensor): return model(tensor, training=True) from tqdm.auto import tqdm for tensor in tqdm(dataset, total=len(dataset)): tensor = tf.expand_dims(tensor, 0) output = train_step(tensor) ``` You can checkout this [small Colab notebook](https://colab.research.google.com/drive/1ACfyQcSUtv0pQGD_mZEvhjEO0tOzS2zR?usp=sharing) also for reproducing the error. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ```python ValueError: in user code: <ipython-input-5-ca2e97b30313>:4 train_step * return model(tensor, training=True) /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:1393 call * outputs = self.model( /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:1125 call * inputs["encoder_outputs"] = self.encoder( /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:764 call * hidden_states, attn = encoder_layer( /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:305 call * hidden_states, self_attn_weights, _ = self.self_attn( /usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_tf_bart.py:178 call * query_states = self.q_proj(hidden_states) * self.scaling /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:1023 __call__ ** self._maybe_build(inputs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:2625 _maybe_build self.build(input_shapes) # pylint:disable=not-callable /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/layers/core.py:1198 build trainable=True) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer.py:655 add_weight caching_device=caching_device) /usr/local/lib/python3.7/dist-packages/tensorflow/python/training/tracking/base.py:815 _add_variable_with_custom_getter **kwargs_for_getter) 
/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/base_layer_utils.py:139 make_variable shape=variable_shape if variable_shape else None) /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:260 __call__ return cls._variable_v1_call(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:221 _variable_v1_call shape=shape) /usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/variables.py:67 getter return captured_getter(captured_previous, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/def_function.py:769 invalid_creator_scope "tf.function-decorated function tried to create " ValueError: tf.function-decorated function tried to create variables on non-first call. ``` Side note: I have checked this same thing for TFWav2Vec2 also, but same issue is happening. So, possibly all TF model using layer-drop needs to be fixed. ## Expected behavior layer drop should work perfectly in graph mode.
06-26-2021 14:21:17
06-26-2021 14:21:17
Confirmed that the issue is reproducible at my end, we're investigating!<|||||>On investigation, I'm pretty sure the issue is caused by the way we're doing layerdrop: https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_tf_bart.py#L755-L772 This code is correct for eager execution, but I suspect in graph mode that this leads to the creation of new variables and graph edges whenever a layer is skipped for the first time. I can see some workarounds, but unfortunately no perfect ones - this seems like a fundamental limitation of the way graph mode works in TF. You're welcome to investigate and try to find a solution if you like, but we're probably just going to explicitly disable layer drop in graph mode for now.<|||||>Yeah sure, I am also looking for some solution on this. Will keep you updated (or will make a PR) if I get some solution.<|||||>Cool! We'll hold off on disabling it for now - if you find a solution, let us know, and don't panic if it turns out to be impossible - just say so and we'll close this issue and disable layerdrop in graph mode instead. Thanks for your help!<|||||>@Rocketknight1, I think I got a solution to this: ```python # we will define this in the end of __init__(...) self.step_0 = True # then we will replace layer-drop condition with this: if (not self.step_0) and inputs["training"] and (dropout_probability < self.layerdrop): # skip the layer continue # in the end of layer (just before return), we will do this self.step_0 = False ``` Code works without any error after adding above stuff with `layer-drop > 0`. Checkout this for complete code: https://github.com/vasudevgupta7/transformers/commit/acf69cea945ebe97293621ba8730a7d988f2c2aa @Rocketknight1, do think it's correct?? Like I am not sure but if graph is built with all the layers in the first step then will `continue` in the next steps work??? Thanks!<|||||>Hi, firstly I'm extremely sorry for the slow response! I was working on another project and had to drop my Github issues for a while. I'm not sure this works, though - I *think* the value of `self.step_0` will just be treated as a constant at compilation time. As a result, this code won't cause errors, but it will never skip any layers either. Can you test it with a high value for the layerdrop probability and see if you get different answers when you run the same batch multiple times?<|||||>@Rocketknight1 I will test it the way you suggested. Thanks for your reply!!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
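A pragmatic workaround sketch rather than a fix for the layerdrop logic itself: run one eager forward pass first so every layer's variables get built, then wrap the step in `tf.function`; nothing then needs to create variables inside a trace. Caveat: the Python-level skip decision is drawn at trace time, so layerdrop is effectively frozen within a compiled step.

```python
import numpy as np
import tensorflow as tf
from transformers import BartConfig, TFBartForConditionalGeneration

model = TFBartForConditionalGeneration(BartConfig(encoder_layerdrop=0.5))

dummy = tf.constant(np.random.randint(1, 300, size=(1, 16)), dtype=tf.int32)
_ = model(dummy, training=False)  # eager warm-up: no layers skipped, all variables built

@tf.function
def train_step(tensor):
    return model(tensor, training=True)

out = train_step(dummy)  # tracing no longer needs to create variables, so no ValueError
```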
transformers
12,375
closed
model.generate raises an error in generation_beam_search
## Environment info - `transformers` version: 4.2.1 - Platform: Linux-3.10.0-1127.18.2.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.5.0 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): bart-base The problem arises when using: ``` /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,3,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,1,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [29,0,0], thread: [0,2,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,0,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,2,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,1,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,3,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [39,0,0], thread: [0,2,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [23,0,0], thread: [0,0,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [28,0,0], thread: [0,1,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [22,0,0], thread: [0,1,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [56,0,0], thread: [0,0,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. /opt/conda/conda-bld/pytorch_1587428207430/work/aten/src/ATen/native/cuda/MultinomialKernel.cu:87: binarySearchForMultinomial: block: [44,0,0], thread: [0,3,0] Ass ertion `cumdist[size - 1] > static_cast<scalar_t>(0)` failed. ....... 
similar as above Traceback (most recent call last): [56/1924] File "run_eval.py", line 172, in <module> run_generate(verbose=True) File "run_eval.py", line 133, in run_generate runtime_metrics = generate_summaries_or_translations( File "run_eval.py", line 67, in generate_summaries_or_translations summaries = model.generate( File "/home/xxx/anaconda3/envs/xxx/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context return func(*args, **kwargs) File "/data/xxx/transformers/src/transformers/generation_utils.py", line 986, in generate return self.beam_sample( File "/data/xxx/transformers/src/transformers/generation_utils.py", line 1894, in beam_sample beam_outputs = beam_scorer.process( File "/data/xxx/transformers/src/transformers/generation_beam_search.py", line 218, in process if self._done[batch_idx]: RuntimeError: CUDA error: device-side assert triggered ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: seq2seq ## To reproduce Steps to reproduce the behavior: ``` python run_eval.py \ --model_name ${MODEL_DIR} \ --input_path $DATA_DIR/val.src \ --save_path $DATA_DIR/xxx.txt \ --task summarization \ --device cuda:0 \ --bs 50 \ --min_length 2 \ --max_length 32 \ --do_sample True \ --top_k 10 \ --num_return_sequences 5 ```
06-26-2021 14:12:06
06-26-2021 14:12:06
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I met the same problem, have you solved it? @Albert-Ma <|||||>@qshi95 - could you provide the full command to reproduce the error?<|||||>I found that the cause of this error is that some input ids are out of the vocabulary range. Sorry for the disturbance.
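For future readers hitting the same device-side assert: it usually means an indexing/sampling op received ids outside the embedding table, and the CUDA message surfaces far from the real culprit. A small pre-flight sketch (it assumes `input_ids` is the tokenized batch built in `run_eval.py` and `model` is the loaded BART model):

```python
vocab = model.config.vocab_size
bad = input_ids[(input_ids < 0) | (input_ids >= vocab)]
assert bad.numel() == 0, f"{bad.numel()} token ids fall outside [0, {vocab})"

# If the assert passes, rerun with CUDA_LAUNCH_BLOCKING=1 or move everything to CPU:
# the failing op then raises a plain IndexError with a readable message instead of
# a deferred "device-side assert triggered".
```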
transformers
12,374
closed
ImportError: cannot import name 'BertEncoder' from 'transformers'
``` from transformers import BertEncoder ``` A week ago this import was working normally, but this morning I ran my code and got this error. ``` ImportError: cannot import name 'BertEncoder' from 'transformers' (unknown location) ``` How to import BertEncoder? ```'BertEncoder' in dir(transformers)``` is `False` ######################################## - `transformers` version: 4.8.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
06-26-2021 12:17:48
06-26-2021 12:17:48
Hello! I'm failing to find a version where this worked, going back to version 1.0.0. If you have it handy, could you point me to the version that had it? If you want to import `BertEncoder` you can do it as such: ``` from transformers.models.bert.modeling_bert import BertEncoder ```
transformers
12,373
closed
Added .lower() method to label
Same labels with different cases (like "Hindi", "hindi", "hIndi") can be passed and the predicted scores vary a lot. So if we add .lower() method we can solve that. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-26-2021 12:05:06
06-26-2021 12:05:06
Thanks for the proposal. It's a good thought, but I actually am not sure this is what we want to do. While it may seem intuitively to make sense that different cases shouldn't affect the scores, I don't think we want to unilaterally send everything to lower case. One reason is that casing actually can provide an important signal. For example, capitalizing the A in "Apple" might be useful for the model to determine whether you mean the fruit or the tech giant. I think in general, it would be best to leave this to the user to decide how they want to pass the candidate labels' casing.<|||||>Ok
transformers
12,372
closed
Wav2vec2 Dataset
I have been trying the code described in the Hugging Face blog https://huggingface.co/blog/fine-tune-wav2vec2-english. In the blog, viewing random text from the dataset looks as shown below: ![image](https://user-images.githubusercontent.com/82436706/123511281-0ca13380-d69e-11eb-88b4-6e585f4f789e.png) For me, running the same code shows the following (same sentence): ![image](https://user-images.githubusercontent.com/82436706/123511305-25a9e480-d69e-11eb-892d-4692a13dbc7a.png) Could you help me understand what this issue is?
06-26-2021 11:17:56
06-26-2021 11:17:56
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,371
closed
[Documentation] Warn that DataCollatorForWholeWordMask is limited to BertTokenizer-like tokenizers
# What does this PR do? Currently, the `DataCollatorForWholeWordMasking` added with #7925 only works for one specific family of tokenizers, but the documentation does not mention this nor is the user warned when using this data collator with an incompatible tokenizer. Since the data collator will run with all tokenizers, just not produce the desired output, this is very misleading for users. This PR adds a note to the documentation and a warning that is issued when a user attempts to create the whole word mask with a (presumably) incompatible tokenizer. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #11768 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-26-2021 10:46:08
06-26-2021 10:46:08
transformers
12,370
closed
[WIP] DataCollatorForTextInfilling
# What does this PR do? A DataCollator for the BART "Text Infilling" pre-training task. The implementation borrows ideas from `fairseq`'s more complex [DenoisingDataset](https://github.com/pytorch/fairseq/blob/1bba712622b8ae4efb3eb793a8a40da386fe11d0/fairseq/data/denoising_dataset.py). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #5428 (Addresses #5096) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-26-2021 10:02:58
06-26-2021 10:02:58
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It's still on my agenda to brush this up<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This is a wonderful effort. Any update on this? Also, if you could add TF support that would be great.<|||||>@salrowili Sadly, I didn't find time for it. I'm also not sure whether this still fits with the library; there might have been some updates to the data collators in the meantime. I'm still interested in working on this, but realistically I won't have time to do that unless I need it for an ongoing project. Would you be up for a collaboration?<|||||>@ionicsolutions Thanks for replying. What about BartForConditionalGeneration? Is it enough to train BART from scratch as in this example https://github.com/huggingface/transformers/blob/main/examples/flax/language-modeling/run_mlm_flax.py#L241 ? However, as you can see, it uses FlaxDataCollatorForLanguageModeling, and I am not sure it implements the text-infilling task. Maybe you can also check this repo: https://github.com/cosmoquester/transformers-bart-pretrain . Its author has already implemented the text-infilling task, but with a TensorFlow dataset. However, that repo does not work with HF >4.11 because of a logits issue. Maybe you can contact the author, ask for his permission to use his function, and collaborate if he is willing. He is probably better than me at pushing this project forward. What I can help with is testing any function you develop at scale (e.g. pre-training BART-large from scratch), seeing how it performs, and sharing a Colab example with the research community. What I like about BART over T5 is the inference time and memory usage during fine-tuning, and it can also achieve SOTA on SQuAD and GLUE in addition to generative tasks (e.g. summarization), so I think this project is much needed by the research community.<|||||>@salrowili I'm also interested in infilling generation and was wondering if you've made any progress? I see your last post was three weeks ago, so I'm wondering if maybe you found an alternative approach?<|||||>@jbmaxwell I tried out the Flax BART implementation, XLA with TPU, and Keras BART @ https://github.com/cosmoquester/transformers-bart-pretrain . Keras BART is my best model among those, and that is why I was looking for text infilling. I also think the BART implementation is not optimal in the Hugging Face library, especially for BART-large. I am also working with fairseq and torch_xla now, and I think this will be the best among all the variants I tried out. I suggest you ask for TPU access from Google https://sites.research.google/trc/ and try out fairseq XLA with BART, but fix the dynamic shapes by using a pre-defined input shape as in my fork https://github.com/salrowili/fairseq. You can see the latest commits to see what changes I made. With TPUv3-8, BART will get a speed of ~100k wps, but you need to keep the log interval at 10 and num_bucket=5.
I run BART on my 3090 and it gives me a speed of 30K wps. 100k wps translates to ~20K steps/day, which is slow compared to BERT with TF (~125K steps/day) with a batch size of 256 and a max. seq. length of 512; this means it will take around one month to finish 500K steps with BART (: If you find an alternative solution, or you are willing to improve the BART implementation with text infilling in JAX or TF, it would be good if you share your solution as I share mine (:<|||||>I hadn't seen this before; thanks for the link! I'll give it a try. I'm working with compact, non-natural language inputs and small datasets (for now), and generally reduce model sizes significantly from the stock versions, so I'm not too worried about training resources. Faster is better, of course, but not a deal-breaker for me.
transformers
12,369
closed
[Trainer.py] when --load_best_model_at_end is set in Distributed Training
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4, not very sure - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): 1.8 - Tensorflow version (GPU?): //// - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Distributed training in a single node with multi-gpus ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger ## Information Model I am using (Bert, XLNet ...): Bart The problem arises when using: trainer.py The tasks I am working on is: my own task or dataset ## To reproduce I'm not sure if it's a bug or I misunderstood. So I am here for help. Steps to reproduce the behavior: 1. set `--load_best_model_at_end` and use distributed training mode with multi-gpus. 2. When the best model appears in the last step, the main process needs to save the model in `self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)` after training. However, the subprocess may run `self.model = self.model.from_pretrained(self.state.best_model_checkpoint)` before the best model is completely saved. 3. So the subprocess will occur `OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5', 'model.ckpt.index'] found in directory <best model dir>......` Evaluate: 1. I print the time after `self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)` and time before `self.model = self.model.from_pretrained(self.state.best_model_checkpoint)`. It seems that the result has confirmed my guess. 2. I add `dist.barrier()` after `self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)` and the error doesn't appear anymore. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
06-26-2021 08:37:39
06-26-2021 08:37:39
Could you try again on the latest version? This bug has normally been fixed (see [this comment](https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/src/transformers/trainer.py#L1351) and the four lines below).<|||||>> Could you try again on the latest version? This bug has normally been fixed (see [this comment](https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/src/transformers/trainer.py#L1351) and the four lines below). Yes, it looks good. I downloaded the source code into the project, so I can't update it in time. I'm sorry to take up your time.
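For reference, a minimal sketch of the synchronization pattern this issue describes, assuming a single-node `torch.distributed` setup where `is_main_process` is derived from the rank; this is not the actual `Trainer` code, which also handles TPU and SageMaker backends:

```python
import torch.distributed as dist

def save_then_load_best(model, best_model_checkpoint, is_main_process):
    # Only the main process writes the checkpoint files.
    if is_main_process:
        model.save_pretrained(best_model_checkpoint)
    # Every rank waits here until the checkpoint is fully on disk ...
    if dist.is_available() and dist.is_initialized():
        dist.barrier()
    # ... so it is now safe for all ranks to read pytorch_model.bin.
    return model.from_pretrained(best_model_checkpoint)
```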
transformers
12,368
closed
[Examples] Replace `print` statement with `logger.info` in QA example utils
# What does this PR do? Earlier in `utils_qa.py`, `run_qa_beam_search.py` was using `print()` for showing states saving file paths, while `run_qa.py` using `logger.info()` which seems more appropriate. This PR replace `print()` with `logger.info()`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Discussed in [Issue](https://github.com/huggingface/transformers/issues/12363) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 @sgugger
06-26-2021 08:05:41
06-26-2021 08:05:41
Thanks!
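For reference, the change discussed in this PR boils down to the following pattern; `prediction_file` is an illustrative variable name, not necessarily the one used in `utils_qa.py`:

```python
import logging

logger = logging.getLogger(__name__)
prediction_file = "predictions.json"  # illustrative path

# before: always prints, regardless of the configured verbosity
print(f"Saving predictions to {prediction_file}.")

# after: respects the logging level configured by the example scripts (e.g. --log_level)
logger.info(f"Saving predictions to {prediction_file}.")
```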
transformers
12,367
closed
[Examples] Added context manager to datasets map
# What does this PR do? Fixes #12363 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 @sgugger
06-26-2021 06:54:44
06-26-2021 06:54:44
transformers
12,366
closed
Tokens Jumbling
I have trained a "bert-base-uncased" model (with fewer layers) on my dataset with my tokenizer for a sentence similarity task. The final sentence embedding is formed using a mean pooling strategy. Now, during inference, if my tokens for sentence1 are [t1,t2,t3,t4,t5] and for sentence2 I randomly shuffle these tokens, for example [t3,t1,t2,t5,t4], the score is really high, but sentence2 doesn't make any sense. I tested the pretrained bert-base-uncased model and found the same problem, as shown below: Enter text1: a man is riding a horse Enter text2: a riding man is a horse Tokens1: [101, 1037, 2158, 2003, 5559, 1037, 3586, 102] Tokens2: [101, 1037, 5559, 2158, 2003, 1037, 3586, 102] Similarity: 0.703365683555603 Enter text1: A boy can throw a stone up to a maximum height Enter text2: A stone up to a boy can maximum throw a height Tokens1: [101, 1037, 2879, 2064, 5466, 1037, 2962, 2039, 2000, 1037, 4555, 4578, 102] Tokens2: [101, 1037, 2962, 2039, 2000, 1037, 2879, 2064, 4555, 5466, 1037, 4578, 102] Similarity: 0.9277969598770142 ![aa](https://user-images.githubusercontent.com/47495143/123504257-b3250e80-d675-11eb-90b6-99bf8743e6ad.jpeg)
06-26-2021 06:27:47
06-26-2021 06:27:47
Which model are you using exactly? `bert-base-uncased` is not a `sentence-transformers` model<|||||>I used the "bert-base-uncased" architecture from here: https://www.sbert.net/docs/training/overview.html to train for the sentence embedding task. I modified the number of layers and hidden size. Also, as shown in the above image, I also tested "bert-base-nli-cls-token" (from https://huggingface.co/sentence-transformers/bert-base-nli-cls-token) and (https://huggingface.co/sentence-transformers/bert-base-nli-mean-tokens). But all show the same issue. <|||||>Pinging @nreimers <|||||>Hi @PhenomenalOnee, the two models you linked are deprecated; I recommend using the paraphrase v2 models. But they will show the same behavior. That is how it is. The models don't check whether a sentence makes sense or is grammatically correct. They try to infer the semantics of the sentence and they try to be robust to spelling mistakes, grammatical errors, word shuffling, etc. If this is undesired for your task, you must create training data that teaches the network that these examples should not be close in the vector space. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
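A small reproduction sketch of the behavior described in this issue, assuming a recent `sentence-transformers` release where `util.cos_sim` and the recommended paraphrase models are available; the exact score will differ from run to run:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("paraphrase-MiniLM-L6-v2")
s1 = "A boy can throw a stone up to a maximum height"
s2 = "A stone up to a boy can maximum throw a height"  # same words, shuffled
emb1, emb2 = model.encode([s1, s2], convert_to_tensor=True)
# The score stays high because the model infers semantics from the tokens as a whole
# and is deliberately robust to shuffling and grammatical errors.
print(util.cos_sim(emb1, emb2))
```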
transformers
12,365
closed
[Examples] Update Example Template for `--log_level` feature
# What does this PR do? Fixes #12295 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Discussed on this [PR](https://github.com/huggingface/transformers/pull/12359) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 @sgugger
06-26-2021 03:38:04
06-26-2021 03:38:04
transformers
12,364
closed
[CI] add dependency table sync verification
Sometimes the version is modified in `setup.py` but not updated in the autogenerated versions file `src/transformers/dependency_versions_table.py`, and then all devs start getting this uncommitted change in their clone on `make fixup/style`. This PR adds a new make target to do the checking and adds it to the `check_code_quality` CI job. Expecting it to fail in this PR initially as I didn't sync the table. TODO: back out the setup.py version changes before merging. @sgugger, @LysandreJik
06-25-2021 23:42:19
06-25-2021 23:42:19
So now when the table is out of sync it fails with: ![snapshot_4](https://user-images.githubusercontent.com/10676103/123495350-3bed6d00-d5d8-11eb-8a88-272ac40a1ae4.png) Let me know if this is good and I will re-test when it's in sync.
transformers
12,363
closed
[examples] add `main_process_first` context manager to datasets map calls
We need to replay this addition that has been modelled in `run_translation.py` in https://github.com/huggingface/transformers/pull/12351 to all other pytorch examples The actual changes for the model example are: https://github.com/huggingface/transformers/pull/12351/files#diff-09777f56cee1060a535a72ce99a6c96cdb7f330c8cc3f9dcca442b3f7768237a (just `run_translation.py`) Here is a time-saver: ``` find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(train_dataset = train_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="train dataset map pre-processing"):\n$p$t] } }' {} \; find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(eval_dataset = eval_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="validation dataset map pre-processing"):\n$p$t] } }' {} \; find examples/pytorch -type f -exec perl -0777 -pi -e 's|^(\s+)(predict_dataset = predict_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="prediction dataset map pre-processing"):\n$p$t] } }' {} \; git checkout examples/pytorch/translation/run_translation.py make fixup ``` I noticed other scripts may have other `datasets.map` calls, which get automatically rewritten by the scripts above, so please review the changes to see if the `desc` needs to be modified. But we want to use the context manager on all of these calls, it's possible that the perl rewrite scripts didn't catch some. - also this template needs to have this change as well: `templates/adding_a_new_example_script/\{\{cookiecutter.directory_name\}\}/run_\{\{cookiecutter.example_shortcut\}\}.py` can do via perl or manually or whatever other way works for you. And please validate that scripts still work, by either running: ``` RUN_SLOW=1 pytest examples/pytorch/test_examples.py ``` or running each script manually as explained in its corresponding `README.md` file. This issue is open to all and should be very simple to complete, the main effort is to validate. And thank you for your contribution!
06-25-2021 22:29:26
06-25-2021 22:29:26
Can I take this? Since it will not take much time for me<|||||>Yes, thank you, @bhadreshpsavani <|||||>Hi @stas00 and @sgugger, In the earlier PR, I wanted to ask one thing in the below code, https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/examples/pytorch/question-answering/utils_qa.py#L416-L425 Shall we use `logger.info()` instead `print()` like we did in below code https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/examples/pytorch/question-answering/utils_qa.py#L228-L237 or is it intensionally written like this? Because of this when we run the `run_qa_beam_search.py` script we get the below kind of prints for the train, eval, and test stage even when we pass `--log_level error` ``` Saving predictions to /tmp/debug_squad/predict_predictions.json. | 0/5 [00:00<?, ?it/s] Saving nbest_preds to /tmp/debug_squad/predict_nbest_predictions.json. Saving null_odds to /tmp/debug_squad/predict_null_odds.json. ```<|||||>good catch, @bhadreshpsavani! `logger.info()` please as you suggested. Please feel free to make a separate PR if you don't want to mix this with this particular change. <|||||>Hi @stas00 and @sgugger, There is a minor thing, at this line https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/examples/pytorch/text-classification/run_glue.py#L529 we are getting ``` examples/pytorch/text-classification/run_glue.py:530: FutureWarning: remove_columns_ is deprecated and will be removed in the next major version of datasets. Use Dataset.remove_columns instead. predict_dataset.remove_columns_("label") ``` fix is, ```python predict_dataset.remove_columns("label") ``` shall we change it? it is also present at below line https://github.com/huggingface/transformers/blob/9a7545943dd611b0d4887a5ab3a7ef7f53b8e76f/tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py#L506<|||||>yes, except you now need to assign the return value since this is no longer an inplace edit. Therefore in both places it'll be now be: ``` x = x.remove_columns("label") ``` with the right x of course. thank you for fixing it. reference: https://huggingface.co/docs/datasets/processing.html#removing-one-or-several-columns-remove-columns <|||||>I have committed changes in the open PR for the fix of this warning!
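For reference, the end state the perl one-liners in this issue are meant to produce looks roughly like the excerpt below; the argument names follow `run_translation.py` and rely on that script's surrounding variables, and the exact `map` kwargs and `desc` strings vary per example:

```python
with training_args.main_process_first(desc="train dataset map pre-processing"):
    train_dataset = train_dataset.map(
        preprocess_function,
        batched=True,
        num_proc=data_args.preprocessing_num_workers,
        remove_columns=column_names,
        load_from_cache_file=not data_args.overwrite_cache,
    )
```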
transformers
12,362
closed
fixed multiplechoice tokenization
# What does this PR do? The model would have seen two sequences: 1. [CLS]prompt[SEP]prompt[SEP] 2. [CLS]choice0[SEP]choice1[SEP] That is not correct as we want a contextualized embedding of prompt and choice. This PR fixes the documentation ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Documentation: @sgugger
06-25-2021 20:48:02
06-25-2021 20:48:02
Thanks for your PR but the documentation is correct: when tokenizing several pairs of sentences, the tokenizer API takes the list of first sentences, then the list of second sentences. Here the first sentences are the prompt (twice) and the second sentences are the choices.<|||||>@sgugger that doesn't seem to be correct: ```python from transformers import BertTokenizer, BertForMultipleChoice import torch tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') model = BertForMultipleChoice.from_pretrained('bert-base-uncased') prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." choice0 = "It is eaten with a fork and a knife." choice1 = "It is eaten while held in the hand." labels = torch.tensor(0).unsqueeze(0) # choice0 is correct (according to Wikipedia ;)), batch size 1 encoding = tokenizer([[prompt, prompt], [choice0, choice1]], return_tensors='pt', padding=True, return_attention_mask=False, return_token_type_ids=False) print(tokenizer.decode(encoding.input_ids[0])) print(tokenizer.decode(encoding.input_ids[1])) ``` Output: ``` [CLS] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP] [CLS] it is eaten with a fork and a knife. [SEP] it is eaten while held in the hand. [SEP] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] [PAD] ```<|||||>An alternative fix is to: ```python encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors='pt', padding=True, return_attention_mask=False, return_token_type_ids=False) print(tokenizer.decode(encoding.input_ids[0])) print(tokenizer.decode(encoding.input_ids[1])) ``` Output: ``` [CLS] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP] it is eaten with a fork and a knife. [SEP] [CLS] in italy, pizza served in formal settings, such as at a restaurant, is presented unsliced. [SEP] it is eaten while held in the hand. [SEP] [PAD] ``` The problem is you are currently not giving the tokenizer two lists. You are only giving him one list (first sentence).<|||||>Yes your second fix is the good one! I missed the extra pair of brackets.<|||||>@sgugger: pushed!<|||||>Thanks a lot!
transformers
12,361
closed
Easily train a new fast tokenizer from a given one
# What does this PR do? This PR does two different things at the same time: - it allows to instantiate a subclass of a `PreTrainedTokenizerFast` with just the tokenizer object by making arguments like vocab or merges optional (only done for three models here but can complete if the design is accepted) - adds a method to train a new fast tokenizer from an existing one, using the same normalizer, pre-tokenizers and post-processors. With this done, one can do: ``` from transformers import AutoTokenizer checkpoint = "bert-base-cased" # or any checkpoint that has a fast tokenizer. tokenizer = AutoTokenizer.from_pretrained(checkpoint) assert tokenizer.is_fast, "This only works for fast tokenizers." # Should be a generator of list of texts. training_corpus = [ ["This is the first sentence.", "This is the second one."], ["This sentence (contains #) over symbols and numbers 12 3.", "But not this one."], ] new_tokenizer = tokenizer.train_new_from_iterator(training_corpus, vocab_size=25000) ``` The new tokenizer can then be used, saved, pushed to the hub. It has the same type as `tokenizer`.
06-25-2021 20:22:36
06-25-2021 20:22:36
Failure is spurious, so merging!
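Continuing the snippet from the PR description, the retrained tokenizer behaves like any other fast tokenizer, so the usual persistence APIs apply; the directory name below is illustrative:

```python
new_tokenizer.save_pretrained("my-new-tokenizer")

from transformers import AutoTokenizer
reloaded = AutoTokenizer.from_pretrained("my-new-tokenizer")
print(reloaded.tokenize("This sentence (contains #) over symbols and numbers 12 3."))
```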
transformers
12,360
closed
[examples] remove extra white space from log format
This PR removes the extraneous triple white space from log format in all examples. @sgugger
06-25-2021 19:49:15
06-25-2021 19:49:15
transformers
12,359
closed
[Examples] Replicates the new --log_level feature to all trainer-based pytorch
# What does this PR do? Fixes #12295 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Notes: - The required changes are added for all the trainer-based examples except `run_generation.py` since it seems very different. - Please let me know if any modification needed. ## Who can review? @stas00 @sgugger
06-25-2021 19:01:59
06-25-2021 19:01:59
Also as mentioned earlier, let's add: ``` transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() ``` everywhere they are missing - I see there are quite a few places. Thank you!<|||||>@bhadreshpsavani, we forgot to take care of the template `templates/adding_a_new_example_script/\{\{cookiecutter.directory_name\}\}/run_\{\{cookiecutter.example_shortcut\}\}.py` if you don't mind adding this change there too in another PR. Thank you! <|||||>Sure I will add it, Ya, i totally forgot!
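A sketch of the logging setup block the trainer-based examples converged on with this feature; it assumes `logger`, `datasets`, `transformers` and `training_args` already exist in the script scope, and the exact lines can be checked in any `examples/pytorch/*/run_*.py`:

```python
log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
transformers.utils.logging.enable_default_handler()
transformers.utils.logging.enable_explicit_format()
```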
transformers
12,358
closed
Tensorflow LM examples
CLM and MLM examples for Tensorflow - despite the TF docs' insistence, I think we can use the dataset-generator methods here to stream data too large for memory to a multi-GPU or TPU setup!
06-25-2021 18:13:12
06-25-2021 18:13:12
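A hedged sketch of the streaming idea mentioned in this PR description, assuming a tokenized `datasets.Dataset` named `train_dataset` whose examples are already grouped into fixed-length blocks; the merged example scripts may differ in detail:

```python
import tensorflow as tf

block_size = 128  # assumed fixed sequence length after grouping

def sample_generator(dataset):
    # Stream examples one at a time instead of materializing everything in memory.
    for example in dataset:
        features = {
            "input_ids": example["input_ids"],
            "attention_mask": example["attention_mask"],
        }
        yield features, example["input_ids"]  # for CLM, labels are the inputs

tf_train = tf.data.Dataset.from_generator(
    lambda: sample_generator(train_dataset),
    output_signature=(
        {
            "input_ids": tf.TensorSpec(shape=(block_size,), dtype=tf.int64),
            "attention_mask": tf.TensorSpec(shape=(block_size,), dtype=tf.int64),
        },
        tf.TensorSpec(shape=(block_size,), dtype=tf.int64),
    ),
).shuffle(1_000).batch(8, drop_remainder=True)
```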
transformers
12,357
closed
Replace NotebookProgressReporter by ProgressReporter in Ray Tune run
# What does this PR do? This PR replaces the local trainer NotebookProgressReporter callback by a ProgressReporter. Generally we cannot guarantee correct display of IPython renderables in remote processes (e.g. because they get redirected into files), so it's better to replace these by text based reporters. Otherwise, these are produced: `<IPython.core.display.HTML object>` See also https://github.com/ray-project/ray/issues/16197 <!-- Remove if not applicable --> Fixes https://github.com/ray-project/ray/issues/16197 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
06-25-2021 16:34:04
06-25-2021 16:34:04
Has this fix been packaged with the latest release? I see that the fix was merged into the master branch 18 days ago and the latest release (v4.8.2) was 13 days ago but then I don't see the issue mentioned in the patch release notes. I assume it hasn't since I am still getting the same output.
transformers
12,356
closed
Fixed a typo in readme
# What does this PR do? Fixes a simple typo in the readme file from "pr" to "or". ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
06-25-2021 11:40:51
06-25-2021 11:40:51
transformers
12,355
closed
[Flax] Add T5 pretraining script
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds T5 pretraining in Flax. Thanks to @craffel a lot of the preprocessing code was copied from . Tokenizer training code was largely copied from @SaulLu . Also took this PR as a chance to integrate the new `push_to_hub` functionality that includes tensorboard logs to test out the new tensorboard functionality (cc @sgugger @LysandreJik @julien-c). The tensorboard logs aren't correctly displayed though :-/- an example can be seen [here](https://huggingface.co/patrickvonplaten/dummy-t5-test). Code is working, and the model seems to train. Will test it on a full training run over the weekend! ## Who can review? @LysandreJik @sgugger - would be great if you could check the README.md and the `push_to_hub=True` logic / process to see if the workflow fits @SaulLu - would be great if you could take a look at the tokenizer code, since it's 99% copied from yours :-) (it seems to work well) @patil-suraj @sgugger - would be awesome if you could make a more general review to see if code is written according to examples and you are fine with having a rather model-specific training script in the general examples.
06-25-2021 11:07:50
06-25-2021 11:07:50
I just wanted to ask one last question about the tokenizer for norwegian. :slightly_smiling_face: It seems to me that the tokenizer vocabularies of the "original" T5 models have `extras_ids` (100 for `T5-small`). It seems to me that in the proposed version of the tokenizer for norwegian, no extras_ids are introduced and I'm not sure where the [`extras_ids`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5_fast.py#L112) argument would be redefined when initializing `T5TokenizerFast`.<|||||>> I just wanted to ask one last question about the tokenizer for norwegian. > > It seems to me that the tokenizer vocabularies of the "original" T5 models have `extras_ids` (100 for `T5-small`). It seems to me that in the proposed version of the tokenizer for norwegian, no extras_ids are introduced and I'm not sure where the [`extras_ids`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/tokenization_t5_fast.py#L112) argument would be redefined when initializing `T5TokenizerFast`. Those additional tokens will be added when the tokenizer is loaded with `T5TokenizerFast` as follows: ```python from transformers import T5TokenizerFast tokenizer = T5TokenizerFast.from_pretrained("patrickvonplaten/t5-small-norwegian") ``` When you print out the tokenzier: ```python print(tokenizer) ``` you should see that the extra ids have been added :-) It is done automatically [here](https://github.com/huggingface/transformers/blob/e27707488911a4bae5936a1bdad0cfdb2018cebd/src/transformers/models/t5/tokenization_t5_fast.py#L117) <|||||>@patrickvonplaten , run_t5_mlm_flax.py uses different lr schedule than paper. Any specific reason for that?<|||||>Hey @danshirron! Usually, it shouldn't make a big difference. Original T5 model was trained using an inverse square root scheduler. From my experiments, linear scheduler happens to be slightly more robust for faster convergence. Either way, there are various inverse square root scheduler implementations (e.g., [optax](https://github.com/formermagic/git-t5/blob/main/git_t5/core/schedulers.py#L124) or [pytorch](https://github.com/pytorch/fairseq/blob/master/fairseq/optim/lr_scheduler/inverse_square_root_schedule.py)).
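A minimal sketch of the inverse square root schedule discussed in the last comment, following the formula from the T5 paper, lr = 1/sqrt(max(step, warmup_steps)); the warmup value is the paper's default and not necessarily what the Flax script uses, and optax accepts such a callable wherever a learning-rate schedule is expected:

```python
import jax.numpy as jnp
import optax

def inverse_sqrt_schedule(warmup_steps: int = 10_000):
    def schedule(step):
        # Constant during warmup, then decays as the inverse square root of the step.
        return 1.0 / jnp.sqrt(jnp.maximum(step, warmup_steps))
    return schedule

optimizer = optax.adamw(learning_rate=inverse_sqrt_schedule())
```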
transformers
12,354
closed
Input structure has type class tuple while shallow structure has type class transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput
## Environment info - transformers version: 4.5.1 - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.5 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - tensorflow: @Rocketknight1 - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Official tensorflow example: https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0 The tasks I am working on is: * [x] an official GLUE/SQUaD task: Squad2 * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Executed the official example in [my notebook](https://github.com/seahrh/kaggle-coleridge-initiative/blob/8087c6189d8e96679ec9a60816910ad86ed20480/hf_tf_squad2_finetune_example.ipynb) but encountered the following error on `model.fit`: ``` TypeError: The two structures don't have the same sequence type. Input structure has type <class 'tuple'>, while shallow structure has type <class 'transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput'>. ``` In cell 11, I made sure to `model.distilbert.return_dict = False` (if using 🤗 Transformers >3.02, make sure outputs are tuples) and had mapped the `start_positions` and `end_positions` as tuples in cell 9 (Keras will expect a tuple when dealing with labels). I noted the following warning emitted by `model.fit`. If `return_dict` is always set to True at training time, then wouldn't this conflict with the "labels as tuple" requirement? ``` WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. ``` ## Expected behavior Official example is working.
06-25-2021 08:17:03
06-25-2021 08:17:03
Hi, our Tensorflow examples are in flux right now, because we're in the process of updating them and generally trying to replace TFTrainer with native Keras code. The examples on that page may be outdated, but you can see an up-to-date example of using TF for QA here: https://github.com/huggingface/transformers/blob/master/examples/tensorflow/question-answering/run_qa.py That said, thank you for the report - I'll make a point of taking a look at that page at some point and ensuring our examples there are also up-to-date!<|||||>Thank you! Studying the code in the link helped me to set up a minimal working example. We should probably close this issue after the TF example docs have been updated. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,353
closed
ERROR: Failed building wheel for tokenizers
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.8.1 - Platform: MacBook Pro M1 16G - Python version: 3.8.10 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.5.0 ### Who can help @LysandreJik When I install transformers on my Mac, this error occurs: `ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly`
06-25-2021 06:32:42
06-25-2021 06:32:42
Hi! Could you open an issue on `tokenizers` instead? https://github.com/huggingface/tokenizers
transformers
12,352
closed
[WIP][FIX] Prevent outputting some config files when using a fast tokenizer
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12308 One of the candidates of this issue's cause is the crash of configurations of fast tokenizer and tokenizer. With fast tokenizer, `tokenizer.json` do all things and `special_tokens_map.json` and `tokenizer_config.json` are not needed. When using fast tokenizer, this PR avoids output the two files. This is a quick fix and I have not written test and checked the effects of this code on other programs. I'd like to receive your reviews and criticism. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
06-25-2021 03:49:54
06-25-2021 03:49:54
Hello! Does this resolve #12308?<|||||>Hello! Sorry if I may have been unclear. Assuming the difference between `spm.SentencePieceTrainer` and `ReformerTokenizerFast` is anticipated, this PR tentatively resolve the difference between `ReformerTokenizerFast` and `AutoTokenizer.from_pretrained('test')`. Output is below. ``` sentencepiece 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁'] transformers 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um'] AutoTokenizer 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um'] ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,351
closed
[trainer] add main_process_first context manager
This PR - [x] implements the `main_process_first` context manager as discussed in https://github.com/huggingface/transformers/issues/12345 This makes the `datasets` pre-processing much faster in a distributed environment, as only one process does it rather than every replica at once. - [x] modifies the `run_translation.py` example to use it as a model. - [x] starts using `log.debug` - since we now have the new shiny `--log_level` trainer arg, many too-much-information `log.info` calls can be switched to `log.debug`, and the user can run `--log_level debug` to generate a lot more info when debugging or when filing a bug report. **Question: not sure what should be done on multi-node setups, since one may or may not use a shared filesystem.** TODO: - once merged, replicate to other examples **Kudos for the cool function name goes to @sgugger** Fixes: https://github.com/huggingface/transformers/issues/12345 @sgugger
06-25-2021 03:13:04
06-25-2021 03:13:04
So I won't lose it, here is a magic perl one-liner to rewrite all other examples: ``` find examples -type f -exec perl -0777 -pi -e 's|^(\s+)(train_dataset = train_dataset.map\(.*?\))|x($1, $2)|msge; BEGIN {sub x {($p, $t) = @_ ; $t =~ s/^/ /msg; return qq[${p}with training_args.main_process_first(desc="train dataset map pre-processing"):\n$p$t] } }' {} \; ```
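A hedged sketch of the idea behind `main_process_first`; the real implementation on `TrainingArguments` also covers TPU, SageMaker and the multi-node question raised in the PR description:

```python
import contextlib
import torch.distributed as dist

@contextlib.contextmanager
def main_process_first(is_main_process: bool, desc: str = "work"):
    if dist.is_available() and dist.is_initialized():
        if not is_main_process:
            dist.barrier()  # replicas wait while the main process does `desc`
        try:
            yield
        finally:
            if is_main_process:
                dist.barrier()  # the main process releases the waiting replicas
    else:
        yield  # single-process case: nothing to synchronize
```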
transformers
12,350
closed
Fix exception in prediction loop occurring for certain batch sizes
# What does this PR do? Fixes #12349 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger
06-25-2021 00:17:17
06-25-2021 00:17:17
Thanks, I can't push to your branch, so could you push an empty commit since circleCI decided to not run on your PR? ``` git commit --allow-empty -m "Trigger CI" ``` and then a push.<|||||>Ok, there are lots of failures that seem related to the PyTorch release. Could you rebase on master (I really want to make sure this does not break any example before merging)? Thank you.<|||||>> Ok, there are lots of failures that seem related to the PyTorch release. Could you rebase on master (I really want to make sure this does not break any example before merging)? Thank you. Done, I must have based this off from a fork that was a few weeks old....<|||||>I'll be off now for the weekend, but the box "Allow edits by maintainers" is checked, so feel free to adapt as necessary.<|||||>We're all good, thanks for walking through this with me!
transformers
12,349
closed
Prediction fails for certain batch sizes
## Environment info - `transformers` version: 4.6.1 - Platform: IBM open-ce / ppcle64 - Python version: 3.8.8 - PyTorch version (GPU?): 1.7.1 NVIDIA V100 - Tensorflow version (GPU?): N/A - Using GPU in script?: yes - Using distributed or parallel set-up in script?: distributed (but only 1 rank per process group) ### Who can help @sgugger ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) Custom-implementation combining two BERT models with a multilayer perceptron that implements a regression on the last hidden layer outputs The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) a regression on pairs of protein sequence and molecule SMILES strings, plus binding affinity ## To reproduce Steps to reproduce the behavior: 1. Call `trainer.predict()` on the encoded inputs 2. If the number of inputs is 1+`per_device_eval_batch_size`, the last batch has just one member and is treated as a scalar 3. the prediction loop fails with ``` File "../affinity_pred/infer_mpi.py", line 189, in main df_pred = predict(df) File "../affinity_pred/infer_mpi.py", line 180, in predict out = trainer.predict(x) File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer.py", line 2065, in predict output = eval_loop( File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer.py", line 2162, in evaluation_loop logits = self._nested_gather(logits) File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer.py", line 2252, in _nested_gather tensors = distributed_concat(tensors) File "/gpfs/alpine/world-shared/bip214/opence-env/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 158, in distributed_concat concat = torch.cat(output_tensors, dim=0) RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated ``` **Example**: last two batches (`per_device_eval_batch_size` is 32) ``` [tensor([ 0.5737, -0.1372, -0.5283, -0.0641, -0.0641, 0.6353, 0.1035, -0.4148, 0.2314, -0.3879, -0.4431, -0.3931, -0.2642, 0.5039, -0.4187, 0.0679, 0.0679, -0.3167, -0.4783, -0.6724, -0.6724, -0.3257, 0.4922, 0.4922, -0.4189, -0.3652, -0.4468, -0.2358, -0.3696, 0.1646, -0.2004, -1.0234], device='cuda:0', dtype=torch.float16)] [tensor(0.7144, device='cuda:0', dtype=torch.float16)] RuntimeError('zero-dimensional tensor (at position 0) cannot be concatenated') ``` ## Expected behavior No exception
06-25-2021 00:14:26
06-25-2021 00:14:26
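A sketch of the fix direction for the zero-dimensional logits shown above: promoting scalars to 1-d tensors before gathering lets `torch.cat` concatenate them (recent PyTorch versions also ship `torch.atleast_1d` for this); the values below are illustrative:

```python
import torch

def ensure_at_least_1d(tensor: torch.Tensor) -> torch.Tensor:
    return tensor if tensor.dim() >= 1 else tensor[None]

# The last batch produced a lone scalar prediction:
pieces = [torch.tensor([0.5737, -0.1372, -0.5283]), torch.tensor(0.7144)]
pieces = [ensure_at_least_1d(t) for t in pieces]
print(torch.cat(pieces, dim=0))  # now concatenates without the zero-dim error
```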
transformers
12,348
open
Generate text until condition
Is there a simple way to have the model generate text until a condition is met? I'm interested in data memorization and want to prompt the model with some tokens from the training data and then have it generate text until it makes a mistake (aka deviates from the training data). The naive approach with a while loop has significant overhead, and I was wondering if there was something smarter I can be doing.
06-24-2021 23:36:15
06-24-2021 23:36:15
Maybe of interest to @patrickvonplaten @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Related PR: https://github.com/huggingface/transformers/pull/12219
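A hedged sketch using the stopping-criteria API referenced in the related PR (`StoppingCriteria` and the `stopping_criteria` argument to `generate` exist in recent transformers releases); the deviation check, model choice and reference passage are illustrative, and the criteria are still evaluated once per generated token:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

class StopOnDeviation(StoppingCriteria):
    def __init__(self, reference_ids: torch.Tensor):
        self.reference_ids = reference_ids  # token ids of the memorized training passage

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        generated = input_ids[0]
        n = min(generated.shape[0], self.reference_ids.shape[0])
        # Stop as soon as the generated prefix no longer matches the training data.
        return not torch.equal(generated[:n], self.reference_ids[:n])

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
reference = tokenizer("some passage from the training data", return_tensors="pt").input_ids[0]
prompt = reference[:5].unsqueeze(0)  # prompt with the first few training tokens
out = model.generate(
    prompt,
    max_length=reference.shape[0],
    stopping_criteria=StoppingCriteriaList([StopOnDeviation(reference)]),
)
```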
transformers
12,347
closed
TypeError: __init__() got an unexpected keyword argument 'report_to'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.1 - Platform: Linux-4.15.0-135-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes, 8 Tesla V100 - Using distributed or parallel set-up in script?: training_args.parallel_mode = ParallelMode.NOT_DISTRIBUTED ### Who can help @sgugger @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using BERT - 'bert-base-multilingual-cased' The problem arises when using: * [x] the official example scripts. The tasks I am working on is: * [x] an official GLUE/SQUaD task: Masked Language Modelling ## To reproduce Steps to reproduce the behavior: 0. Install wandb and tensorboard via Command Line ``` pip install wandb wandb login WANDB_PROJECT=mlm_project pip install tf-nightly pip install tensorboard ``` 1. Initialize training_args ```python training_args = TrainingArguments( output_dir='My_lovely_mlm_model', overwrite_output_dir=True, do_train=True, do_eval=True, per_device_train_batch_size=100, per_device_eval_batch_size=50, evaluation_strategy='steps', logging_steps=10_000, eval_steps=None, prediction_loss_only=True, num_train_epochs=50, save_steps=10_000, save_total_limit=10, report_to=['wandb', 'tensorboard'] ) ``` 2. Run script with button in PyCharm or in console with python `run_mlm.py --report_to wandb --run_name new_run_name` 3. 
Enjoy error message ``` Traceback (most recent call last): File "<input>", line 1, in <module> File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py", line 197, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/Users/LysandreJik_is_the_best/Documents/PyCharmProjects/in_sgugger_we_trust/mlm/run_mlm.py", line 21, in <module> from arguments_for_mlm import model_data_args File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/Users/LysandreJik_is_the_best/Documents/PyCharmProjects/in_sgugger_we_trust/mlm/arguments_for_mlm.py", line 299, in <module> report_to=['wandb', 'tensorboard'] TypeError: __init__() got an unexpected keyword argument 'report_to' ``` Also, i tried to remove 'tensorboard' or 'wandb', but caught same error again and again ## Expected behavior Script must run without this error. Wandb folder must be created.
06-24-2021 22:54:00
06-24-2021 22:54:00
Hello! Do you mind sharing a reproducible code example in colab? I ran the following on the bare `run_mlm.py` script: ``` python run_mlm.py --output_dir=here --report_to=tensorboard --dataset_name=wikitext --dataset_config_name=wikitext-2-raw-v1 --model_name_or_path=bert-base-cased --do_train ``` which works without issue. Maybe @sgugger has more insights. Nice username and Pycharm project :)<|||||>@LysandreJik I found problem: I used `report_to=` in `TrainingArguments` without `run_name=` Even if I add `--run_name=` in command `python3 run_mlm.py --report_to wandb --run_name new_run_name` it didn't work. I think this might be fixed quite easy: Throw exception, if `report_to=` is 'initialized' in `TrainingArguments` without `run_name=` My solution. Also tested not only with 'wandb', but with 'tensorboard' and both of them works very well. (All imports are above) ```python training_args = TrainingArguments( output_dir='My_lovely_mlm_model', overwrite_output_dir=True, do_train=True, do_eval=True, per_device_train_batch_size=100, per_device_eval_batch_size=50, evaluation_strategy='steps', logging_steps=10_000, eval_steps=None, prediction_loss_only=True, num_train_epochs=50, save_steps=10_000, save_total_limit=10, report_to='wandb' run_name="new_run" #ADDED THIS. ) ``` P.S. If I want to continue training my model from checkpoint, i should change `model = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased')` to `model = AutoModelForMaskedLM.from_pretrained('project/pretrained_model_name/checkpoint-190000')`right? <|||||>If you use the Trainer, just use `resume_from_checkpoint=path_to_checkpoint` when calling `trainer.train`.
transformers
12,346
closed
All evaluation processes overload one GPU, when other 7 are available. While Training process fine and is distributed across all 8 cards
## Environment info - `transformers` version: 4.6.1 - Platform: Linux-4.15.0-135-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes, 8 Tesla V100 - Using distributed or parallel set-up in script?: training_args.parallel_mode = ParallelMode.NOT_DISTRIBUTED ### Who can help @sgugger @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using BERT - 'bert-base-multilingual-cased' The problem arises when using: * [ ] the official example scripts. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: Masked Language Modelling ## To reproduce 0. Enter `nvidia-smi` `export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7` <img width="557" alt="Снимок экрана 2021-06-24 в 18 35 25" src="https://user-images.githubusercontent.com/43710369/123291660-f9b22600-d51a-11eb-8972-0c2cbdfc1503.png"> 1. Initialize training_args ```python training_args = TrainingArguments( output_dir='My_lovely_mlm_model', overwrite_output_dir=True, do_train=True, do_eval=True, per_device_train_batch_size=100, per_device_eval_batch_size=50, evaluation_strategy='steps', logging_steps=10_000, eval_steps=None, prediction_loss_only=True, learning_rate=5e-5, weight_decay=0, adam_epsilon=1e-8, max_grad_norm=1.0, num_train_epochs=50, save_steps=10_000, save_total_limit=10 ) ``` 2. Catch RuntimeError: CUDA out of memory ``` RuntimeError: CUDA out of memory. Tried to allocate 17.81 GiB (GPU 0; 31.72 GiB total capacity; 10.49 GiB already allocated; 3.57 GiB free; 26.52 GiB reserved in total by PyTorch) ``` 3. Change batch size and catch another Error `per_device_train_batch_size = 80` `per_device_eval_batch_size = 4` ``` RuntimeError: CUDA out of memory. Tried to allocate 14.25 GiB (GPU 0; 31.72 GiB total capacity; 8.96 GiB already allocated; 8.80 GiB free; 21.29 GiB reserved in total by PyTorch) ``` 4. Change batch size and only then trainin was started `per_device_train_batch_size = 64` `per_device_eval_batch_size = 4` <img width="560" alt="Снимок экрана 2021-06-25 в 1 24 39" src="https://user-images.githubusercontent.com/43710369/123340021-42d29c00-d554-11eb-9849-9aa682ef97e3.png"> **As you see, first GPU in list is loaded with 30.95 GB and other seven with only 10.5 GB. 
I have no idea how to fix this with the default Transformers functions (or the PyTorch lib).** Before each run I deleted the cache with `torch.cuda.empty_cache()`. Also, this might be helpful:
```python
print(training_args.n_gpu)
print(training_args.parallel_mode)
print(training_args.train_batch_size)
print(training_args.eval_batch_size)
print(training_args.device)
```
Got
```
8
ParallelMode.NOT_DISTRIBUTED
512
32
cuda:0
```
## Expected behavior Equal distribution across all 8 GPUs. Normal training and evaluation process. P.S. Also, continued training from a checkpoint (`AutoModelWithLMHead.from_pretrained(path_to_checkpoint_model)`) causes an exception. Should I write another issue?
06-24-2021 22:37:36
06-24-2021 22:37:36
Hey, I think you are using `nn.DataParallel` and not `nn.DistributedDataParallel` and hence 1 GPU is taking more memory. In case of `torch.nn.DistributedDataParallel`, `training_args.parallel_mode` will be `ParallelModel.Distributed`. In order to use `nn.DistributedDataParallel`, launch with this CMD: `python3 -m torch.distributed.launch --nproc_per_node=8 <your-script>.py`<|||||>@vasudevgupta7 Thank you for interesting idea. Sadly, it doesn't work well. May I ask you to inspect my error messages on pastebin? First one for `python3 -m torch.distributed.launch --nproc_per_node=8 run_mlm_copy.py > log_BERTtugan_pretrained_5.txt &` https://pastebin.com/MLkR8MQp Second one for `python3 -m torch.distributed.launch --nproc_per_node=8 run_mlm_copy.py &` https://pastebin.com/UeVg0YpV<|||||>### UPD: First of all, I remove all arguments for evaluation: ```python training_args = TrainingArguments( output_dir='My_lovely_mlm_model', overwrite_output_dir=True, do_train=True, per_device_train_batch_size=150, logging_steps=10_000, prediction_loss_only=True, num_train_epochs=50, save_steps=10_000, save_total_limit=10 ) ``` And all worked fine, but this **not a solution** Secondly, when I run script on all 8 GPUs, I caught an Error (error message below). But when I choose only 7 with `export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6`, everything works fine. How can I solve these problems? Code in "for_github" folder in https://github.com/TatProg/bertugan_sample/tree/master/mlm/for_github ```bash RuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=8, worker_count=16, timeout=0:30:00) ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 5977) of binary: /usr/bin/python3 ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 2/3 attempts left; will restart worker group INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvousing worker group INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result: restart_count=2 master_addr=127.0.0.1 master_port=29500 group_rank=0 group_world_size=1 local_ranks=[0, 1, 2, 3, 4, 5, 6, 7] role_ranks=[0, 1, 2, 3, 4, 5, 6, 7] global_ranks=[0, 1, 2, 3, 4, 5, 6, 7] role_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8] global_world_sizes=[8, 8, 8, 8, 8, 8, 8, 8] ```<|||||>Hey, not sure why that's happening. May be someone else can help.<|||||>Your first pastebin shows that the 8 processes are not properly initialized and joined, so this is more of a PyTorch error. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,345
closed
[examples] [distributed] process datasets.map only on main process in
# 🚀 Feature request Switch the examples to run `datasets.map` only once, on the main process, and have the other processes wait, which would make pre-processing much faster. This is based on: https://huggingface.co/docs/datasets/processing.html?highlight=map#mapping-in-a-distributed-setting @sgugger suggested adding a new context manager, `training_args.main_process_first()`, to make this simple (see the sketch below). Context: https://discuss.huggingface.co/t/slow-processing-with-map-when-using-deepspeed-or-fairscale/7229/ (the title there is misleading; the user hit the issue with any distributed framework).
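A rough sketch of how the proposed context manager could look in an example script. `main_process_first()` is the not-yet-implemented helper suggested above, and the dataset/tokenizer names are placeholders, not code from the repository.

```python
# Hypothetical usage of the proposed helper; it did not exist when this issue
# was opened, and `raw_datasets` / `tokenize_function` are placeholder names.
with training_args.main_process_first():
    # Only the main process runs the (expensive) preprocessing; the other
    # processes block here and then read the cached Arrow files instead.
    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
```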
06-24-2021 19:57:07
06-24-2021 19:57:07
transformers
12,344
closed
Update run_mlm.py
Before this change, the script could not be used for validation only, because the line `extension = data_args.train_file.split(".")[-1]` assumed that the extension must be extracted from the training dataset. This line ran regardless of the user's training or validation options, which led to an error when the user only wanted to run evaluation and did not want to train (because the training file does not exist). I modified it to extract the extension from the training file if the user wants to train, and from the validation file if the user wants to run evaluation (see the sketch below). This way the script can be used for training and validation separately.
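A minimal sketch of the described change, assuming the `data_args.train_file` / `data_args.validation_file` and `training_args.do_train` fields used by the example scripts; the exact condition in the merged code may differ.

```python
# Pick the file whose extension determines the dataset loader, depending on
# which split the user actually wants to run (sketch; field names as in run_mlm.py).
if training_args.do_train:
    extension = data_args.train_file.split(".")[-1]
else:
    extension = data_args.validation_file.split(".")[-1]
```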
06-24-2021 19:56:05
06-24-2021 19:56:05
Anytime! Thanks for accepting it!
transformers
12,343
closed
[trainer] fix label smoothing for default compute_loss
# What does this PR do? Keeps 'labels' in the inputs that are passed to the model. Without this change, the model I'm using (PegasusForConditionalGeneration) can't calculate the loss and generate outputs. The change assumes that all other models also need 'labels' in their inputs. ## Who can review? trainer: @sgugger
06-24-2021 11:59:59
06-24-2021 11:59:59
No, if you leave the labels in the inputs, the model will then compute the loss without label smoothing, which is inefficient (since the proper loss will be re-computed afterwards).<|||||>Oh, you are right, I guess I was overeager with the PR, sorry. After looking more into the way arguments are passed to forward, I'll just need to shift the tokens in 'labels' and add them to the inputs as 'decoder_input_ids' (my confusion came from the fact that, for the model I'm using, this is done automatically, but inside forward). Thanks for the quick answer!<|||||>No worries! And let us know if there is a problem with the generation of `decoder_input_ids` for Pegasus as we would need to fix it. :-)
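For reference, a hand-rolled sketch of the shift described in the comment above. The helper name and the config attributes used are illustrative, not the library's internal implementation.

```python
import torch

def shift_right(labels: torch.Tensor, pad_token_id: int, decoder_start_token_id: int) -> torch.Tensor:
    # Prepend the decoder start token, drop the last label token, and replace
    # any -100 loss-masking placeholders with the pad token id.
    decoder_input_ids = labels.new_zeros(labels.shape)
    decoder_input_ids[:, 1:] = labels[:, :-1].clone()
    decoder_input_ids[:, 0] = decoder_start_token_id
    decoder_input_ids.masked_fill_(decoder_input_ids == -100, pad_token_id)
    return decoder_input_ids

# Hypothetical usage inside a custom training step, assuming `inputs` already
# contains a "labels" tensor and `model` exposes the usual config attributes:
# inputs["decoder_input_ids"] = shift_right(
#     inputs["labels"], model.config.pad_token_id, model.config.decoder_start_token_id
# )
```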
transformers
12,342
closed
Add flax/jax quickstart
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
06-24-2021 10:40:00
06-24-2021 10:40:00
transformers
12,341
closed
[examples/Flax] move the examples table up
# What does this PR do? Move the examples table up in the README.
06-24-2021 10:33:05
06-24-2021 10:33:05
transformers
12,340
closed
[Flax] Move up examples
# What does this PR do? Move up examples for better visibility. @marcvanzee @avital
06-24-2021 10:32:20
06-24-2021 10:32:20
Closing - I was too slow :-/
transformers
12,339
closed
How to get offset mapping when decoding wav2vec?
06-24-2021 10:15:09
06-24-2021 10:15:09
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,338
closed
[ray] try fixing import error
# What does this PR do? Addresses a tabulate import error for Ray Tune integration.
06-24-2021 07:51:42
06-24-2021 07:51:42
transformers
12,337
closed
ValueError: expected sequence of length 133 at dim 1 (got 80)
## Environment info - `transformers` version: 4.8.0 - Platform: - Python version: - PyTorch version (GPU?): 1.5.0 - Tensorflow version (GPU?): - Using GPU in script?: no - Using distributed or parallel set-up in script?: ### Who can help Models: albert, bert, xlm: @LysandreJik Library: tokenizers: @LysandreJik ## Information Model I am using (Bert, XLNet ...): FlauBERT The problem arises when using: * [x] my own modified scripts: (give details below) ``` input_ids = [] attention_masks = [] for sent in texte: encoded_sent = flaubert_tokenizer.encode_plus(sent, add_special_tokens=True, truncation=True, padding=True, return_attention_mask=True) # Add the outputs to the lists input_ids.append(encoded_sent.get('input_ids')) attention_masks.append(encoded_sent.get('attention_mask')) # Convert lists to tensors print("len", len(input_ids)) input_ids = torch.tensor(input_ids) attention_mask = torch.tensor(attention_masks) hidden_state = flaubert(input_ids=input_ids, attention_mask=attention_mask) # Extract the last hidden state of the token `[CLS]` for classification task last_hidden_state_cls = outputs[0][:, 0, :] print(last_hidden_state_cls) ``` The task I am working on is: * [ ] an official GLUE/SQuAD task: (give the name) * [x] my own task or dataset: (give details below) extracting the first hidden state/embeddings produced by the model and giving them to a classic classifier (SVM) ## To reproduce Steps to reproduce the behavior: 1. install transformers, pandas, numpy and torch (1.5.0 or other versions) 2. 3. Stack trace: ``` ---Filename in processed................ corpus_ix_originel_FMC_train etiquette : [2 1 0] Embeddings bert model used.................... : small_cased Some weights of the model checkpoint at flaubert/flaubert_small_cased were not used when initializing FlaubertModel: ['pred_layer.proj.weight', 'pred_layer.proj.bias'] - This IS expected if you are initializing FlaubertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing FlaubertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). <class 'numpy.ndarray'> len 34 Traceback (most recent call last): File "/16uw/test/expe_5/train/test.py", line 63, in <module> main() File "/16uw/test/expe_5/train/test.py", line 46, in main dic_acc, dic_report, dic_cm, s = cross_validation(data_train, data_label_train, models_list, name, language_model_dir) File "/16uw/test/expe_5/train/../traitements/processin_test.py", line 197, in cross_validation features, s = get_flaubert_layer(features, lge_model) File "16uw/test/expe_5/train/../traitements/processin_test.py", line 107, in get_flaubert_layer input_ids = torch.tensor(input_ids) ValueError: expected sequence of length 133 at dim 1 (got 80) ``` ## Expected behavior I expect to get the input_ids and attention_mask to pass to the model in order to get the CLS embedding.
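A plausible fix for the `ValueError` above, as a sketch reusing the variable names from the snippet: calling `encode_plus` per sentence pads each sentence to its own length, so the resulting lists cannot be stacked into a tensor, whereas batch tokenization with `padding="longest"` pads everything to a single length.

```python
# Sketch: batch-encode all sentences at once so they share one padded length.
encoded = flaubert_tokenizer(
    list(texte),
    add_special_tokens=True,
    truncation=True,
    padding="longest",          # pad every sequence to the longest in the batch
    return_tensors="pt",
)
input_ids = encoded["input_ids"]            # shape: (batch, max_len)
attention_mask = encoded["attention_mask"]  # 1 = real token, 0 = padding
outputs = flaubert(input_ids=input_ids, attention_mask=attention_mask)
last_hidden_state_cls = outputs[0][:, 0, :]  # first-token embedding for each sentence
```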
06-24-2021 06:53:01
06-24-2021 06:53:01
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,336
closed
Fix torchscript tests
With the new non-persistent buffers, the TorchScript tests fail. This PR updates the TorchScript tests to allow for non-persistent buffers.
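For context, a minimal standalone PyTorch sketch (not the transformers code itself) of what a non-persistent buffer is: it lives on the module but is excluded from the `state_dict`, which is what the TorchScript tests have to account for.

```python
import torch
from torch import nn

class WithBuffer(nn.Module):
    def __init__(self):
        super().__init__()
        # persistent=False keeps the buffer out of state_dict() and checkpoints.
        self.register_buffer("position_ids", torch.arange(512).unsqueeze(0), persistent=False)

    def forward(self, x):
        # Use the buffer like any other attribute; it still moves with .to(device).
        return x + self.position_ids[:, : x.size(1)]

print("position_ids" in WithBuffer().state_dict())  # False
```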
06-24-2021 05:31:22
06-24-2021 05:31:22
transformers
12,335
closed
[WIP] FNet
This PR adds FNet in PyTorch. - Paper: https://arxiv.org/pdf/2105.03824v2.pdf - Code and Checkpoints: https://github.com/google-research/google-research/tree/master/f_net - Authors: @jtainslie @ilyaeck @santiaontanon-google
06-24-2021 03:18:14
06-24-2021 03:18:14
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am still working on this PR in a branch and will create another PR when it's somewhat ready.
transformers
12,334
closed
Add additional variables without shape
These additional training variables without shape are present in the NVIDIA implementation for training BERT models: https://github.com/NVIDIA/DeepLearningExamples/tree/master/TensorFlow/LanguageModeling/BERT. The conversion works as expected after this change.
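A hedged sketch of how such variables can be spotted in a TF checkpoint. The skip list below is illustrative only (typical optimizer/bookkeeping names), not the exact set handled by the conversion script.

```python
import tensorflow as tf

# Example names of training-state variables that carry no model weights;
# treat this list as an assumption, not the definitive one used in the PR.
SKIP_SUBSTRINGS = ("global_step", "good_steps", "loss_scale", "adam_m", "adam_v", "LAMB")

for name, shape in tf.train.list_variables("/path/to/model.ckpt"):
    if shape == [] or any(s in name for s in SKIP_SUBSTRINGS):
        # These have no counterpart in the PyTorch model and should be skipped.
        print(f"skipping {name} (shape={shape})")
```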
06-24-2021 00:56:07
06-24-2021 00:56:07
Hi, thanks for opening a PR! Could you run `make fixup` at the root of your clone to apply to code quality fixes?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,333
closed
Missing tokenizer_class for `mbart-large-50-many-to-one-mmt` model
Hi @patil-suraj, I noticed all `mbart-50` models have their tokenizer_class set to "MBart50Tokenizer" in their config file except for `mbart-large-50-many-to-one-mmt`. This causes the wrong tokenizer to be loaded for this model (`tokenization_mbart` instead of `tokenization_mbart50`). Please check. Thanks!
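Until the config was fixed, one workaround was to load the MBart-50 tokenizer class explicitly instead of going through `AutoTokenizer` (sketch; the `src_lang` value is just an arbitrary example).

```python
from transformers import MBart50Tokenizer

# Explicitly pick the MBart-50 tokenizer so the missing tokenizer_class in the
# config cannot silently fall back to the plain MBart tokenizer.
tokenizer = MBart50Tokenizer.from_pretrained(
    "facebook/mbart-large-50-many-to-one-mmt", src_lang="de_DE"
)
print(type(tokenizer).__name__)  # MBart50Tokenizer, not MBartTokenizer
```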
06-24-2021 00:38:53
06-24-2021 00:38:53
Good catch @Mehrad0711 ! Thank you for reporting this. I just fixed it https://huggingface.co/facebook/mbart-large-50-many-to-one-mmt/blob/main/config.json
transformers
12,332
closed
Cast logits from bf16 to fp32 at the end of TF_T5
# What does this PR do? This change enables tf.keras.mixed_precision with bf16 I found that T5 model does not follow the [official TF guidelines regarding mixed precision](https://www.tensorflow.org/guide/mixed_precision). Therefore it's impossible to use `tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')` which is the recommended way of training on bfloat16. I took a notebook [snapthat/TF-T5-text-to-text](https://github.com/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-%20Training.ipynb) and added `tf.keras.mixed_precision.set_global_policy('mixed_bfloat16')`. Experiments were done on tensorflow-cpu == 2.4.1, datasets == 1.8.0, transformers == 4.6.1. The first issue is with the loss curve: ![loss_t5](https://user-images.githubusercontent.com/37601244/123172656-b81f6d80-d47d-11eb-886a-f3974321f9d8.PNG) And the second is with inference (also included in the notebook): ``` File "bf16_experiment.py", line 136, in <module> max_length=decoder_max_len, top_p=0.95, top_k=50, repetition_penalty=2) File "/home/mszutenberg/venv24/lib/python3.6/site-packages/transformers/generation_tf_utils.py", line 417, in generate use_cache=use_cache, File "/home/mszutenberg/venv24/lib/python3.6/site-packages/transformers/generation_tf_utils.py", line 472, in _generate_no_beam_search next_token_logits = tf.math.multiply(next_token_logits, next_token_logits_penalties) File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper return target(*args, **kwargs) File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py", line 518, in multiply return gen_math_ops.mul(x, y, name) File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/ops/gen_math_ops.py", line 6068, in mul _ops.raise_from_not_ok_status(e, name) File "/home/mszutenberg/venv24/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 6862, in raise_from_not_ok_status six.raise_from(core._status_to_exception(e.code, message), None) File "<string>", line 3, in raise_from tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor [Op:Mul] ``` This fix solves both issues. I see that the problem may also occur in other models. I can check and prepare fixes if this PR is approved. ## Who can review? @LysandreJik @patrickvonplaten
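For readers unfamiliar with the cited guideline, here is a standalone Keras sketch (not the transformers T5 code) of the pattern the PR applies: compute in bfloat16 under the mixed-precision policy but cast the final logits back to float32.

```python
import tensorflow as tf

# Enable the policy the issue describes (TF >= 2.4).
tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")

inputs = tf.keras.Input(shape=(16,))
x = tf.keras.layers.Dense(32, activation="relu")(inputs)  # computes in bfloat16
logits = tf.keras.layers.Dense(10)(x)                      # still bfloat16
# Per the TF mixed-precision guide, keep the model output in float32.
logits = tf.keras.layers.Activation("linear", dtype="float32")(logits)
model = tf.keras.Model(inputs, logits)
print(model.output.dtype)  # float32
```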
06-23-2021 21:56:49
06-23-2021 21:56:49
cc @Rocketknight1 <|||||>Hey! Firstly, good catch with this issue - this seems like a good PR. Two questions before we merge it, though: - You tested with tensorflow-cpu - I'm guessing this means you were running on TPU, correct? (It's not like most CPUs support `bfloat16`, after all) - The code only checks for the dtype `bfloat16` and not `float16`. I'm guessing the same issue might occur on GPUs with float16 dtype, so we should probably cast to float32 in either case. If you don't have access to a GPU or GPU instance, would you like to send me your code or a notebook so I can test it?<|||||>Hi @Rocketknight1 , I used Kaggle Code to run my script on TPU and: - performance was the same on fp32 and bf16 (_TPU uses bf16 under the hood_) - accuracy issue did not occur (I suspect that TPU modifies SparseSoftmaxCrossEntropyWithLogits precision: bf16->fp32) - inference issue was reproduced with mixed_bfloat16 I used `os.environ['TF_XLA_FLAGS'] = '--tf_xla_enable_xla_devices'` in order to run bfloat16 on CPU. The loss curve in my PR comes from such execution. I guess that adding cast to `float32` is required for `float16` too but I was getting `loss = nan` while attempting to run my script with `mixed_float16`. Maybe something is still broken or I do the conversion to fp16 incorrectly. You can find my script in https://gist.github.com/szutenberg/80f30b980c15e384200d86ae242a1067 Output on TPU: ``` 1/10 [==>...........................] - ETA: 14:11 - accuracy: 0.2245 - loss: 12.9219 step 1: 94563.1 ms 2/10 [=====>........................] - ETA: 0s - accuracy: 0.3733 - loss: 7.3379 step 2: 104.5 ms 3/10 [========>.....................] - ETA: 0s - accuracy: 0.4637 - loss: 5.2188 step 3: 84.6 ms 4/10 [===========>..................] - ETA: 0s - accuracy: 0.5263 - loss: 4.1115 step 4: 86.1 ms 5/10 [==============>...............] - ETA: 0s - accuracy: 0.5731 - loss: 3.4062 step 5: 84.6 ms 6/10 [=================>............] - ETA: 0s - accuracy: 0.6100 - loss: 2.9387 step 6: 85.4 ms 7/10 [====================>.........] - ETA: 0s - accuracy: 0.6399 - loss: 2.5959 step 7: 85.9 ms 8/10 [=======================>......] - ETA: 0s - accuracy: 0.6644 - loss: 2.3573 step 8: 85.7 ms 9/10 [==========================>...] - ETA: 0s - accuracy: 0.6849 - loss: 2.1528 step 9: 85.7 ms 10/10 [==============================] - 95s 88ms/step - accuracy: 0.7167 - loss: 1.9842 ... InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor [Op:Sub] ``` Output on XLA_CPU: ``` 1/10 [==>...........................] - ETA: 10:50 - accuracy: 0.2245 - loss: 15.4375 step 1: 72235.4 ms 2/10 [=====>........................] - ETA: 5:28 - accuracy: 0.3718 - loss: 9.4609 step 2: 41093.1 ms 3/10 [========>.....................] - ETA: 4:43 - accuracy: 0.4576 - loss: 7.0469 step 3: 40026.9 ms 4/10 [===========>..................] - ETA: 4:02 - accuracy: 0.5160 - loss: 5.7227 step 4: 39995.5 ms 5/10 [==============>...............] - ETA: 3:20 - accuracy: 0.5595 - loss: 4.8281 step 5: 38984.9 ms 6/10 [=================>............] - ETA: 2:39 - accuracy: 0.5933 - loss: 4.3073 step 6: 38997.0 ms 7/10 [====================>.........] - ETA: 1:59 - accuracy: 0.6205 - loss: 3.9721 step 7: 39454.5 ms 8/10 [=======================>......] - ETA: 1:19 - accuracy: 0.6420 - loss: 3.7764 step 8: 38780.1 ms 9/10 [==========================>...] 
- ETA: 39s - accuracy: 0.6598 - loss: 3.5469 step 9: 39538.0 ms 10/10 [==============================] - 428s 40s/step - accuracy: 0.6877 - loss: 3.3266 ... tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Mul as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor [Op:Mul] ``` You can see that loss behaves differently. After applying my patch everything is ok: ``` 1/10 [==>...........................] - ETA: 10:00 - accuracy: 0.2245 - loss: 12.3656 step 1: 66688.5 ms 2/10 [=====>........................] - ETA: 5:13 - accuracy: 0.3773 - loss: 7.0029 step 2: 39146.4 ms 3/10 [========>.....................] - ETA: 4:32 - accuracy: 0.4685 - loss: 4.9963 step 3: 38585.8 ms 4/10 [===========>..................] - ETA: 3:51 - accuracy: 0.5314 - loss: 3.9416 step 4: 38006.6 ms 5/10 [==============>...............] - ETA: 3:12 - accuracy: 0.5789 - loss: 3.2643 step 5: 37900.7 ms 6/10 [=================>............] - ETA: 2:33 - accuracy: 0.6164 - loss: 2.8159 step 6: 38225.8 ms 7/10 [====================>.........] - ETA: 1:55 - accuracy: 0.6465 - loss: 2.4948 step 7: 38605.7 ms 8/10 [=======================>......] - ETA: 1:16 - accuracy: 0.6711 - loss: 2.2820 step 8: 38495.9 ms 9/10 [==========================>...] - ETA: 38s - accuracy: 0.6914 - loss: 2.0957 step 9: 38342.7 ms 10/10 [==============================] - 413s 38s/step - accuracy: 0.7229 - loss: 1.9338 ... We went on a trip to Europe. We had our breakfast at 7 am in the morning at the nearby coffee shop. Wore a dark blue over coat for our first visit to Louvre Museum to experience history and art. At what time did we had breakfast? Answer: <pad> 7 am in the morning</s> ```<|||||>Thanks for the detailed log! I have a 30-series GPU here, so I'll try mixed_float16 and mixed_bfloat16 with your script when I get a chance and see if I get the same issues.<|||||>Hi @Rocketknight1, Any updates? I managed to get rid of nans on mixed_float16 by adding fp32 casts in: * TFT5MainLayer for calculating extended_attention_mask: ``` if extended_attention_mask.dtype == tf.float16: extended_attention_mask = tf.cast(extended_attention_mask, tf.float32) extended_attention_mask = (1.0 - extended_attention_mask) * -1e9 ``` * TFT5Attention before softmax: ``` if scores.dtype == tf.float16: scores = tf.cast(scores, tf.float32) scores += position_bias weights = tf.nn.softmax(scores, axis=-1) ``` but it's still not training (accuracy is 1.0 and loss around 10 - they are the same in each step, it seems that forward part is broken). What do you think about merging my fix for bf16 and fixing fp16 later, by a separate PR?<|||||>I tried testing this. 'mixed_bfloat16' doesn't actually get run on the GPU on Keras, even though I believe 30-series GPUs support it. Very few CPUs support bfloat16 arithmetic, so I presume that no bfloat16 operations are used on CPU either, and that 'mixed_bfloat16' only actually runs bfloat16 operations on TPUs. As such, I'm confused about what's going on here - I suspect the differences you see with the 'mixed_bfloat16' policy on CPU are caused by some other side effect rather than true bfloat16 computation. I'd like to resolve this before approving the bfloat16 PR - if it turns out that TPUs already handle this issue and no other hardware actually runs bfloat16, then this PR isn't necessary, although your effort is appreciated anyway! 
Also the float16 fix might still be very useful - if you get it working, and you notice a difference on GPU with and without the cast to float32, please let us know!<|||||>@Rocketknight1, bfloat16 is not supported by GPU in TF. Ampere supports bfloat16 but the support for this datatype wasn't added to TensorFlow. For example [this article](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/) says only about adding TensorFloat32 and using bf16 directly from CUDA, not TF. I'm working on a custom accelerator which does support bf16 and I have exactly the same issue as on CPU (broken loss and broken inference). Inference is broken on a graph level (the model does not follow the rule _make sure the (model) output is float32_). Did you run my reproducer on CPU? It should work on tensorflow-cpu==2.4.1. You can decrease batch_size to 1 and still be able to see the difference. In order to run on TPU (you can use free Kaggle code) set "use_tpu=True" in the script.<|||||>Hi, I'm sorry for the delay here! I'm probably explaining myself badly, though - what I want to know before I merge anything here is what exactly your reproducer is doing on CPU. The reason I'm asking is that bfloat16 is not supported except by a few rare CPUs, and I don't think bfloat16 on CPU is supported by TF/Keras at all. So I don't really understand what's happening when you run that code on CPU - I realize something changes, but I don't know what or why!<|||||>Hi @Rocketknight1 Sorry for the delay caused by the summer season ;) My reproducer allows using bfloat16 by enabling XLA_CPU device which registers kernels for bfloat16 too. In the current TF version, bfloat16 kernels are not being registered for the CPU device. This is just to show that something is wrong also with the training accuracy. In my opinion, proof that something is wrong with inference is enough to accept this change. Other models and templates should be reviewed as well. What do you think?<|||||>Hi @szutenberg - after the conversation in #12898 I think I'm happy to accept this PR. Could you change it to check if the dtype is either `float16` or `bfloat16`? Alternatively, you could just run the casts without an `if` statement - it will do nothing if the dtype is already `float32`.<|||||>Hi @Rocketknight1 , thanks! This PR is ready for merge.<|||||>Done! Thank you for this, and thanks for your patience with the review process too!
transformers
12,331
closed
Default parameters for training DistilBERT and DistilGPT2
https://github.com/huggingface/transformers/blob/cf3c9198aad5e2ea02e778aa9b04d27c216d1a35/examples/research_projects/distillation/train.py#L188 Hi @VictorSanh, I was going through your distillation code. Can you share the most suitable hyperparameters for training the distilled models, mainly DistilGPT2? Are the default parameters the best to use? I am confused because the default batch size of 5 with 50 gradient accumulation steps (i.e., 5 x 8 x 50 = 2000 examples) does not align with the number reported in the [paper](https://arxiv.org/abs/1910.01108) (4K examples).
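A back-of-the-envelope check of the numbers in question; the factor of 8 assumes an 8-GPU run, which is an assumption about the setup rather than something stated in the defaults themselves.

```python
# Effective batch size implied by the default arguments quoted above.
per_gpu_batch_size = 5
n_gpus = 8
gradient_accumulation_steps = 50

effective_batch = per_gpu_batch_size * n_gpus * gradient_accumulation_steps
print(effective_batch)  # 2000, versus the ~4K sequences per step reported in the paper
```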
06-23-2021 21:50:43
06-23-2021 21:50:43
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
12,330
closed
Fixing the pipeline optimization by reindexing targets (V2)
# What does this PR do? Linked to #12329. This is the other version, which keeps the original scores (meaning you could have mostly 0.0 for improbable tokens).
06-23-2021 16:51:40
06-23-2021 16:51:40
@LysandreJik pulled your tests, thanks!<|||||>Yeah it looks good, thanks!<|||||>Thanks @guyrosin!
transformers
12,329
closed
Fixing the pipeline optimization by rescaling the logits first.
06-23-2021 16:39:13
06-23-2021 16:39:13
Chosen https://github.com/huggingface/transformers/pull/12330 instead
transformers
12,328
closed
Update description of TrainingArgs param save_strategy
# What does this PR do? Fixes #12315. TrainingArguments parameter docs: mention in the `save_strategy` param description that `load_best_model_at_end` can override it. ## Who can review? @sgugger
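For illustration, a hedged sketch of the interaction being documented, using real `TrainingArguments` fields; the specific step counts are arbitrary.

```python
from transformers import TrainingArguments

# With load_best_model_at_end=True the Trainer needs a checkpoint at every
# evaluation, so the save schedule has to line up with the evaluation schedule
# rather than following save_strategy alone.
args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,                     # must match eval_steps in this setup
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
)
```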
06-23-2021 16:26:17
06-23-2021 16:26:17
transformers
12,327
closed
[Flax T5] Fix weight initialization and fix docs
# What does this PR do? Apply fixes to Flax T5 according to comments on https://github.com/huggingface/transformers/pull/12150
06-23-2021 16:01:27
06-23-2021 16:01:27
The failing hub test is unrelated IMO
transformers
12,326
closed
Changed modeling_fx_utils.py to utils/fx.py for clarity
Moved **modeling_fx_utils.py** to **utils/fx.py** to make it clear that it is not "modeling_flax_utils.py". Since there is a modeling_utils.py for PyTorch and a modeling_tf_utils.py for TensorFlow, modeling_fx_utils.py could be mistaken for the Flax counterpart, but it is actually related to the torch.fx feature, hence the pull request.
06-23-2021 15:54:07
06-23-2021 15:54:07