repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 12,626 | closed | can't load flax weights in PyTorch if flax model is saved with dtype `bfloat16` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: linux
- Python version: 3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
- research_projects/r./run_clm_flax.py : @patrickvonplaten @patil-suraj
## Information
Model I am using (gpt2):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Script:
1. run conversion script
2. receive error
```python
path = "./romanian-gpt2_80000/ckpt-80000"
config = AutoConfig.from_pretrained(path)
model = AutoModelForCausalLM.from_config(config)
load_flax_checkpoint_in_pytorch_model(model, path + "/flax_model.msgpack")
model.save_pretrained("./romanian-gpt2-large_80000")
```
Converting FLAX model to Pytorch gives the following error:
TypeError: can't convert np.ndarray of type bfloat16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Convert flax to pytorch
<!-- A clear and concise description of what you would expect to happen. -->
Adding the argument `--dtype="bfloat16"` to `run_clm_flax.py` converts some of the parameters to bfloat16, but it then raises an error when converting the saved flax model to PyTorch. A workaround is to cast all parameters of the flax model to fp32 and then convert the flax model to PyTorch:
```python
def to_f32(t):
    return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)

model.params = to_f32(model.params)
```
 | 07-10-2021 12:37:26 | 07-10-2021 12:37:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,625 | closed | Flax Wav2Vec2 - Add venv section and fix training script | # What does this PR do?
Training Facebook's Wav2Vec2 as explained in this README, I encountered several issues related to:
- libraries that should be installed to support audio data and
- the proposed training script.
The purpose of this PR is to explain which libraries could be helpful to work with audio data and to update the current training script so it can be used out-of-the-box to train a Wav2Vec2 model.
The error messages that motivated each of the changes in this PR are listed [here](https://github.com/nlp-en-es/wav2vec2-spanish/blob/main/differences_from_original.md).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
I think @patrickvonplaten would be the right person to review this PR.
| 07-10-2021 10:11:22 | 07-10-2021 10:11:22 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,624 | closed | Add tokenizer_file parameter to PreTrainedTokenizerFast docstring | ## What does this PR do?
- Add tokenizer_file parameter to PreTrainedTokenizerFast docstring
- References [this](https://github.com/huggingface/transformers/issues/12583#issuecomment-876613898) comment from @sgugger
| 07-10-2021 09:27:35 | 07-10-2021 09:27:35 | |
transformers | 12,623 | closed | Inconsistent shapes between value and initializer for parameter: FlaxGPT2LMHeadModel | Hello!
I was trying to fine-tune the GPT-2 medium model with Flax on a custom (tokenized) dataset and encountered this error: `Inconsistent shapes between value and initializer for parameter "scale" in "/transformer/ln_f": (1024,), (0,)`
Edit: The whole traceback is quite long and is reported [here](https://pastebin.com/UMa3BbxP). A very short version is mentioned at the end here.
I'm using a PyTorch Dataset(with 1024 tokens per batch) and DataLoader(`batch_size=64`) with a `numpy_collate` function as mentioned at https://jax.readthedocs.io/en/latest/notebooks/Neural_Network_and_Data_Loading.html and then I'm yielding a "superbatch" of shape (8, 64, 1024) for multi-tpu using a custom function.
I'm using pre-trained gpt-2 tokenizer along with FlaxGPT2LMHeadModel.
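For reference, the `numpy_collate` helper referenced above looks roughly like this (adapted from the linked JAX data-loading guide, not copied from the author's notebook):
```python
import numpy as np

def numpy_collate(batch):
    # Stack NumPy arrays, recurse into tuples/lists, and wrap everything else in an array,
    # so the PyTorch DataLoader yields NumPy batches that JAX can consume directly.
    if isinstance(batch[0], np.ndarray):
        return np.stack(batch)
    elif isinstance(batch[0], (tuple, list)):
        return [numpy_collate(samples) for samples in zip(*batch)]
    else:
        return np.array(batch)
```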
Here is the code for the training loop:
```python
for epoch in tqdm(range(1, num_epochs + 1), desc=f"Epoch ...", position=0, leave=True):
    rng, input_rng = jax.random.split(rng)

    # -- Train --
    train_loader = make_superbatch()
    with tqdm(total=len(script_dataset), desc="Training...", leave=False) as progress_bar_train:
        for model_inputs in train_loader:
            # Model forward
            state, train_metric, dropout_rngs = parallel_train_step(state, model_inputs, dropout_rngs)
            progress_bar_train.update(1)

        progress_bar_train.write(
            f"Train... ({epoch}/{num_epochs} | Loss: {round(train_metric['loss'].mean(), 3)}, Learning Rate: {round(train_metric['learning_rate'].mean(), 6)})"
        )
```
Here is the error that I'm encountering and the whole traceback:
```
Epoch ...: 0%| | 0/10 [00:00<?, ?it/s]
Training...: 0%| | 0/1470930 [00:00<?, ?it/s]
---------------------------------------------------------------------------
UnfilteredStackTrace Traceback (most recent call last)
<ipython-input-29-5c831c772fc6> in <module>()
8 # Model forward
----> 9 state, train_metric, dropout_rngs = parallel_train_step(state, model_inputs, dropout_rngs)
10
47 frames
UnfilteredStackTrace: flax.errors.ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter "scale" in "/transformer/ln_f": (1024,), (0,). (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamShapeError)
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
The above exception was the direct cause of the following exception:
ScopeParamShapeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/flax/core/scope.py in param(self, name, init_fn, *init_args)
618 if jnp.shape(val) != jnp.shape(abs_val):
619 raise errors.ScopeParamShapeError(name, self.path_text,
--> 620 jnp.shape(val), jnp.shape(abs_val))
621 else:
622 if not self.is_mutable_collection('params'):
ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter "scale" in "/transformer/ln_f": (1024,), (0,). (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamShapeError)
```
I'm guessing this `1024` comes from the number of tokens per batch. How do I resolve this error?
Any help would be much appreciated.
Thank You.
| 07-10-2021 08:19:25 | 07-10-2021 08:19:25 | I don't exactly know how but this issue went away on its own after appearing unexpectedly out of nowhere. Closing the issue. <|||||>> I don't exactly know how but this issue went away on its own after appearing unexpectedly out of nowhere. Closing the issue.
Dear thisis-nkul, have you solved this problem? How? |
transformers | 12,622 | closed | unclear `prepare_seq2seq_batch` deprecation | When using `prepare_seq2seq_batch` the user now gets:
> transformers-master/src/transformers/tokenization_utils_base.py:3277: FutureWarning: `prepare_seq2seq_batch` is deprecated and will be removed in version 5 of 🤗 Transformers. Use the regular `__call__` method to prepare your inputs and the tokenizer under the `with_target_tokenizer` context manager to prepare your targets. See the documentation of your specific tokenizer for more details.
It's very hard to act on, as I'm not sure what "regular `__call__` method" refers to, and I couldn't find any tokenizer documentation that ever mentions `with_target_tokenizer`.
Perhaps this is an unintended typo? was it meant to be `with target_tokenizer`? `with FooTokenizer`?
Please kindly suggest a more user-friendly deprecation and at least one example or a link to such.
Thank you.
@sgugger, @LysandreJik | 07-10-2021 04:43:14 | 07-10-2021 04:43:14 | Why is `__call__` hard to understand? It's the regular Python method for when the tokenizer is called directly on inputs. How would you formulate that better?
For the `with_target_tokenizer` it's a typo indeed, it should be `as_target_tokenizer`.
As for an example, this is what is used in every example script, see for instance the [run_translation](https://github.com/huggingface/transformers/blob/9adff7a0f49f88a6cc718a1d30088988dc78bb6a/examples/pytorch/translation/run_translation.py#L414) script.
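For readers landing here, a minimal sketch of that pattern (the checkpoint name and sentences below are placeholders; the real preprocessing lives in the `run_translation` script linked above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-es")  # any seq2seq checkpoint

inputs = ["The house is wonderful."]   # source sentences (placeholder)
targets = ["La casa es maravillosa."]  # reference translations (placeholder)

# The regular __call__ tokenizes the source side.
model_inputs = tokenizer(inputs, max_length=128, truncation=True)

# Targets are tokenized under the as_target_tokenizer context manager.
with tokenizer.as_target_tokenizer():
    labels = tokenizer(targets, max_length=128, truncation=True)

model_inputs["labels"] = labels["input_ids"]
```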
I'm curious, where did you still find a reference to this method? It's been removed from all examples and documentation normally (and has been deprecated five months ago).<|||||>It's totally obvious once I see an example that I was now able to find since you gave the correct context manager name, it is so not obvious from the warning message. Moreover, none of the tokenizers document that as suggested by the warning's message. They do document the specifics of its usage.
I made an attempt at another version here: https://github.com/huggingface/transformers/pull/12669<|||||>> I'm curious, where did you still find a reference to this method? It's been removed from all examples and documentation normally (and has been deprecated five months ago).
In several of the scripts I used in the past to make tiny models.
I'm curious in turn why was this wrapper deprecated? To make things more explicit? Looks like a lot more code to write instead of the wrapper.
<|||||>I stumbled upon this issue when googling the warning. For the translation task this
`tokenized_text = tokenizer.prepare_seq2seq_batch([text], return_tensors='pt')`
has to be replaced by this:
```
with tokenizer.as_target_tokenizer():
tokenized_text = tokenizer(text, return_tensors='pt')
```
Which is much clearer than using `prepare_seq2seq_batch`, but for anyone coming from other languages but python, the concept of `__call__` might not be transparent in first place :)<|||||>I'm getting the same text, not the translated one when I change from `prepare_seq2seq_batch` to `as_target_tokenizer` |
transformers | 12,621 | closed | can't pickle <class 'types.AutoModelForCausalLM'> |
Hi, a new problem has arisen
we can pickle "LazyModule" now, but can't pickle <class 'types.AutoModelForCausalLM'>
@stas00 @patrickvonplaten, @LysandreJik
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 509, in init_process
fn(rank, size)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 367, in main
tokenized_datasets = raw_datasets.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 471, in map
{
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 472, in <dictcomp>
k: dataset.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in map
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 498, in dump
StockPickler.dump(self, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 1493, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 1439, in save_type
StockPickler.save_global(pickler, obj, name=name)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 1070, in save_global
raise PicklingError(
_pickle.PicklingError: Can't pickle <class 'types.AutoModelForCausalLM'>: it's not found as types.AutoModelForCausalLM
_Originally posted by @lancekung in https://github.com/huggingface/transformers/issues/12549#issuecomment-877537851_ | 07-10-2021 02:06:06 | 07-10-2021 02:06:06 | Hello! Could you provide a code example that yields this error? Thank you!<|||||>```
import pickle
from transformers import AutoModelForCausalLM
pickle.dumps(AutoModelForCausalLM)
```
I think it comes from the fact that those are autogenerated.<|||||>> ```
> import pickle
> from transformers import AutoModelForCausalLM
>
> pickle.dumps(AutoModelForCausalLM)
> ```
>
> I think it comes from the fact that those are autogenerated.
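(Editorial aside, not from the thread: a self-contained illustration of why pickling fails for a class that is generated at runtime and advertised under a module where it cannot be looked up again; the class name below is made up.)
```python
import pickle

# pickle stores a class by "module.QualName" and re-imports it by that path on load.
Dynamic = type("Dynamic", (), {})
Dynamic.__module__ = "types"  # mimic a dynamically generated class pointing at the wrong module

try:
    pickle.dumps(Dynamic)
except pickle.PicklingError as err:
    # e.g. "Can't pickle <class 'types.Dynamic'>: it's not found as types.Dynamic"
    # (exact wording varies with the Python version)
    print(err)
```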
thanks for your help, but I tested based on your modification in #12654, a new problem arises:
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 509, in init_process
fn(rank, size)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 456, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py", line 1275, in train
tr_loss += self.training_step(model, inputs)
File "/media/cfs/gonglixing/9Nctl/opensource/transformers-master/src/transformers/trainer.py", line 1778, in training_step
self.scaler.scale(loss).backward()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/tensor.py", line 245, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/torch/autograd/__init__.py", line 145, in backward
Variable._execution_engine.run_backward(
SystemError: <built-in method run_backward of torch._C._EngineBase object at 0x7f06bfae6b30> returned NULL without setting an error
|
transformers | 12,620 | closed | [doc] fix anchor | mixed rst with md, fixing the anchor | 07-10-2021 01:04:50 | 07-10-2021 01:04:50 | |
transformers | 12,619 | closed | Add tokenizers class mismatch detection between `cls` and checkpoint | # What does this PR do?
Fixes #12416
This PR detects a mismatch between `cls` and a checkpoint a user intends to load.
However, it can't find a mismatch when a config doesn't contain the tokenizer's information.
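A hypothetical illustration of the situation this PR guards against (the checkpoint and class below are only examples, and the exact behaviour, presumably a logged warning, is defined by the PR itself):
```python
from transformers import RobertaTokenizer

# "gpt2" was saved with GPT2Tokenizer; loading it with RobertaTokenizer happens to work
# mechanically (both are byte-level BPE) but is exactly the kind of mismatch between
# `cls` and the checkpoint's recorded tokenizer class that this check detects.
tokenizer = RobertaTokenizer.from_pretrained("gpt2")
```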
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-09-2021 21:35:25 | 07-09-2021 21:35:25 | I revised the code based on your reviews. <|||||>I want to ask you to refactor the logic.
Thank you for offering!<|||||>@SaulLu could you confirm you're happy with the changes? I think this is good to be merged on my side, thanks for the adjustments @europeanplaice.<|||||>@SaulLu @sgugger
We did an excellent job! Thank you very much for your help! |
transformers | 12,618 | closed | validation metrics not being logged by Trainer | ### Who can help
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): BigBird
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Details here (https://discuss.huggingface.co/t/no-loss-being-logged-when-running-mlm-script-colab/8134)
The validation/eval loss is not being logged at all when using wandb or tensorboard - suffice to say, it's not being logged by the Trainer.
Tried different settings for the script, none of which yield any results. | 07-09-2021 19:27:34 | 07-09-2021 19:27:34 | You are not providing a reproducer (which would include the data on which to run the script) so we can reproduce your problem. Re-tested the script on TPU and it does run evaluation every eval_steps, as provided. A few things that could be the problem in your case:
- since you set gradient_accumulation_steps = 500, each optimizer step consumes 500 batches, so your evaluation (which runs every `eval_steps` optimizer steps) will only happen every 500 x eval_steps batches; make sure you have enough training samples to get to that point (you do not provide the size of your training set)
- you could have an empty dataset (less than one batch), which would make the evaluation phase empty as well.<|||||>```py
!touch dataset.txt
import random
f = open('./dataset.txt', 'w')
for lines in range(50):
f.write(' '.join(m for m in [str(random.randint(0, 40000)) for i in range(16000)]) + '\n') #16000 words/(numbers) in one line, with random numbers from 0-40000 only.
f.close()
```
should create 50 sequences; I am using 22,500 for my training and 2,500 for validation. My batch size is `1` due to the long length of sequences, hence I don't believe that I have <1.
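(Editorial aside, not from the thread: a back-of-the-envelope check of the eval schedule with the numbers used here and in the command shown further down, assuming `eval_steps` counts optimizer steps, as the Trainer does.)
```python
# Hypothetical sanity check, not part of the scripts in this thread.
num_train_samples = 22_500
num_epochs = 5
n_devices = 8                      # TPU cores
per_device_batch_size = 1
gradient_accumulation_steps = 500
eval_steps = 250                   # evaluation every 250 optimizer steps

samples_per_optimizer_step = n_devices * per_device_batch_size * gradient_accumulation_steps
total_optimizer_steps = (num_train_samples * num_epochs) // samples_per_optimizer_step

print(samples_per_optimizer_step)  # 4000 samples consumed per optimizer step
print(total_optimizer_steps)       # ~28 optimizer steps in the whole run, far short of eval_steps=250
```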
Attached is my validation [file](https://drive.google.com/file/d/1-6-db2cM-jpN7rpzXxWg_MpLspr6alWb/view?usp=sharing) of (uncompressed) 116MB. My training file is about 1.3GB.
Perhaps it may be a problem with my validation dataset, but I don't spot it on the surface.<|||||>Additionally, I had put a run for 5 hours ~ (`5 epochs`). In the logs from wandb, it had obviously completed in 5 hours - however, the script wouldn't stop running for some reason which in turn wouldn't trigger `wand.finish()`. From the logs, it seems that the script was running for 3 hours more after training, which seems pretty mysterious.
I don't understand why I am getting weird behaviour. For reference, this is my cell:-
```py
%%bash
python xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir="./results" \
--model_type="big_bird" \
--config_name="./config" \
--tokenizer_name="./tokenizer" \
--train_file="./dataset.txt" \
--validation_file="./val.txt" \
--line_by_line="True" \
--max_seq_length="16000" \
--weight_decay="0.01" \
--per_device_train_batch_size="1" \
--per_device_eval_batch_size="1" \
--learning_rate="3e-4" \
--tpu_num_cores='8' \
--warmup_steps="1000" \
--overwrite_output_dir \
--pad_to_max_length \
--num_train_epochs=5 \
--adam_beta1=0.9 \
--adam_beta2=0.98 \
--do_train \
--do_eval \
#--logging_steps=200 \
--evaluation_strategy="steps" \
--eval_steps=250 \
--eval_accumulation_steps=200 \
--report_to="all" \
--logging_dir='./logs' \
--skip_memory_metrics='False' \
--gradient_accumulation_steps=500 \
--use_fast_tokenizer='True' \
--logging_first_step='True' \
#1> >(tee -a ./content/drive/MyDrive/music_dataset/logs/stdout.log) \
#2> >(tee -a ./content/drive/MyDrive/music_dataset/logs/stderr.log >&2)
```
**EDIT:** @sgugger Another thing I found was that, despite passing the flag, the strategy picked up by the script is "no" [`evaluation_strategy=IntervalStrategy.NO`], when it should have been `steps`.
You should try this in a terminal.<|||||>I didn't think of that :100: but it still doesn't work :disappointed:
I am writing it to a bash file and running it that way; I also put it all the flags together in one single command but that doesn't seem to work either. Trying in a terminal yields same results.
The problem is that it's not logging the loss at all after the initial one, forget the eval loss.
```py
python3 xla_spawn.py --num_cores=8 ./run_mlm.py --output_dir="./results" --model_type="big_bird" --config_name="./config" --tokenizer_name="./tokenizer" --train_file="./dataset.txt" --validation_file="./val.txt" --line_by_line="True" --max_seq_length="16000" --weight_decay="0.01" --per_device_train_batch_size="1" --per_device_eval_batch_size="1" --learning_rate="3e-4" --tpu_num_cores='8' --warmup_steps="1000" --overwrite_output_dir --pad_to_max_length --num_train_epochs=5 --adam_beta1=0.9 --adam_beta2=0.98 --do_train --do_eval --logging_steps=200 --evaluation_strategy="steps" --eval_steps=200 --eval_accumulation_steps=200 --report_to="all" --logging_dir='./logs' --skip_memory_metrics='False' --gradient_accumulation_steps=150 --use_fast_tokenizer='True' --logging_first_step='True'
```
A better view:-
```py
per_device_eval_batch_size=1,
per_device_train_batch_size=1,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=results,
push_to_hub_organization=None,
push_to_hub_token=None,
remove_unused_columns=True,
report_to=['tensorboard', 'wandb'],
resume_from_checkpoint=None,
run_name=./results,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=False,
tpu_metrics_debug=False,
tpu_num_cores=8,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=1000,
weight_decay=0.01,
)
```<|||||>hmm...finally got it to work; not sure what I did but removing `logging_steps`, disabling gradient accumulation and eval accumulation helps a lot - along with using `python3 ....[command]` than `python ...[cmd]` which shouldn't be an issue, but I really don't know why I have to sacrifice accuracy/features for logging to work :thinking: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,617 | closed | TF summarization example | 07-09-2021 18:33:11 | 07-09-2021 18:33:11 | ||
transformers | 12,616 | closed | Weird outputs by `opus-mt-en-es` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: TPU VM
- Python version: 3.8.10
### Who can help
@patrickvonplaten @patil-suraj
## Information
Model I am using: FlaxMarianMTModel
## To reproduce
I used this code for beam sizes 2 and 4. Funnily, the outputs looked almost the same, except that "Oh" changed to "no" with beam_size=2
```
from transformers import MarianTokenizer, FlaxMarianMTModel
model = FlaxMarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-es', from_pt=True)
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-es')
text = "Living Room, The Sheridan House! Your Minneapolis Home!"
input_ids = tokenizer(text, max_length=64, return_tensors='jax', truncation=True)
sequences = model.generate(**input_ids, early_stopping=True, max_length=64, num_beams=2).sequences
tokenizer.batch_decode(sequences, skip_special_tokens=True, max_length=64)
```
For num_beams = 2 output:
'Sala, ¡La Casa Sheridan, tu hogar de Minneapolis, ¡No, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no, no'
For num_beams = 4 output:
'¡Sala de estar, la Casa Sheridan, tu hogar de Minneapolis, ¡Oh, ¡Oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh, oh,'
## Expected behavior
Shouldn't give 'oh' or 'no' in outputs. | 07-09-2021 17:30:15 | 07-09-2021 17:30:15 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Fixed via #12662 |
transformers | 12,615 | closed | [FLax] Fix marian docs 2 | # What does this PR do?
Follow-up PR from #12614
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-09-2021 17:13:35 | 07-09-2021 17:13:35 | |
transformers | 12,614 | closed | [Flax Marian] Add marian flax example | This PR adds a better example and leaves a note that `early_stopping=True` should be used for FlaxMarian | 07-09-2021 16:34:04 | 07-09-2021 16:34:04 | |
transformers | 12,613 | closed | RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. | When I run the Trainer to fine-tune a pretrained Longformer for sequence classification I get the following error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead.
I'm not sure how to debug this as the error points me to internal processes handled by the trainer:
Traceback (most recent call last):
File "finetune_longformer_3.py", line 126, in <module>
trainer.train()
File "/......./conda/envs/diss/lib/python3.8/site-packages/transformers/trainer.py", line 1269, in train
tr_loss += self.training_step(model, inputs)
File "/....../conda/envs/diss/lib/python3.8/site-packages/transformers/trainer.py", line 1772, in training_step
self.scaler.scale(loss).backward()
File "/......../conda/envs/diss/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/........./conda/envs/diss/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
Variable._execution_engine.run_backward(
any help would be much appreciated! | 07-09-2021 15:44:02 | 07-09-2021 15:44:02 | Hello! Could you provide the information required by the template, please? Especially the code that you used, as it's hard to help without it. Thanks<|||||>I have a similar problem during Finetuning LED for Summarization Task in Colab, with the following error message:
-----------------------------
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
------------------------------
The Settings for the Training are as follow:
Training Set: 17 Samples, each with less than 4000 tokens.
As for environment, I ran !pip install -r requirements.txt, where requirements come from the latest master branch of longformer.
----------------------
transformers @ git+http://github.com/ibeltagy/transformers.git@longformer_encoder_decoder#egg=transformers
pytorch-lightning @ git+http://github.com/ibeltagy/[email protected]_fixes#egg=pytorch-lightning
torch>=1.6.0
tensorboardX
test-tube==0.7.5
nlp
rouge_score
-----------------------------------
CUDA for the colab session was:
Sun Jul 18 03:58:07 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.42.01 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla P100-PCIE... Off | 00000000:00:04.0 Off | 0 |
| N/A 44C P0 30W / 250W | 0MiB / 16280MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Other Training Configurations are as follow:
loaded from pretrained is the "allenai/led-base-16384" with HuggingFace.
max_input_length = 4096
min_output_length = 256
max_output_length = 512
batch_size = 2
# set generate hyperparameters
led.config.encoder_layers=6
led.config.decoder_layers=6
led.config.attention_window=128 # left and right so total 256
led.config.num_beams = 2
led.config.length_penalty = 2.0
led.config.early_stopping = True
led.config.no_repeat_ngram_size = 3
# adjust output length according to training and val datasets
led.config.max_length = max_output_length # now at 512
led.config.min_length = min_output_length # now at 256
# enable fp16 apex training
training_args = Seq2SeqTrainingArguments(
predict_with_generate=True,
evaluation_strategy="epoch",
per_device_train_batch_size=batch_size,
per_device_eval_batch_size=batch_size,
fp16=True,
output_dir=path_models,
logging_steps=5,
eval_steps=10,
save_steps=10,
save_total_limit=4,
load_best_model_at_end=True,
gradient_accumulation_steps=4,
num_train_epochs=6,
)
trainer = Seq2SeqTrainer(
model=led,
tokenizer=tokenizer,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_dataset,
eval_dataset=val_dataset,
)
Enabling "torch.autograd.set_detect_anomaly(True)", point to the following:
/led/modeling_led.py", line 589, in _compute_attn_output_with_global_indices
attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)
It seems that the global attention calculation modifies a tensor in place and somehow conflicts with autograd's version tracking of tensors saved for the backward pass.
I had successfully trained larger samples (600+ samples) with up to 8192 input tokens, with generate length between 256 and 512 , attention window size = 512 (1024 total from both side), using the led-base checkpoint. So seeing this error message is a bit frustrating. Any help is highly appreciated. Let me know if you need more information. Thank you.
-----------------------------
***** Running training *****
Num examples = 17
Num Epochs = 6
Instantaneous batch size per device = 2
Total train batch size (w. parallel, distributed & accumulation) = 8
Gradient Accumulation steps = 4
Total optimization steps = 12
[ 3/12 00:06 < 00:57, 0.16 it/s, Epoch 0.89/6]
Epoch Training Loss Validation Loss
**/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py:149: UserWarning: Error detected in BmmBackward0. Traceback of forward call that caused the error:**
File "/usr/local/lib/python3.7/dist-packages/torch/autograd/function.py", line 87, in apply
return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/checkpoint.py", line 122, in backward
outputs = ctx.run_function(*detached_inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 1816, in custom_forward
return module(*inputs, is_global_attn, output_attentions)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 915, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 726, in forward
output_attentions=output_attentions,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 282, in forward
is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero,
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 589, in _compute_attn_output_with_global_indices
attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:104.)
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py:149: UserWarning:
Previous calculation was induced by CheckpointFunctionBackward. Traceback of forward call that induced the previous calculation:
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.7/dist-packages/traitlets/config/application.py", line 845, in launch_instance
app.start()
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelapp.py", line 499, in start
self.io_loop.start()
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 132, in start
self.asyncio_loop.run_forever()
File "/usr/lib/python3.7/asyncio/base_events.py", line 541, in run_forever
self._run_once()
File "/usr/lib/python3.7/asyncio/base_events.py", line 1786, in _run_once
handle._run()
File "/usr/lib/python3.7/asyncio/events.py", line 88, in _run
self._context.run(self._callback, *self._args)
File "/usr/local/lib/python3.7/dist-packages/tornado/platform/asyncio.py", line 122, in _handle_events
handler_func(fileobj, events)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 451, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.7/dist-packages/zmq/eventloop/zmqstream.py", line 434, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/tornado/stack_context.py", line 300, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.7/dist-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2828, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.7/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-74-3b02fb48d903>", line 1, in <module>
trainer.train()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1269, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1762, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1794, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 2362, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 2206, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/led/modeling_led.py", line 1826, in forward
is_index_global_attn,
File "/usr/local/lib/python3.7/dist-packages/torch/utils/checkpoint.py", line 211, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:109.)
allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-74-3b02fb48d903> in <module>()
----> 1 trainer.train()
2 #resume_from_checkpoint=True
6 frames
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
147 Variable._execution_engine.run_backward(
148 tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
150
151
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!<|||||>> /led/modeling_led.py", line 589, in _compute_attn_output_with_global_indices
> attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)
I seem to have fixed the problem by making the following change, detaching the tensor before the transpose operation:
from
/led/modeling_led.py", line 589, in _compute_attn_output_with_global_indices
attn_probs_only_global.transpose(1, 2), value_vectors_only_global.transpose(1, 2)
to
/led/modeling_led.py", line 589, in _compute_attn_output_with_global_indices
attn_probs_only_global.detach().transpose(1, 2), value_vectors_only_global.detach().transpose(1, 2)<|||||>I'm getting exactly the same issue and it works fine if i don't specify a global attention mask, which leads me to believe its in the merge function in forward.<|||||>@Herais Detach would remove the tensors from the computation graph, wouldn't it be preferable to use .clone() instead? <|||||>> @Herais Detach would remove the tensors from the computation graph, wouldn't it be preferable to use .clone() instead?
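To make the detach-vs-clone trade-off concrete, here is a small self-contained illustration of the same class of error and of why `.clone()` keeps gradients flowing while `.detach()` would cut them (generic PyTorch, not the LED code itself; applying it to `modeling_led.py` would mean cloning `attn_probs_only_global` and `value_vectors_only_global` before the transpose instead of detaching them):
```python
import torch

# sigmoid's backward needs its own output, so modifying that output in place
# invalidates the saved tensor and reproduces the "modified by an inplace operation" error.
a = torch.randn(3, requires_grad=True)
b = a.sigmoid()
b.add_(1)  # in-place edit of a tensor autograd saved for backward
try:
    b.sum().backward()
except RuntimeError as err:
    print(err)

# Cloning before the in-place edit leaves the saved tensor intact and gradients still flow,
# whereas .detach() would silently stop gradients at that point.
a2 = torch.randn(3, requires_grad=True)
b2 = a2.sigmoid()
c2 = b2.clone()
c2.add_(1)
c2.sum().backward()
print(a2.grad is not None)  # True
```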
I think you are right, I was wondering about what detach does to the computational map, especially with the gradient accumulation set to True. Using clone() also solves the versioning problem, I would like to see how it does to predictions, will update. Thank you=)
I was testing global attention at the beginning of the document and the global attention at the beginning of each paragraph..<|||||>Hi, I also encountered this exact same bug when using the longformer for sequence classification. I had successfully trained this model previously before oversampling as well as a LED for summarization so I was thrown off at first when I got it. I realized that the model kept throwing an error at the last batch and when comparing the length of my data to my total batch size (batch_size=2 and gradient_accumulation=4) I realized that my last batch was a batch size of 1. I dropped a single row and then I was able to train the model successfully. I recently turned on gradient_checkpointing and ran it again (batch_size=7 and gradient_accumulation=4) and the error was triggered again when my last batch was 22/28 if you count gradient accumulation, so once again the batch size of 1 created the error.<|||||>Hi - is there a preferred fix for this? I'm blocked on it right now. I can just clone the offending tensor but want to make sure that's the preferred behavior.<|||||>Sorry I'm a bit lost on this issue. Could someone add a **minimum** reproducible code snippet that allows us to reproduce the error?<|||||>I think most people here are running into issues on the backward pass of the Longformer E-D.
I will share my code in a bit but I'm curious if the provided colab works. If I were to reproduce my bug, it would be similar to the colab.
https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v<|||||>I tried cloning the offending tensor but it didn't seem to resolve it . Here's my stack trace
`(fresh) griadams@ip-172-31-19-18:~/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer$ pythons main.py -debug
Using GPUS --> 4...
Num GPUs --> 1
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
Using native 16bit precision.
Starting training...
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1,2,3,4,5,6,7]
wandb: W&B syncing is set to `offline` in this directory. Run `wandb online` or set WANDB_MODE=online to enable cloud syncing.
| Name | Type | Params
------------------------------------------------------
0 | model | LEDForConditionalGeneration | 161 M
------------------------------------------------------
161 M Trainable params
0 Non-trainable params
161 M Total params
647.378 Total estimated model params size (MB)
Validation sanity check: 0it [00:00, ?it/s]/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py:102: UserWarning: The dataloader, val dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/data_loading.py:102: UserWarning: The dataloader, train dataloader, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` (try 64 which is the number of cpus on this machine) in the `DataLoader` init to improve performance.
rank_zero_warn(
Epoch 0: 0%| | 0/16512 [00:00<?, ?it/s]/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py:147: UserWarning: Error detected in BmmBackward0. Traceback of forward call that caused the error:
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/function.py", line 87, in apply
return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 122, in backward
outputs = ctx.run_function(*detached_inputs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 1816, in custom_forward
return module(*inputs, is_global_attn, output_attentions)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 908, in forward
attn_outputs = self.self_attn(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 719, in forward
self_outputs = self.longformer_self_attn(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 277, in forward
attn_output = self._compute_attn_output_with_global_indices(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 588, in _compute_attn_output_with_global_indices
attn_output_only_global = torch.matmul(
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:104.)
Variable._execution_engine.run_backward(
/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py:147: UserWarning:
Previous calculation was induced by CheckpointFunctionBackward. Traceback of forward call that induced the previous calculation:
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py", line 137, in <module>
run(args)
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py", line 101, in run
trainer.fit(model)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 460, in fit
self._run(model)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
self.dispatch()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
self.accelerator.start_training(self)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_stage
return self.run_train()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 871, in run_train
self.train_loop.run_training_epoch()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 499, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 738, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 434, in optimizer_step
model_ref.optimizer_step(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1403, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 214, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 134, in __optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 325, in optimizer_step
make_optimizer_step = self.precision_plugin.pre_optimizer_step(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 93, in pre_optimizer_step
result = lambda_closure()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 732, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 823, in training_step_and_backward
result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 290, in training_step
training_step_output = self.trainer.accelerator.training_step(args)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 204, in training_step
return self.training_type_plugin.training_step(*args)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 155, in training_step
return self.lightning_module.training_step(*args, **kwargs)
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/model.py", line 36, in training_step
output = self.model(**batch, use_cache=False)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 2346, in forward
outputs = self.led(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 2198, in forward
encoder_outputs = self.encoder(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 1820, in forward
layer_outputs = torch.utils.checkpoint.checkpoint(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 211, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
(Triggered internally at /pytorch/torch/csrc/autograd/python_anomaly_mode.cpp:109.)
Variable._execution_engine.run_backward(
[W python_anomaly_mode.cpp:104] Warning: Error detected in CheckpointFunctionBackward. Traceback of forward call that caused the error:
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py", line 137, in <module>
run(args)
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py", line 101, in run
trainer.fit(model)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 460, in fit
self._run(model)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
self.dispatch()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
self.accelerator.start_training(self)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_stage
return self.run_train()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 871, in run_train
self.train_loop.run_training_epoch()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 499, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 738, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 434, in optimizer_step
model_ref.optimizer_step(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1403, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 214, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 134, in __optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 325, in optimizer_step
make_optimizer_step = self.precision_plugin.pre_optimizer_step(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 93, in pre_optimizer_step
result = lambda_closure()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 732, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 823, in training_step_and_backward
result = self.training_step(split_batch, batch_idx, opt_idx, hiddens)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 290, in training_step
training_step_output = self.trainer.accelerator.training_step(args)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 204, in training_step
return self.training_type_plugin.training_step(*args)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 155, in training_step
return self.lightning_module.training_step(*args, **kwargs)
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/model.py", line 36, in training_step
output = self.model(**batch, use_cache=False)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 2346, in forward
outputs = self.led(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 2198, in forward
encoder_outputs = self.encoder(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/transformers/models/led/modeling_led.py", line 1820, in forward
layer_outputs = torch.utils.checkpoint.checkpoint(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 211, in checkpoint
return CheckpointFunction.apply(function, preserve, *args)
(function _print_stack)
Traceback (most recent call last):
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py", line 137, in <module>
run(args)
File "/home/griadams/CompMedDsumEval/src/comp_med_dsum_eval/baselines/longformer/main.py", line 101, in run
trainer.fit(model)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 460, in fit
self._run(model)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 758, in _run
self.dispatch()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 799, in dispatch
self.accelerator.start_training(self)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 809, in run_stage
return self.run_train()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 871, in run_train
self.train_loop.run_training_epoch()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 499, in run_training_epoch
batch_output = self.run_training_batch(batch, batch_idx, dataloader_idx)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 738, in run_training_batch
self.optimizer_step(optimizer, opt_idx, batch_idx, train_step_and_backward_closure)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 434, in optimizer_step
model_ref.optimizer_step(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1403, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 214, in step
self.__optimizer_step(*args, closure=closure, profiler_name=profiler_name, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/optimizer.py", line 134, in __optimizer_step
trainer.accelerator.optimizer_step(optimizer, self._optimizer_idx, lambda_closure=closure, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 325, in optimizer_step
make_optimizer_step = self.precision_plugin.pre_optimizer_step(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 93, in pre_optimizer_step
result = lambda_closure()
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 732, in train_step_and_backward_closure
result = self.training_step_and_backward(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 836, in training_step_and_backward
self.backward(result, optimizer, opt_idx)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/trainer/training_loop.py", line 869, in backward
result.closure_loss = self.trainer.accelerator.backward(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/accelerators/accelerator.py", line 308, in backward
output = self.precision_plugin.backward(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/native_amp.py", line 62, in backward
closure_loss = super().backward(model, closure_loss, optimizer, opt_idx, should_accumulate, *args, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 79, in backward
model.backward(closure_loss, optimizer, opt_idx)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/pytorch_lightning/core/lightning.py", line 1275, in backward
loss.backward(*args, **kwargs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py", line 147, in backward
Variable._execution_engine.run_backward(
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/function.py", line 87, in apply
return self._forward_cls.backward(self, *args) # type: ignore[attr-defined]
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/utils/checkpoint.py", line 138, in backward
torch.autograd.backward(outputs_with_grad, args_with_grad)
File "/home/griadams/miniconda3/envs/fresh/lib/python3.9/site-packages/torch/autograd/__init__.py", line 147, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 6144, 1]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
wandb: Waiting for W&B process to finish, PID 125448
wandb: Program failed with code 1.
wandb: Find user logs for this run at: /efs/griadams/weights/default/wandb/offline-run-20210809_103548-2aq43v1n/logs/debug.log
wandb: Find internal logs for this run at: /efs/griadams/weights/default/wandb/offline-run-20210809_103548-2aq43v1n/logs/debug-internal.log
wandb: You can sync this run to the cloud by running:
wandb: wandb sync /efs/griadams/weights/default/wandb/offline-run-20210809_103548-2aq43v1n`<|||||>First time I see an error message from PyTorch that says "Good luck!" haha. This will be complex then I guess<|||||>Okey, but I still don't have a code example that let's me reproduce this error I'm afraid :D
The official colab here: https://colab.research.google.com/drive/12LjJazBl7Gam0XBPy_y0CTOJZeZ34c2v?usp=sharing seems to work just fine<|||||>I'm getting this error as well using Longformer. This seems to be happening at the very end of my training. I'm assuming that it might be happening because there is a batch that has fewer number of examples than batch size. Maybe that could be something that should be tried? I'm currently investigating this issue on my end and I'll share more information if I find something.<|||||>Similar problem here. It happens at the end of the first epoch in my case, when the batch size is smaller.
`File "/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1269, in train
tr_loss += self.training_step(model, inputs)
File "/home/user/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1780, in training_step
loss.backward()
File "/home/user/.conda/envs/transformers/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/user/.conda/envs/transformers/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [12, 4096, 1]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).`<|||||>This has to do with is_global_attn=True, else there is no problem.
EDIT : downgrading to torch 1.7 works for me<|||||>@patrickvonplaten @ibeltagy could you please advise?
Thanks,
Alessandro<|||||>Hi all,
The very same issue `RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation` occurred for me during continued pre-training, i.e., warm-starting a Longformer model from the miniLMv2 checkpoint and continuing to train it with an MLM objective. I use the standard HF script, `run_mlm.py`, provided in the examples. I have an ugly temporary workaround below, so please read on if interested.
I personally altered the tokenization pre-processing to provide custom global attention masks on every separator token `</s>`, which I aim to use as a paragraph separator:
```python
def tokenize_function(examples):
# Remove empty lines
examples[text_column_name] = [
line for line in examples[text_column_name] if len(line) > 0 and not line.isspace()
]
batch = tokenizer(
examples[text_column_name],
padding=padding,
truncation=True,
max_length=max_seq_length,
# We use this option because DataCollatorForLanguageModeling (see below) is more efficient when it
# receives the `special_tokens_mask`.
return_special_tokens_mask=True,
)
# provide custom global attention mask
batch.data['global_attention_mask'] = [[1 if token_id in [tokenizer.cls_token_id, tokenizer.sep_token_id]
else 0 for token_id in seq] for seq in batch.data['input_ids']]
return batch
```
After 1186 training steps, the aforementioned error occurred...
# Solution
In order to be able to train the model (until there is a proper solution), I "hacked" the `Trainer` class's `train` function, wrapping this part of the code in a try-except block:
https://github.com/huggingface/transformers/blob/010965dcde8ce9526f6a7e6e2c3f36276c153708/src/transformers/trainer.py#L1277-L1286
I copy-pasted `trainer.py` into a new personal file, `mytrainer.py`, and made the following minor update, which moves on to the next mini-batch (step) and also zeros out the gradients:
```python
try:
if (
((step + 1) % args.gradient_accumulation_steps != 0)
and args.local_rank != -1
and args._no_sync_in_gradient_accumulation
):
# Avoid unnecessary DDP synchronization since there will be no backward pass on this example.
with model.no_sync():
tr_loss += self.training_step(model, inputs)
else:
tr_loss += self.training_step(model, inputs)
except:
tr_loss += 0
logger.warning(f'Issue at training step {step} !!! Training continues...')
model.zero_grad()
continue
```
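(Editor's aside, tying in with the later discussion in this thread about sharing the failing batch: a variant of the `except` branch above could also dump the offending inputs for debugging. This is only a sketch of that branch, reusing `inputs`, `step`, `model` and `logger` from the surrounding loop, not standalone code:)
```python
except Exception:
    # Skip the parameter update, but keep the batch that triggered the error
    # so it can be inspected or shared later.
    torch.save(inputs, f"faulty_batch_step_{step}.pt")
    logger.warning(f"Issue at training step {step}, batch saved. Training continues...")
    model.zero_grad()
    continue
```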
I re-run the code, which started from the latest checkpoint `checkpoint-1100` and passed the tricky part successfully:
```
09/11/2021 20:03:34 - WARNING - mytrainer - Issue at training step 1187 !!! Training continues...
```
So far there is no further issue and the training loss keeps decreasing 😄
```
{'loss': 4.12, 'learning_rate': 9.724264705882353e-06, 'epoch': 2.19}
{'loss': 4.0383, 'learning_rate': 9.632352941176471e-06, 'epoch': 2.36}
{'loss': 3.8487, 'learning_rate': 9.448529411764707e-06, 'epoch': 2.7}
{'eval_loss': 3.653672456741333, 'eval_runtime': 61.6433, 'eval_samples_per_second': 8.111, 'eval_steps_per_second': 1.022, 'epoch': 3.0}
```
<|||||>@iliaschalkidis thanks for the update. Even thought this goes around the issue, it looks like there is something fundamentally wrong with the current implementation? I hope that @patrickvonplaten or @ibeltagy could comment on this 🙏<|||||>@aleSuglia that's absolutely true and that's why I describe my solution as a "dirty" hack trying to avoid seg faults by skipping a few param updates when this weird error occur.
Let's hope for a real solution in the underlying issue.<|||||>@iliaschalkidis actually, now that you have a try/except in place for that issue, why don't you serialise the faulty batch and share it in a Colab so that @patrickvonplaten or @ibeltagy can play around with it? I think that would be terribly useful to debug!<|||||>The problem comes from LongformerSelfAttention for longformer. If this happens for another model, its probably from its SelfAttention module too.<|||||>@iliaschalkidis any chances to get the faulty batch out of your training?<|||||>Not yet, sorry. I'm currently (pre-)training the models. I'll try to add a save functionality in the `except` handling and save a tricky batch later this week.
FWIW I agree with @benderama3 ; I also have a feeling that this inconsistency is a by-product of the really complicated attention code, i.e., there are multiple `reshape` and `gather` -like computations with dynamically inferred shapes :P <|||||>Some other edge cases that I've spotted:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1024, 46]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Variable._execution_engine.run_backward(
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1024, 37]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1024, 43]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
Variable._execution_engine.run_backward(
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 1536, 73]], which is output 0 of ViewBackward, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
```<|||||>@patrickvonplaten ,
Here's the Colab that I got this problem. Finally got a chance o strip down the notebook code. The error comes up 5 to 10 minutes into training.
[https://colab.research.google.com/drive/1ZoYJaJZmhygKBEAb5gPm2MaySdFOqgbo?usp=sharing](url)
Error message was:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.HalfTensor [12, 16384, 16]], which is output 0 of ViewBackward, is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!<|||||>@Herais thanks for sharing your notebook. I've simplified it to make it easier for people to reproduce the bug and dissect the actual model code: https://colab.research.google.com/drive/13rKxs6Ype0kDEBlnywsGynE2zpzv2CR-#scrollTo=h7k8m9OV8xIR<|||||>cool, thank you.<|||||>@patrickvonplaten @ibeltagy I'm happy to send a PR with the fix. There are some in-place operations that require `clone` to work. Let me know if you're interested!<|||||>@aleSuglia and @Herais thanks for diving into this issue! We would happily welcome a PR to see the code changes and what needs to be fixed.
Thank you!<|||||>@LysandreJik just sent it. I hope it can be merged asap :) |
transformers | 12,612 | closed | [Flax] Fix mt5 auto | # What does this PR do?
Correctly loads MT5 in Flax from auto model.
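For illustration, a small sketch of the behaviour this enables (editor's illustration, mirroring the failing call reported in issue #12610 below; not code from the PR itself):

```python
from transformers import FlaxAutoModelForSeq2SeqLM

# Previously this raised "Unrecognized configuration class ... MT5Config" for the
# Flax auto class; with the mapping fixed, the auto class accepts the MT5 config
# and loads the checkpoint.
model = FlaxAutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")
```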
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-09-2021 15:26:27 | 07-09-2021 15:26:27 | |
transformers | 12,611 | closed | Better heuristic for token-classification pipeline. | # What does this PR do?
Relooking at the problem makes things actually much simpler:
when we look at ids from a tokenizer, we have no way in **general**
to recover whether some substring is part of a word or not.
However, within the pipeline, with offsets we still have access to the
original string, so we can simply check whether the previous character
of a token (if it exists) is actually a space. This will obviously be wrong
for tokenizers that contain spaces within tokens, and for tokenizers where
offsets include spaces too (I don't think there are a lot).
It will incorrectly fuse any punctuation too! (" I am a robot!"). But that is already much better than what currently happens.
This heuristic hopefully is fully backward compatible and can still handle non-word-based
tokenizers.
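To make the heuristic concrete, here is a rough sketch of the idea (editor's illustration only; the sentence, offsets and variable names are made up, and this is not the exact pipeline code):

```python
# Decide whether each token starts a new word by looking at the character just
# before its offset in the original string. Assumption: offsets do not include
# leading spaces and the tokenizer does not put spaces inside tokens.
sentence = "Hello Sarah Jessica Parker"
# (start, end) character offsets; "Jessica" is split into "Jess" + "ica".
offsets = [(0, 5), (6, 11), (12, 16), (16, 19)]

for start, end in offsets:
    token_text = sentence[start:end]
    # A token is a subword continuation if it is not at position 0 and the
    # previous character is not a space.
    is_subword = start > 0 and sentence[start - 1] != " "
    print(token_text, "subword" if is_subword else "word start")
```

As noted above, this breaks for tokenizers whose offsets include spaces and will happily glue punctuation to the preceding word, but it does not rely on any word-id information from the tokenizer.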
Fixes https://github.com/huggingface/transformers/issues/11887
Fixes https://github.com/huggingface/transformers/issues/12593
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | 07-09-2021 15:00:40 | 07-09-2021 15:00:40 | 
transformers | 12,610 | closed | Unable to load mT5 with FlaxAutoModelForSeq2SeqLM | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: TPU VM
- Python version: 3.8.10
### Who can help
@patrickvonplaten
##
Trying to load mT5 ('google/mt5-small') with FlaxAutoModelForSeq2SeqLM leads to the following error:
```
>>> from transformers import FlaxAutoModelForSeq2SeqLM
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained('google/mt5-small')
ValueError: Unrecognized configuration class <class 'transformers.models.mt5.configuration_mt5.MT5Config'> for this kind of AutoModel: FlaxAutoModelForSeq2SeqLM.
Model type should be one of BartConfig, T5Config.
```
Loading the same model with FlaxT5ForConditionalGeneration works fine. @patrickvonplaten suggested in Slack #flax-jax-community-week that the issue might be caused by missing MT5Config. | 07-09-2021 14:02:45 | 07-09-2021 14:02:45 | |
transformers | 12,609 | closed | Fix arg count for partial functions | # What does this PR do?
As pointed out in #12605, the count for the number of arguments in the `model_init` was not working for partial functions. This PR fixes that. | 07-09-2021 13:08:06 | 07-09-2021 13:08:06 | |
transformers | 12,608 | closed | [Flax] Fix cur step flax examples | Thanks a mille @m3hrdadfi ! | 07-09-2021 12:51:03 | 07-09-2021 12:51:03 | |
transformers | 12,607 | closed | T5 mlm Flax streaming example | # Added T5 mlm Flax streaming example
This PR adds an example script for T5 MLM pretraining using the 🤗 Datasets streaming feature. A new script, `run_mlm_t5_flax_stream.py`, is added in the `jax-projects/dataset-streaming` folder, and the `README.md` is updated accordingly with a training example for `t5-small` on the `mc4/en` corpus in streaming mode.
As mentioned in the Slack channel, I ran some preliminary tests on mc4/it and I get pretty weird results (train loss converges very early, eval metrics remain very low), possibly due to some problem with the adapted collating/tokenization, so this PR would greatly benefit from reviewing before merging.
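For context, a minimal sketch of the streaming setup the new script relies on (editor's sketch; the dataset and config names are simply the ones mentioned above, and the real script wires this into the tokenization and training loop):

```python
from datasets import load_dataset

# Stream mc4/en instead of downloading it; samples are yielded lazily.
dataset = load_dataset("mc4", "en", split="train", streaming=True)

# Peek at one example.
sample = next(iter(dataset))
print(sample["text"][:200])
```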
## Who can review?
@patrickvonplaten @patil-suraj
| 07-09-2021 11:35:06 | 07-09-2021 11:35:06 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,606 | closed | Remote process received SIGTERM on 96 core tpu-vm during group_text map on datasets | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
### Who can help
t5: @patrickvonplaten
I am using the T5 model with a SentencePiece tokenizer trained from scratch:
https://huggingface.co/flax-community/t5-small-dutch/blob/main/tokenizer.json
## Information
Model I am using (T5):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Following training the tokenizer for t5 and running t5_mlm from:
https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/README.md
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Pre-training T5 on Dutch oscar deduplicated nl
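(For reference, a sketch of loading that corpus with 🤗 Datasets; the config name below is assumed to be the standard OSCAR one for deduplicated Dutch:)

```python
from datasets import load_dataset

# Deduplicated Dutch portion of OSCAR, as used for the pre-training run above.
oscar_nl = load_dataset("oscar", "unshuffled_deduplicated_nl", split="train")
print(oscar_nl)
```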
## To reproduce
Steps to reproduce the behavior:
(unfortunately this script will download and process the complete oscar deduplicated nl corpus which will take ~1 hour, apologies!)
1. Set up the transformers environment on a 96-core machine as in the Flax language-modeling README linked above, on the latest master
2. git clone https://huggingface.co/flax-community/t5-small-dutch
3. cd t5-small-dutch
4. ln -s ~/transformers/examples/flax/language-modeling/run_t5_mlm_flax.py run_t5_mlm_flax.py
5. ln -s ~/transformers/examples/flax/language-modeling/t5_tokenizer_model.py t5_tokenizer_model.py
6. ./run_t5_oscar.sh and capture output.
7. In the output, look for SIGTERM
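For reference, the preprocessing step that crashes (see the comments at the end of this issue) is essentially a batched `Dataset.map` over a `group_texts`-style chunking function run with many worker processes. A simplified sketch, paraphrased from the example script rather than copied verbatim; the tiny in-memory dataset and the block size are stand-ins:

```python
from itertools import chain
from datasets import Dataset

block_size = 512  # stand-in for the expanded input length computed by the script

def group_texts(examples):
    # Concatenate all tokenized texts, then cut them into fixed-size blocks.
    concatenated = {k: list(chain(*examples[k])) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

# Toy stand-in for the tokenized OSCAR dataset.
tokenized_dataset = Dataset.from_dict({"input_ids": [[1] * 1000, [2] * 800]})

# The SIGTERMs appear during this call when run with many workers (96 on the TPU VM);
# the same notebook reportedly runs fine locally with 8 processes.
grouped_dataset = tokenized_dataset.map(group_texts, batched=True, num_proc=2)
```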
During preprocessing, output like the following indicates that a child process has crashed:
```
https://symbolize.stripped_domain/r/?trace=526cb0,7f98cb33820f,9222bf&map=
*** SIGTERM received by PID 326216 (TID 326216) on cpu 51 from PID 323401; stack trace: ***
PC: @ 0x526cb0 (unknown) (unknown)
@ 0x7f969b419800 976 (unknown)
@ 0x7f98cb338210 (unknown) (unknown)
@ 0x9222c0 (unknown) (unknown)
https://symbolize.stripped_domain/r/?trace=526cb0,7f969b4197ff,7f98cb33820f,9222bf&map=2a762cd764e70bc90ae4c7f9747c08d7:7f968e4d7000-7f969b758280
E0709 10:28:22.655698 326216 coredump_hook.cc:250] RAW: Remote crash gathering disabled for SIGTERM.
E0709 10:28:22.689142 326216 process_state.cc:771] RAW: Raising signal 15 with default behavior
```
## Expected behavior
No processes are expected to crash.
| 07-09-2021 11:01:44 | 07-09-2021 11:01:44 | I added a notebook that shows that the error occurs during the map of the group_texts function on https://huggingface.co/flax-community/t5-base-dutch/blob/main/Load_token_group_dataset.ipynb<|||||>When executing the above notebook with 8 processing threads on my local machine, there are no errors.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,605 | closed | 🐛 `model_init` fails when its a partially evaluated funtion. | https://github.com/huggingface/transformers/blob/65e27215ba991450e30aac1bf06f7f4e889e77fb/src/transformers/trainer.py#L908
Inspect module usage here does not take into account the cases when the user provides a partially evaluated function, causing an exception.
Since the issue is related to the `trainer` API I tag you @sgugger
Example:
```python
import functools
import inspect
from transformers import AutoModel  # import needed for the calls below
checkpoint = "..."
fn = lambda: AutoModel.from_pretrained(checkpoint )
print(len(inspect.signature(fn).parameters))
# Outputs: 0, so no trial is expected and everything works fine
def fn1(model_base_checkpoint):
return AutoModel.from_pretrained(model_base_checkpoint)
model_init_fn = functools.partial(fn1, model_base_checkpoint=checkpoint )
print(len(inspect.signature(model_init_fn).parameters))  # note: inspect model_init_fn here, not fn
# Outputs 1, then the call_model_init tries to pass the ray or optuna trial which results in
# model_init() got multiple values for argument 'model_base_checkpoint' exception
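# --- Editor's sketch of a workaround (not part of the original report): instead of
# functools.partial, bind the checkpoint in a zero-argument closure so that
# inspect.signature() reports no required parameters and the Trainer will not
# try to pass a trial object.
def make_model_init(model_base_checkpoint):
    def model_init():
        return AutoModel.from_pretrained(model_base_checkpoint)
    return model_init

model_init_no_args = make_model_init(checkpoint)
print(len(inspect.signature(model_init_no_args).parameters))
# Outputs 0, so hyperparameter search treats it like the lambda above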
``` | 07-09-2021 11:00:06 | 07-09-2021 11:00:06 | I can reproduce. I would argue that this is more a bug in Python than the `Trainer` but will try to find a way to fix this. In the meantime, you should avoid using partials for `model_init` :-)<|||||>Thanks for the quick answer! I'll avoid using partials for now 👍🏼<|||||>Should be fixed in the PR above! |
transformers | 12,604 | closed | Add LayoutLMv2 + LayoutXLM | # What does this PR do?
This PR adds Microsoft's [LayoutLMv2](https://arxiv.org/abs/2012.14740) and [LayoutXLM](https://arxiv.org/abs/2104.08836) models, in PyTorch. The latter is a multilingual version of LayoutLMv2. For now, I have not yet added any documentation related to LayoutXLM, I'm not sure whether we need a new model directory + documentation page for that one, since one can load a LayoutXLM model like so:
`model = LayoutLMv2Model.from_pretrained("microsoft/layoutxlm-base")`.
LayoutLMv2 is an improvement of [LayoutLM](https://huggingface.co/transformers/model_doc/layoutlm.html) (improves SOTA across several benchmarks, including new ones), by incorporating visual, text and layout information to understand scanned documents. [Detectron2](https://github.com/facebookresearch/detectron2) is used for its visual backbone (which is a ResNeXt-FPN).
The original repo only has `LayoutLMv2Model` and `LayoutLMv2ForTokenClassification`. However, in the paper they also use the model to classify document images (on RVL-CDIP), and perform visual question answering (on DocVQA). Therefore, I've added `LayoutLMv2ForSequenceClassification` and `LayoutLMv2ForQuestionAnswering`. I've modelled them like they were described in the paper, but there's no official implementation to be found.
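As a rough sketch of how the two new heads are meant to be used (editor's illustration; the label count is just the RVL-CDIP class count and the checkpoint is the one on the hub mentioned below; Detectron2 is required for the visual backbone as described above):

```python
from transformers import LayoutLMv2ForSequenceClassification, LayoutLMv2ForQuestionAnswering

# Document image classification (e.g. RVL-CDIP has 16 classes).
doc_classifier = LayoutLMv2ForSequenceClassification.from_pretrained(
    "microsoft/layoutlmv2-base-uncased", num_labels=16
)

# Extractive document VQA (e.g. DocVQA): predicts start/end positions over the tokens.
doc_qa = LayoutLMv2ForQuestionAnswering.from_pretrained("microsoft/layoutlmv2-base-uncased")
```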
Fixes #11932 #12194
## Who can review?
@LysandreJik @sgugger
To do:
- [x] fix tests (there's still one test failing, namely `test_initialization`) => Lysandre would be great if you can help me fix that one. It has to do with one of the layers of the backbone. Integration test is also added.
- [x] install Detectron2 + pytesseract to run all tests on CircleCI.
- [x] perhaps define custom `ModelOutputs,` as the length of the hidden states and attentions is actually `seq_length + config.image_feature_pool_shape[0] * config.image_feature_pool_shape[1]` instead of just `seq_length`-> update: will add a comment to the "Tips" section in the documentation instead.
- [x] write documentation about `LayoutLMv2FeatureExtractor`, `LayoutLMv2Tokenizer` and `LayoutLMv2Processor`
- [x] make some more demo notebooks.
Notes:
- [x] I know some variable names could maybe be named better (like for example `rel_pos_bias` in the configuration). However, if we update the names, then people will not longer be able to easily convert models from the original repo to HuggingFace and vice versa. The authors did use HuggingFace for their entire codebase (they used Transformers, the Trainer, Datasets,...). The model is already uploaded by the authors on the [hub](https://huggingface.co/microsoft/layoutlmv2-base-uncased).
- [x] There is still some code included in the modeling file for distributed training, namely to convert to SyncBatchNorm instead of BatchNorm when distributed training is available. I guess these are to be removed? UPDATE: moved to separate method.
| 07-09-2021 07:17:15 | 07-09-2021 07:17:15 | > Thanks a lot for adding this model!
> For LayoutXLM, I don't think we need a new page if we can use the same architecture and tokenizer without changes. Just mention on the doc page the architecture does both.
>
> Don't forget to add the model to the main README!
@sgugger
Just want to point out that LayoutLMv2's tokenizer is a subclass of `BertTokenizer`, while LayoutXLM's tokenizer is a subclass of `XLMRobertaTokenizer` (and this makes LayoutXLM cross-lingual).
As far as I know, this is the only difference between LayoutLMv2 and LayoutXLM's<|||||>@jasonkit thanks for pointing that out, I will create a separate `LayoutXLMTokenizer` which inherits from `XLMRobertaTokenizer`.
<|||||>Note that is the tokenizer is the same as a `XLMRobertaTokenizer`, you don't need to create a new class, you can just the set the right `tokenizer_class` in the config.<|||||>Hmm ok, I see that this wasn't done for [`LayoutLMTokenizer`](https://github.com/huggingface/transformers/blob/c07334c12e95f18a404d448e6c7d1eee05b8a61e/src/transformers/models/layoutlm/tokenization_layoutlm.py#L46), which was created, but is actually just `BertTokenizer`. Can you point to an example where this was done?<|||||>Sure: there is `BigBirdPegasus` for instance that uses the same tokenizer as `BigBird`: [here](https://huggingface.co/google/bigbird-pegasus-large-arxiv/blob/main/config.json) is an example of config file for a checkpoint of `BigBirdPegasus` that sets the tokenizer class.<|||||>Can't wait to test this ;) Thanks for the community effort! <|||||>@sgugger after internal discussion, I have created a new `LayoutLMv2Processor`. A `Processor` combines a `FeatureExtractor` (which handles the image-related stuff) and a `Tokenizer` (which handles the text-related stuff). So this is ideal for multi-modal models. Processors have previously been defined for Wav2Vec2 and CLIP.
However, there's a difference between the processors defined for Wav2Vec2/CLIP and the one for LayoutLMv2. The former processors can either be a feature extractor or tokenizer at one particular moment (they are just a wrapper around both). The processor for LayoutLMv2 on the other hand applies both in a sequence, since it first uses the feature extractor to apply OCR on the document images to get words + bounding boxes, which are then provided to the tokenizer, which converts them to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. By combining the feature extractor and the tokenizer, the processor really does everything for the user: you just give it a document image as input, and the inputs required for the model come out. Also note that one can initialize the feature extractor with either `apply_ocr` to `True` or `False`, depending on whether the user wants to apply OCR himself on the document images, or whether he wants to use PyTesseract (which the feature extractor uses by default). For now, there are 5 different use cases for the processor, see the integration tests in `test_processor_layoutlmv2.py` to see them all.
Also, an additional feature (which I think people will like), is that one can optionally also provide word-level labels to the processor, and these will then automatically be converted to token-level `labels`. You could see it a bit as if `tokenize_and_align` function is incorporated into the processor (actually in the tokenizer - but I assume people could just use the processor).
Happy to get your review :) as you will see, `LayoutLMv2FeatureExtractor` is fairly minimal, it does two things: 1) resize images to 224x224 and optionally, 2) apply OCR to get words + boxes. `LayoutLMv2Tokenizer` is a bit more extensive (it also handles padding/truncation of token-level bounding boxes etc.). Finally, `LayoutLMv2Processor` makes everything more simple by just having one front-facing API.<|||||>@NielsRogge from what I can tell, the fast tokenizer is no longer supported in this PR. When using the existing impl of LayoutLMv2Tokenizer in the context of token classification/sequence labeling, I've been following the original repos arguments:
```python
padding="max_length",
pad_to_multiple_of=8,
max_length=512,
truncation=True,
return_overflowing_tokens=True,
is_split_into_words=True,
```
as a means of creating multiple sequences from longer input samples. I believe `return_overflowing_tokens` is unsupported by the tokenizer in this PR without a Fast implementation. Is there a different way to achieve multiple sequences per input sample with the new tokenizer?<|||||>Hi @dcyoung,
I'm currently working on implementing a fast tokenizer, but the slow tokenizer supports the `return_overflowing_tokens` argument.
The API of the tokenizer is a bit more extensive for LayoutLMv2. You can pass a list of words and corresponding (normalized) boxes, and the tokenizer will automatically turn everything into token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. It will also pad/truncate boxes if you specify the relevant arguments. Small example:
```
from transformers import LayoutLMv2Tokenizer
tokenizer = LayoutLMv2Tokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased")
words = ["hello", "world"]
boxes = [[1,2,3,4], [5,6,7,8]]
encoded_inputs = tokenizer(words, boxes=boxes, return_tensors="pt")
```
Can you try it out? It will also return overflowing token boxes if you want it to. <|||||>> Can you try it out? It will also return overflowing token boxes if you want it to.
Yup. That works fine for me. Though, I'm wondering about trying to create batches of sequences from a single "long" input sample which overflows the 512 token limit. This is for SER tasks where I'd like to consider every token on a document, requiring splitting the original sequence into multiple 512 token sequences. Previously, the `tokenize_and_align_labels` and `DataCollatorForKeyValueExtraction` implementations accomplished this behavior. I'm curious how best to achieve the same behavior using this new setup.
```python
tokenizer = LayoutLMv2Tokenizer.from_pretrained(
"microsoft/layoutlmv2-base-uncased",
)
n = 2000
words = n * ["hello"]
boxes = n * [[1, 2, 3, 4]]
encoded_inputs = tokenizer(
words,
boxes=boxes,
padding="max_length",
pad_to_multiple_of=8,
max_length=512,
truncation=True,
return_overflowing_tokens=True,
is_split_into_words=True,
return_tensors="pt",
)
print(encoded_inputs.keys())
for k, v in encoded_inputs.items():
print(k, v.size())
```
```bash
dict_keys(['overflowing_tokens', 'overflowing_token_boxes', 'num_truncated_tokens', 'input_ids', 'bbox', 'token_type_ids', 'attention_mask'])
overflowing_tokens torch.Size([1, 1490])
overflowing_token_boxes torch.Size([1, 1490, 4])
num_truncated_tokens torch.Size([1])
input_ids torch.Size([1, 512])
bbox torch.Size([1, 512, 4])
token_type_ids torch.Size([1, 512])
attention_mask torch.Size([1, 512])
```
I see now from the outputs above, that the tokenizer does return overflow tokens. However, I don't see the `overflow_to_sample_mapping` KVP which was previously used by `tokenize_and_align_labels`. Does the current tokenizer support this behavior atm? If so, what arguments yield this batching behavior? And if not do you have a suggestion on the easiest way of achieving something similar?
Would this require splitting the `overflowing_tokens` and `overflowing_token_boxes` into new sequences and manually adding the special tokens, as well as pad the last sample < 512 tokens? Or alternatively, tokenizing without truncation... and use a data collator which splits, and pads? <|||||>@NielsRogge I took a pass at batching the overflow tokens. In the Processor, i added some logic to modify the `encoded_inputs` like so:
```python
class LayoutLMv2Processor:
...
def prepare_overflow(self, encoded_inputs: BatchEncoding) -> List[BatchEncoding]:
num_truncated_tokens = max(
0, int(encoded_inputs.get("num_truncated_tokens", [0])[0])
)
max_source_tokens_per_sample = 510
num_extra_samples = ceil(num_truncated_tokens / max_source_tokens_per_sample)
extra_encoded_inputs = []
for i in range(num_extra_samples):
start_idx = i * max_source_tokens_per_sample
tokens = encoded_inputs["overflowing_tokens"][0][
start_idx : start_idx + max_source_tokens_per_sample
].tolist()
boxes = encoded_inputs["overflowing_token_boxes"][0][
start_idx : start_idx + max_source_tokens_per_sample
].tolist()
labels = encoded_inputs["overflowing_labels"][0][
start_idx : start_idx + max_source_tokens_per_sample
].tolist()
seq_len = len(tokens)
padded = self.tokenizer._pad(
encoded_inputs={
"input_ids": [101] + tokens + [102],
"bbox": [[0, 0, 0, 0]] + boxes + [[1000, 1000, 1000, 1000]],
"token_type_ids": (2 + seq_len) * [0],
"labels": [-100] + labels + [-100],
"attention_mask": (2 + seq_len) * [1],
},
max_length=512,
padding_strategy=PaddingStrategy.MAX_LENGTH,
pad_to_multiple_of=8,
return_attention_mask=True,
)
extra_encoded_inputs.append(
{
"image": torch.clone(encoded_inputs["image"]),
**{k: torch.tensor(v).unsqueeze(0) for k, v in padded.items()},
}
)
return extra_encoded_inputs
```
However, this required adding an additional `overflowing_labels` during tokenization similar to the current calculation of `overflowing_token_boxes` or `overflowing_tokens`. This is a small change but easier accomplished in the tokenizer source than after the fact.
Using this processor, i am able to generate batches of sequences from a long input sequence. While I haven't had a chance to thoroughly test, I am able to run this batch through the model just fine to produce corresponding logits. Ex:
```python
encoded_inputs= processor(
img,
words,
boxes=bboxes,
word_labels=word_label_ids,
return_tensors="pt",
padding="max_length",
pad_to_multiple_of=8,
max_length=512,
truncation=True,
return_overflowing_tokens=True,
is_split_into_words=True,
batch_overflow=True,
)
extra_encoded_inputs = processor.prepare_overflow(encoded_inputs)
for model_inputs in [encoded_inputs] + extra_encoded_inputs:
outputs = model(**model_inputs)
print("Predicted Logits: ", outputs.logits.size())
```
Does this seem like a reasonable approach, and if so... would it be possible to add the `overflow_labels` changes to the tokenizer? Perhaps you can think of a better abstraction for batching process within the tokenizer itself?<|||||>> I see now from the outputs above, that the tokenizer does return overflow tokens. However, I don't see the overflow_to_sample_mapping KVP which was previously used by tokenize_and_align_labels. Does the current tokenizer support this behavior atm? If so, what arguments yield this batching behavior? And if not do you have a suggestion on the easiest way of achieving something similar?
The `overflow_to_sample_mapping` is something that is only supported by fast tokenizers. I'm currently working on `LayoutLMv2TokenizerFast`. I'll merge it with this branch once it's ready. Thanks for your feedback!
> Are you planning to add the LayoutLMv2/XLMForRelationExtraction models that we can find in the original repo?
Yes, but perhaps in a future PR, because it's not clear to me how they use the model at inference time.
If you have other questions, can you please post them elsewhere instead of on this thread? Just to keep this PR a bit clean :) perhaps we can set up a Slack channel to discuss this model. If you can give me your email address, I'll set it up.
Thanks!<|||||>> > I see now from the outputs above, that the tokenizer does return overflow tokens. However, I don't see the overflow_to_sample_mapping KVP which was previously used by tokenize_and_align_labels. Does the current tokenizer support this behavior atm? If so, what arguments yield this batching behavior? And if not do you have a suggestion on the easiest way of achieving something similar?
>
> The `overflow_to_sample_mapping` is something that is only supported by fast tokenizers. I'm currently working on `LayoutLMv2TokenizerFast`. I'll merge it with this branch once it's ready. Thanks for your feedback!
>
> > Are you planning to add the LayoutLMv2/XLMForRelationExtraction models that we can find in the original repo?
>
> Yes, but perhaps in a future PR, because it's not clear to me how they use the model at inference time.
>
> If you have other questions, can you please post them elsewhere instead of on this thread? Just to keep this PR a bit clean :) perhaps we can set up a Slack channel to discuss this model. If you can give me your email address, I'll set it up.
>
> Thanks!
You're right about redirecting me to a dedicated channel. Here is my email: [email protected].
Thank you!<|||||>> Just wondering whether the model can be used in fp16?
Yes, the model can be used in fp16 (just added a [notebook](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD_using_HuggingFace_Trainer.ipynb) which uses fp16 with HuggingFace's Trainer). |
transformers | 12,603 | closed | Facing Issue while loading pytorch model as flax model | I am trying to convert a PyTorch model to Flax so I can train it on a downstream task.
I wrote a conversion script like this:
```python
from transformers import AutoConfig, FlaxAutoModelForMaskedLM
config = AutoConfig.from_pretrained("./")
model = FlaxAutoModelForMaskedLM.from_pretrained("./", from_pt=True, config=config)
model.save_pretrained("./")
```
I used this [reference](https://huggingface.co/transformers/model_doc/auto.html#transformers.FlaxAutoModelForSeq2SeqLM).
These were the logs:
```
Traceback (most recent call last):
File "convert_to_flax.py", line 3, in <module>
model = FlaxAutoModelForMaskedLM.from_pretrained("./", from_pt=True, config=config)
File "/home/bhadresh/transformers/src/transformers/models/auto/auto_factory.py", line 387, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: FlaxAutoModelForMaskedLM.
Model type should be one of RobertaConfig, BertConfig, BigBirdConfig, BartConfig, ElectraConfig.
``` | 07-09-2021 06:06:54 | 07-09-2021 06:06:54 | using `FlaxAutoModelForSeq2SeqLM` solved the issue, It was typing mistake |
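For anyone hitting the same error, the working version of the conversion snippet is the same code with the Seq2Seq auto class (keeping the local `./` path from the report):
```python
from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM

config = AutoConfig.from_pretrained("./")
model = FlaxAutoModelForSeq2SeqLM.from_pretrained("./", from_pt=True, config=config)
model.save_pretrained("./")
```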
transformers | 12,602 | closed | How to transfer fine-tuned model from python to rust? | # 🚀 Feature request
Most models on the Hugging Face Hub are distributed in PyTorch or TF. I have used BART-Large through rust-bert and found that the overall execution time of BART-Large in Rust is significantly lower than in Python, but I don't know how to transfer a fine-tuned PyTorch model to a Rust environment.
Features needed:
1. Make the Hugging Face models available in rust-bert to gain execution performance.
2. Develop a standard way to transfer a Hugging Face Python model to a Rust environment.
| 07-09-2021 05:19:42 | 07-09-2021 05:19:42 | |
transformers | 12,601 | closed | Cannot load .pt model using Transformers | Hi,
I want to use Transformers to load a .pt model. How can I do that? I know how to load a .bin checkpoint using Transformers, but I do not know how to load a .pt model. Thanks.
```python
config = config_class.from_pretrained(
    args.config_name if args.config_name else args.model_name_or_path,
    num_labels=num_labels,
    finetuning_task=args.task_name,
    cache_dir=args.cache_dir if args.cache_dir else None,
)
tokenizer = tokenizer_class.from_pretrained(
    args.tokenizer_name if args.tokenizer_name else args.model_name_or_path,
    do_lower_case=args.do_lower_case,
    cache_dir=args.cache_dir if args.cache_dir else None,
)
model = model_class.from_pretrained(
    args.model_name_or_path,
    from_tf=bool(".ckpt" in args.model_name_or_path),
    config=config,
    cache_dir=args.cache_dir if args.cache_dir else None,
)
```
| 07-09-2021 02:18:01 | 07-09-2021 02:18:01 | What is your .pt model? Where did you obtain it from?<|||||>While we don't have the complete details following is a possible solution:
You can initialize your model using transformers and simply load the weights using
`model.load_state_dict(torch.load('model.pt'))`
In case this is not what you're looking for please add further details.<|||||>Hi @Ap1075, Thanks for your reply. It is working to load the `model.pt` if I define the `model` class, but do you know if I want to load the tokenizer from the `model.pt`. How can I do that? For example, I can load the tokenizer by this way from huggingface `tokenizer = AutoTokenizer.from_pretrained(pretrained_model, do_lower_case=True)`, but I cannot do that if the `pretrained_model='model.pt'`.<|||||>Also, for this command, `model.load_state_dict(torch.load('model.pt'))`, if what is the `model` from `model.load_state_dict()`? How to define the model here? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
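To make the suggestion above concrete, a minimal sketch. The model class, checkpoint name and label count here are assumptions, since the thread never says how `model.pt` was produced:
```python
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification, AutoTokenizer

# 1) rebuild the architecture the checkpoint was trained with (assumed BERT-based here)
config = AutoConfig.from_pretrained("bert-base-uncased", num_labels=2)
model = AutoModelForSequenceClassification.from_config(config)

# 2) load the raw state dict, assuming model.pt was written with torch.save(model.state_dict(), "model.pt")
model.load_state_dict(torch.load("model.pt", map_location="cpu"))

# 3) the tokenizer is not stored inside model.pt - load it from the original checkpoint
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```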
transformers | 12,600 | closed | Custom tokenizer from Tokenizers library | Hi, thank you for the library.
I have a few questions regarding training from scratch.
I used this as a reference on how to train new language model.
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
I wanted to train XLNet for my corpus.
First, I could train with tokenizers by
```
tokenizer = SentencePieceUnigramTokenizer()
tokenizer._tokenizer.normalizer = Sequence([NFKC(), Replace("\n", "")])
tokenizer.train(files=paths, vocab_size=16000, special_tokens=[
"<s>",
"</s>",
"<pad>",
"<mask>",
"<unk>",
])
tokenizer.save_model("./new_tokenizer")
```
Then I have to use the transformers library to train:
```
from transformers import XLNetConfig
config = XLNetConfig(
vocab_size=16000,
)
from transformers import XLNetTokenizerFast
tokenizer = XLNetTokenizerFast.from_pretrained("./new_tokenizer", max_len=512)
```
this throws is a folder error.
```
terminate called after throwing an instance of 'std:: iOS failure'
what(): basic filebuf::underflow error reading the file: Is a directory
Aborted (core dumped)
```
How do I load my trained tokenizer? | 07-09-2021 00:56:24 | 07-09-2021 00:56:24 | https://github.com/huggingface/transformers/issues/11722
found solution here |
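For completeness, the pattern from that issue: save the *full* tokenizer state with `save()` (rather than `save_model()`, which only writes the vocabulary pieces) and load it back through `tokenizer_file`. A sketch with placeholder paths and the same special tokens as above:
```python
from tokenizers import SentencePieceUnigramTokenizer
from transformers import PreTrainedTokenizerFast

# `paths` is the same list of training files used above
tokenizer = SentencePieceUnigramTokenizer()
tokenizer.train(files=paths, vocab_size=16000, special_tokens=["<s>", "</s>", "<pad>", "<mask>", "<unk>"])

# write the complete tokenizer.json, not just the vocab files
tokenizer.save("./new_tokenizer/tokenizer.json")

fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="./new_tokenizer/tokenizer.json",
    bos_token="<s>",
    eos_token="</s>",
    pad_token="<pad>",
    mask_token="<mask>",
    unk_token="<unk>",
)
```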
transformers | 12,599 | closed | Point to the right file for hybrid CLIP | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [X ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-09-2021 00:28:16 | 07-09-2021 00:28:16 | |
transformers | 12,598 | closed | `tokenizer.special_tokens_map` has stringified list for "additional_special_tokens" value. | ## Environment info
- `transformers` version: 4.8.2
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik maybe?
## Information
Model I am using (Bert, XLNet ...): XLMRoberta
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. `>>> from transformers import XLMRobertaTokenizer`
2. `>>> m = XLMRobertaTokenizer.from_pretrained('xlm-roberta-base')`
3. `>>> m.add_special_tokens({"additional_special_tokens": ["<space>"]})`
4. `>>> m.special_tokens_map['additional_special_tokens'] == "['<space>']" # True`
## Expected behavior
The value should be a list containing the special characters. :-) The work-around is to use the `additional_special_tokens` attribute directly. | 07-08-2021 22:49:47 | 07-08-2021 22:49:47 | That seems like an issue indeed! Pinging @SaulLu <|||||>Thank you for your issue @erip! This seems to be a bug to me as well, I just opened a PR #12759 that should solve this problem. Now the command:
```python
from transformers import AutoTokenizer
m = AutoTokenizer.from_pretrained('xlm-roberta-base')
m.add_special_tokens({"additional_special_tokens": ["<space>"]})
print(m.special_tokens_map['additional_special_tokens'] == ['<space>'])
```
should output:
```
True
``` |
transformers | 12,597 | closed | [doc] fix broken ref | add missing `:`
@sgugger | 07-08-2021 20:46:07 | 07-08-2021 20:46:07 | |
transformers | 12,596 | closed | Translate README.md to Simplified Chinese | This is part of the Hugging Face document translation project. I appreciate it if anyone (from Hong Kong / Taiwan) could help verify the traditional Chinese version (which is still WIP). | 07-08-2021 19:48:39 | 07-08-2021 19:48:39 | The Chinese translations looks good to me!<|||||>LGTM.<|||||>@JetRunner I can help with the translation of Traditional Chinese. Would you like me submitting a PR or any kinds of assistance?<|||||>> @JetRunner I can help with the translation of Traditional Chinese. Would you like me submitting a PR or any kinds of assistance?
Sure! Please do so. I recommend you convert the simplified Chinese version to traditional Chinese with a conversion tool and then polish it (e.g., replace `软件` with `軟體`) - in this way we can keep the two versions consistent (which is desirable for future maintenance).
Thanks a lot for your help! @qqaatw <|||||>@JetRunner No problem, I'll work on this soon. Once the PR is opened, I'll ping you for reviewing. |
transformers | 12,595 | closed | [Flax] Add flax marian | This PR adds Flax Marian | 07-08-2021 19:23:50 | 07-08-2021 19:23:50 | |
transformers | 12,594 | closed | GPT-2 asking for Padding Token | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Windows 10 (Google Collab)
- Python version: Python 3.6.9
- PyTorch version (GPU?):1.8.1+cu102
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- tokenizers: @LysandreJik
## Information
Model I am using (gpt2-medium):
The problem arises when using:
Trainer, DataCollatorForLanguageModeling,GPT2Tokenizer
The tasks I am working on is:
* Triplets is a series of sequences I want gpt2 to train on
The Error:
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
I put in this line which seems to fix the issue `tokenizer.pad_token = tokenizer.unk_token ` but I'm not sure if it makes sense for gpt-2
## To reproduce
Steps to reproduce the behavior:
Make a csv with column title "triplet" then anything below
Run the following code in google collab
----------------------------------------------------------------------------------------------------------------
```
!pip install pandas
!pip install transformers
!pip install datasets
!pip3 install torch==1.8.1+cu102 torchvision==0.9.1+cu102 torchaudio===0.8.1 -f https://download.pytorch.org/whl/torch_stable.html
import pandas as pd
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import numpy as np
import random
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import GPT2Tokenizer, GPT2LMHeadModel, AdamW, get_linear_schedule_with_warmup,AutoTokenizer, DataCollatorForLanguageModeling, AutoConfig, Trainer, TrainingArguments,AutoModelForCausalLM
from tqdm import tqdm, trange
import torch.nn.functional as F
import csv
from datasets import load_dataset,load_metric
import io
from google.colab import files
print("upload 'train.csv'")
uploaded = files.upload()
#version of gpt we use
model_version = 'gpt2-medium'
#create the dataset
raw_datasets = load_dataset('csv', data_files=['train.csv'])
#raw_datasets["validation"] = (load_dataset('csv', data_files=['validate.csv']))["train"]
print(raw_datasets)
print(raw_datasets["train"][1])
#initialize tokenizer and model
tokenizer = GPT2Tokenizer.from_pretrained(model_version)
model = GPT2LMHeadModel.from_pretrained(model_version)
#vvv this makes the error go away but it doesn't seem to produce a proper attention task
#tokenizer.pad_token = tokenizer.unk_token #prevents error where there is no token. Doesn't matter since I pad properly in the collator? #https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16#training-script
#helper for tokenizing everything
def tokenize_function(examples):
return tokenizer(examples["triplet"], truncation=True)
#tokenize all our data
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
#gets rid of original string data
tokenized_datasets=tokenized_datasets.remove_columns(["triplet"])
print(tokenized_datasets)
print(tokenized_datasets["train"]["input_ids"][1])
#collate data
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
#training args (you can control hyperprarameters from here, I just put output directory)
training_args = TrainingArguments(("Finetuned"))
trainer = Trainer(
model,
training_args,
train_dataset=tokenized_datasets["train"],
#eval_dataset=tokenized_datasets["validation"],
#compute_metrics=compute_metrics,
data_collator=data_collator,
tokenizer=tokenizer,
)
trainer.train()
trainer.save_model()
```
| 07-08-2021 18:11:35 | 07-08-2021 18:11:35 | ```python
tokenizer.pad_token = tokenizer.eos_token
```
is the recommended way to fix the warning :-) <|||||>alright thank you, so eos instead of unknown right?<|||||>Upon further research, it seems they default to the same thing anyways https://huggingface.co/transformers/model_doc/gpt2.html |
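For reference, applied to the script above this boils down to (a sketch, using `gpt2-medium` as in the original post):
```python
from transformers import GPT2Tokenizer, DataCollatorForLanguageModeling

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
```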
transformers | 12,593 | closed | XLM-RoBERTa NER extraction breaks/splitting the words ! | I have been using the huggingface xlm-roberta-large-finetuned-conll03-english model NER pipeline for extracting Names, Location and Organization Entities.
But I'm facing an issue now and then with certain entity extractions from short sentences, where a word is broken down into sub-word tokens with different entity types. The code used is below:
```
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
ner_model = pipeline("ner", model = model, tokenizer = tokenizer, grouped_entities = True)
text = "Brennan Nov2018"
ner_model(text)
```
output:
```json
[
  {
    "entity_group": "PER",
    "score": 0.6225427985191345,
    "word": "Brenn",
    "start": 0,
    "end": 5
  },
  {
    "entity_group": "LOC",
    "score": 0.759472668170929,
    "word": "an",
    "start": 5,
    "end": 7
  }
]
```

Even though i'm using `grouped_entities = True` , i'm still getting some words broken down into 2 different entity groups.
Is there a way to prevent this to happen and to return only complete words as entity ?
- PyTorch Version : 1.7.1
- transformers : 4.6.0
- Python : 3.8.5 | 07-08-2021 17:35:11 | 07-08-2021 17:35:11 | Cc @Narsil <|||||>Hi @dummynov1 ,
You are using `grouped_entities` which will only to attempt to fuse *valid* entities (B-PER, I-PER, I-PER). Any break in that structure won't get merged and you might break words up.
We recently added other aggregation strategies ( https://huggingface.co/transformers/main_classes/pipelines.html?highlight=aggregation_strategy#transformers.TokenClassificationPipeline ) but they only work for word aware tokenizers (which is not the case of roberta).
Your issue is not isolated, so I actually looked into it, and I think I figured a better heuristic that you could end up using: https://github.com/huggingface/transformers/pull/12611<|||||>> Hi @dummynov1 ,
>
> You are using `grouped_entities` which will only to attempt to fuse _valid_ entities (B-PER, I-PER, I-PER). Any break in that structure won't get merged and you might break words up.
>
> We recently added other aggregation strategies ( https://huggingface.co/transformers/main_classes/pipelines.html?highlight=aggregation_strategy#transformers.TokenClassificationPipeline ) but they only work for word aware tokenizers (which is not the case of roberta).
>
> Your issue is not isolated, so I actually looked into it, and I think I figured a better heuristic that you could end up using: #12611
Could you elaborate, what needs to be done to fix this.? Should i use the aggregation strategies, but i'm using transformers 4.6.0 (have to use this version only, due to other dependencies).<|||||>You won't be able to fix it correctly in a super reliable way. Simply because `xlm` doesn't know what a "word" is.
**The only real fix you can do is make the model better by more finetuning, with more data probably. (To get correct tags on all your tokens)**
That being with the proposed PR you will be able to have a bit of a better heuristic that might be good enough for you:
you will be able to write:
```python
from transformers import pipeline, AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english")
ner_model = pipeline("ner", model = model, tokenizer = tokenizer, aggregation_strategy = "max")
text = "Brennan Nov2018"
ner_model(text)
```
Because `xlm` doesn't know words, everything non space will be treated as a word
- "Brenn" "ann" will be fused as intended
- "Some", "one", "," ("Someone," ) too unfortunately.
- Any punctuation within string really. Any formatting within `yaml`, `json`, `markdown` etc..<|||||>@Narsil Could you advise if there is a model on HuggingFace hub that is "word-aware"? I am not sure if I understand it properly, but in my mind, none of the BERT models are actually "word-aware".
I struggled with this problem earlier last year, and did a lot of search online without a solution. I ended up with an ugly patch downstream to absorb this problem. So thanks for making some improvements to the pipelines.<|||||>Hi @ninjalu,
Do you mind explaining a little more what your issue is ?
Without context it's a bit hard to guide you correctly.
Tokenizers "word-aware" are the ones with `continuing_subword_prefix` set (`tokenizer.backend_tokenizer.model.continuing_subword_prefix` variable, if it exists). But most likely you shouldn't choose a tokenizer based purely on this, but probably first on considerations like what data it was trained on and the leveraging you can use in the underlying model (if you're doing fine-tuning for instance, it's better to pick a good model for your target data/langage than starting the whole model+tokenizer from scratch) |
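A quick way to check this for a given checkpoint (a sketch - whether the attribute is exposed depends on the `tokenizers` version, hence the defensive `getattr`):
```python
from transformers import AutoTokenizer

for name in ["bert-base-cased", "xlm-roberta-base"]:
    tokenizer = AutoTokenizer.from_pretrained(name)
    # WordPiece models report "##"; Unigram-based models like XLM-R have no such notion
    prefix = getattr(tokenizer.backend_tokenizer.model, "continuing_subword_prefix", None)
    print(name, prefix)
```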
transformers | 12,592 | closed | Add Flax sprint project evaluation section | Add a section on the project evaluation. As more jury members get confirmed, we can extend the list | 07-08-2021 16:08:02 | 07-08-2021 16:08:02 | |
transformers | 12,591 | closed | Fix MT5 init | # What does this PR do?
This PR fixes the MT5 init to make sure to always have the tokenizer available (even if tokenizers or sentencepiece is not available).
Fixes #12588 | 07-08-2021 14:59:09 | 07-08-2021 14:59:09 | |
transformers | 12,590 | closed | flax model parallel training | # What does this PR do?
Adds model parallel training example for GPTNeo using jax's [`pjit`](https://jax.readthedocs.io/en/latest/jax.experimental.pjit.html) transformation.
(This example probably just works on a single TPU v3-8).
This should enable training bigger models like 1.3B GPTNeo on a single TPU V3-8.
The `partition.py` file defines the `PyTree` of the `PartitionSpec` file which describes how the model parameters will be sharded. The actual sharding is automatically handled by `pjit`.
The key idea is to `pjit` the entire training step function. To do that we
- Define the mesh structure.
- Define `PartitionSpec` for every input argument and return value of the pjitted function. The axis names that are used here should match the axis names used in `PartitionSpec`. This means we need the spec for our parameter and optimizer state PyTrees
- The structure of the `PyTree` of `PartitionSpec` needs to match the structure of the `PyTree` of the actual values.
- Call the pijitted fun in a mesh context.
Below is a not-so minimal code-snippet that describes the approach
```python
# init our model
model = FlaxGPTNeoForCausalLM.from_pretrained("gpt-neo-125M")
# get the partition spec for model params
param_spec = set_partitions(unfreeze(model.params))
# get optimizer
optim = optax.adamw(learning_rate=decay_fn)
# mesh definition
mesh_devices = np.array(jax.devices()).reshape(1, jax.local_device_count())
def get_initial_state(params):
state = optim.init(params)
return tuple(state), params
# init optim in abstract way, this just returns the PyTree of opt_state with shapes
# so we can get the PartitionSpec for opt_state using the tree
shapes = jax.tree_map(lambda x: x.shape, model.params)
state = jax.eval_shape(get_initial_state, shapes)
# Get the opt spec
def get_opt_spec(x):
if isinstance(x, dict):
return param_spec
return None
opt_state_spec, param_spec = jax.tree_map(
get_opt_spec, state, is_leaf=lambda x: isinstance(x, (dict, optax.EmptyState))
)
# Now actually initialize the opt state
# this also takes care of sharding the opt and param state according to the spec.
p_get_initial_state = pjit(
get_initial_state,
in_axis_resources=None,
out_axis_resources=(opt_state_spec, param_spec),
)
with mesh(mesh_devices, ("dp", "mp")):
opt_state, params = p_get_initial_state(freeze(model.params))
# define our train step
def train_step(params, opt_state, dropout_rng, batch):
....
return new_params, tuple(new_opt_state), new_dropout_rng, metrics
# pjit the train step
# in_axis_resources and out_axis_resources expect the PartitionSpec
# for every input argument and return values
p_train_step = pjit(
train_step,
in_axis_resources=(param_spec, opt_state_spec, None, None),
out_axis_resources=(param_spec, opt_state_spec, None, None),
)
# do the training
with mesh(mesh_devices, ("dp", "mp")):
params, state, loss, rng = p_train_step(params, opt_state, ...)
```
As we can see above, all the sharding logic is outside of the model definition, so ideally we don't need to modify the modeling code. This also means it should be possible to apply this to any other model by defining the right `PyTree` of `PartitionSpec`.
A few things to consider for future work.
- A convenient way to get the PyTree of model parameters, so we can define the partition spec.
- Currently, model weights are initialized when the model class is instantiated. This can cause problems for models that cannot fit on one device. There should be an option to abstractly initialize the model without having to initialize the weights. This will also allow a convenient way to get the PyTree.
- The `from_pretrained` method also directly puts the weights on the device, we need to consider either sharded loading or initially loading the weights on the CPU then sharding them on the devices, to avoid OOM with huge models.
This is typically done using various load balancing methods, e.g. DeepSpeed pipe has:
https://www.deepspeed.ai/tutorials/pipeline/#load-balancing-pipeline-modules
pytorch has these too but I can't find any mentions of these in their docs.
Have to go to the source:
https://github.com/pytorch/pytorch/blob/58adaaba60441c1ed59f35389598aabf91a772dd/torch/distributed/pipeline/sync/_balance/__init__.py
```
def balance_cost
def balance_by_time
def balance_by_size
```
Is that what you're referring to, @patrickvonplaten
|
transformers | 12,589 | closed | Git LFS bug when uploading to hub | After running a MLM for 69000 steps
https://huggingface.co/birgermoell/roberta-swedish-scandi/tree/main
the model crashed and now I get an error when trying to upload to the hub. The same error was responsible for stopping the training.
```
Uploading LFS objects: 100% (2/2), 998 MB | 0 B/s, done.
Enumerating objects: 13, done.
Counting objects: 100% (13/13), done.
Delta compression using up to 96 threads
Compressing objects: 100% (9/9), done.
Writing objects: 100% (9/9), 33.99 KiB | 308.00 KiB/s, done.
Total 9 (delta 3), reused 0 (delta 0)
remote: -------------------------------------------------------------------------
remote: Your push was rejected because it contains files larger than 10M.
remote: Please use https://git-lfs.github.com/ to store larger files.
remote: -------------------------------------------------------------------------
remote: Offending files:
remote: - events.out.tfevents.1625668537.t1v-n-98937c84-w-0.121638.3.v2 (ref: refs/heads/main)
To https://huggingface.co/birgermoell/roberta-swedish-scandi
! [remote rejected] main -> main (pre-receive hook declined)
error: failed to push some refs to 'https://huggingface.co/birgermoell/roberta-swedish-scandi'
```
Git lfs is installed in the repository.
Perhaps the files stored in git are too large? | 07-08-2021 14:19:22 | 07-08-2021 14:19:22 | You need to explicitly `git lfs track` the files (of file name patterns) that are to be stored in LFS<|||||>Here it might be your tfevent files, so you can `git lfs track "*tfevents*"`?<|||||>Solved it by doing the following.
1.Tracking the file with LFS
```
git lfs track filename
```
2 Assuring that it is tracked by
```
git lfs status
```
3.
```
git lfs migrate import --include="*.v2"
``` |
transformers | 12,588 | closed | 'MT5Tokenizer' is not defined (on Google colab) | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `transformers-4.9.0.dev0`
- Platform: Google Colab
- Python version: `3.7.10`
- PyTorch version (GPU?): `1.9.0+cu102`
- Tensorflow version (GPU?): `2.5.0`
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger , @patil-suraj
## Information
Trying to run `run_mlm.py`:
The problem arises when using:
- Just importing packages.
## To reproduce
Steps to reproduce the behavior:
1. Run the following on Google colab
2.
```
!pip install git+https://github.com/huggingface/transformers.git
from transformers import (
CONFIG_MAPPING,
MODEL_FOR_MASKED_LM_MAPPING,
AutoConfig,
AutoModelForMaskedLM,
AutoTokenizer,
DataCollatorForLanguageModeling,
HfArgumentParser,
Trainer,
TrainingArguments,
set_seed,
)
```
Error message -
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-1-a06d7b6932ed> in <module>()
2
3
----> 4 from transformers import (
5 CONFIG_MAPPING,
6 MODEL_FOR_MASKED_LM_MAPPING,
5 frames
/usr/local/lib/python3.7/dist-packages/transformers/models/mt5/__init__.py in <module>()
94 globals()["__file__"],
95 _import_structure,
---> 96 extra_objects={"MT5Tokenizer": MT5Tokenizer, "MT5TokenizerFast": MT5TokenizerFast},
97 )
NameError: name 'MT5Tokenizer' is not defined
```
## Expected behavior
No error when importing.
Thank you for your help! | 07-08-2021 14:11:26 | 07-08-2021 14:11:26 | Fixed it for now - `!pip install git+https://github.com/huggingface/transformers.git@b29c394` <|||||>Should be fixed now, thanks for reporting! |
transformers | 12,587 | closed | OOM during saving step | I'm trying to train the Blenderbot-9B model using the Deepspeed integration on 8 GPUs, each of them has 16GB VRAM (one node).
Script:
```bash
deepspeed --hostfile myhostfile \
    ${_PATH}/examples/pytorch/summarization/run_summarization.py \
    --model_name_or_path hyunwoongko/blenderbot-9B \
    --do_train \
    --do_eval \
    --dataset_name cnn_dailymail \
    --dataset_config "3.0.0" \
    --source_prefix "summarize: " \
    --output_dir /tmp/tst-summarization \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 8 \
    --deepspeed ${_PATH}/tests/deepspeed/ds_config_zero3.json \
    --logging_steps 1 \
    --fp16 \
    --overwrite_output_dir \
    --save_steps 10 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy="steps" \
    --max_train_samples 10024 \
    --max_eval_samples 32 \
    --max_source_length 128 --max_target_length 128 \
    --eval_steps 5
```
Training and evaluation seem to run fine: I see about 10GB of VRAM occupied on each GPU, so there is even free space left on the GPUs. However, afterwards, during the saving step, I get an OOM, which I don't understand.
Log:
[log.txt](https://github.com/huggingface/transformers/files/6785035/log.txt)
Deespeed: 0.4.3+c9fee82
torch 1.8, cuda 11.1
Transformers:
'4.9.0.dev0'
| 07-08-2021 14:07:40 | 07-08-2021 14:07:40 | cc @sgugger <|||||>I think this is more on the DeepSpeed side so cc-ing @stas00 to confirm.<|||||>Thank you for the full log.
Yes, it's on the deepspeed side.
As you can see in https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out if you use:
```
{
"zero_optimization": {
"stage3_gather_fp16_weights_on_model_save": true
}
}
```
then it reconsolidates the whole fp16 model on cpu, while gathering one layer at a time on GPU (and then moving to cpu).
You can see the code here: https://github.com/microsoft/DeepSpeed/blob/5652072e5451077da4179e5398b1c0c71c752c34/deepspeed/runtime/engine.py#L1991
So to first unblock you disable the above setting in `ds_config.json` by setting it to `false` and then use `zero_to_fp32.py` as explained here https://huggingface.co/transformers/master/main_classes/deepspeed.html#getting-the-model-weights-out if you need to extract the weights - as a bonus you get fp32 weights then.
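(For reference, the extraction described on that docs page boils down to roughly the following. This is a sketch only - the helper's name and location should be checked against the installed DeepSpeed version.)
```python
# run offline after training, pointing at the checkpoint folder saved by the Trainer
from transformers import AutoModelForSeq2SeqLM
from deepspeed.utils.zero_to_fp32 import get_fp32_state_dict_from_zero_checkpoint

model = AutoModelForSeq2SeqLM.from_pretrained("hyunwoongko/blenderbot-9B")
state_dict = get_fp32_state_dict_from_zero_checkpoint("/tmp/tst-summarization/checkpoint-10")
model.load_state_dict(state_dict)  # model now holds the consolidated fp32 weights
```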
Meanwhile let me have a look and see if I can make that code more memory tight - in theory if the training had enough gpu memory it should too - e.g. can iterate over each param, rather than full layers. I will experiment and get back to you.<|||||>OK the proper fix is here https://github.com/microsoft/DeepSpeed/pull/1220 if you want to try that branch, but should be merged into deepspeed master shortly and hopefully a new release will be made soon.
<|||||>sorry, looks like more work is needed there. will keep you posted.<|||||>This version should do the right thing as all the tests now pass: https://github.com/microsoft/DeepSpeed/pull/1223
Unfortunately missed the new deepspeed release, so will enter the next one.
Do let me know if you encounter any issues with this PR branch.
Thank you.<|||||>Thank you @stas00 ! Here is what I did:
- I tried with your PR, no OOM anymore during save step. So the original problem is solved.
- However when trying to resume from that checkpoint via `--resume_from_checkpoint /tmp/tst-summarization/checkpoint-10` I ran out of cpu ram (512GB in my case).
Just some further comments:
- Setting the option `"stage3_gather_fp16_weights_on_model_save": false` works as well (HF model is simply not saved).
- Exporting the Deepspeed checkpoint offline using the script as you said works and I can also resume training using this exported model via `--model_name_or_path`. <|||||>Closing as original problem was solved. |
transformers | 12,586 | closed | Fix caching issue #12536 | # What does this PR do?
This PR is a proposed fix to issue #12536. It does so by simply logging the missing file instead of raising an error that halts program execution, in the special case of non-existent optional vocab files. These are handled when `local_files_only=True` (`FileNotFoundError`) and when `local_files_only=False` with the user online, but not when `local_files_only=False` and the user is offline.
This needs to be reviewed to ensure this is the direction to go to fix this issue, and that this will not be a problem in other cases.
Fixes #12536
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik | 07-08-2021 13:47:42 | 07-08-2021 13:47:42 | Note: Tests will have to be changed if you want to go this way, I would imagine it's a bit too general of a fix to be honest.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,585 | closed | Error when running wav2vec2 embeddings | While trying to extract wav2vec2 embeddings I get the following errors.
e "feature_extractor.py", line 80, in <module>
feature_extractor("/home/bmoell/data/media.talkbank.org/dementia/English/Pitt/Control/cookie")
File "feature_extractor.py", line 36, in feature_extractor
get_wav2vecembeddings_from_audiofile(wav_file)
File "feature_extractor.py", line 57, in get_wav2vecembeddings_from_audiofile
input_values = processor(resampled, return_tensors="pt", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore
File "/home/bmoell/hubert-dementia-screening/dementia/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2284, in __call__
raise ValueError(
ValueError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).
I'm using the following script to extract wav2vec2 embeddings from .wav files.
```python
def get_wav2vecembeddings_from_audiofile(wav_file):
print("the file is", wav_file)
speech, sample_rate = sf.read(wav_file)
if len(speech.shape) > 1:
speech = stereo_to_mono(speech)
# change sample rate to 16 000 hertz
resampled = change_sample_rate(speech, sample_rate, new_sample_rate)
print("the speech is", speech)
input_values = processor(resampled, return_tensors="pt", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore
print("input values", input_values)
# import pdb
# pdb.set_trace()
with torch.no_grad():
encoded_states = model(
**input_values,
# attention_mask=input_values["attention_mask"],
output_hidden_states=True
)
last_hidden_state = encoded_states.hidden_states[-1] # The last hidden-state is the first element of the output tuple
print("getting wav2vec2 embeddings")
print(last_hidden_state)
torch.save(last_hidden_state, wav_file + '.wav2vec2.pt')
```
I updated the script to pass the file path to the processor. Now I get a different error.
```python
def get_wav2vecembeddings_from_audiofile(wav_file):
print("the file is", wav_file)
speech, sample_rate = sf.read(wav_file)
if len(speech.shape) > 1:
speech = stereo_to_mono(speech)
# change sample rate to 16 000 hertz
resampled = change_sample_rate(speech, sample_rate, new_sample_rate)
print("the speech is", speech)
input_values = processor(wav_file, return_tensors="pt", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore
print("input values", input_values)
# import pdb
# pdb.set_trace()
with torch.no_grad():
encoded_states = model(
input_values=input_values["input_ids"],
# attention_mask=input_values["attention_mask"],
output_hidden_states=True
)
last_hidden_state = encoded_states.hidden_states[-1] # The last hidden-state is the first element of the output tuple
print("getting wav2vec2 embeddings")
print(last_hidden_state)
torch.save(last_hidden_state, wav_file + '.wav2vec2.pt')
```
File "/home/bmoell/hubert-dementia-screening/dementia/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 294, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: expected scalar type Long but found Float
| 07-08-2021 13:14:54 | 07-08-2021 13:14:54 | Can you copy-paste a reproducible code snippet here (create dummy data if necessary) ? :-) <|||||>Made a colab that reproduced the error.
https://colab.research.google.com/drive/1JpZ33M3tCKJBK6XZeDhj4u30roWKQ63s?usp=sharing<|||||>I can't run the colab as `processor` is commented out<|||||>The main problem is the following:
We should not use a tokenizer to process wav files -> we should use the processor for that. So `AutoTokenizer` should be replaced by `Wav2Vec2Processor`. If you settle on using `HubertForCTC`, it's a good idea to first look into the examples in the docs to check how the model should be used. E.g. here we show an example for `HubertForCTC`: https://huggingface.co/transformers/master/model_doc/hubert.html#hubertforctc
=> so from this example you can see that you should load the wav file yourself and then use the `Wav2Vec2Processor` to process the input. This will return `input_values` that you can pass to the model.
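Something along these lines (a sketch - the random array just stands in for a real 16 kHz mono waveform):
```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, HubertForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

speech = np.random.random(16000).astype(np.float32)  # 1 second of (fake) 16 kHz mono audio
inputs = processor(speech, sampling_rate=16000, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(inputs["input_values"], output_hidden_states=True)
last_hidden_state = outputs.hidden_states[-1]
```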
Also, just a note on how to write issues for the future ;-):
It's always good to aim for a *minimal* reproducible code example. E.g. for this error it should be relatively simple to figure out that the error is produced by the following lines:
```python
input_values = processor(wav_file, return_tensors="pt", padding=True, sampling_rate=new_sample_rate) # there is no truncation param anymore
encoded_states = model(
input_values=input_values["input_ids"],
# attention_mask=input_values["attention_mask"],
output_hidden_states=True
)
```
in your code. So to make debugging easier it would be good to create a dummy `wav_array` (this can be just a random 1-D np.float32 array) and then post the 3-4 lines here that show that there is a bug. E.g.:
```python
import numpy as np
from transformers import AutoTokenizer, HubertForCTC
tokenizer = AutoTokenizer.from_pretrained("facebook/hubert-large-ls960-ft")
model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")
wav_file = np.random.random((1, 1024))
input_values = tokenizer(wav_file, return_tensors="pt", padding=True)
encoded_states = model(input_values=input_values["input_ids"])
```
=> It takes much less time to run these 5 lines than going through the colab (which sadly doesn't even run correctly).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,584 | closed | [Flax]Not able to Run Hugging Face GPT2 model for jax on TPU's | Hi,Trying to do FlaxGPT2ForMultipleChoice.I am trying to run GPT2 using hugging face for TPU's.It is showing tuple out of index.But when i run it in CPU there is no such error.Even when i run a simple model without any code of mine it is also behaving the same in TPU.

The example below is for a basic model:

To Reproduce
This is the colab notebook:
https://colab.research.google.com/drive/1h8CeTM5NUpHeS1oGHONX1YbwtGHf3nyU?usp=sharing
| 07-08-2021 13:13:09 | 07-08-2021 13:13:09 | @patrickvonplaten
<|||||>Hey @vivekvkashyap,
instead of putting a screenshot here - could you maybe copy paste the link to your google colab instead so that we can reproduce the error?
Thank you!<|||||>@patrickvonplaten I have added the colab notebook in the section To Reproduce
<|||||>@patil-suraj
<|||||>Similar to #12578 - we are working on it :-)
|
transformers | 12,583 | closed | AttributeError for DataCollatorForLanguageModelling with tokenizers.Tokenizer | ## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-5.3.0-53-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
- tokenizers: @LysandreJik
- trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
tokenizer = Tokenizer.from_file("my-tokenizer.json")
config = AutoConfig.from_pretrained("bert-base-cased", vocab_size=tokenizer.get_vocab_size())
model = AutoModelForMaskedLM.from_config(config)
tokenizer.enable_truncation(max_length=model.config.max_position_embeddings)
dataset = LMDataset(tokenizer, files=['train_1.txt', 'train_2.txt'])
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, **cfg.data_collator_kwargs)
```
```
Traceback (most recent call last):
...
File "/home/leb/lang-models/scripts/train_lm.py", line 25, in train_lm
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, **cfg.data_collator_kwargs)
File "<string>", line 7, in __init__
File "/home/leb/anaconda3/envs/lang-models/lib/python3.7/site-packages/transformers/data/data_collator.py", line 333, in __post_init__
if self.mlm and self.tokenizer.mask_token is None:
AttributeError: 'tokenizers.Tokenizer' object has no attribute 'mask_token'
```
## Expected behavior
Expected to be able to use tokenizers.Tokenizer in the tokenizer parameter to DataCollatorForLanguageModelling.
| 07-08-2021 12:45:37 | 07-08-2021 12:45:37 | Hi there! I am unsure why you thought you could use a `tokenizers.Tokenizer` object here. The [documentation](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorforlanguagemodeling) clearly states it has to be a `PreTrainedTokenizerBase`, so either a `PreTrainedTokenizer` or a `PreTrainedTokenizerFast`. You can instantiate one with
```
from transformers import PreTrainedTokenizerFast
tokenizer = PreTrainedTokenizerFast(tokenizer_file=path_to_json)
```<|||||>Sorry about that, I get confused what to use where between the two projects sometimes.
I've also found [this](https://huggingface.co/transformers/fast_tokenizers.html#) to help me.
Although the [documentation](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizerFast) for `PreTrainedTokenizerFast` doesn't show `tokenizer_file` as a valid parameter to `__init__`<|||||>Oh very true, it's definitely missing! Do you want to make a PR to fix it? |
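For completeness, a sketch of wiring this up for MLM - note that the special tokens have to be declared on the wrapper, otherwise `DataCollatorForLanguageModeling` will still complain about a missing mask token (the token strings below assume a BERT-style vocab):
```python
from transformers import PreTrainedTokenizerFast, DataCollatorForLanguageModeling

tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="my-tokenizer.json",
    unk_token="[UNK]",
    pad_token="[PAD]",
    cls_token="[CLS]",
    sep_token="[SEP]",
    mask_token="[MASK]",
)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```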
transformers | 12,582 | closed | Simplify unk token | # What does this PR do?
As seen on [tokenizers#748](https://github.com/huggingface/tokenizers/issues/748) it's possible to avoid the UnigramTrainer forgetting about the unknown token if we set it properly as a kwarg when defining the trainer. This PR does that to avoid messing with the json after. | 07-08-2021 12:26:30 | 07-08-2021 12:26:30 | |
transformers | 12,581 | closed | ViT doesnt use tokenizer, yet shown as example transformer website | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): ViT
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Open the ViT [model page](https://huggingface.co/google/vit-base-patch16-224)
2. Click on "</> Use in Transformers"
3. Copy the text and run it
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Error trace:
```
tokenizer = AutoTokenizer.from_pretrained("google/vit-base-patch16-224")
File "/home/david/transformers/src/transformers/models/auto/tokenization_auto.py", line 576, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.vit.configuration_vit.ViTConfig'> to build an AutoTokenizer.
Model type should be one of RetriBertConfig, RoFormerConfig, T5Config, MT5Config, MobileBertConfig, DistilBertConfig, AlbertConfig, CamembertConfig, PegasusConfig, MBartConfig, XLMRobertaConfig, MarianConfig, BlenderbotSmallConfig, BlenderbotConfig, BartConfig, LongformerConfig, RobertaConfig, ReformerConfig, ElectraConfig, FunnelConfig, LxmertConfig, LayoutLMConfig, DPRConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig, FSMTConfig, BertGenerationConfig, DebertaConfig, DebertaV2Config, RagConfig, XLMProphetNetConfig, Speech2TextConfig, M2M100Config, ProphetNetConfig, MPNetConfig, TapasConfig, LEDConfig, ConvBertConfig, BigBirdConfig, IBertConfig, Wav2Vec2Config, HubertConfig, GPTNeoConfig, LukeConfig, BigBirdPegasusConfig, CanineConfig.
```
## Expected behavior
Able to load the model correctly. Also, @patrickvonplaten already solved it for me, saying it's `ViTFeatureExtractor` instead, but it should still be changed on the website. Thank you!
<!-- A clear and concise description of what you would expect to happen. -->
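For reference, the pattern that does work (a sketch based on that pointer; the test image URL is just an arbitrary example):
```python
from PIL import Image
import requests
from transformers import ViTFeatureExtractor, ViTForImageClassification

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```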
| 07-08-2021 11:47:53 | 07-08-2021 11:47:53 | cc @julien-c @LysandreJik @sgugger @patil-suraj - For Vision and Speech models we probably should create a `AutoProcessor` and adapt the default website widget to not use `AutoTokenizer` for Vision & Speech<|||||>There is the `AutoFeatureExtractor` class already for vision (since there are no processors there).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am facing same problem for 'facebook/dino-vitb16' on hugging face.
I am trying to use transformers.onnx to convert the model to ONNX.
Though there is no tokenizer for this model, it looks for one.
Any solution for this ?<|||||>>
Hi @kartikpodugu Could you open a new issue with a description and a code snippet that reproduce the issue? Thank you.<|||||>@kartikpodugu @ydshieh
I also faced the same issue with `from transformers import ViTForImageClassification`.
However, I resolved this issue by upgrading the transformers version.
[Old] 4.12.0.dev0
[New] 4.29.2<|||||>@SangbumChoi Could you open a new issue with a description, your full environment information, and a code snippet that reproduce the issue? Thank you.
Tried `from transformers import ViTForImageClassification` and it works fine. |
transformers | 12,580 | closed | Unable to quantize Google's LaBSE model using convert_graph_to_onnx.py | ## Environment Info:
The experiment was performed on Google Colab (12.69GB RAM).
It was also run on a machine with ~20GB RAM available.
### Who can help
@LysandreJik @sgugger @SilvanK4t1qbit
## Information
I'm unable to quantize Google's [`setu4993/LaBSE`](https://huggingface.co/setu4993/LaBSE) model using the script [`convert_graph_to_onnx.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py) (the LaBSE model is approximately **1.8GB**).
The command I used to convert the graph is:
```bash
python convert_graph_to_onnx.py --framework pt --model setu4993/LaBSE --quantize saved_models_temp/labse_bert_onnx/labse_bert.onnx --pipeline sentiment-analysis --opset 11
```
The ONNX `convert` and `optimize` steps are executed, and after that the process is killed while running `quantize`.
The process is **Killed** without any error.
**Terminal Output:**
```bash
2021-07-08 10:11:48.524534: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
====== Converting model to ONNX ======
ONNX opset version set to: 11
Loading pipeline (model: setu4993/LaBSE, tokenizer: setu4993/LaBSE)
Downloading: 100% 560/560 [00:00<00:00, 763kB/s]
Downloading: 100% 1.88G/1.88G [00:39<00:00, 47.4MB/s]
tcmalloc: large alloc 1539547136 bytes == 0x5654616fc000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb8469f526e 0x7fb8469f69e2 0x7fb88a260b49 0x7fb88a261897 0x7fb88a63dd89 0x7fb88ada2b9a 0x7fb88ad85cbe 0x7fb88a98aa05 0x7fb89d2cc451 0x565457706338 0x56545783a1ba 0x5654578337ad 0x5654577c6c9f 0x565457807d79 0x565457804cc4 0x5654577c5559 0x5654578394f8 0x5654578337ad 0x5654577c6a81 0x565457807d79 0x565457804cc4 0x5654577c5462 0x565457838715 0x5654578337ad 0x5654577c6a81 0x565457807d79 0x565457804cc4 0x5654577c5462 0x565457838715
tcmalloc: large alloc 1539547136 bytes == 0x5654d1b10000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb8469f526e 0x7fb8469f69e2 0x7fb88b1129e9 0x7fb89d47d349 0x565457804c65 0x5654577c5462 0x565457838715 0x5654578337ad 0x5654577c6003 0x5654577c5b09 0x56545790d28d 0x56545787c1db 0x5654577c4bb1 0x5654578b5fed 0x565457838988 0x5654578337ad 0x565457705e2c 0x565457835bb5 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c6c9f 0x5654577c6ea1 0x565457835bb5 0x5654578334ae 0x5654577c6c9f 0x5654577c6ea1 0x565457835bb5
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at setu4993/LaBSE and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Downloading: 100% 239/239 [00:00<00:00, 374kB/s]
Downloading: 100% 5.22M/5.22M [00:00<00:00, 53.7MB/s]
Downloading: 100% 9.62M/9.62M [00:00<00:00, 45.6MB/s]
Downloading: 100% 112/112 [00:00<00:00, 169kB/s]
Creating folder path_to_model/saved_models_temp/labse_bert_onnx
Using framework PyTorch: 1.9.0+cu102
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input token_type_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch'}
Ensuring inputs are in correct order
position_ids is not present in the generated input list.
Generated inputs order: ['input_ids', 'attention_mask', 'token_type_ids']
tcmalloc: large alloc 1539547136 bytes == 0x5654d1b10000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb8469f526e 0x7fb8469f69e2 0x7fb88b111a73 0x7fb88a647b7b 0x7fb88ada2bef 0x7fb88ad87480 0x7fb88a990454 0x7fb88a648890 0x7fb88ae9c26f 0x7fb88ac5af3e 0x7fb88c530f77 0x7fb88c5313f2 0x7fb88ac5af3e 0x7fb88c7e7fde 0x7fb88c7e8102 0x7fb88b0dd8a6 0x7fb89d163742 0x5654577c4d54 0x5654577c4a50 0x565457839105 0x5654578b6e36 0x5654578abe76 0x565457800484 0x565457804c65 0x5654577c5462 0x565457838715 0x5654578337ad 0x565457705eb1 0x7fb89d6c1275
/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py:1974: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_tensor.shape[chunk_dim] == tensor_shape for input_tensor in input_tensors
tcmalloc: large alloc 1539547136 bytes == 0x5654d1b10000 @ 0x7fb8b24aa887 0x7fb8b0da0c29 0x7fb8b0da0d47 0x7fb8b0da27a5 0x7fb88ce699c6 0x7fb88ce6bbf6 0x7fb88ce6d20a 0x7fb88ce6de23 0x7fb89d6564f9 0x7fb89d659004 0x7fb89d5cfc00 0x7fb89d054b88 0x5654577c4cc0 0x5654577c4a50 0x565457838be0 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x5654578387f0 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654577c630a
tcmalloc: large alloc 1883791360 bytes == 0x56554dbbe000 @ 0x7fb8b24aa887 0x7fb8b0da0c29 0x7fb8b0da1afb 0x7fb8b0da1bb4 0x7fb8b0da1f9c 0x7fb8987e7bb7 0x7fb8987e8064 0x7fb88ce66a1c 0x7fb89d699aff 0x7fb89d054b88 0x5654577c58a8 0x565457838fd5 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654578334ae 0x5654577c63ea 0x5654578387f0 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654577c630a 0x5654578343b5 0x5654578334ae 0x5654577c63ea 0x5654578343b5 0x5654578334ae 0x5654578331b3 0x5654578fd182 0x5654578fd4fd 0x5654578fd3a6
====== Optimizing ONNX model ======
tcmalloc: large alloc 2147483648 bytes == 0x5655a8fba000 @ 0x7fb8b248ab6b 0x7fb8b24aa379 0x7fb7eb84b34c 0x7fb7eb8482f4 0x7fb7eb8027d1 0x7fb7eb8077b2 0x7fb7eb80a4d6 0x7fb7eb6783a0 0x7fb7ebb3e747 0x7fb7ebb84b53 0x7fb7ebbacb38 0x5654577c58a8 0x565457838fd5 0x5654578334ae 0x5654577c63ea 0x56545783460e 0x5654578334ae 0x5654577c6a81 0x565457807d79 0x565457804cc4 0x5654577c5462 0x565457838715 0x5654577c630a 0x5654578343b5 0x5654578334ae 0x5654578331b3 0x5654578fd182 0x5654578fd4fd 0x5654578fd3a6 0x5654578d4723 0x5654578d43cc
2021-07-08 10:14:05.009614471 [W:onnxruntime:, inference_session.cc:1303 Initialize] Serializing optimized model with Graph Optimization level greater than ORT_ENABLE_EXTENDED and the NchwcTransformer enabled. The generated model may contain hardware specific optimizations, and should only be used in the same environment the model was optimized in.
Optimized model has been written at path_to_model/saved_models_temp/labse_bert_onnx/labse_bert-optimized.onnx: ✔
/!\ Optimized model contains hardware specific operators which might not be portable. /!\
As of onnxruntime 1.4.0, models larger than 2GB will fail to quantize due to protobuf constraint.
This limitation will be removed in the next release of onnxruntime.
WARNING:root:onnxruntime.quantization.quantize is deprecated.
Please use quantize_static for static quantization, quantize_dynamic for dynamic quantization.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedMatMul. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator Gelu. No schema registered for this operator.
Warning: Unsupported operator LayerNormalization. No schema registered for this operator.
Warning: Unsupported operator FusedGemm. No schema registered for this operator.
tcmalloc: large alloc 3079086080 bytes == 0x5656f6cf0000 @ 0x7fb8b24aa001 0x5654577f7b30 0x5654577ce655 0x7fb8ae61c5a9 0x5654577c4c47 0x5654578b5fed 0x565457838988 0x5654578334ae 0x5654577c63ea 0x56545783460e 0x5654578334ae 0x5654577c63ea 0x56545783460e 0x5654578337ad 0x5654577c63ea 0x56545783460e 0x5654577c630a 0x56545783460e 0x5654578334ae 0x5654577c63ea 0x56545783532a 0x5654577c630a 0x5654578343b5 0x5654578334ae 0x5654578331b3 0x5654578fd182 0x5654578fd4fd 0x5654578fd3a6 0x5654578d4723 0x5654578d43cc 0x7fb8b1292bf7
^C
```
The process is killed after this without any external interruption.
Could this be a memory issue? I also tried the same experiment on a machine with over 20 GB of RAM available, but the results were similar.
## To reproduce
**Python package Requirements:**
```text
torch
transformers
onnx
onnxruntime
onnxruntime-tools
```
**Run command:**
```
python convert_graph_to_onnx.py --framework pt --model setu4993/LaBSE --quantize path_to_model/labse_bert_onnx/labse_bert.onnx --pipeline sentiment-analysis --opset 11
```
File: [`convert_graph_to_onnx.py`](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py)
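A possible workaround I have not verified (just a sketch): skip the script's deprecated `quantize` call and run onnxruntime's dynamic quantization directly on the file the optimizer already wrote. The input path below comes from the log above; the output file name is my own choice.
```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# Input: the optimized model written by the converter; output: a quantized copy.
quantize_dynamic(
    "path_to_model/saved_models_temp/labse_bert_onnx/labse_bert-optimized.onnx",
    "path_to_model/saved_models_temp/labse_bert_onnx/labse_bert-quantized.onnx",
    weight_type=QuantType.QInt8,  # dynamic (weight-only) int8 quantization
)
```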
| 07-08-2021 10:21:02 | 07-08-2021 10:21:02 | Indeed, this looks like a memory error!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,579 | closed | ImportError: cannot import name 'LineByLineTextDataset' from 'transformers' (unknown location) | when I tried to run the following codes as in **https://huggingface.co/blog/how-to-train** , it raised a bug:
```python
from transformers import LineByLineTextDataset

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="./oscar.eo.txt",
    block_size=128,
)
```
```
ImportError: cannot import name 'LineByLineTextDataset' from 'transformers' (unknown location)
```
@sgugger
| 07-08-2021 07:27:51 | 07-08-2021 07:27:51 | Yes this notebook is deprecated, you should look at the new version [here](https://github.com/huggingface/notebooks/blob/master/examples/language_modeling_from_scratch.ipynb) (or on [colab](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling_from_scratch.ipynb)).
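Roughly, the old `LineByLineTextDataset` pattern maps onto 🤗 Datasets like this (a minimal sketch, assuming a plain-text file and an already-created `tokenizer`):
```python
from datasets import load_dataset

raw = load_dataset("text", data_files={"train": "./oscar.eo.txt"})

def tokenize(examples):
    # Tokenize one line per example, mirroring block_size=128 from the old API
    return tokenizer(examples["text"], truncation=True, max_length=128)

dataset = raw["train"].map(tokenize, batched=True, remove_columns=["text"])
```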
More generally the up-to-date list of notebooks is in the [documentation](https://huggingface.co/transformers/master/notebooks.html).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,578 | closed | tuple index out of range for FlaxMBartForConditionalGeneration | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0.dev0 (installed from source)
- Platform: Google colab
- Python version: 3.7.10
- Using TPU in script?: Yes
- Dependecies were installed following this colab notebook: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/causal_language_modeling_flax.ipynb#scrollTo=Sj1mJNJa6PPS
### Who can help
@patil-suraj @patrickvonplaten
## Information
Model I am using: FlaxMBartForConditionalGeneration
The problem arises when loading the model itself
## To reproduce
Steps to reproduce the behavior:
```
from transformers import FlaxMBartForConditionalGeneration
model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt")
```
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-5-f8556949d896> in <module>()
1 from transformers import FlaxMBartForConditionalGeneration, MBart50TokenizerFast
2
----> 3 model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", from_pt=True)
4 tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-one-to-many-mmt", src_lang="en_XX")
15 frames
/usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_utils.py in from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs)
336
337 # init random models
--> 338 model = cls(config, *model_args, **model_kwargs)
339
340 if from_pt:
/usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __init__(self, config, input_shape, seed, dtype, **kwargs)
948 ):
949 module = self.module_class(config=config, dtype=dtype, **kwargs)
--> 950 super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
951
952 def init_weights(self, rng: jax.random.PRNGKey, input_shape: Tuple) -> FrozenDict:
/usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_utils.py in __init__(self, config, module, input_shape, seed, dtype)
103
104 # randomly initialized parameters
--> 105 random_params = self.init_weights(self.key, input_shape)
106
107 # save required_params as set
/usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in init_weights(self, rng, input_shape)
973 decoder_attention_mask,
974 position_ids,
--> 975 decoder_position_ids,
976 )["params"]
977
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in init(self, rngs, method, mutable, *args, **kwargs)
998 _, v_out = self.init_with_output(
999 rngs, *args,
-> 1000 method=method, mutable=mutable, **kwargs)
1001 return v_out
1002
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in init_with_output(self, rngs, method, mutable, *args, **kwargs)
967 rngs = {'params': rngs}
968 return self.apply(
--> 969 {}, *args, rngs=rngs, method=method, mutable=mutable, **kwargs)
970
971 def init(self,
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in apply(self, variables, rngs, method, mutable, capture_intermediates, *args, **kwargs)
937 method, self,
938 mutable=mutable, capture_intermediates=capture_intermediates
--> 939 )(variables, *args, **kwargs, rngs=rngs)
940
941 def init_with_output(self,
/usr/local/lib/python3.7/dist-packages/flax/core/scope.py in wrapper(variables, rngs, *args, **kwargs)
685 **kwargs) -> Union[Any, Tuple[Any, VariableDict]]:
686 with bind(variables, rngs=rngs, mutable=mutable).temporary() as root:
--> 687 y = fn(root, *args, **kwargs)
688 if mutable is not False:
689 return y, root.mutable_variables()
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in scope_fn(scope, *args, **kwargs)
1176 _context.capture_stack.append(capture_intermediates)
1177 try:
-> 1178 return fn(module.clone(parent=scope), *args, **kwargs)
1179 finally:
1180 _context.capture_stack.pop()
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
273 _context.module_stack.append(self)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
277 filter_fn = _context.capture_stack[-1]
/usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __call__(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, position_ids, decoder_position_ids, output_attentions, output_hidden_states, return_dict, deterministic)
1310 output_hidden_states=output_hidden_states,
1311 return_dict=return_dict,
-> 1312 deterministic=deterministic,
1313 )
1314
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
273 _context.module_stack.append(self)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
277 filter_fn = _context.capture_stack[-1]
/usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __call__(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, position_ids, decoder_position_ids, output_attentions, output_hidden_states, return_dict, deterministic)
905 output_hidden_states=output_hidden_states,
906 return_dict=return_dict,
--> 907 deterministic=deterministic,
908 )
909
/usr/local/lib/python3.7/dist-packages/flax/linen/module.py in wrapped_module_method(*args, **kwargs)
273 _context.module_stack.append(self)
274 try:
--> 275 y = fun(self, *args, **kwargs)
276 if _context.capture_stack:
277 filter_fn = _context.capture_stack[-1]
/usr/local/lib/python3.7/dist-packages/transformers/models/mbart/modeling_flax_mbart.py in __call__(self, input_ids, attention_mask, position_ids, output_attentions, output_hidden_states, return_dict, deterministic)
763 )
764
--> 765 last_hidden_states = outputs[0]
766 last_hidden_states = self.layer_norm(last_hidden_states)
767
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getitem__(self, k)
1810 return inner_dict[k]
1811 else:
-> 1812 return self.to_tuple()[k]
1813
1814 def __setattr__(self, name, value):
IndexError: tuple index out of range
``` | 07-08-2021 06:56:16 | 07-08-2021 06:56:16 | Hi @bhavitvyamalik
I tried this on TPU VM and it works. Where did you try it colab TPU or TPU VM ? <|||||>I tried this on Colab. Wanted to test the pipeline here before shifting it TPU VM. Is there any way to run it on google colab?<|||||>Not sure, I will try to see what's the issue with colab. But it should work just fine on TPU VM.<|||||>1. You were right! Works fine with TPU VM except for this (nothing to worry about I think):
`tcmalloc: large alloc 2444541952 bytes == 0x8f822000 @ 0x7f5700b36680 0x7f5700b57824 0x5f7b11 0x648631 0x5c38e6 0x4f30e6 0x64ee88 0x505653 0x56acb6 0x568d9a 0x50b868 0x56fb87 0x568d9a 0x68cdc7 0x67e161 0x67e1df 0x4a447c 0x4a4619 0x67e829 0x4eee7b 0x6b71ed 0x7f570094d0b3 0x5f96de`
2. I was trying to run a forward pass for generating outputs using this model. Using only `eval_step` part should suffice here right? (I referred to [this](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/causal_language_modeling_flax.ipynb#scrollTo=Sj1mJNJa6PPS) notebook here for steps)
```
linear_decay_lr_schedule_fn = optax.linear_schedule(init_value=3e-4, end_value=0, transition_steps=1000)
adamw = optax.adamw(learning_rate=linear_decay_lr_schedule_fn, b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
state = train_state.TrainState.create(apply_fn=model.__call__, params=model.params, tx=adamw)
def eval_step(params, batch):
    generated_tokens = model.generate(**batch, params=params, train=False, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])["sequences"]
    return generated_tokens

parallel_eval_step = jax.pmap(eval_step, "batch")

for model_input in model_inputs:  # model_inputs contain tokenized values of input sentences
    output_logits = parallel_eval_step(state.params, model_input)  # Model forward
```<|||||>This [colab](https://colab.research.google.com/drive/1qn7d9FkEOEIQcaLr2WFhopH64JGKyXe6?usp=sharing) should help with how to use generate on TPU<|||||>@patil-suraj -> let's check if we can solve those issue when changing to `jnp.ndarray` type-check<|||||>fixed in #12638<|||||>I also face this error during quantization, I am using fastt5 library to quantize the weights of this model **"pszemraj/grammar-synthesis-base"** , but in transformers library (one file of this library at this path(**/usr/local/lib/python3.7/dist-packages/transformers/utils/generic.py])** show the error in colab notebook.
Error is **IndexError: tuple index out of range**
code is here:
```python
!pip install fastt5

from fastT5 import (OnnxT5, get_onnx_runtime_sessions,
                    generate_onnx_representation, quantize)
from transformers import AutoTokenizer

model_or_model_path = 'pszemraj/grammar-synthesis-base'

# Step 1. convert huggingfaces t5 model to onnx
onnx_model_paths = generate_onnx_representation(model_or_model_path)

# Step 2. (recommended) quantize the converted model for fast inference and to reduce model size.
quant_model_paths = quantize(onnx_model_paths)

# step 3. setup onnx runtime
model_sessions = get_onnx_runtime_sessions(quant_model_paths)

# step 4. get the onnx model
model = OnnxT5(model_or_model_path, model_sessions)
```
The error occurs in this function (`generate_onnx_representation`).
So how can we debug the error? (Thanks.)
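A minimal way to narrow it down (just a sketch, reusing the names from the snippet above): print the installed `transformers` version and the full traceback, so the exact call that indexes past the end of the output tuple is visible.
```python
import traceback
import transformers

print(transformers.__version__)  # version mismatches between fastT5 and transformers are a common cause
try:
    onnx_model_paths = generate_onnx_representation(model_or_model_path)
except IndexError:
    traceback.print_exc()  # shows which line inside transformers/fastT5 fails
```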
One more error you may face during quantization is that **encoder is not defined** in this function **(generate_onnx_representation)** |
transformers | 12,577 | closed | [Work In Progress] SentenceTransformer implementation based on CLIP | # What does this PR do?
Sentence Transformer Flax implementation for Flax/JAX week.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Reviewers will be added when the PR has progressed. | 07-08-2021 06:36:09 | 07-08-2021 06:36:09 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,576 | closed | Summarization failure "All images are copyrighted" for certain text inputs | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-5.2.18-200.fc30.x86_64-x86_64-with-fedora-30-Thirty
- Python version: 3.7.4
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: nope
### Who can help @patil-suraj
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (PEGASUS-xsum):
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
```python
# Minimal example
import torch
bad_txt = "delivery giant DoorDash launched its Japan operation on Wednesday, a move that is expected to further intensify the already fierce competition for a slice of the country's food delivery market. Starting Wednesday, customers in Sendai, a major city in northeastern Japan, can order from hundreds of local restaurants as well as national chains via DoorDash. Japan is one of the largest delivery markets in the world, but it's still very underpenetrated relative to the size of the population and the size of the economy, DoorDash co-founder and CEO Tony Xu told Nikkei Asia in an interview on Tuesday."
model_name = 'google/pegasus-xsum' # Fails
#model_name = 'google/pegasus-cnn_dailymail' # Works fine
device = 'cuda' if torch.cuda.is_available() else 'cpu'
from transformers import PegasusForConditionalGeneration, PegasusTokenizer, PegasusConfig
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
def summarize(src_text):
batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device)
translated = model.generate(**batch,
early_stopping=True).to(device)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)[0]
return tgt_text
print(summarize(bad_txt))
# Summary output (xsum) : All images are copyrighted.
# Summary output (cnndm): delivery giant DoorDash launched its Japan operation on Wednesday.<n>The move is expected to further intensify the already fierce competition for a slice of the country's food delivery market.<n>Japan is one of the largest delivery markets in the world, but it's still very underpenetrated relative to the size of the population.
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
A relevant summary is produced by the model.
Shortening the input at the start or end seems to make it work; however, the words removed are arbitrary, and the input is already much shorter than the 512-token limit.
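One thing that might be worth trying (untested here, just a sketch) is constraining generation explicitly instead of relying on the defaults:
```python
translated = model.generate(
    **batch,
    num_beams=8,              # xsum checkpoints are usually run with beam search
    no_repeat_ngram_size=3,
    min_length=20,            # discourage degenerate one-line outputs
    max_length=64,
    early_stopping=True,
)
```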
<!-- A clear and concise description of what you would expect to happen. -->
| 07-08-2021 06:06:45 | 07-08-2021 06:06:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It seems Pegasus-XSum is a waste of time and space; let me try cnn-dailymail
<|||||>I tried cnn-dailymail. It's working. I wasted my 2.5 GB of data trying XSum. Soon, at university, I will try BART too...

|
transformers | 12,575 | closed | HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded | Dear,
I am new in Transformers, I just tried to run the syntax below:
```python
from transformers import BertForSequenceClassification, AdamW, BertConfig

model = BertForSequenceClassification.from_pretrained(
    "bert-base-cased",
    num_labels = 2,
    output_attentions = False,
    output_hidden_states = False
)
model.cuda()
```
However, I got an error message like:
`HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded with url: /bert-base-cased/d6992b8cd27d7a132eafce6a8210272329a371b1c762d453588795dd3
835593e (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7ff9e60c6dd0>: Failed to establish a new connection: [Errno -2] Name or service
not known'))
Traceback (most recent call last):
File "/root/anaconda3/envs/bert/lib/python3.7/site-packages/urllib3/connection.py", line 170, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/root/anaconda3/envs/bert/lib/python3.7/site-packages/urllib3/util/connection.py", line 73, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/root/anaconda3/envs/bert/lib/python3.7/socket.py", line 752, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
`
Any suggestions on how to fix it?
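(For reference, the `Name or service not known` part suggests DNS resolution fails before anything transformers-specific runs; a quick check, sketch only:)
```python
import socket

# If either of these raises socket.gaierror, the machine cannot resolve the Hub hosts at all
print(socket.gethostbyname("huggingface.co"))
print(socket.gethostbyname("cdn-lfs.huggingface.co"))
```
If these calls also fail, the problem is the machine's network/DNS configuration rather than transformers.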
Kind regards,
MY | 07-08-2021 02:29:38 | 07-08-2021 02:29:38 | Looks like a networking or DNS error. Can you try again, or try from another machine/network?<|||||>Dear @julien-c
Thank you for the assistance. We will try again to run this.
Kind regards,
MY<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>are you using pythonanywhere.com for hosting by any chance..?
because they have a system known as whitelisting sites. you need to go and convince them on their forum to make it whitelisted then it will work perfectly.
@julien-c is right, it is a problem of DNS<|||||>HTTPSConnectionPool(host='cdn-lfs.huggingface.co' - constant - unable to install this - all day long<|||||>from transformers import AutoModelWithLMHead, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = AutoModelWithLMHead.from_pretrained("distilbert-base-cased")
i am getting this error while I run the above code lines
ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
any thoughts on how to proceed?<|||||>I am observing this error while training gpt4all
ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Max retries exceeded with url:<|||||>> HTTPSConnectionPool(host='cdn-lfs.huggingface.co' - constant - unable to install this - all day long
I also encountered the same problem, how did you solve it later?<|||||>any updates on this one?<|||||>@vijaykumar-1551
> from transformers import AutoModelWithLMHead, AutoTokenizer
>
> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
> model = AutoModelWithLMHead.from_pretrained("distilbert-base-cased")
>
> i am getting this error while I run the above code lines
> ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co', port=443): Read timed out.
>
> any thoughts on how to proceed ?
I encountered the same issue. As a workaraund I pass `resume_download=True` argument to `from_pretrained` and when the error occurs just restart the script 🤷
For example,
```python
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map='sequential',
resume_download=True,
cache_dir='.cache/open-llama-13b-open-instruct'
)
```<|||||>hey sorry i was abit busy, i could not resolve the error yet, will update
as soon I find a solution.
Sorry for the delay.
--
Thanks & Regards
Vijay Kumar V
Python Developer
P2Fsemiconductors
https://www.p2fsemi.com/index.php
Contact: 6361960718
On Wed, 28 Jun 2023 at 21:27, Mikhail Kravets ***@***.***>
wrote:
> @vijaykumar-1551 <https://github.com/vijaykumar-1551>
>
> from transformers import AutoModelWithLMHead, AutoTokenizer
>
> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
> model = AutoModelWithLMHead.from_pretrained("distilbert-base-cased")
>
> i am getting this error while I run the above code lines
> ConnectionError: HTTPSConnectionPool(host='cdn-lfs.huggingface.co',
> port=443): Read timed out.
>
> any thoughts on how to proceed ?
>
> I encountered the same issue. As a workaraund I pass resume_download=True
> argument to from_pretrained and when the error occurs just restart the
> script 🤷
>
> For example,
>
> model = AutoModelForCausalLM.from_pretrained(
> model_name,
> torch_dtype=torch.float16,
> device_map='sequential',
> resume_download=True,
> cache_dir='.cache/open-llama-13b-open-instruct'
> )
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/12575#issuecomment-1611695095>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/A4GZTW2DAROMH56L2NHBEWTXNRH7LANCNFSM477ZOC7A>
> .
> You are receiving this because you were mentioned.Message ID:
> ***@***.***>
>
|
transformers | 12,574 | closed | [model.from_pretrained] raise exception early on failed load | Currently if `load` pretrained weights fails in `from_pretrained`, we first print a whole bunch of successful messages and then fail - this PR puts the exception first to avoid all the misleading messages.
(github produces some weird replay effects when re-using a branch that it assigns by default when editing on github)
@sgugger, @LysandreJik | 07-08-2021 01:15:53 | 07-08-2021 01:15:53 | |
transformers | 12,573 | closed | PEGASUS using ONNX | Hello @patrickvonplaten, I just uploaded my fine-tuned model to the hub and I wanted to use ONNX to convert the PyTorch model and be able to use it in a JavaScript back-end.
**I used the following command:**
`!python3 -m transformers.convert_graph_to_onnx --model Karimfayed/pegasus-SAMSum --framework pt pegasus-SAMSum.onnx`
**I receive the following error message:**
> Error while converting the model: Unrecognized configuration class <class 'transformers.configuration_pegasus.PegasusConfig'> for this kind of AutoModel: AutoModel.
Model type should be one of RetriBertConfig, T5Config, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, LongformerConfig, RobertaConfig, LayoutLMConfig, SqueezeBertConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, FSMTConfig, XLMConfig, CTRLConfig, ElectraConfig, ReformerConfig, FunnelConfig, LxmertConfig, BertGenerationConfig, DebertaConfig, DPRConfig, XLMProphetNetConfig, ProphetNetConfig.
**Is PEGASUS going to be added to the list soon or is there any way around it?**
| 07-08-2021 01:13:48 | 07-08-2021 01:13:48 | Hello @karimfayed! We're in the process of switching our approach relative to using the ONNX converter. See the following PR https://github.com/huggingface/transformers/pull/11786.
It has support for BART, so enabling support for Pegasus should be fairly simple. Please let us know if you run into any issues.
You can see the docs here: https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html
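If it helps, the export command from those docs boils down to something like this (a sketch; it assumes a `transformers` version that already ships the `transformers.onnx` package, which is why checking out the PR first matters):
```
python -m transformers.onnx --model=Karimfayed/pegasus-SAMSum onnx/pegasus-SAMSum/
```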
Please make sure to git checkout the PR first!<|||||>> Hello @karimfayed! We're in the process of switching our approach relative to using the ONNX converter. See the following PR #11786.
>
> It has support for BART, so enabling support for Pegasus should be fairly simple. Please let us know if you run into any issues.
>
> You can see the docs here: https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html
>
> Please make sure to git checkout the PR first!
Hello @LysandreJik , thank you for your help. I read both the [docs](https://235542-155220641-gh.circle-artifacts.com/0/docs/_build/html/serialization.html) and the issue and I used the command :
`!python3 -m transformers.onnx -f pytorch --model=Karimfayed/pegasus-SAMSum --features=default --optimize --optimization-level=all onnx/Karimfayed/pegasus-SAMSum/`
**but I keep getting this error**
> /usr/bin/python3: No module named transformers.onnx
**Even when I replace ` transformers.onnx` with `transformers.onnx.export ` I get this error:**
> /usr/bin/python3: Error while finding module specification for 'transformers.onnx.export' (ModuleNotFoundError: No module named 'transformers.onnx')<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,572 | closed | push_to_hub related issues (from Google Colab) | I am trying to write a transformer model to a repo at huggingface.co
!git push doesn't work after successful !git add . and !git commit
fatal: could not read username for https://huggingface.co no such device or address | 07-08-2021 00:18:59 | 07-08-2021 00:18:59 | Are you inside a Colab?
You might want to try `huggingface_hub.Repository` and pass in an authentication token
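Something along these lines (a sketch, assuming you have already run `huggingface-cli login` so a token is cached):
```python
from huggingface_hub import Repository

repo = Repository(
    local_dir="my-model",                 # hypothetical local folder
    clone_from="<NAMESPACE>/<MODEL_ID>",
    use_auth_token=True,                  # picks up the cached token
)
# ... save your model files into `local_dir` ...
repo.push_to_hub(commit_message="add model")
```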
<|||||>To write to the repo, you'll also need to login with `!huggingface-cli login`
Once you've done that, if you want to use only git commands without passing by the `Repository` class, you can do it as such:
```
!git clone https://user:$(cat /root/.huggingface/token)@huggingface.co/<NAMESPACE>/<MODEL_ID>
```
or, if you'd rather use environment variables:
```py
# Put the token in an environment variable
from huggingface_hub import HfFolder
import os
os.environ['HF_AUTH'] = HfFolder().get_token()
```
```
# Clone the repo with authentication
!git clone https://user:[email protected]/<NAMESPACE>/<MODEL_ID>
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,571 | closed | AutoTokenizer not loading gpt2 model on instance without internet connection even after caching model | I am trying to first download and cache the GPT2 Tokenizer to use on an instance that does not have internet connection. I am able to download the tokenizer on my ec2 instance that does have an internet connection but when i copy over the directory to my instance that does not have the connection it gives a connection error.
The issue seems to be with only the tokenizer and not the model
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.1
- Platform: Linux-4.14.232-176.381.amzn2.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
## Information
Tokenizer/Model I am using (GPT2, microsoft/DialogRPT-updown):
The problem arises when using:
* [X] the official example scripts: (give details below)
The tasks I am working on is:
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. On my ec2 instance that has an internet connection I run
```
from transformers import GPT2Tokenizer
GPT2Tokenizer.from_pretrained("gpt2", cache_dir="<some_directory>")
```
2. On my ec2 instance which does not have an internet connection I run the same command
```
from transformers import GPT2Tokenizer
GPT2Tokenizer.from_pretrained("gpt2", cache_dir="<some_directory>")
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1680, in from_pretrained
user_agent=user_agent,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/file_utils.py", line 1337, in cached_path
local_files_only=local_files_only,
File "/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/file_utils.py", line 1553, in get_from_cache
"Connection error, and we cannot find the requested files in the cached path."
ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
Also does not work with AutoTokenizer
## Expected behavior
After doing some digging it is looking for the added_tokens_file which does not exist. The vocab_file does exist.
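For what it's worth, a sketch of what I would expect to work offline once the files are cached (the env var and flag are documented; the rest is assumption):
```python
import os
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # tell transformers not to try the network at all

from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2", cache_dir="<some_directory>", local_files_only=True)
```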
| 07-08-2021 00:14:08 | 07-08-2021 00:14:08 | Seemed to have fixed it by following this https://github.com/huggingface/transformers/issues/9687
and using transformers 4.5.1 instead<|||||>Same problem as #12536. @LysandreJik <|||||>i got the same error for load model "bert-base-uncased"<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Is this still a problem here? I can load the tokenizer, save it and then load it again without internet connection<|||||>Both linked issues were never fixed so I would say so
<|||||>A simple workaround would be to just do:
```python
from transformers import GPT2Tokenizer
tok = GPT2Tokenizer.from_pretrained("gpt2", cache_dir="<some_directory>")
tok.save_pretrained("<some_directory>")
```
and loading it from there without internet, but I guess it would indeed be more userfriendly to allow this automatically once the tokenizer has been downloaded once<|||||>I digged a bit more into it in the linked issue #12536 (now stale) and the problem was that non existent files (such as the added tokens json in some of the tokenizers) caused a "breaking" exception offline but a simple warning online, or when the local files only flag was set to true. As you said, the workaround is super simple (even just setting local files only to true fixes it ) but it's just UX<|||||>In the other issue, I proposed a simple (very naive fix) as a PR that circumvented this behavior but I suspect it might break things elsewhere (and would require changing a pipeline test) <|||||>Hi everybody, I am getting the same error and after digging a bit deeper, I believe that the current caching mechanism depends on the Internet connection crucially for latest versions, e.g., 4.8.x and 4.9.2. I blame the function `get_from_cache`, which IMHO shouldn't work properly unless you always have Internet. Some details are below.
Simple code to reproduce the effect:
```
from transformers import AutoTokenizer, AutoModel
tok = AutoTokenizer.from_pretrained('roberta-base', unk_token='<unk>')
```
First, specifying the caching directory doesn't help, because the function `get_from_cache` computes the caching path using the so-caled `etag`:
```
filename = url_to_filename(url, etag)
```
I added a code to print the filename, the url, and the etag. When Internet is there, we get:
```
### url: https://huggingface.co/roberta-base/resolve/main/config.json etag: "8db5e7ac5bfc9ec8b613b776009300fe3685d957" filename: 733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
### url: https://huggingface.co/roberta-base/resolve/main/vocab.json etag: "5606f48548d99a9829d10a96cd364b816b02cd21" filename: d3ccdbfeb9aaa747ef20432d4976c32ee3fa69663b379deb253ccfce2bb1fdc5.d67d6b367eb24ab43b08ad55e014cf254076934f71d832bbab9ad35644a375ab
### url: https://huggingface.co/roberta-base/resolve/main/merges.txt etag: "226b0752cac7789c48f0cb3ec53eda48b7be36cc" filename: cafdecc90fcab17011e12ac813dd574b4b3fea39da6dd817813efa010262ff3f.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
### url: https://huggingface.co/roberta-base/resolve/main/tokenizer.json etag: "ad0bcbeb288f0d1373d88e0762e66357f55b8311" filename: d53fc0fa09b8342651efd4073d75e19617b3e51287c2a535becda5808a8db287.fc9576039592f026ad76a1c231b89aee8668488c671dfbe6616bab2ed298d730
### url: https://huggingface.co/roberta-base/resolve/main/config.json etag: "8db5e7ac5bfc9ec8b613b776009300fe3685d957" filename: 733bade19e5f0ce98e6531021dd5180994bb2f7b8bd7e80c7968805834ba351e.35205c6cfc956461d8515139f0f8dd5d207a2f336c0c3a83b4bc8dca3518e37b
```
Then, I have to disconnect the Internet. Now, the files are cached and should be accessed just fine.
So, we retry to create a tokenizer again, but it failes because without etag, we generate a **very different filename**:
```
### url: https://huggingface.co/roberta-base/resolve/main/tokenizer_config.json etag: None filename: dfe8f1ad04cb25b61a647e3d13620f9bf0a0f51d277897b232a5735297134132
```
The function ``get_from_cache`` has the parameter local_files_only. When, it's true, etag is not computed. However, it is not clear how to use this to enable offline creation of resources after they have been downloaded once.
Thank you!<|||||>@searchivarius `local_files_only` _should_ indeed work. You can add it to your from_pretrained calls, e.g.
```py
tok = AutoTokenizer.from_pretrained('roberta-base', unk_token='<unk>', local_files_only=True)
```
That's the very hands-on, manual way to do this for each of your model, config, tokenizer inits. You can also set this globally. See https://github.com/huggingface/transformers/blob/master/docs/source/installation.md#offline-mode<|||||>Hi @BramVanroy thanks a lot, `TRANSFORMERS_OFFLINE`, indeed, resolves the issue!<|||||>it seems very strange for me that local_files_only=True still dosen't work for me
even though it works for BertConfig.from_pretrained
i must follow what this https://github.com/huggingface/transformers/issues/12571#issuecomment-901280736 does |
transformers | 12,570 | closed | Can't Select Specific GPU by TrainingArguments | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Jupyter Notebook on Ubuntu
- Python version: 3.7
- PyTorch version (GPU?): 1.8.0+cu111
- Using GPU in script?: No, By Jupyter Notebook
- Using distributed or parallel set-up in script?:It is distributed but I don't want that
### Who can help
- trainer: @sgugger
find by git-blame: @philschmid
## To reproduce
By TrainingArguments, I want to set up my compute device only to torch.device(type='cuda', index=1).
If I do not set local_rank when initializing TrainingArguments, it computes on both GPUs.
Steps to reproduce the behavior:
```
from transformers import TrainingArguments, Trainer, EvalPrediction
training_args = TrainingArguments(
learning_rate=1e-4,
num_train_epochs=6,
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
logging_steps=200,
output_dir="./training_output",
overwrite_output_dir=True,
# The next line is important to ensure the dataset labels are properly passed to the model
remove_unused_columns=False,
local_rank= 1
)
```
Then you will get `ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set`
But after I set
```
import os
os.environ["RANK"]="1"
```
I get `ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable WORLD_SIZE expected, but not set`
These errors do not happen if I do not set local_rank when initializing TrainingArguments, even though I don't set any environment variables.
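A minimal sketch of the only workaround I am aware of (not verified on every setup): hide the other GPUs before anything CUDA-related is initialized, so the remaining device shows up as `cuda:0` inside the process.
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # expose only physical GPU 1; it becomes cuda:0 in this process

from transformers import TrainingArguments  # import after setting the variable

training_args = TrainingArguments(output_dir="./training_output")
```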
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I want to set up my compute device only to torch.device(type='cuda', index=1). | 07-07-2021 23:51:13 | 07-07-2021 23:51:13 | You should use the env variable `CUDA_VISIBLE_DEVICES` to set the GPUs you want to use. If you have multiple GPUs available, the `Trainer` will use all of them, that is expected and not a bug.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I have a similar challenge. I have 3 GPUs in the server. I run the script like this:
```
CUDA_VISIBLE_DEVICES=2 python main.py
```
however, when I print `training_args.device`, it still shows cuda:0. `model.device` shows the same thing
this does not help either:
```
export CUDA_VISIBLE_DEVICES=2
python main.py
```
I am using Seq2SeqTrainingArguments<|||||>This is normal, PyTorch names all visible devices from 0 to the number -1. So cuda0 in PyTorch is the first device you set as available, in this case GPU 2.<|||||>I'm having the same issue with the Trainer class.
Even after setting CUDA_VISIBLE_DEVICES, it still attempts to use all GPUs on my machine. This is problematic, as I share this server with other users. (And even during times with open GPUs, there are more than 8 GPUS present and it exhausts peer mapping resources.)
Error can be reproduced using the LED fine-tune Colab notebook demo if downloaded to a multi-GPU machine. https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb
with the following added code inserted in the first cell:
```python
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
```
I've attempted setting the CUDA_VISIBLE_DEVICES environmental variable other ways (ie. from terminal, etc.) with similar results.
I've also attempted using the PyTorch method of specifying GPU, with similar results:
import torch
DEFAULT_DEVICE = "cuda"
torch.cuda.set_device(0)<|||||>No you need to set that environment variable with the launch command, not inside your training script:
```
CUDA_VISIBLE_DEVICES="0" python main.py
```<|||||>So is there any way to do this within a notebook?<|||||>You need to set the variable before launching the jupyter notebook
```
CUDA_VISIBLE_DEVICES="0" jupyter notebook
```<|||||>Ahhh, thank you. That successfully restricts the GPUs accessed in the notebook. <|||||>> You need to set the variable before launching the jupyter notebook
>
> ```
> CUDA_VISIBLE_DEVICES="0" jupyter notebook
> ```
It's very inconvenient each time to restart jupyter lab/notebook to just change the device. Also, I may want to use several notebooks on different devices. PytorchLightening, for example, gives you freedom to select device for each run.<|||||>In Jupyter Notebook, we can use one of these:
```
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
```
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
```<|||||>Referring to all above solutions, all my GPUs are running or get CUDA device errors.
As an alternative, I override the TrainingArguments class. However, it might have undiscovered issues.
* backgrounds : I have more than one GPU. Using the Hugging Face Trainer, all devices are involved in training.
* problems : Trainer [seems to use ddp after checking device and n_gpus](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/trainer.py#L1159) method in TrainingArugments , and `_setup_devices` in [TrainingArguments](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/training_args.py#L1105) controls overall device setting.
* temporary remedies : Instead of overriding `_setup_devices` (since it relates multiple dependent functions), I manually set `device` method and `n_gpus` method. In this case, I don't need to give any `os.environ` or `CUDA_VISIBLE_DEVICES `in front of python commands for single use. However, it may require if you want to use selected two or three gpus out of 4.
```
class customTrainingArguments(TrainingArguments):
def __init__(self,*args, **kwargs):
super(customTrainingArguments, self).__init__(*args, **kwargs)
@property
@torch_required
def device(self) -> "torch.device":
"""
The device used by this process.
Name the device the number you use.
"""
return torch.device("cuda:3")
@property
@torch_required
def n_gpu(self):
"""
The number of GPUs used by this process.
Note:
This will only be greater than one when you have multiple GPUs available but are not using distributed
training. For distributed training, it will always be 1.
"""
# Make sure `self._n_gpu` is properly setup.
# _ = self._setup_devices
# I set to one manullay
self._n_gpu = 1
return self._n_gpu
```<|||||>In a related problem, CUDA_VISIBLE_DEVICES doesn't seem to work, as I set it to use only the second gpu, but it always uses the first. I tried @kimcando 's solution but it just gives another error: "module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cuda:1", even after sending the data to device cuda:1. <|||||>@hrmello have you been able to solve this problem? I'm facing the same issue<|||||>> @hrmello have you been able to solve this problem? I'm facing the same issue
@ngonhi Unfortunately not. I found out that if you don't specify the GPU, it finds and use all of them. But if you only want to run your code using a single one, you must use gpu 0. <|||||>It feels folks need this feature so might be worth reopening the issue. @sgugger?<|||||>We can reopen it, but there is nothing I can do to fix it as it is part of the launching process of your script, which is implemented in PyTorch, not in Transformers :man_shrugging:
We are implementing this option in the `accelerate launcher` [here](https://github.com/huggingface/accelerate/pull/732) for folks interested.<|||||>Thank you. This is very helpful. <|||||>> Referring to all above solutions, all my GPUs are running or get CUDA device errors. As alternatives, I override TrainingArguments Class. However, it might have undiscovered issues though.
>
> * backgrounds : I have more than one GPUs. Using huggingface trainer, all devices are involved in training.
> * problems : Trainer [seems to use ddp after checking device and n_gpus](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/trainer.py#L1159) method in TrainingArugments , and `_setup_devices` in [TrainingArguments](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/training_args.py#L1105) controls overall device setting.
> * temporary remedies : Instead of overriding `_setup_devices` (since it relates multiple dependent functions), I manually set `device` method and `n_gpus` method. In this case, I don't need to give any `os.environ` or `CUDA_VISIBLE_DEVICES `in front of python commands for single use. However, it may require if you want to use selected two or three gpus out of 4.
>
> ```
> class customTrainingArguments(TrainingArguments):
> def __init__(self,*args, **kwargs):
> super(customTrainingArguments, self).__init__(*args, **kwargs)
>
> @property
> @torch_required
> def device(self) -> "torch.device":
> """
> The device used by this process.
> Name the device the number you use.
> """
> return torch.device("cuda:3")
>
> @property
> @torch_required
> def n_gpu(self):
> """
> The number of GPUs used by this process.
> Note:
> This will only be greater than one when you have multiple GPUs available but are not using distributed
> training. For distributed training, it will always be 1.
> """
> # Make sure `self._n_gpu` is properly setup.
> # _ = self._setup_devices
> # I set to one manullay
> self._n_gpu = 1
> return self._n_gpu
> ```
It works. I just have to comment out the `@torch_required` and add `import torch` at line 1, then I can freely choose whatever GPU I want. Thanks a million.<|||||>Thank you @kimcando I finally got my code to run on a single, specified GPU with your modification! <|||||>> Referring to all above solutions, all my GPUs are running or get CUDA device errors. As alternatives, I override TrainingArguments Class. However, it might have undiscovered issues though.
>
> * backgrounds : I have more than one GPUs. Using huggingface trainer, all devices are involved in training.
> * problems : Trainer [seems to use ddp after checking device and n_gpus](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/trainer.py#L1159) method in TrainingArugments , and `_setup_devices` in [TrainingArguments](https://github.com/huggingface/transformers/blob/a22db885b41b3a1b302fc206312ee4d99cdf4b7c/src/transformers/training_args.py#L1105) controls overall device setting.
> * temporary remedies : Instead of overriding `_setup_devices` (since it relates multiple dependent functions), I manually set `device` method and `n_gpus` method. In this case, I don't need to give any `os.environ` or `CUDA_VISIBLE_DEVICES `in front of python commands for single use. However, it may require if you want to use selected two or three gpus out of 4.
>
> ```
> class customTrainingArguments(TrainingArguments):
> def __init__(self,*args, **kwargs):
> super(customTrainingArguments, self).__init__(*args, **kwargs)
>
> @property
> @torch_required
> def device(self) -> "torch.device":
> """
> The device used by this process.
> Name the device the number you use.
> """
> return torch.device("cuda:3")
>
> @property
> @torch_required
> def n_gpu(self):
> """
> The number of GPUs used by this process.
> Note:
> This will only be greater than one when you have multiple GPUs available but are not using distributed
> training. For distributed training, it will always be 1.
> """
> # Make sure `self._n_gpu` is properly setup.
> # _ = self._setup_devices
> # I set to one manullay
> self._n_gpu = 1
> return self._n_gpu
> ```
This is the **best** solution for now, I would like to provide more usage for new bees,
(we should comment out `@torch_required`.
We can specify the GPU by changing the `return torch.device("cuda:3")` in the `def device(self)`,
After overloading class `customTrainingArguments`,
we only need to `training_args = customTrainingArguments(...)` **instead** of `training_args = TrainingArguments(...)`
the arguments inside are as usual
It is the **simplest** way now <|||||>> This is the best solution for now, I would like to provide more usage for new bees,
(we should comment out @torch_required.
We can specify the GPU by changing the return torch.device("cuda:3") in the def device(self),
> After overloading class customTrainingArguments,
we only need to training_args = customTrainingArguments(...) instead of training_args = TrainingArguments(...)
the arguments inside are as usual
It is the simplest way now
Which version of transformers is it working with?
I get error
```
AttributeError: 'CustomTrainingArguments' object has no attribute 'distributed_state'
```
<|||||>I followed this tutorial to fine-tune Whisper; however, the code won't select specific GPUs and only runs on gpu:0.
I used CUDA_VISIBLE_DEVICES.
Code used:
`from transformers import Seq2SeqTrainingArguments
from transformers import Seq2SeqTrainer`
best regards<|||||>> > This is the best solution for now, I would like to provide more usage for new bees,
> > (we should comment out @torch_required.
> > We can specify the GPU by changing the return torch.device("cuda:3") in the def device(self),
>
> > After overloading class customTrainingArguments,
> > we only need to training_args = customTrainingArguments(...) instead of training_args = TrainingArguments(...)
> > the arguments inside are as usual
> > It is the simplest way now
>
> Which version of transformers is it working with?
>
> I get error
>
> ```
> AttributeError: 'CustomTrainingArguments' object has no attribute 'distributed_state'
> ```
Did you ever fix this error?
EDIT: Seems like using an older version (4.29.2) worked here. |
transformers | 12,569 | closed | Remove logging of GPU count etc from run_t5_mlm_flax.py | Successfully logging this information requires Pytorch. For the purposes of this script we are not using Pytorch.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-07-2021 20:57:14 | 07-07-2021 20:57:14 | @patrickvonplaten Hey Patrick can you please check this PR? |
transformers | 12,568 | closed | Pegasus from Pytorch to tensorflow | I have fine-tuned PEGASUS model for abstractive summarization using [this script](https://gist.github.com/jiahao87/50cec29725824da7ff6dd9314b53c4b3) which uses huggingface.
The output model is in pytorch.
On huggingface [docs](https://huggingface.co/transformers/serialization.html) the following is supposed to do the required conversion:
`python convert_graph_to_onnx.py --framework <pt, tf> --model bert-base-cased bert-base-cased.onnx`
I use colab and I ran the following command to transform my pegasus model:
`!python convert_graph_to_onnx.py --framework <pt, tf> --model ./results/checkpoint-4000 ./results/checkpoint-4000.onnx`
I keep getting the following message which is confusing as it is written in the documentation that the script convert_graph_to_onnx.py is at the root of the transformers sources:

**Thank you in advance.** | 07-07-2021 19:30:33 | 07-07-2021 19:30:33 | You should update `<pt, tf>` to reflect the library you want to use to export the graph. Either `pt` or `tf`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
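A closing note for anyone who hits the same message: `<pt, tf>` in the docs is a placeholder for the framework name, not literal syntax. Assuming the fine-tuned checkpoint above is a PyTorch one, the command becomes something like the line below (the script itself ships inside the `transformers` sources, e.g. under `src/transformers/`, so run it from a checkout of the repository or point to its full path):
```
python convert_graph_to_onnx.py --framework pt --model ./results/checkpoint-4000 ./results/checkpoint-4000.onnx
```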
transformers | 12,567 | closed | Init pickle | # What does this PR do?
This PR is an alternative to #12552 and properly sets `_LazyModule` as a class used in all inits (no custom subclasses anymore) to make the `transformers` module picklable. It also cleans up nicely the inits.
The only downside is that new models started before this PR but not yet merged will need a rebase for the intermediate init.
| 07-07-2021 17:49:46 | 07-07-2021 17:49:46 | |
transformers | 12,566 | closed | [examples/hybrid_clip] fix loading clip vision model | # What does this PR do?
Fix loading config when the model is of type `clip_vision_model`. | 07-07-2021 17:20:09 | 07-07-2021 17:20:09 | |
transformers | 12,565 | closed | tfhub.de -> tfhub.dev | 07-07-2021 16:10:33 | 07-07-2021 16:10:33 | strictly speaking, tensorboard.dev is not a part of tfhub.dev, is it?
(sorry for piggybacking on this tiny PR 😂)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
|
transformers | 12,564 | closed | Added fsck_etags to verify cache consistency. | # What does this PR do?
A prototype function is added to check cache consistency.
Note that it does not verify the data is correct, just that the data hashes are consistent.
It may be used as: `python -c 'from transformers import file_utils; file_utils.fsck_etags()'`
No output means all the local etags match their data. Shown file by file if info logging is enabled.
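For illustration only, the kind of check described above could be sketched roughly as below. It assumes a cache layout where every cached blob has a sidecar `<file>.json` recording its `url` and `etag`, and that the etag is a plain content hash, which is not guaranteed for every file; the actual `fsck_etags` prototype may differ.
```python
import hashlib
import json
import os

def sketch_fsck(cache_dir):
    """Hypothetical sketch: flag cached files whose content hash no longer matches the recorded etag."""
    for name in os.listdir(cache_dir):
        if not name.endswith(".json"):
            continue
        meta_path = os.path.join(cache_dir, name)
        blob_path = meta_path[: -len(".json")]
        if not os.path.isfile(blob_path):
            continue
        with open(meta_path) as f:
            etag = (json.load(f).get("etag") or "").strip('"')
        with open(blob_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if etag and digest not in etag:
            print(f"possible corruption: {blob_path}")
```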
#12557
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-07-2021 15:51:02 | 07-07-2021 15:51:02 | Hello, thank you for your contribution! How do you envision using this? Running this locally on a smallish 23GB cache takes 46 seconds on my machine, so this isn't something that can be ran every time.
How would you recommend we approach solving #12557 using your proposal?<|||||>Hey, thanks for your reply.
Personally I was running into crashes from data corruption and needed a way to handle that situation. I thought others might get into the same scenario, so shared this code. When it moves the corrupt files away they get redownloaded.
This might work better if a standalone tool were added that users could run. I have severe cognitive and computer issues and contribute in very small bursts.
Regarding thoughts like 12557:
- this introduces the concept of fscking data, but doesn't solve the issue
- checking etags from the network could help, certificate pinning helps. The fscking could be merged into an 'update all models' function, since the network only has the latest etags.
- regarding automation speed, it might be faster to check only the files that are loaded, when they are loaded, or only after download or at user request, or have it disableable.
- things could be made more normative if the cache used git repositories in some way. git already has fsck and signatures to some degree, and users can make their own git repositories easily. not sure why the design decision was made not to do that, but the hashed filenames certainly provide more ways to verify.
- regarding 12557, i can come up with lots of ideas, but if there were some public messaging around the concept then more experienced cryptographers and security specialists would likely eventually weigh in. i'm just an old disabled hobbyist.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,563 | closed | Issue in terms of accuracy of onnx converted models of SQUAD based ROBERTA on legal domain. | Hi Team,
I was using the Hugging Face utility from the latest transformers version to convert a SQuAD-based RoBERTa model to ONNX. After conversion, I observed that the accuracy dipped significantly even though I never quantized the model. Any suggestions or advice on what could be the reason? Does ONNX conversion result in a loss of prediction quality? If so, is there any parameter that can be experimented with to retain accuracy while still improving runtime performance?
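One way to make a report like this actionable is to compare the exported model's raw logits against the original PyTorch model on the same inputs. A rough sketch, assuming an `onnxruntime` session over the exported file and the original checkpoint loaded side by side (paths and the example question/context are placeholders):
```python
import numpy as np
import onnxruntime
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

checkpoint = "path/to/roberta-squad-checkpoint"  # hypothetical path
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint).eval()

inputs = tokenizer("Who signed the lease?", "The lease was signed by ACME Corp.", return_tensors="pt")
with torch.no_grad():
    start_pt = model(**inputs).start_logits.numpy()

session = onnxruntime.InferenceSession("model.onnx")
onnx_inputs = {k: v.numpy() for k, v in inputs.items() if k in {i.name for i in session.get_inputs()}}
# assumes the first graph output corresponds to the start logits
start_onnx = session.run(None, onnx_inputs)[0]

print("max abs diff:", np.abs(start_pt - start_onnx).max())
```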
| 07-07-2021 14:50:44 | 07-07-2021 14:50:44 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
If you'd like us to investigate this as a bug, please provide additional information so that we may help; for example, the ID of a pretrained model on the hub that loses accuracy when being converted, the commands run, the environment, library version.
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,562 | closed | Double check for attribute num_examples | # What does this PR do?
As pointed out by #12479, the `isinstance` check for `IterableDataset` and its subclasses does not look at the actual type, only at whether the class implements certain methods, not at whether it has all the expected attributes. This PR adds a double check where necessary to avoid an `AttributeError`.
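For context, the situation being guarded against looks roughly like the sketch below: an object can satisfy the `isinstance` check without actually carrying the extra attribute the `Trainer` wants to read, so the attribute access is checked explicitly (names here are illustrative, not the exact code of the PR):
```python
from torch.utils.data import IterableDataset

def num_examples_or_none(dataset):
    # the isinstance check alone is not enough: an object may pass it while
    # still lacking the optional attribute, so guard the access with hasattr
    if isinstance(dataset, IterableDataset) and hasattr(dataset, "num_examples"):
        return dataset.num_examples
    return None
```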
Fixes #12479 | 07-07-2021 14:35:05 | 07-07-2021 14:35:05 | Failure looks unrelated but circleCI is not letting me re-run the tests, so merging and watching master. |
transformers | 12,561 | closed | Don't stop at num_epochs when using IterableDataset | # What does this PR do?
Currently, when someone uses an `IterableDataset` inside the `Trainer`, the training loop stops after three passes over the iterable dataset (the default `num_train_epochs`). This PR fixes that by relying solely on `max_steps` (which has to be set, otherwise an error is raised at init).
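In practice this means a run over an iterable/streaming dataset needs `max_steps` set explicitly, since there is no epoch length to infer; a minimal illustration of the relevant argument (the other values are placeholders):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    max_steps=10_000,  # required with an IterableDataset
    per_device_train_batch_size=8,
)
```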
Fixes #12499 | 07-07-2021 14:18:06 | 07-07-2021 14:18:06 | |
transformers | 12,560 | closed | Adding prepare_decoder_input_ids_from_labels methods to all TF ConditionalGeneration models | 07-07-2021 14:14:20 | 07-07-2021 14:14:20 | ||
transformers | 12,559 | closed | [Flax] Allow retraining from save checkpoint | # What does this PR do?
This PR allows all Flax scripts to start training from already pretrained checkpoints.
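The core of the change is the usual branch on `--model_name_or_path`; a sketch in the spirit of the Flax masked-LM example script (not the exact diff, and `model_args`/`config` come from the surrounding script):
```python
from transformers import FlaxAutoModelForMaskedLM

# only build a freshly initialized model when no checkpoint was given
if model_args.model_name_or_path:
    model = FlaxAutoModelForMaskedLM.from_pretrained(model_args.model_name_or_path, config=config)
else:
    model = FlaxAutoModelForMaskedLM.from_config(config)
```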
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-07-2021 13:40:33 | 07-07-2021 13:40:33 | |
transformers | 12,558 | closed | Fix group_lengths for short datasets | # What does this PR do?
This PR adds a fix in the `group_lengths` function used in all language modeling examples so it also works for short datasets (without returning a dataset of length 0). The fix was discussed in the issue mentioned below.
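For reference, the guard amounts to something like the sketch below inside the grouping function of the example scripts (naming follows `group_texts` from `run_clm.py`; the exact change may differ slightly):
```python
def group_texts(examples, block_size=1024):
    # concatenate all texts, then split into chunks of block_size
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated[list(examples.keys())[0]])
    # only drop the remainder when there is at least one full block;
    # otherwise keep everything instead of returning an empty dataset
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }
```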
Fixes #12438 | 07-07-2021 13:14:58 | 07-07-2021 13:14:58 | |
transformers | 12,557 | closed | Cached data not checked for integrity | ### Who can help
@julien-c @sgugger
## To reproduce
Steps to reproduce the behavior:
1. Mutate a cache file or cut the internet while downloading
2. Load the data e.g. via a pipeline
3. Either the incorrect data loads fine, or an unhelpful error is thrown
## Expected behavior
The files are hashed in git and delivered via https with the hash included. It would be good for the cache system to verify this hash and report corruption to help the user, especially if a model fails to load. It would be great if the hash system were a little integrated with git so that git signatures of the hashes could be checked some day. Alternatively/additionally certificate pinning could be used in the library to help protect the user.
Users should be informed, when using the library, that it does not authenticate the models when they are loaded, and that it is easy for a malicious party to alter them undetected.
| 07-07-2021 13:06:50 | 07-07-2021 13:06:50 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,556 | closed | Slow gpt2 training on TPU with run_clm_flax.py | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cpu (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): gpt2
The problem arises when using:
* [X] the official example scripts: (give details below)
https://huggingface.co/flax-community/papuGaPT2/blob/main/run_clm_flax.py
with only small modification added print(jax.device_count()) in main() to see if TPU is being used.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Oscar, polish dataset 47GB:
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_pl" \
## To reproduce
Steps to reproduce the behavior:
1. login to remote VM:
```./google-cloud-sdk/bin/gcloud alpha compute tpus tpu-vm ssh dishcloth --zone us-central1-a --project hf-flax```
2. activate venv:
source ~/papugapt2/bin/activate
3. run pretraining:
```
cd papuGaPT2
bash pretrain_model.sh
```
## Expected behavior
Expected to get training speed around 1s/it on this dataset (as this was speed achieved before updating to latest scripts from master)
But instead the speed is around 10-20s/it
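For what it's worth, a quick way to confirm that JAX actually sees the TPU cores (and has not silently fallen back to CPU) before digging into the script itself:
```python
import jax

print(jax.device_count())  # should be 8 on a v3-8 TPU VM
print(jax.devices())       # should list TPU devices, not CPU
```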
| 07-07-2021 12:46:55 | 07-07-2021 12:46:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,555 | closed | Display error message for pipeline loading failures | # What does this PR do?
Displays an error message when a model class fails to load a model for a pipeline.
This helped me understand what was going on when pipelines failed to load.
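The change boils down to surfacing the exception that used to be swallowed when each candidate class is tried; schematically (a sketch, not the exact diff):
```python
import logging

logger = logging.getLogger(__name__)

def try_load(model_classes, model_name, **kwargs):
    for model_class in model_classes:
        try:
            return model_class.from_pretrained(model_name, **kwargs)
        except Exception as err:  # deliberately broad: we only want to report it
            # previously this failure was silent; logging it tells users *why*
            # a given class could not load the checkpoint
            logger.error("%s failed to load %s: %s", model_class.__name__, model_name, err)
    raise ValueError(f"Could not load {model_name} with any of {model_classes}")
```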
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@Narsil @LysandreJik | 07-07-2021 12:25:19 | 07-07-2021 12:25:19 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I think this should be left to the following error message, that will let users know what classes were tried (all of them) and that they failed. The exact code should be left as follow-up.
The warning added here is just noise in regular usage, as it is expected that some classes won't work (so it always issues a warning even when nothing is wrong).
This is not a super strong opinion though.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,554 | closed | Issue converting Flax model to Pytorch | When using the following script to convert a trained flax model to pytorch, the model seems to perform extremely poorly.
```
from transformers import RobertaForMaskedLM
model = RobertaForMaskedLM.from_pretrained("./", from_flax=True)
model.save_pretrained("./")
```
```python
from transformers import RobertaForMaskedLM, FlaxRobertaForMaskedLM
import numpy as np
import torch
model_fx = FlaxRobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish")
model_pt = RobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish", from_flax=True)
input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)
input_ids_pt = torch.tensor(input_ids)
logits_pt = model_pt(input_ids_pt).logits
print(logits_pt)
logits_fx = model_fx(input_ids).logits
print(logits_fx)
```
Comparing gives the following output.
```
tensor([[[ 1.7789, -13.5291, -11.2138, ..., -5.2875, -9.3274, -4.7912],
[ 2.3076, -13.4161, -11.1511, ..., -5.3181, -9.0602, -4.6083],
[ 2.6451, -13.4425, -11.0671, ..., -5.2838, -8.8323, -4.2280],
...,
[ 1.9009, -13.6516, -11.2348, ..., -4.9726, -9.3278, -4.6060],
[ 2.0522, -13.5394, -11.2804, ..., -4.9960, -9.1956, -4.5691],
[ 2.2570, -13.5093, -11.2640, ..., -4.9986, -9.1292, -4.3310]],
[[ 1.7789, -13.5291, -11.2138, ..., -5.2875, -9.3274, -4.7912],
[ 2.3076, -13.4161, -11.1511, ..., -5.3181, -9.0602, -4.6083],
[ 2.6451, -13.4425, -11.0671, ..., -5.2838, -8.8323, -4.2280],
...,
[ 1.9009, -13.6516, -11.2348, ..., -4.9726, -9.3278, -4.6060],
[ 2.0522, -13.5394, -11.2804, ..., -4.9960, -9.1956, -4.5691],
[ 2.2570, -13.5093, -11.2640, ..., -4.9986, -9.1292, -4.3310]]],
grad_fn=<AddBackward0>)
[[[ 0.1418128 -14.170926 -11.12649 ... -7.542998 -10.79537
-9.382975 ]
[ 1.7505689 -13.178099 -10.356588 ... -6.794136 -10.567211
-8.6670065 ]
[ 2.0270724 -13.522658 -10.372475 ... -7.0110755 -10.396935
-8.419178 ]
...
[ 0.19080782 -14.390833 -11.399942 ... -7.469897 -10.715849
-9.234054 ]
[ 1.3052869 -13.332332 -10.702984 ... -6.9498534 -10.813769
-8.608736 ]
[ 1.6442876 -13.226774 -10.59941 ... -7.0290956 -10.693554
-8.457008 ]]
[[ 0.1418128 -14.170926 -11.12649 ... -7.542998 -10.79537
-9.382975 ]
[ 1.7505689 -13.178099 -10.356588 ... -6.794136 -10.567211
-8.6670065 ]
[ 2.0270724 -13.522658 -10.372475 ... -7.0110755 -10.396935
-8.419178 ]
...
[ 0.19080782 -14.390833 -11.399942 ... -7.469897 -10.715849
-9.234054 ]
[ 1.3052869 -13.332332 -10.702984 ... -6.9498534 -10.813769
-8.608736 ]
[ 1.6442876 -13.226774 -10.59941 ... -7.0290956 -10.693554
-8.457008 ]]]
``` | 07-07-2021 11:58:51 | 07-07-2021 11:58:51 | Running the following command:
```python
from transformers import RobertaForMaskedLM, FlaxRobertaForMaskedLM
import numpy as np
import torch
model_fx = FlaxRobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish")
model_pt = RobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish", from_flax=True)
input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)
input_ids_pt = torch.tensor(input_ids)
logits_pt = model_pt(input_ids_pt).logits
print(logits_pt)
logits_fx = model_fx(input_ids).logits
print(logits_fx)
```
should give more or less identical results<|||||>Just corrected the pt weights. If you run:
```python
from transformers import RobertaForMaskedLM, FlaxRobertaForMaskedLM
import numpy as np
import torch
model_fx = FlaxRobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish")
model_pt = RobertaForMaskedLM.from_pretrained("birgermoell/roberta-swedish")
input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)
input_ids_pt = torch.tensor(input_ids)
logits_pt = model_pt(input_ids_pt).logits
print(logits_pt)
logits_fx = model_fx(input_ids).logits
print(logits_fx)
```
You should see equal results. The checkpoint was somehow incorrectly converted.<|||||>Note that one should convert checkpoints with:
```python
from transformers import RobertaForMaskedLM
model = RobertaForMaskedLM.from_pretrained("...", from_flax=True)
model.save_pretrained("./")
```
and not the `AutoModel....` classes.
Also it's important to realize that the lm head layer is actually tied to the input word embedding layer, which is why Flax just doesn't save those weights. Then, when converting those weights to PyTorch, PyTorch reports them as missing, but since the weights are tied, PyTorch would have overwritten them anyway with the input embeddings, which is why the warning:
```
Some weights of RobertaForMaskedLM were not initialized from the Flax model and are newly initialized: ['lm_head.decoder.bias', 'lm_head.decoder.weight']
```
doesn't matter.<|||||>@BirgerMoell Also note that your local `.git` repository must be huge since you've essentially uploaded ~100 checkpoints of 500 MB each -> so your local `.git` stores 50 GB already I think.<|||||>Widget seems to work: https://huggingface.co/birgermoell/roberta-swedish?text=Var+kan+jag+hitta+n%C3%A5gon+%3Cmask%3E+talar+engelska%3F<|||||>Awesome. Just to clarify. Once I'm done with training, this script should help me convert the model to pytorch.
```
from transformers import RobertaForMaskedLM
model = RobertaForMaskedLM.from_pretrained("...", from_flax=True)
model.save_pretrained("./")
```<|||||>@patrickvonplaten the uploaded model is still performing poorly so I'm not 100% the issue is fully resolved.
<img width="1801" alt="Screenshot 2021-07-07 at 16 26 59" src="https://user-images.githubusercontent.com/1704131/124778176-100ba900-df41-11eb-8345-b3b51e0d1e9f.png">
As you can see it outputs empty tokens.<|||||>> @patrickvonplaten the uploaded model is still performing poorly so I'm not 100% the issue is fully resolved.
> <img alt="Screenshot 2021-07-07 at 16 26 59" width="1801" src="https://user-images.githubusercontent.com/1704131/124778176-100ba900-df41-11eb-8345-b3b51e0d1e9f.png">
> As you can see it outputs empty tokens.
Hi @BirgerMoell, I'm training a RoBERTa model too using JAX during this community week -- model [here](https://huggingface.co/flax-community/indonesian-roberta-base). I got about 2.188 evaluation loss, yet the results are still somewhat jibberish despite the result. I think our models are, somehow, trained incorrectly? Or possibly require more data cleaning of some sort.<|||||>@w11wo Yeah. Something is definitely up. I think a good idea would be that people who work with similar models figure out a good way to clean the data and look at other things that might be wrong.<|||||>Facing same issue here, trained a model with Flax / Jax, then saved. When loading in Pytorch via "from Flax = True" , I have silly output despite training showing OK loss... Did you manage to find a solution or understand the issue ? <|||||>Hi @jppaolim !
In my case, I loaded the earlier weights of the model (from the first few epochs), instead of the fully-trained model weights from the last training epoch. Loading the right model weights fixed it for me.
Another way to fix it might be training for longer.
Hope this helps! :) |
transformers | 12,553 | closed | `model_name_or_path` does not seem to load in previously trained checkpoints | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: Using TPU
- Using distributed or parallel set-up in script?: Yes
## Information
Model I am using is RoBERTa, and it is a part of the flax-community week.
I am trying to load a previously trained model checkpoint by setting the 'model_name_or_path' flag of an MLM script, which can be found [here](https://huggingface.co/flax-community/roberta-large-scandi/blob/main/src/run_mlm_flax_stream.py), but it seems that the model is initialized with new weights...
## Expected behavior
Seeing that the model training loss would continue from where it stopped, and not seeing that the new model metrics simply mimicked the already trained metrics. | 07-07-2021 10:57:45 | 07-07-2021 10:57:45 | Can you post a code snippet or make a Colab to reproduce the error?<|||||>@MalteHB it would be nice if you could provide an exact code snippet that we can copy paste to reproduce the error. Otherwise I don't really know what code you've run. I tried re-starting to train from a pretrained checkpoint and it works just fine on my side.<|||||>@NielsRogge @patrickvonplaten yes of course, sorry!
As mentioned we are using a modified version of the `run_mlm_flax_stream.py` script which you can find [here](https://huggingface.co/flax-community/roberta-large-scandi/blob/main/src/run_mlm_flax_stream.py), and the code used to run the script is, where `"/home/Z6HJB/roberta-large-scandi/roberta-base-pretrained-scandinavian/"` is a directory with a `config.json`and a `flax_model.msgpack`:
```
export MODEL_DIR=/home/Z6HJB/roberta-large-scandi/roberta-base-pretrained-scandinavian/
source /home/Z6HJB/test/bin/activate
python3 ./src/run_mlm_flax_stream.py \
--model_name_or_path="/home/Z6HJB/roberta-large-scandi/roberta-base-pretrained-scandinavian/" \
--output_dir="/home/Z6HJB/roberta-large-scandi/model_continued2" \
--tokenizer_name="${MODEL_DIR}" \
--dataset_name="mc4" \
--dataset_config_name="unshuffled_deduplicated_en" \
--max_seq_length="128" \
--per_device_train_batch_size="128" \
--per_device_eval_batch_size="128" \
--learning_rate="3e-4" \
--warmup_steps="1000" \
--overwrite_output_dir \
--adam_beta1="0.9" \
--adam_beta2="0.98" \
--num_train_steps="1000000" \
--num_eval_samples="5000" \
--save_steps="1000" \
--logging_steps="25" \
--eval_steps="1000" \
--push_to_hub \
#--config_name="${MODEL_DIR}" \
#--model_type="roberta" \
```
Let me know if this suffices or if you need more!
I might be busy for the rest of the day since I have a football match to watch 🇩🇰 🇩🇰 🇩🇰 🇩🇰 🇩🇰 🇩🇰 <|||||>I figured it out this line: https://huggingface.co/flax-community/roberta-large-scandi/blob/main/src/run_mlm_flax_stream.py#L439
Forces the model to be reinitialized from scratch every time. I see that the official script also does that -> I'll open a PR to fix it in the official script and then you should be able to copy paste from it :-) <|||||>What an awesome guy, you are @patrickvonplaten! Thank you so much!
transformers | 12,552 | closed | Make LazyModule picklable | From this issue https://github.com/huggingface/transformers/issues/12549 it seems that it could be nice to have the `transformers` module picklable, since it can be useful for the `datasets` library's caching for example.
The only object that is currently not picklable is the `_LazyModule`.
In this PR I just made this object picklable, and so `transformers` becomes picklable as well.
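For readers unfamiliar with the mechanics: a custom module type becomes picklable once pickle is told how to rebuild it, for example via `__reduce__`. A rough sketch of the idea is below; the class name and constructor arguments are assumptions for illustration, not the actual `_LazyModule` signature:
```python
from types import ModuleType

class _LazyModuleSketch(ModuleType):
    def __init__(self, name, import_structure):
        super().__init__(name)
        self._name = name
        self._import_structure = import_structure

    def __reduce__(self):
        # tell pickle to rebuild the object from its constructor arguments
        # instead of trying to serialize the module object itself
        return (self.__class__, (self._name, self._import_structure))
```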
This should hopefully help with issue https://github.com/huggingface/transformers/issues/12549 | 07-07-2021 09:54:28 | 07-07-2021 09:54:28 | This doesn't work sadly, because it then sets the wrong `__file__` and `__path__` attribute to the `transformers` module:
```py
import transformers
transformers.__file__
```
will return something like `'/home/sgugger/git/transformers/src/transformers/file_utils.py'` instead of `'/home/sgugger/git/transformers/src/transformers/__init__.py'`.
This will then probably mess up lots of things that depend on those attributes.
I can look at another solution when I have some time.<|||||>Indeed ! Thanks for checking
At least I tried x)
There must be a way to keep _LazyModule inside `__init__.py` and make it picklable. Currently the issue is that it can't be imported like this `from transformers import _LazyModule`. But as soon as we enable its import, it will be possible to pickle it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Resolved by #12567 |
transformers | 12,551 | closed | [trainer] add option to ignore keys for the train function too (#11719) | # What does this PR do?
This pull request adds the option of ignoring certain output keys for evaluation during the training phase. As of now, this option is only available for `predict` and `evaluation` methods of the `Trainer` class which can only be called after the training.
Fixes #11719
Changes:
1. Add a new parameter to the `Trainer.train` function called `ignore_keys_for_eval`.
2. Pass this to the `ignore_keys` parameter of the `trainer.evaluate` function that is already called within the `trainer.train` function.
2. Add the parameter to the docstring. | 07-07-2021 09:31:24 | 07-07-2021 09:31:24 | |
transformers | 12,550 | closed | This will reduce "Already borrowed error": | # What does this PR do?
Original issue https://github.com/huggingface/tokenizers/issues/537
The original issue is caused by transformers calling many times
mutable functions on the rust tokenizers.
Rust needs to guarantee that only 1 agent has a mutable reference
to memory at a given time (for many reasons which don't need explaining
here). Usually, the rust compiler can guarantee that this property is
true at compile time.
Unfortunately, this is impossible for Python to do that, so PyO3, the
bridge between rust and python used by `tokenizers`, will change the
compile guarantee for a dynamic guarantee, so if multiple agents try
to have multiple mutable borrows at the same time, then the runtime will
yell with "Already borrowed".
The proposed fix here in transformers, is simply to reduce the actual
number of calls that really need mutable borrows. By reducing them,
we reduce the risk of running into "Already borrowed" error.
The caveat is now we add a call to read the current configuration of the
`_tokenizer`, so worst case we have 2 calls instead of 1, and best case
we simply have 1 + a Python comparison of a dict (should be negligible).
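Concretely, the pattern is "read the current state, only call the mutating setter when something actually changes"; a simplified sketch against the `tokenizers` backend (method and property names as in the Rust binding, but treat the details as illustrative):
```python
def set_truncation_if_needed(backend_tokenizer, max_length, stride=0, strategy="longest_first"):
    target = {"max_length": max_length, "stride": stride, "strategy": strategy}
    current = backend_tokenizer.truncation  # read-only access, no mutable borrow required
    if current is not None:
        current = {k: current.get(k) for k in target}
    if current != target:
        # only this call needs a mutable borrow on the Rust side
        backend_tokenizer.enable_truncation(max_length, stride=stride, strategy=strategy)
```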
Fixes #https://github.com/huggingface/tokenizers/issues/537
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@n1t0 @LysandreJik
--> | 07-07-2021 09:09:25 | 07-07-2021 09:09:25 | @LysandreJik I would welcome your look on this too. |
transformers | 12,549 | closed | TypeError: cannot pickle '_LazyModule' object | @stas00 edit: please see https://github.com/huggingface/transformers/issues/12549#issuecomment-875287701 for the short reproduction script.
----------------
## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux with Nvidia P40
- Python version: 3.8.0
- PyTorch version (GPU?): 1.8.0
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@stas00 @patrickvonplaten, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [√] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [√] my own task or dataset: (give details below)
## To reproduce
I am running the minimal command:
```
python run_clm.py \
--model_name_or_path /mycheckpoin/ \
--train_file train.txt \
--validation_file eval.txt \
--do_train \
--do_eval \
--output_dir ./models/ \
--no_cuda False \
--fp16 \
--sharded_ddp simple \
--num_train_epochs 3.0 \
--disable_tqdm False \
--save_steps 100 \
--preprocessing_num_workers 32 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4
```
and I modified the following parts of the script ‘run_clm.py’, and the parameter rank passed in training_args.local_rank
```
def init_process(rank, size, fn, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank)
if __name__ == "__main__":
# main()
# size = int(os.environ['WORLD_SIZE'])
size = int(torch.cuda.device_count())
print(size)
processes = []
mp.set_start_method("spawn")
for rank in range(size):
p = mp.Process(target=init_process, args=(rank, main))
p.start()
processes.append(p)
for p in processes:
p.join()
```
the traceback information is:
```
Process Process-2:
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 511, in init_process
fn(rank, size)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 367, in main
tokenized_datasets = raw_datasets.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 471, in map
{
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 472, in <dictcomp>
k: dataset.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in map
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 498, in dump
StockPickler.dump(self, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 1493, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle '_LazyModule' object
```
I run the following command based on the original script, it works well. The reason why I don't use this command is that our cluster doesn't support this way of passing parameters: "-m torch.distributed.launch --nproc_per_node=4 "
```
python -m torch.distributed.launch --nproc_per_node=4 run_clm.py \
--model_name_or_path /mycheckpoin/ \
--train_file train.txt \
--validation_file eval.txt \
--do_train \
--do_eval \
--output_dir ./models/ \
--no_cuda False \
--fp16 \
--sharded_ddp simple \
--num_train_epochs 3.0 \
--disable_tqdm False \
--save_steps 100 \
--preprocessing_num_workers 32 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4
```
## Expected behavior
| 07-07-2021 03:53:55 | 07-07-2021 03:53:55 | Could you please attach the final script you used or a branch that we can use to reproduce your code exactly? Thanks.
note: I took the liberty to edit your OP to use code formatting which is much easier to read. If possible use a similar approach in future reports. Thank you!
<|||||>> Could you please attach the final script you used or a branch that we can use to reproduce your code exactly? Thanks.
>
> note: I took the liberty to edit your OP to use code formatting which is much easier to read. If possible use a similar approach in future reports. Thank you!
this is my scripts, thanks very much!
[run_clm.py.zip](https://github.com/huggingface/transformers/files/6774180/run_clm.py.zip)
<|||||>Thank you. The attached script fails for me. You also didn't supply the data, but I assume it doesn't matter. In the future please supply everything or adapt your runtime so that we could run it out of the box and not need to spend a lot of time to try to make things work.
```
python run_clm.py \
> --model_name_or_path gpt2 \
> --dataset_name wikitext \
> --dataset_config_name wikitext-2-raw-v1 \
> --do_train \
> --do_eval \
> --output_dir /tmp/test-clm
2021-07-06 21:18:15.064178: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2
2021-07-06 21:18:17.425481: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
2021-07-06 21:18:17.425484: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Process Process-1:
Traceback (most recent call last):
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
TypeError: init_process() missing 1 required positional argument: 'fn'
Process Process-2:
Traceback (most recent call last):
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
TypeError: init_process() missing 1 required positional argument: 'fn'
```
same failure with distributed.<|||||>> Thank you. The attached script fails for me. You also didn't supply the data, but I assume it doesn't matter. In the future please supply everything or adapt your runtime so that we could run it out of the box and not need to spend a lot of time to try to make things work.
So sorry, it's my fault, I gave you the wrong version.
This is the right version.
[run_clm.py.zip](https://github.com/huggingface/transformers/files/6774314/run_clm.py.zip)
<|||||>I'm able to reproduce the problem - great!
Let's see what the culprit is.<|||||>So the trigger is: `--preprocessing_num_workers 32`
and the minimal reproduction cmd is:
```
python run_clm.py --model_name_or_path sshleifer/tiny-gpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm \
--overwrite_output_dir --preprocessing_num_workers 32
```
It happens only with your version of the script. I tested with the one in `master` it works fine there.
The problem is unrelated to the change in https://github.com/huggingface/transformers/pull/11168 as you have discovered yourself, since your code removed my changes and you're just passing:
```
def tokenize_function1(examples):
return tokenizer(examples[text_column_name])
```
So need to look elsewhere for the cause.<|||||>From a quick look I suspect that perhaps this is an issue in `datasets` when `num_proc > 1`? Could you try to reduce the script to the bare minimum, so that it runs just:
```
with training_args.main_process_first(desc="dataset map tokenization"):
tokenized_datasets = raw_datasets.map(
None,
num_proc=5,
)
```
inside the multi-proc modifications you made.
e.g. the above is enough to trigger the same error in the script so removing most of the code should
<|||||>OK, here is the minimal reproducible script. Totally unrelated to `transformers` it seems except for the import of `transformers`
```
import logging
import math
import os
import sys
from dataclasses import dataclass, field
from typing import Optional
import datasets
from datasets import load_dataset
import transformers
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
def main(rank, size):
def tokenize_function(examples):
return None
raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
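# Note: with num_proc > 1, `datasets` ships `tokenize_function` to worker processes
# via the `multiprocess`/dill stack; pickling the function's globals drags in the
# imported `transformers` module, whose `_LazyModule` type is what the
# "cannot pickle '_LazyModule' object" error below complains about.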
tokenized_datasets = raw_datasets.map(
tokenize_function,
num_proc=32,
)
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
def init_process(rank, size, fn, backend='gloo'):
""" Initialize the distributed environment. """
os.environ['MASTER_ADDR'] = '127.0.0.1'
os.environ['MASTER_PORT'] = '29500'
dist.init_process_group(backend, rank=rank, world_size=size)
fn(rank, size)
if __name__ == "__main__":
# main()
# size = int(os.environ['WORLD_SIZE'])
size = int(torch.cuda.device_count())
print(size)
processes = []
mp.set_start_method("spawn")
for rank in range(size):
p = mp.Process(target=init_process, args=(rank, size, main))
p.start()
processes.append(p)
for p in processes:
p.join()
```
this still fails with the same error.
```
python run_clm.py
2
Reusing dataset wikitext (/home/stas/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)
Reusing dataset wikitext (/home/stas/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/aa5e094000ec7afeb74c3be92c88313cd6f132d564c7effd961c10fd47c76f20)
Process Process-1:
Process Process-2:
Traceback (most recent call last):
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py", line 60, in init_process
fn(rank, size)
File "/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py", line 46, in main
tokenized_datasets = raw_datasets.map(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py", line 471, in map
{
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py", line 472, in <dictcomp>
k: dataset.map(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1736, in map
transformed_shards = [r.get() for r in results]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1736, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 498, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 1493, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle '_LazyModule' object
Traceback (most recent call last):
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py", line 60, in init_process
fn(rank, size)
File "/mnt/nvme1/code/huggingface/users/lancekung/run_clm.py", line 46, in main
tokenized_datasets = raw_datasets.map(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py", line 471, in map
{
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/dataset_dict.py", line 472, in <dictcomp>
k: dataset.map(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1736, in map
transformed_shards = [r.get() for r in results]
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 1736, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 498, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 1493, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/dill/_dill.py", line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/pickle.py", line 578, in save
rv = reduce(self.proto)
TypeError: cannot pickle '_LazyModule' object
```
But if you either:
* comment out `import transformers`
* or set `num_proc=1` in `datasets.map` (instead of `n>1`)
then all is good.
@lhoestq, @albertvillanova - does this ring any bells? Clearly `transformers` loads some module lazily and trips up `datasets` even though transformers isn't really used here directly. Thank you.
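In the meantime, one possible user-side workaround (untested sketch, based on the observation above that commenting out the top-level `import transformers` makes the error go away) is to keep `transformers` out of the module globals that get pickled for the `num_proc>1` workers, e.g. by importing it only inside the function that needs it:
```python
# Untested sketch: no module-level `import transformers` in the script that calls
# datasets.map(num_proc>1); importing it locally keeps it out of the globals that
# get pickled and shipped to the worker processes.
from datasets import load_dataset


def main():
    import transformers  # local import, not part of this module's globals

    def tokenize_function(examples):
        # placeholder; real tokenization would go here
        return examples

    raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
    return raw_datasets.map(tokenize_function, num_proc=4)


if __name__ == "__main__":
    main()
```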
<|||||>> OK, here is the minimal reproducible script. Totally unrelated to `transformers` it seems except for the import of `transformers`
> [...]
> @lhoestq, @albertvillanova - does this ring any bells? Clearly `transformers` loads some module lazily and trips up `datasets` even though transformers isn't really used here directly. Thank you.
Thank you so much for your time; I hope other experts can give some tips about this problem.<|||||>Hi @stas00, thanks for pinging.
I'm having a look and after a first search, I think you are right and the problem comes from the fact that `transformers` makes a lazy import when importing it. I guess this affects `datasets` here: https://github.com/huggingface/datasets/blob/master/src/datasets/utils/py_utils.py#L319 (PR: https://github.com/huggingface/datasets/pull/502), which is used by dumps to pickle objects in a multiprocessing setup.
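A rough way to test that hypothesis (sketch only; it mimics what the `multiprocess`/`dill` pool does when it serializes the mapped function together with its `__main__` globals):
```python
# Rough sketch: dill-pickle a function defined in __main__ the way the worker pool
# does. With a module-level `import transformers`, the lazily-loaded module object
# ends up among the globals that get serialized.
import dill
import transformers  # sits in this module's globals


def tokenize_function(examples):
    return examples


try:
    dill.dumps(tokenize_function)
    print("picklable")
except Exception as err:
    print(f"{type(err).__name__}: {err}")
```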
cc: @lhoestq <|||||>> Hi @stas00, thanks for pinging.
>
> I'm having a look and after a first search, I think you are right and the problem comes from the fact that `transformers` makes a lazy import when importing it. I guess this affects `datasets` here: https://github.com/huggingface/datasets/blob/master/src/datasets/utils/py_utils.py#L319 (PR: [huggingface/datasets#502](https://github.com/huggingface/datasets/pull/502)), which is used by dumps to pickle objects in a multiprocessing setup.
>
> cc: @lhoestq
Hi @albertvillanova, I removed the import of transformers according to the following code, but it still doesn't work.
```python
def _no_cache_fields(obj):
try:
if (
"PreTrainedTokenizerBase" in [base_class.__name__ for base_class in type(obj).__mro__]
and hasattr(obj, "cache")
and isinstance(obj.cache, dict)
        )
```
<|||||>Note that we can easily make `_LazyModule` picklable. I can open a PR if needed to implement a `__reduce__` method for `_LazyModule`. It's the only object that prevents `transformers` from being picklable.
EDIT: here it is: https://github.com/huggingface/transformers/pull/12552
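For illustration, the idea is roughly the following (a simplified sketch, not necessarily the exact code in the PR): give the lazy module wrapper a `__reduce__` so that pickling only records how to rebuild it.
```python
# Simplified sketch (not necessarily the exact code in the PR): make a
# module-like wrapper picklable by serializing only its name.
import importlib
from types import ModuleType


class _LazyModuleSketch(ModuleType):
    def __init__(self, name):
        super().__init__(name)

    def __reduce__(self):
        # Pickle only the module name; unpickling re-imports the module by name.
        return (importlib.import_module, (self.__name__,))
```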
This is just a way to easily fix this issue, but I think we should definitely keep trying to figure out why it tried to pickle `transformers` in the first place. This might come from `dill` that pickles the globals of some environments when pickling any object<|||||>Linking to the new PR: https://github.com/huggingface/transformers/pull/12567
<|||||>Should be closed by #12567, please let us know if the problem persists.<|||||>> Should be closed by #12567, please let us know if the problem persists.
Hi, a new problem has arisen: we can now pickle `_LazyModule`, but we can't pickle `<class 'types.AutoModelForCausalLM'>`:
```
Traceback (most recent call last):
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/local/anaconda3/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 509, in init_process
fn(rank, size)
File "/media/cfs/gonglixing/9Nctl/gpt_v2/run_clm_v3.py", line 367, in main
tokenized_datasets = raw_datasets.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 471, in map
{
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 472, in <dictcomp>
k: dataset.map(
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in map
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1736, in <listcomp>
transformed_shards = [r.get() for r in results]
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/pool.py", line 537, in _handle_tasks
put(task)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/connection.py", line 209, in send
self._send_bytes(_ForkingPickler.dumps(obj))
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/multiprocess/reduction.py", line 54, in dumps
cls(buf, protocol, *args, **kwds).dump(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 498, in dump
StockPickler.dump(self, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 487, in dump
self.save(obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 1493, in save_function
pickler.save_reduce(_create_function, (obj.__code__,
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 692, in save_reduce
save(args)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 901, in save_tuple
save(element)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 990, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 971, in save_dict
self._batch_setitems(obj.items())
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 997, in _batch_setitems
save(v)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 560, in save
f(self, obj) # Call unbound method with explicit self
File "/usr/local/anaconda3/envs/py38/lib/python3.8/site-packages/dill/_dill.py",line 1439, in save_type
StockPickler.save_global(pickler, obj, name=name)
File "/usr/local/anaconda3/envs/py38/lib/python3.8/pickle.py", line 1070, in save_global
raise PicklingError(
_pickle.PicklingError: Can't pickle <class 'types.AutoModelForCausalLM'>: it's not found as types.AutoModelForCausalLM
```
 |
transformers | 12,548 | closed | raise exception when arguments to pipeline are incomplete | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #12478 (issue)
As discussed in the issue, this PR adds an exception when arguments to `pipeline` are incomplete. Incomplete cases are providing `tokenizer` or `feature_extractor` without specifying the model, which could lead to unexpected behavior demonstrated in the issue.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik @Narsil
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 07-07-2021 03:48:33 | 07-07-2021 03:48:33 | |
transformers | 12,547 | closed | Getting Started with CamembertForSequenceClassification | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.3
- Platform: Windows
- Python version: 3.9
- PyTorch version (GPU?): 1.9.0
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): CamemBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
https://huggingface.co/transformers/task_summary.html
I followed the Summary of the tasks above, specifically the Sequence Classification section.
I'm trying to find the paraphrase probability of two French sentences by using `CamembertForSequenceClassification`,
but I got the following warning and output. How should I edit my code?
```
Some weights of the model checkpoint at ./nlp_models/camembert-base were not used when initializing CamembertForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of CamembertForSequenceClassification were not initialized from the model checkpoint at ./nlp_models/camembert-base and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
not paraphrase: 47%
is paraphrase: 53%
not paraphrase: 47%
is paraphrase: 53%
```
## To reproduce
Steps to reproduce the behavior:
```Python
import torch
from transformers import CamembertTokenizer
from transformers.models.camembert.modeling_camembert import CamembertForSequenceClassification
tokenizer = CamembertTokenizer.from_pretrained("./nlp_models/camembert-base")
model = CamembertForSequenceClassification.from_pretrained("./nlp_models/camembert-base")
classes = ["not paraphrase", "is paraphrase"]
sequence_0 = 'La société HuggingFace est basée à New York City'
sequence_1 = 'Les pommes sont particulièrement mauvaises pour la santé'
sequence_2 = "Le siège social de HuggingFace est situé à Manhattan"
#
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")
paraphrase_classification_logits = model(**paraphrase).logits
not_paraphrase_classification_logits = model(**not_paraphrase).logits
paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]
# Should be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
# Should not be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 07-07-2021 00:58:18 | 07-07-2021 00:58:18 | Hi,
you are initializing a `CamembertForSequenceClassification` model with weights from `camembert-base`. This means that you are only initializing the base of the model, not the classification head. Hence, the head will have randomly initialized weights. This is also given as a warning:
```
Some weights of the model checkpoint at ./nlp_models/camembert-base were not used when initializing CamembertForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias']
- This IS expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CamembertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of CamembertForSequenceClassification were not initialized from the model checkpoint at ./nlp_models/camembert-base and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
This tells you exactly that: one should first fine-tune `CamembertForSequenceClassification` on a downstream task (in this case, a dataset of sentence pairs labeled as either paraphrase or not paraphrase). You can check the [hub](https://huggingface.co/models?search=camembert) to see whether someone has already fine-tuned CamemBERT for paraphrasing (apparently this doesn't seem to be the case yet).
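To give a rough idea, the fine-tuning could look like this (a minimal sketch with placeholders; `train_dataset` is assumed to be your own labeled dataset of sentence pairs and is not defined here):
```python
# Minimal sketch (placeholders, not a ready-to-run recipe): fine-tune the
# classification head on labeled sentence pairs before using it for inference.
from transformers import (
    CamembertForSequenceClassification,
    CamembertTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
model = CamembertForSequenceClassification.from_pretrained("camembert-base", num_labels=2)

# `train_dataset` is assumed to yield dicts with input_ids/attention_mask/labels,
# e.g. encoded with tokenizer(sentence_a, sentence_b, truncation=True) plus a 0/1 label.
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments(output_dir="camembert-paraphrase", num_train_epochs=3),
#     train_dataset=train_dataset,
# )
# trainer.train()
```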
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,546 | closed | Necessary resources for training a (small/tiny) LM from scratch? | This is mostly a follow-up question regarding this Hugging Face blog post on training a LM (and a tokenizer) from scratch:
https://huggingface.co/blog/how-to-train
I think this may be an ideal approach to try out in my situation, but I'm wondering about the cost and how much data I would really need to train a LM from scratch on my domain-specific dataset.
I'm quite new to the field and haven't read many papers on this subject as of yet, so I was hoping someone might be able to provide some ballpark estimates of the computing resources required for training some small LM(s) from scratch. I'd like to obtain a fine-tuned (or trained-from-scratch) domain-specific LM to serve as a backbone for various downstream NLP tasks on my domain-specific text data. I have been experimenting with the fine-tuning LM approach (i.e. fine-tuning BERT-based models on MLM before performing task-specific fine-tuning), but I'm curious about the training-from-scratch option if I can get a rough idea of the required compute resources / cost.
Thanks very much in advance for any help / tips on unpacking this question. | 07-06-2021 21:43:28 | 07-06-2021 21:43:28 | Hi,
could you please ask this question on the [forum](https://discuss.huggingface.co/) rather than here? We like to keep Github issues for bugs/feature requests.
Thanks!<|||||>Sure, thanks @NielsRogge . Here is a [link to the post on the forum](https://discuss.huggingface.co/t/necessary-resources-for-training-a-small-tiny-lm-from-scratch/8139), cheers. |
transformers | 12,545 | closed | [Flax] Error converting model to PyTorch from Flax | Hi, I followed the causal language modeling in Flax tutorial notebook provided [here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/causal_language_modeling_flax.ipynb) in Colab. And at the end of the training, I'd like to get a working PyTorch model from the JAX/Flax weights, hence I did this:
```python
from transformers import GPT2LMHeadModel
mdl_path = "w11wo/sundanese-gpt2-base"
pt_model = GPT2LMHeadModel.from_pretrained(mdl_path, from_flax=True)
pt_model.save_pretrained(mdl_path)
```
But during the conversion, it raised this error
```python
/usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_pytorch_utils.py:201: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:180.)
pt_model_dict[flax_key] = torch.from_numpy(flax_tensor)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-44-08a8ff6e575c> in <module>()
7 tokenizer.save_pretrained(mdl_path)
8
----> 9 pt_model = GPT2LMHeadModel.from_pretrained(mdl_path, from_flax=True)
10 pt_model.save_pretrained(mdl_path)
11
2 frames
/usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_pytorch_utils.py in load_flax_weights_in_pytorch_model(pt_model, flax_state)
199 # add weight to pytorch dict
200 flax_tensor = np.asarray(flax_tensor) if not isinstance(flax_tensor, np.ndarray) else flax_tensor
--> 201 pt_model_dict[flax_key] = torch.from_numpy(flax_tensor)
202 # remove from missing keys
203 missing_keys.remove(flax_key)
TypeError: can't convert np.ndarray of type bfloat16. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool.
```
I think this issue occurred because the model I instantiated used `bfloat16` -- just as the tutorial showed. Specifically this block
```python
from transformers import FlaxAutoModelForCausalLM
model = FlaxAutoModelForCausalLM.from_config(config, seed=training_seed, dtype=jnp.dtype("bfloat16"))
```
I'd like to know if there's a workaround to this problem. Thanks! | 07-06-2021 19:21:16 | 07-06-2021 19:21:16 | Seeing how we are able to convert `float32` weights to `bfloat16` as shown [here](https://github.com/huggingface/transformers/issues/12534#issue-937864452), I tried to the reverse, i.e. converting `bfloat16` to `float32`. I executed the following lines:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer, FlaxGPT2LMHeadModel
import jax
import jax.numpy as jnp
pretrained = "w11wo/sundanese-gpt2-base"
tmp_path = "sundanese-gpt-base"
model = FlaxGPT2LMHeadModel.from_pretrained(pretrained)
tokenizer = GPT2Tokenizer.from_pretrained(pretrained)
def to_f32(t):
return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
model.params = to_f32(model.params)
model.save_pretrained(tmp_path)
pt_model = GPT2LMHeadModel.from_pretrained(tmp_path, from_flax=True)
pt_model.save_pretrained(tmp_path)
```
And the model converted returned the following messages,
```python
All Flax model weights were used when initializing GPT2LMHeadModel.
Some weights of GPT2LMHeadModel were not initialized from the Flax model and are newly initialized: ['transformer.h.5.attn.masked_bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.6.attn.bias', 'transformer.h.9.attn.bias', 'transformer.h.7.attn.bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.0.attn.bias', 'transformer.h.0.attn.masked_bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.10.attn.bias', 'transformer.h.8.attn.bias', 'transformer.h.4.attn.bias', 'transformer.h.9.attn.masked_bias', 'transformer.h.1.attn.bias', 'transformer.h.10.attn.masked_bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.11.attn.bias', 'lm_head.weight', 'transformer.h.3.attn.bias', 'transformer.h.2.attn.bias', 'transformer.h.5.attn.bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.11.attn.masked_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
I suspect that there are conversion errors, but am clueless as to the reason behind the error.
Further, I tried using the converted PyTorch model for text-generation via the following pipeline:
```python
from transformers import pipeline
nlp = pipeline(
"text-generation",
model=tmp_path,
tokenizer=tokenizer
)
nlp("Nami abdi Budi")
>> Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
[{'generated_text': "Testipit ayeuna ayeuna409 pillowsKrKr bunderan Platform sipil poé Summary Sakitu Enak maké bunderan sipilthough papatong hubungiashmina Stock protésnyieun248duta terhadap Mm �issinginjeumdoi ' 'Dealer Studio gunarior floridabodas ' békénAndroid Holland majalah dot mangaruhanumpahinting"}]
```
Unsurprisingly, the output is very much jibberish, despite the model trained down to about 3.66 validation loss. I think the conversion is somehow incorrect, hence the incorrect porting of weights, and thus the jibberish output.<|||||>I will take a look later today!<|||||>Also cc @patil-suraj <|||||>Thanks @patrickvonplaten! Really appreciate what you and your 🤗 team are doing! <|||||>One more thing, I trained RoBERTa using a different script for the Flax community week; hub repo [here](https://huggingface.co/flax-community/indonesian-roberta-base). It seems that converting the model to PyTorch raises a possibly related error:
```python
from transformers import RobertaForMaskedLM
model = RobertaForMaskedLM.from_pretrained("flax-community/indonesian-roberta-base", from_flax=True)
```
```
/usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_pytorch_utils.py:201: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:180.)
pt_model_dict[flax_key] = torch.from_numpy(flax_tensor)
All Flax model weights were used when initializing RobertaForMaskedLM.
Some weights of RobertaForMaskedLM were not initialized from the Flax model and are newly initialized: ['lm_head.decoder.bias', 'roberta.embeddings.position_ids', 'lm_head.decoder.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```<|||||>> Seeing how we are able to convert `float32` weights to `bfloat16` as shown [here](https://github.com/huggingface/transformers/issues/12534#issue-937864452), I tried to the reverse, i.e. converting `bfloat16` to `float32`. I executed the following lines:
>
> ```python
> from transformers import GPT2LMHeadModel, GPT2Tokenizer, FlaxGPT2LMHeadModel
>
> pretrained = "w11wo/sundanese-gpt2-base"
> tmp_path = "sundanese-gpt-base"
>
> model = FlaxGPT2LMHeadModel.from_pretrained(pretrained)
> tokenizer = GPT2Tokenizer.from_pretrained(pretrained)
>
> def to_f32(t):
> return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
>
> model.params = to_f32(model.params)
> model.save_pretrained(tmp_path)
>
> pt_model = GPT2LMHeadModel.from_pretrained(tmp_path, from_flax=True)
> pt_model.save_pretrained(tmp_path)
> ```
>
> And the model converted returned the following messages,
>
> ```python
> All Flax model weights were used when initializing GPT2LMHeadModel.
>
> Some weights of GPT2LMHeadModel were not initialized from the Flax model and are newly initialized: ['transformer.h.5.attn.masked_bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.6.attn.bias', 'transformer.h.9.attn.bias', 'transformer.h.7.attn.bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.0.attn.bias', 'transformer.h.0.attn.masked_bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.10.attn.bias', 'transformer.h.8.attn.bias', 'transformer.h.4.attn.bias', 'transformer.h.9.attn.masked_bias', 'transformer.h.1.attn.bias', 'transformer.h.10.attn.masked_bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.11.attn.bias', 'lm_head.weight', 'transformer.h.3.attn.bias', 'transformer.h.2.attn.bias', 'transformer.h.5.attn.bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.11.attn.masked_bias']
> You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
> ```
>
> I suspect that there are conversion errors, but am clueless as to the reason behind the error.
>
> Further, I tried using the converted PyTorch model for text-generation via the following pipeline:
>
> ```python
> from transformers import pipeline
>
> nlp = pipeline(
> "text-generation",
> model=tmp_path,
> tokenizer=tokenizer
> )
>
> nlp("Nami abdi Budi")
> >> Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
> [{'generated_text': "Testipit ayeuna ayeuna409 pillowsKrKr bunderan Platform sipil poé Summary Sakitu Enak maké bunderan sipilthough papatong hubungiashmina Stock protésnyieun248duta terhadap Mm �issinginjeumdoi ' 'Dealer Studio gunarior floridabodas ' békénAndroid Holland majalah dot mangaruhanumpahinting"}]
> ```
>
> Unsurprisingly, the output is very much jibberish, despite the model trained down to about 3.66 validation loss. I think the conversion is somehow incorrect, hence the incorrect porting of weights, and thus the jibberish output.
I think this is actually the correct way how you should convert bfloat16 to float32. Having followed this way of conversion can you verify that the PT and Flax model give similar outputs (ideally on CPU since TPU uses approximations)<|||||>> One more thing, I trained RoBERTa using a different script for the Flax community week; hub repo [here](https://huggingface.co/flax-community/indonesian-roberta-base). It seems that converting the model to PyTorch raises a possibly related error:
>
> ```python
> from transformers import RobertaForMaskedLM
> model = RobertaForMaskedLM.from_pretrained("flax-community/indonesian-roberta-base", from_flax=True)
> ```
>
> ```
> /usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_pytorch_utils.py:201: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:180.)
> pt_model_dict[flax_key] = torch.from_numpy(flax_tensor)
> All Flax model weights were used when initializing RobertaForMaskedLM.
>
> Some weights of RobertaForMaskedLM were not initialized from the Flax model and are newly initialized: ['lm_head.decoder.bias', 'roberta.embeddings.position_ids', 'lm_head.decoder.weight']
> You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
> ```
This is expected can you check with https://github.com/huggingface/transformers/issues/12554<|||||>```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer, FlaxGPT2LMHeadModel
import torch
import numpy as np
import jax
import jax.numpy as jnp
pretrained = "w11wo/sundanese-gpt2-base"
tmp_path = "./"
model = FlaxGPT2LMHeadModel.from_pretrained(pretrained)
def to_f32(t):
return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
model.params = to_f32(model.params)
model.save_pretrained(tmp_path)
model_pt = GPT2LMHeadModel.from_pretrained(tmp_path, from_flax=True)
input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)
input_ids_pt = torch.tensor(input_ids)
logits_pt = model_pt(input_ids_pt).logits
print(logits_pt)
logits_fx = model(input_ids).logits
print(logits_fx)
```
=> here you can see that you are correctly converting the weights. The two models give the same results when doing a forward pass. This probably means that the training didn't work very well. Also note that vanilla generation often doesn't work very well. One should use `do_sample=True`<|||||>> > One more thing, I trained RoBERTa using a different script for the Flax community week; hub repo [here](https://huggingface.co/flax-community/indonesian-roberta-base). It seems that converting the model to PyTorch raises a possibly related error:
> > ```python
> > from transformers import RobertaForMaskedLM
> > model = RobertaForMaskedLM.from_pretrained("flax-community/indonesian-roberta-base", from_flax=True)
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > ```
> > /usr/local/lib/python3.7/dist-packages/transformers/modeling_flax_pytorch_utils.py:201: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at /pytorch/torch/csrc/utils/tensor_numpy.cpp:180.)
> > pt_model_dict[flax_key] = torch.from_numpy(flax_tensor)
> > All Flax model weights were used when initializing RobertaForMaskedLM.
> >
> > Some weights of RobertaForMaskedLM were not initialized from the Flax model and are newly initialized: ['lm_head.decoder.bias', 'roberta.embeddings.position_ids', 'lm_head.decoder.weight']
> > You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
> > ```
>
> This is expected can you check with #12554
Checked, and it does return the same results. Thanks!<|||||>> ```python
> from transformers import GPT2LMHeadModel, GPT2Tokenizer, FlaxGPT2LMHeadModel
> import torch
> import numpy as np
> import jax
> import jax.numpy as jnp
>
> pretrained = "w11wo/sundanese-gpt2-base"
> tmp_path = "./"
>
> model = FlaxGPT2LMHeadModel.from_pretrained(pretrained)
>
> def to_f32(t):
> return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
>
> model.params = to_f32(model.params)
> model.save_pretrained(tmp_path)
>
> pt_model = GPT2LMHeadModel.from_pretrained(tmp_path, from_flax=True)
>
> input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)
> input_ids_pt = torch.tensor(input_ids)
>
> logits_pt = model_pt(input_ids_pt).logits
> print(logits_pt)
> logits_fx = model(input_ids).logits
> print(logits_fx)
> ```
This doesn't seem to return the same results on my end.
```
All Flax model weights were used when initializing GPT2LMHeadModel.
Some weights of GPT2LMHeadModel were not initialized from the Flax model and are newly initialized: ['transformer.h.9.attn.masked_bias', 'transformer.h.8.attn.bias', 'transformer.h.11.attn.masked_bias', 'transformer.h.2.attn.bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.6.attn.bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.10.attn.bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.1.attn.bias', 'transformer.h.9.attn.bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.11.attn.bias', 'transformer.h.7.attn.bias', 'transformer.h.4.attn.bias', 'transformer.h.0.attn.masked_bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.0.attn.bias', 'lm_head.weight', 'transformer.h.3.attn.bias', 'transformer.h.5.attn.bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.10.attn.masked_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
tensor([[[ 3.3714, -8.7829, -7.0013, ..., -0.9126, -5.1224, 1.1825],
[ 3.3988, -8.7616, -7.0675, ..., -0.8201, -5.0986, 1.1000],
[ 3.3717, -8.7876, -7.0300, ..., -0.8022, -5.0759, 1.1368],
...,
[ 3.0262, -8.7904, -7.3067, ..., -1.0611, -5.3096, 1.0842],
[ 3.0953, -8.7540, -7.2462, ..., -1.0426, -5.2823, 1.1086],
[ 3.0298, -8.7541, -7.3130, ..., -1.0767, -5.3349, 1.0631]],
[[ 3.3714, -8.7829, -7.0013, ..., -0.9126, -5.1224, 1.1825],
[ 3.3988, -8.7616, -7.0675, ..., -0.8201, -5.0986, 1.1000],
[ 3.3717, -8.7876, -7.0300, ..., -0.8022, -5.0759, 1.1368],
...,
[ 3.0262, -8.7904, -7.3067, ..., -1.0611, -5.3096, 1.0842],
[ 3.0953, -8.7540, -7.2462, ..., -1.0426, -5.2823, 1.1086],
[ 3.0298, -8.7541, -7.3130, ..., -1.0767, -5.3349, 1.0631]]],
grad_fn=<AddBackward0>)
[[[-0.18300235 -0.3880248 0.51604265 ... -0.03581356 0.33767372
0.63595504]
[-0.22266355 -0.43806082 0.54173964 ... -0.09416972 0.1144447
0.46392882]
[-0.21165054 -0.41117024 0.52854717 ... -0.10048242 -0.01106828
0.21045035]
...
[-0.13923354 -0.11146995 0.2263387 ... -0.06064492 0.6304022
-0.27594602]
[-0.20900789 -0.1306307 0.22801363 ... -0.07289732 0.6694305
-0.2215142 ]
[-0.17881839 -0.11881532 0.2094207 ... -0.06347632 0.6529123
-0.2000624 ]]
[[-0.18300235 -0.3880248 0.51604265 ... -0.03581356 0.33767372
0.63595504]
[-0.22266355 -0.43806082 0.54173964 ... -0.09416972 0.1144447
0.46392882]
[-0.21165054 -0.41117024 0.52854717 ... -0.10048242 -0.01106828
0.21045035]
...
[-0.13923354 -0.11146995 0.2263387 ... -0.06064492 0.6304022
-0.27594602]
[-0.20900789 -0.1306307 0.22801363 ... -0.07289732 0.6694305
-0.2215142 ]
[-0.17881839 -0.11881532 0.2094207 ... -0.06347632 0.6529123
-0.2000624 ]]]
```<|||||>> > ```python
> > from transformers import GPT2LMHeadModel, GPT2Tokenizer, FlaxGPT2LMHeadModel
> > import torch
> > import numpy as np
> > import jax
> > import jax.numpy as jnp
> >
> > pretrained = "w11wo/sundanese-gpt2-base"
> > tmp_path = "./"
> >
> > model = FlaxGPT2LMHeadModel.from_pretrained(pretrained)
> >
> > def to_f32(t):
> > return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
> >
> > model.params = to_f32(model.params)
> > model.save_pretrained(tmp_path)
> >
> > pt_model = GPT2LMHeadModel.from_pretrained(tmp_path, from_flax=True)
> >
> > input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)
> > input_ids_pt = torch.tensor(input_ids)
> >
> > logits_pt = model_pt(input_ids_pt).logits
> > print(logits_pt)
> > logits_fx = model(input_ids).logits
> > print(logits_fx)
> > ```
>
> This doesn't seem to return the same results on my end.
>
> ```
> All Flax model weights were used when initializing GPT2LMHeadModel.
>
> Some weights of GPT2LMHeadModel were not initialized from the Flax model and are newly initialized: ['transformer.h.9.attn.masked_bias', 'transformer.h.8.attn.bias', 'transformer.h.11.attn.masked_bias', 'transformer.h.2.attn.bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.6.attn.bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.10.attn.bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.1.attn.bias', 'transformer.h.9.attn.bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.11.attn.bias', 'transformer.h.7.attn.bias', 'transformer.h.4.attn.bias', 'transformer.h.0.attn.masked_bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.0.attn.bias', 'lm_head.weight', 'transformer.h.3.attn.bias', 'transformer.h.5.attn.bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.10.attn.masked_bias']
> You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
> tensor([[[ 3.3714, -8.7829, -7.0013, ..., -0.9126, -5.1224, 1.1825],
> [ 3.3988, -8.7616, -7.0675, ..., -0.8201, -5.0986, 1.1000],
> [ 3.3717, -8.7876, -7.0300, ..., -0.8022, -5.0759, 1.1368],
> ...,
> [ 3.0262, -8.7904, -7.3067, ..., -1.0611, -5.3096, 1.0842],
> [ 3.0953, -8.7540, -7.2462, ..., -1.0426, -5.2823, 1.1086],
> [ 3.0298, -8.7541, -7.3130, ..., -1.0767, -5.3349, 1.0631]],
>
> [[ 3.3714, -8.7829, -7.0013, ..., -0.9126, -5.1224, 1.1825],
> [ 3.3988, -8.7616, -7.0675, ..., -0.8201, -5.0986, 1.1000],
> [ 3.3717, -8.7876, -7.0300, ..., -0.8022, -5.0759, 1.1368],
> ...,
> [ 3.0262, -8.7904, -7.3067, ..., -1.0611, -5.3096, 1.0842],
> [ 3.0953, -8.7540, -7.2462, ..., -1.0426, -5.2823, 1.1086],
> [ 3.0298, -8.7541, -7.3130, ..., -1.0767, -5.3349, 1.0631]]],
> grad_fn=<AddBackward0>)
> [[[-0.18300235 -0.3880248 0.51604265 ... -0.03581356 0.33767372
> 0.63595504]
> [-0.22266355 -0.43806082 0.54173964 ... -0.09416972 0.1144447
> 0.46392882]
> [-0.21165054 -0.41117024 0.52854717 ... -0.10048242 -0.01106828
> 0.21045035]
> ...
> [-0.13923354 -0.11146995 0.2263387 ... -0.06064492 0.6304022
> -0.27594602]
> [-0.20900789 -0.1306307 0.22801363 ... -0.07289732 0.6694305
> -0.2215142 ]
> [-0.17881839 -0.11881532 0.2094207 ... -0.06347632 0.6529123
> -0.2000624 ]]
>
> [[-0.18300235 -0.3880248 0.51604265 ... -0.03581356 0.33767372
> 0.63595504]
> [-0.22266355 -0.43806082 0.54173964 ... -0.09416972 0.1144447
> 0.46392882]
> [-0.21165054 -0.41117024 0.52854717 ... -0.10048242 -0.01106828
> 0.21045035]
> ...
> [-0.13923354 -0.11146995 0.2263387 ... -0.06064492 0.6304022
> -0.27594602]
> [-0.20900789 -0.1306307 0.22801363 ... -0.07289732 0.6694305
> -0.2215142 ]
> [-0.17881839 -0.11881532 0.2094207 ... -0.06347632 0.6529123
> -0.2000624 ]]]
> ```
Weird, it gives the same results for me.<|||||>Here's my environment @patrickvonplaten, in case it'll help
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.13
- JaxLib version: 0.1.66
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in><|||||>That's my output:
```bash
Some weights of GPT2LMHeadModel were not initialized from the Flax model and are newly initialized: ['transformer.h.9.attn.masked_bias', 'transformer.h.3.attn.bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.2.attn.bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.11.attn.bias', 'transformer.h.0.attn.masked_bias', 'transformer.h.10.attn.masked_bias', 'lm_head.weight', 'transformer.h.11.attn.masked_bias', 'transformer.h.0.attn.bias', 'transformer.h.10.attn.bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.4.attn.bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.7.attn.bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.9.attn.bias', 'transformer.h.5.attn.bias', 'transformer.h.6.attn.bias', 'transformer.h.8.attn.bias', 'transformer.h.1.attn.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
tensor([[[-0.1830, -0.3880, 0.5160, ..., -0.0358, 0.3377, 0.6360],
[-0.2227, -0.4381, 0.5417, ..., -0.0942, 0.1144, 0.4639],
[-0.2117, -0.4112, 0.5285, ..., -0.1005, -0.0111, 0.2105],
...,
[-0.1392, -0.1115, 0.2263, ..., -0.0606, 0.6304, -0.2759],
[-0.2090, -0.1306, 0.2280, ..., -0.0729, 0.6694, -0.2215],
[-0.1788, -0.1188, 0.2094, ..., -0.0635, 0.6529, -0.2001]],
[[-0.1830, -0.3880, 0.5160, ..., -0.0358, 0.3377, 0.6360],
[-0.2227, -0.4381, 0.5417, ..., -0.0942, 0.1144, 0.4639],
[-0.2117, -0.4112, 0.5285, ..., -0.1005, -0.0111, 0.2105],
...,
[-0.1392, -0.1115, 0.2263, ..., -0.0606, 0.6304, -0.2759],
[-0.2090, -0.1306, 0.2280, ..., -0.0729, 0.6694, -0.2215],
[-0.1788, -0.1188, 0.2094, ..., -0.0635, 0.6529, -0.2001]]],
grad_fn=<UnsafeViewBackward>)
2021-07-07 14:27:19.167439: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
[[[-0.18300146 -0.38216865 0.51648384 ... -0.03733653 0.33876732
0.63263166]
[-0.217145 -0.4353617 0.5423262 ... -0.09400246 0.11277512
0.46030286]
[-0.20957293 -0.4070356 0.52822715 ... -0.10254323 -0.01395297
0.20638129]
...
[-0.13773967 -0.10961649 0.22636533 ... -0.065424 0.6344244
-0.27795315]
[-0.20452675 -0.1309557 0.23095486 ... -0.07456343 0.66748977
-0.21973252]
[-0.17408411 -0.11795001 0.21041611 ... -0.06386398 0.6547011
-0.1990372 ]]
[[-0.18300146 -0.38216865 0.51648384 ... -0.03733653 0.33876732
0.63263166]
[-0.217145 -0.4353617 0.5423262 ... -0.09400246 0.11277512
0.46030286]
[-0.20957293 -0.4070356 0.52822715 ... -0.10254323 -0.01395297
0.20638129]
...
[-0.13773967 -0.10961649 0.22636533 ... -0.065424 0.6344244
-0.27795315]
[-0.20452675 -0.1309557 0.23095486 ... -0.07456343 0.66748977
-0.21973252]
[-0.17408411 -0.11795001 0.21041611 ... -0.06386398 0.6547011
-0.1990372 ]]]
```<|||||>My env is:
```
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```<|||||>Can you update jax, flax and jaxlib and also check on TPU?<|||||>> Can you update jax, flax and jaxlib and also check on TPU?
Got it, I'll do it in a bit. Thanks for the help 👍 <|||||>> Can you update jax, flax and jaxlib and also check on TPU?
Okay, I checked several setups. I upgraded both jax and jaxlib in Google Colab's **CPU** runtime.
I ran this line to upgrade
```
!pip install -Uq jax jaxlib git+https://github.com/huggingface/transformers.git tokenizers datasets flax git+https://github.com/deepmind/optax.git
```
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Then this
> ```python
> from transformers import GPT2LMHeadModel, GPT2Tokenizer, FlaxGPT2LMHeadModel
> import torch
> import numpy as np
> import jax
> import jax.numpy as jnp
>
> pretrained = "w11wo/sundanese-gpt2-base"
> tmp_path = "./"
>
> model = FlaxGPT2LMHeadModel.from_pretrained(pretrained)
>
> def to_f32(t):
> return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)
>
> model.params = to_f32(model.params)
> model.save_pretrained(tmp_path)
>
> model_pt = GPT2LMHeadModel.from_pretrained(tmp_path, from_flax=True)
>
> input_ids = np.asarray(2 * [128 * [0]], dtype=np.int32)
> input_ids_pt = torch.tensor(input_ids)
>
> logits_pt = model_pt(input_ids_pt).logits
> print(logits_pt)
> logits_fx = model(input_ids).logits
> print(logits_fx)
> ```
It worked as intended. It returned the correct tensors.
---
Finally, I checked the **TPU** setup in Colab. Ran the same upgrade command, plus this block.
```python
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
jax.device_count()
```
```
8
```
Then I ran the same test as above. ~~Unfortunately, it returns a different error message (didn't appear in **CPU** runtime):~~
EDIT: Both the PyTorch and JAX models returned the same tensors after restarting the TPU runtime after library upgrades and JAX Colab TPU setup.
Here's my final environment on Colab's TPU
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in><|||||>@patrickvonplaten I think I'll close the issue now. Upgrading both jax and jaxlib solved the problem. Many thanks! |
transformers | 12,544 | closed | [examples/flax] add adafactor optimizer | # What does this PR do?
This PR adds the Adafactor optimizer to all Flax language-modeling example scripts. This enables fine-tuning large models like the 1.3B GPT-Neo: without Adafactor, the current CLM script cannot even load the optimizer states into memory. With Adafactor and bf16 it should now be possible to fit 1.3B GPT-Neo on a single v3-8.
With 1.3B GPT-Neo, one should be able to fit:
- a per-device batch size of 8 with a max length of 512
- a per-device batch size of 2 with a max length of 1024
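For reference, the optimizer switch looks roughly like the sketch below. This is not the exact diff; it assumes the script builds a learning-rate schedule function and exposes the new `--adafactor` flag through its training arguments.
```python
import optax

def make_optimizer(args, lr_schedule):
    # Adafactor keeps factored second-moment statistics, so its optimizer state
    # is far smaller than AdamW's; that is what lets a 1.3B model fit on a v3-8.
    if args.adafactor:
        return optax.adafactor(learning_rate=lr_schedule)
    return optax.adamw(
        learning_rate=lr_schedule,
        b1=args.adam_beta1,
        b2=args.adam_beta2,
        weight_decay=args.weight_decay,
    )
```
The example command from the PR description follows: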
```
python examples/flax/language-modeling/run_clm_flax.py \
--model_name_or_path EleutherAI/gpt-neo-1.3B \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train --do_eval \
--block_size 512 \
--num_train_epochs 1 \
--learning_rate 1e-5 \
--per_device_train_batch_size 8 --per_device_eval_batch_size 8 \
--dtype bfloat16 \
--adafactor \
--overwrite_output_dir \
--output_dir ~/tmp
```
 | 07-06-2021 19:01:07 | 07-06-2021 19:01:07 | |
transformers | 12,543 | closed | [Flax] Adapt examples to be able to use eval_steps and save_steps | # What does this PR do?
This PR makes sure that evaluation and saving can be defined by `save_steps` and `eval_steps`.
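Concretely, the change amounts to gating evaluation and checkpointing on the global step inside the training loop. A rough sketch (not the exact diff; argument and helper names are assumptions):
```python
import jax
from flax.jax_utils import unreplicate

def maybe_eval_and_save(cur_step, state, model, training_args, run_eval):
    # Evaluate and save every N optimizer steps instead of once per epoch.
    if training_args.eval_steps > 0 and cur_step % training_args.eval_steps == 0:
        run_eval(state)
    if training_args.save_steps > 0 and cur_step % training_args.save_steps == 0:
        if jax.process_index() == 0:
            params = jax.device_get(unreplicate(state.params))
            model.save_pretrained(training_args.output_dir, params=params)
```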
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-06-2021 17:37:59 | 07-06-2021 17:37:59 | |
transformers | 12,542 | closed | ModuleNotFoundError: No module named 'transformers' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
transformers-cliv: command not found
- `transformers` version: 4.8.2
- Platform: Ubuntu (20.04) with KDE desktop
- Python version: 3.9
- PyTorch version (GPU?): 1.9.0, (not running on GPU)
- Tensorflow version (GPU?): 2.5.0 (not running on GPU)
- Using GPU in script?: no
-
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- text generation: @patrickvonplaten
-->
## Information
Model I am using (Bert, XLNet ...): DialoGPT
I tried uninstalling transformers, reinstalling it, and installing version 3.5.0, but none of it works.
The problem arises when using:
* [ ] the official example scripts: (give details below):
* [ ] Script is the basic DialoGPT example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generated a response while limiting the total chat history to 1000 tokens,
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
# pretty print last ouput tokens from bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
error:
```
(base) hest1a@latitude-ubuntu:~/ProjectChristina$ python3.9 christina.py
Traceback (most recent call last):
  File "/home/hest1a/ProjectChristina/christina.py", line 1, in <module>
    from transformers import AutoModelForCausalLM, AutoTokenizer
ModuleNotFoundError: No module named 'transformers'
```
## To reproduce
Steps to reproduce the behavior:
1. run the code (python3.9 code.py)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
When running the code, I expect the basic DialoGPT chat program to start.
<!-- A clear and concise description of what you would expect to happen. -->
| 07-06-2021 15:31:41 | 07-06-2021 15:31:41 | I fixed it: it turns out that plain `pip install` on Ubuntu was installing the libraries for the Python 2 interpreter rather than the Python 3.9 one running the script. Uninstalling them and reinstalling with the version-specific pip solved it:
```
pip3.9 install transformers
```
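To confirm which interpreter a given pip invocation targets before installing, something like the following works (a sketch; the final import is just a sanity check):
```bash
python3.9 -m pip --version        # shows the site-packages directory it installs into
python3.9 -m pip install transformers torch
python3.9 -c "import transformers; print(transformers.__version__)"
```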
With the module installed for the right interpreter, the script runs as expected. |
transformers | 12,541 | closed | Edit readme | # What does this PR do?
I have the impression that a tiny typo has crept into the readme.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
| 07-06-2021 15:27:10 | 07-06-2021 15:27:10 | |
transformers | 12,540 | closed | Updated README | Added video links for all talks and anonymized chat history. | 07-06-2021 13:59:39 | 07-06-2021 13:59:39 | |
transformers | 12,539 | closed | How to make BART infill unmasked deletions (not masked tokens)? | Hello,
I see that BART is able to infill <mask> tokens in the input and was able to run this example. https://huggingface.co/transformers/model_doc/bart.html#mask-filling
Since BART can also handle other types of pre-training noise such as token deletion, is there a way I could get BART to take inputs with deletions, and see the output of BART's attempt to infill those deleted spans?
In the BART paper,
`A _C. _E. -> A B C. D E. ` is an example of denoising Token Masking, and
`A. C. E. -> A B C. D E. ` is an example of denoising Token Deletion
Since the best BART model in the paper is found to be pre-trained with "sentence shuffling" and "text-infilling,
where arbitrary length spans of text are replaced with a single mask token," and NOT with the token deletion strategy (where no <mask> token is left behind), would we expect to have to obtain a variant of BART that has been pre-trained on token deletion in order to use BART for infilling unmasked deletions?
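For reference, the mask-filling setup from the linked docs looks roughly like the sketch below (using `facebook/bart-large` and the Wikipedia sentence from the example further down; the open question is how to get similar behaviour when no explicit `<mask>` token is present):
```python
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

text = "Wikipedia is a free online <mask>, created and edited by volunteers around the world"
input_ids = tokenizer(text, return_tensors="pt").input_ids
logits = model(input_ids).logits

# score the vocabulary at the position of the single <mask> token
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.decode(predictions).split())  # "encyclopedia" should rank near the top
```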
I am asking because I would like to use BART to repair text where certain spans may be missing/ deleted. For example:
input: `"Wikipedia is a free online, created and edited by volunteers around the world"`
desired output: `"Wikipedia is a free online encyclopedia, created and edited by volunteers around the world"`
(Note the generation of "encyclopedia" but no <mask> token included in the input.) | 07-06-2021 13:58:52 | 07-06-2021 13:58:52 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,538 | closed | [WIP] Extend the testing of tokenizers that just have a legacy version | # What does this PR do?
**WIP** Fixes #12535
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. I will tag some reviewers once the PR is more advanced :blush:
| 07-06-2021 13:53:56 | 07-06-2021 13:53:56 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,537 | open | [WIP] flax gradient checkpointing | # What does this PR do?
Adds gradient checkpointing to flax models | 07-06-2021 13:47:28 | 07-06-2021 13:47:28 | Would be good to get some results on how much memory can be saved and if this is correctly implemented <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale |
transformers | 12,536 | closed | Attempting to load non-existent vocab files from cache leads to a breaking behavior offline (while non-breaking online) | Hello,
I found an error trying to use the `openai/clip-vit-base-patch32` model offline, after it had been downloaded and cached in previous runs. Digging through the code, the problem seems to arise from the fact that, when used online, attempting to download a file that does not exist (here `added_tokens.json`) leads to a 404 error, which simply sets `resolved_vocab_files[file_id]` to None (line 1692 of `tokenization_utils_base.py`) with no further repercussions on the model use.
However, using it offline, since the non-existent file was never found and thus never downloaded and cached, no file is matched by the `get_from_cache()` in `file_utils.py` and a ValueError is raised, preventing the use of the model offline ("Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.".)
Circumventing this ValueError similarly as what is done for the 404 leads to a perfectly working model (but I am guessing breaks in other cases).
I am guessing possible solutions are to add a hand-crafted exception for optional vocab files, or to cache the list of files that returned a 404 during the initial download?
Cheers,
Manuel
## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-5.8.0-59-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.16
- JaxLib version: 0.1.68
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik is the most relevant for the issue I would say, as it is a general problem during offline tokenizer download.
## Information
Model I am using (Bert, XLNet ...):
I am using the `openai/clip-vit-base-patch32` but the bug would be reproducible with other models. Here the issue is caused by the non-existance of the `added_tokens.json`.
## To reproduce
Steps to reproduce the behavior:
1. Download the CLIP model with the default scripts.
2. Put computer offline.
3. Attempt to run the same scripts offline, loading the files from cache.
## Expected behavior
The expected behavior is for offline use of the model to work once cached (better described above).
| 07-06-2021 13:20:26 | 07-06-2021 13:20:26 | Hello! I am unsure of which scripts you are using. However, when instantiating a model or a tokenizer, you should specify
the `local_files_only=True` parameter to the `from_pretrained` method when offline to ensure that the objects don't try to fetch files they can't access.<|||||>For every other file, they can be matched from cache even offline when `local_files_only=False`. Since `added_tokens.json` was never downloaded cause non-existent (and thus raises a 404 which is a considered case), setting `local_files_only` to True bypasses the `ValueError: Connection error ...` by instead raising a `FileNotFoundError` which is caught by the exception conditions l.1684 of `tokenization_utils_base.py`, and adds the non-existent file to the `unresolved_files` list, with a logger info message. On the other hand, the `ValueError` is not considered and interrupts program execution with an error message.
To allow users not to modify the `local_files_only` flag every time they go offline, wouldn't it be useful in this case to treat the `ValueError` the same way ? Adding it to the unresolved file list, logging a warning message and not letting the program interrupt itself for a failed resolution of a file that does not exist anyways ?
The scripts:
```
from PIL import Image
from transformers import CLIPProcessor, CLIPModel
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32", local_files_only=False)
# Setting flag to True breaks execution when offline - (ofc when tokenizer was already cached)
image = Image.open("/home/manu/perso/clip/harden.png").convert("RGB")
options = ["a photo of a man", "a photo of a cat"]
inputs = processor(text=options, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = logits_per_image.softmax(dim=1)
print(probs)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,535 | closed | Extend the testing of tokenizers that just have a legacy version | # 🚀 Feature request
For the moment, only tokenizers that have a rust version use the tests in the `test_tokenization_common.py` file that rely on the `self.tokenizers_list` attribute, which stores one (or several) model names hosted on the hub.
As mentioned by @LysandreJik in this [discussion](https://github.com/huggingface/transformers/pull/11810#discussion_r643083834_), it might be interesting not to limit these tests to tokenizers that have a rust version. Indeed, some tests that use `self.tokenizers_list` do not compare the behavior of the legacy tokenizer and the rust tokenizer but only test that each works correctly. Moreover, other tests which use the `self.tokenizers_list` attribute test a behavior of the rust version where one could expect the same behavior from the legacy version.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
This would allow rust-only and legacy-only tokenizers to be tested in an (almost) equivalent way.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I started working on it but some tests do not pass as is on legacy versions.
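The rough idea, expressed as a helper the common tester mixin could expose (a sketch; `tokenizer_class` and `rust_tokenizer_class` are the attributes the mixin already defines, everything else is an assumption):
```python
def iter_both_tokenizers(self, pretrained_name, **kwargs):
    # Yield the legacy (python) and the rust tokenizer for the same checkpoint,
    # skipping whichever one the test class does not define, so that shared
    # assertions run on both implementations.
    for tokenizer_class in (self.tokenizer_class, self.rust_tokenizer_class):
        if tokenizer_class is not None:
            yield tokenizer_class.from_pretrained(pretrained_name, **kwargs)
```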
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
cc @LysandreJik , @sgugger
| 07-06-2021 13:08:04 | 07-06-2021 13:08:04 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,534 | open | [Flax] from_pretrained does not consider the passed dtype | ## Environment info
When loading a Flax model with `from_pretrained`, the `dtype` argument is not used: the weights keep the dtype of the saved weights.
So if you do
```python
model = FlaxGPT2ForCausalLM.from_pretrained("gpt2", dtype=jnp.dtype("bfloat16"))
# check the dtype of one of the params
model.params["transformer"]["wpe"]["embedding"].dtype
=> dtype("float32")
```
We should probably cast the weights to `self.dtype`.
As a workaround for `bf16`, one could manually cast the weights with
```
def to_bf16(t):
return jax.tree_map(lambda x: x.astype(jnp.bfloat16) if x.dtype == jnp.float32 else x, t)
model.params = to_bf16(model.params)
```
cc @patrickvonplaten | 07-06-2021 12:51:26 | 07-06-2021 12:51:26 | I wonder whether this might be problematic for layer norm weights since those should usually always be of type `float32`, no?<|||||>Would love to hear what @avital @marcvanzee think here<|||||>I think it's fine to manually port weights to bfloat16 if you want to. In general all Flax layers accept a dtype attribute when it's safe to do intermediate computation in bloat16 and you can set dtype=bfloat16 for those layers. Keeping parameters as bfloat16 should only be necessary if the model is huge and the parameters can't fit on device memory, from what I know. I think it's tricky to get that right and requires careful attention to which parameters are safe to keep in bfloat16, but I don't have too much personal context here. I can ask others if that helps.
So I'm first curious whether indeed it's necessary to keep parameters as bfloat16 in this case, and if so, why<|||||>hello so <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>will soon be taken care of by @patil-suraj :-) |
transformers | 12,533 | closed | The "additional_special_tokens" argument in the ".from_pretrained" method of the tokenizer is not necessarily taken into account. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): Tapas
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
To easily reproduce the behavior, you can directly open this [Google colab](https://colab.research.google.com/drive/1zKvkdmH2blYO4dx1s6OojTkDQcBGd35L?usp=sharing)
Steps to reproduce the behavior:
1. With the latest version of transformers, initialize a Tapas tokenizer with a new added token `"<special>"`
```
from transformers import TapasTokenizer
added_tokens = ["<special>"]
pretrained_name = "google/tapas-base"
tokenizer_p = TapasTokenizer.from_pretrained(
pretrained_name, additional_special_tokens=added_tokens
)
```
2. Check if the new added token has been taken into account
```
"<special>" in tokenizer_p.get_vocab()
```
Output: `False`
```
tokenizer_p.additional_special_tokens # I would expect "<special>" in the list
```
Output: `['[EMPTY]']`
```
special_token_id = tokenizer_p.convert_tokens_to_ids(["<special>"])
print(tokenizer_p.convert_ids_to_tokens(special_token_id))
```
Output: `['[UNK]']`
3. See how an input containing `"<special>"` is tokenized
```
import pandas as pd
query = "Hey this is a <special> token"
data = [
["Pos"],
["1"],
["2"],
]
table = pd.DataFrame.from_records(data[1:], columns=data[0])
p_output = tokenizer_p.encode(table, query)
```
Output: `['[CLS]', 'hey', 'this', 'is', 'a', '<', 'special', '>', 'token', '[SEP]', 'po', '##s', '1', '2']`
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would have hoped that the special token would be taken into account. In other words, I would have hoped to have the following outputs for each of the commands:
```
"<special>" in tokenizer_p.get_vocab()
```
Output: `True`
```
tokenizer_p.additional_special_tokens # I would expect "<special>" in the list
```
Output: `['<special>', '[EMPTY]']`
```
special_token_id = tokenizer_p.convert_tokens_to_ids(["<special>"])
print(tokenizer_p.convert_ids_to_tokens(special_token_id))
```
Output: `['<special>']`
```
import pandas as pd
query = "Hey this is a <special> token"
data = [
["Pos"],
["1"],
["2"],
]
table = pd.DataFrame.from_records(data[1:], columns=data[0])
p_output = tokenizer_p.encode(table, query)
```
Output: `['[CLS]', 'hey', 'this', 'is', 'a', '<special>', 'token', '[SEP]', 'po', '##s', '1', '2']`
<!-- A clear and concise description of what you would expect to happen. -->
## Explanation ideas
I have the impression that when the `"additional_special_tokens"` key is filled in the `special_tokens_map.json` file then only the associated list is kept for the `additional_special_tokens` attribute. The content in the `additional_special_tokens` argument in ` TapasTokenizer.from_pretrained(pretrained_name, additional_special_tokens=added_tokens)` is then ignored.
Is this the behavior we want to have?
From the test [here](https://github.com/huggingface/transformers/blob/master/tests/test_tokenization_common.py#L3125) I have the impression that it is not -- it would fail for `TapasTokenizer` with "google/tapas-base", which is not tested at the moment but will be as part of issue #12535.
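In the meantime, explicitly registering the token after loading works as a stopgap (a sketch; this behaviour is well established for BERT-style slow tokenizers, so treat the Tapas specifics as an assumption):
```python
from transformers import TapasTokenizer

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["<special>"]})

print("<special>" in tokenizer.get_vocab())  # expected: True
print(tokenizer.additional_special_tokens)   # expected to include "<special>"
```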
| 07-06-2021 12:50:11 | 07-06-2021 12:50:11 | Indeed, it seems the `TapasTokenizer` isn't initialized correctly here; this should be patched!<|||||>Seems like the `MBart50TokenizerFast` is facing the same issue.
One can see here how the special tokens are set according to the lang_codes:
https://github.com/huggingface/transformers/blob/dc42e770b86b737251ce5b83f6b0606fe1cd3548/src/transformers/models/mbart/tokenization_mbart50_fast.py#L144
In contrast to how the regular mbart Tokenizer handles both the additional_special_tokens and the language codes:
https://github.com/huggingface/transformers/blob/dc42e770b86b737251ce5b83f6b0606fe1cd3548/src/transformers/models/mbart/tokenization_mbart.py#L123-L127<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Normally the problem should have been solved by the above mentioned PR. Please let me know if this is not the case! :slightly_smiling_face: |
transformers | 12,532 | closed | Flax Save/Load from base model with different name | The following doesn't work correctly at the moment - loading a base model into a head from PyTorch model when the names are different. *E.g.*:
```python
from transformers import RobertaModel, FlaxRobertaForMaskedLM, RobertaConfig
model = RobertaModel(RobertaConfig())
model.save_pretrained("./")
FlaxRobertaForMaskedLM.from_pretrained("./", from_pt=True)
```
=> Many weights are incorrectly loaded here. I know what the problem is, but it'll require ~1h to solve. Will take a look later today. | 07-06-2021 11:44:46 | 07-06-2021 11:44:46 | cc @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,531 | closed | run_clm_no_trainer.py ModuleNotFoundError | Hello,
I am trying to run the run_clm_no_trainer.py script (since I'm training on MIMIC data), and I get this error:
ModuleNotFoundError: No module named 'datasets_modules.datasets.mimic_string'
Running on colab - This is the code:
!python3 'gdrive/My Drive/UmlsBERT-master/language-modeling/run_clm_no_trainer.py' --output_dir 'gdrive/My Drive/UmlsBERT-master/language-modeling/models/clinicalBert-v1' --model_name_or_path emilyalsentzer/Bio_ClinicalBERT --learning_rate 5e-5 --block_size 128 --seed 42 --dataset_config_name 'gdrive/My Drive/UmlsBERT-master/language-modeling/config.json' --dataset_name 'gdrive/My Drive/UmlsBERT-master/language-modeling/mimic_string.txt'
Here is the full output:
```
2021-07-06 10:08:00.087779: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
07/06/2021 10:08:01 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda
Use FP16 precision: False
Traceback (most recent call last):
  File "gdrive/My Drive/UmlsBERT-master/language-modeling/run_clm_no_trainer.py", line 472, in <module>
    main()
  File "gdrive/My Drive/UmlsBERT-master/language-modeling/run_clm_no_trainer.py", line 241, in main
    raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name)
  File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 838, in load_dataset
    **config_kwargs,
  File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 687, in load_dataset_builder
    builder_cls = import_main_class(module_path, dataset=True)
  File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 91, in import_main_class
    module = importlib.import_module(module_path)
  File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 953, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'datasets_modules.datasets.mimic_string'
```
Can't figure out how to fix this and looking for advice. I am fairly new to this kind of programming so apologies if the solution is something obvious. Happy to provide more information if required. Thank you!
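For reference, the stock script takes local text through `--train_file` / `--validation_file`; `--dataset_name` is reserved for datasets hosted on the hub, which is why `load_dataset` tries to import the file as a dataset module here. A hedged sketch of the invocation (paths taken from the command above, flag names assume the unmodified script):
```bash
python run_clm_no_trainer.py \
  --model_name_or_path emilyalsentzer/Bio_ClinicalBERT \
  --train_file "gdrive/My Drive/UmlsBERT-master/language-modeling/mimic_string.txt" \
  --block_size 128 \
  --learning_rate 5e-5 \
  --output_dir "gdrive/My Drive/UmlsBERT-master/language-modeling/models/clinicalBert-v1"
```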
@sgugger | 07-06-2021 10:55:01 | 07-06-2021 10:55:01 | This looks like a problem with the dataset you are using and a missing package? The script does not use `mimic_string` so the import error can't come from it.<|||||>Thank you! |
transformers | 12,530 | closed | Is there a Bert version of the OpenAIGPTLMHeadModel? | # 🚀 Feature request
I am trying to use the bert-base-chinese model to evaluate the fluency of Chinese texts. I noticed sample code that uses openai-gpt for English and employs the class OpenAIGPTLMHeadModel. However, since only the bert-base-chinese model is available for Chinese, I am wondering whether there is an equivalent in BERT.
The sample code is attached below:
```python
import math, torch
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer
model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt', cache_dir='gpt')
tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt', cache_dir='gpt')
sentence = "there is a book on the desk"  # example input
tokenize_input = tokenizer.tokenize(sentence)
tensor_input = torch.tensor([tokenizer.convert_tokens_to_ids(tokenize_input)])
loss = model(tensor_input, labels=tensor_input).loss  # current API uses `labels`, not `lm_labels`
print(math.exp(loss.item()))  # perplexity
```
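For the Chinese case, a common alternative is pseudo-perplexity with a masked LM: mask one position at a time and average the negative log-probability of the true token. A sketch (model name and test sentence are placeholders):
```python
import math
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
model.eval()

def pseudo_perplexity(sentence):
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    nlls = []
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits
        log_probs = logits[0, i].log_softmax(dim=-1)
        nlls.append(-log_probs[input_ids[i]].item())
    return math.exp(sum(nlls) / len(nlls))

print(pseudo_perplexity("今天天气很好"))
```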
| 07-06-2021 08:34:23 | 07-06-2021 08:34:23 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,529 | closed | Load Trainer state | # 🚀 Feature request
Hi,
I wonder if there is a method to load a saved state of the trainer, so that I can continue the training loop from where I started.
If I understand it right, I need to load the saved states of _model_, _optimizer_ and _scheduler_.
## Motivation
However, it would be much more convenient to have all of this done in one method, when you only pass the `results` folder path to the method.
## Your contribution
If there is not, I can submit a PR, since it does not seem too hard to implement.
| 07-06-2021 08:16:33 | 07-06-2021 08:16:33 | cc @sgugger <|||||>How would this be different from
```
trainer.train(resume_from_checkpoint=True)
```
?<|||||>That's pretty similar, still, when one needs to use `trainer.predict` of a trained model, for instance.<|||||>You don't need the optimizer and scheduler states for this, just the model.<|||||>This makes sense, thank you! Closing the issue.
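For the `trainer.predict` case mentioned above, loading only the model is enough; roughly (a sketch with placeholder paths and a placeholder model class):
```python
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

def predict_from_checkpoint(checkpoint_dir, test_dataset):
    # Only the model weights are needed for prediction; the optimizer and
    # scheduler states stored in the checkpoint can be ignored.
    model = AutoModelForSequenceClassification.from_pretrained(checkpoint_dir)
    trainer = Trainer(model=model, args=TrainingArguments(output_dir=checkpoint_dir))
    return trainer.predict(test_dataset)
```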
|
transformers | 12,528 | closed | FlaxRobertaModel.from_pretrained does not load weights correctly | ## Environment info
- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu)
- Jax version: 0.2.13
- JaxLib version: 0.1.66
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
`from_pretrained` does not work correctly for the `roberta-base` Flax model. I have not checked if this also affects other models.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import FlaxRobertaModel
model = FlaxRobertaModel.from_pretrained("roberta-base")
```
prints:
```
Some weights of the model checkpoint at roberta-base were not used when initializing FlaxRobertaModel: {('encoder', 'layer', '9', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '0', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '2', 'output', 'layer_norm', 'beta'), ('embeddings', 'word_embeddings', 'weight'), ('encoder', 'layer', '4', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '4', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '8', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '10', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '1', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '11', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '10', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '6', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '7', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '5', 'output', 'layer_norm', 'beta'), ('layer_norm', 'weight'), ('encoder', 'layer', '7', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '11', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '4', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '0', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '1', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '5', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '5', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '1', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '3', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '10', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '0', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '3', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '6', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '8', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '0', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '7', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '8', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '1', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '2', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '9', 'attention', 'layer_norm', 'gamma'), ('decoder', 'weight'), ('encoder', 'layer', '9', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '2', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '11', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '3', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '4', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '7', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '11', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '9', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '2', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '6', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '11', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '6', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '4', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '10', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '5', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '5', 'output', 'layer_norm', 'gamma'), ('embeddings', 'token_type_embeddings', 'weight'), ('layer_norm', 'bias'), ('encoder', 'layer', '10', 'attention', 'layer_norm', 'gamma'), ('embeddings', 'layer_norm', 'gamma'), ('encoder', 'layer', '1', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '6', 'output', 'layer_norm', 'gamma'), 
('encoder', 'layer', '4', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '2', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '8', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '9', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '3', 'output', 'layer_norm', 'beta'), ('dense', 'kernel'), ('encoder', 'layer', '8', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '10', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '8', 'attention', 'self', 'out', 'kernel'), ('bias',), ('dense', 'bias'), ('encoder', 'layer', '3', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '7', 'output', 'layer_norm', 'beta'), ('encoder', 'layer', '9', 'output', 'layer_norm', 'gamma'), ('encoder', 'layer', '0', 'output', 'layer_norm', 'beta'), ('embeddings', 'layer_norm', 'beta'), ('encoder', 'layer', '0', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '2', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '6', 'attention', 'layer_norm', 'gamma'), ('encoder', 'layer', '1', 'attention', 'self', 'out', 'bias'), ('encoder', 'layer', '11', 'attention', 'layer_norm', 'beta'), ('embeddings', 'position_embeddings', 'weight'), ('encoder', 'layer', '5', 'attention', 'self', 'out', 'kernel'), ('encoder', 'layer', '3', 'attention', 'layer_norm', 'beta'), ('encoder', 'layer', '7', 'attention', 'layer_norm', 'beta')}
- This IS expected if you are initializing FlaxRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing FlaxRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of FlaxRobertaModel were not initialized from the model checkpoint at roberta-base and are newly initialized: {('encoder', 'layer', '6', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '3', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '8', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '8', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '4', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '9', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '9', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '9', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '7', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '10', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '9', 'attention', 'output', 'LayerNorm', 'scale'), ('embeddings', 'position_embeddings', 'embedding'), ('encoder', 'layer', '3', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '2', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '1', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '5', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '5', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '5', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '0', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '6', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '2', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '1', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '4', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '5', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '0', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '10', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '9', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '8', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '2', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '11', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '4', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '3', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '0', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '2', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '2', 'attention', 'output', 'LayerNorm', 'bias'), ('embeddings', 'LayerNorm', 'scale'), ('encoder', 'layer', '8', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '11', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '4', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '0', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '11', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '0', 'output', 'LayerNorm', 'scale'), ('embeddings', 'token_type_embeddings', 'embedding'), ('embeddings', 'LayerNorm', 'bias'), ('encoder', 'layer', '5', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '11', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '10', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '4', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '3', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '7', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '0', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '7', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '7', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '9', 
'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '6', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '1', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '10', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '6', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '11', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '8', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '8', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '6', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '7', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '7', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '4', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '1', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '1', 'attention', 'output', 'dense', 'kernel'), ('encoder', 'layer', '3', 'attention', 'output', 'dense', 'bias'), ('embeddings', 'word_embeddings', 'embedding'), ('encoder', 'layer', '1', 'attention', 'output', 'LayerNorm', 'bias'), ('encoder', 'layer', '5', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '11', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '2', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '10', 'attention', 'output', 'dense', 'bias'), ('encoder', 'layer', '3', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '10', 'attention', 'output', 'LayerNorm', 'scale'), ('encoder', 'layer', '6', 'attention', 'output', 'LayerNorm', 'bias')}
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Apparently almost all parameters are newly initialized, potentially because of different naming conventions e.g. the key `('embeddings', 'word_embeddings', 'weight')` exists but the key `('embeddings', 'word_embeddings', 'embedding')` is expected.
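(Not part of the original report: as a quick diagnostic sketch, the parameter keys the Flax model actually expects can be listed by flattening its nested parameter dict and compared against the keys named in the warning above.)
```python
from flax.core.frozen_dict import unfreeze
from flax.traverse_util import flatten_dict
from transformers import FlaxRobertaModel

model = FlaxRobertaModel.from_pretrained("roberta-base")
# flatten_dict turns the nested params into tuple keys, e.g. ('embeddings', 'word_embeddings', 'embedding')
flat_params = flatten_dict(unfreeze(model.params))
print(sorted(flat_params.keys())[:10])
```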
Using `from_pt=True` fixes the issue:
```python
from transformers import FlaxRobertaModel
model = FlaxRobertaModel.from_pretrained("roberta-base", from_pt=True)
```
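(Not part of the original report: as a stopgap sketch, the weights converted with `from_pt=True` can be re-saved as a native Flax checkpoint so that later loads no longer need the conversion step; the local directory name is just an example.)
```python
from transformers import FlaxRobertaModel

# Convert the PyTorch weights once, then persist them as a native Flax checkpoint
model = FlaxRobertaModel.from_pretrained("roberta-base", from_pt=True)
model.save_pretrained("./roberta-base-flax")

# Later loads can use the local Flax weights directly, without the conversion step
model = FlaxRobertaModel.from_pretrained("./roberta-base-flax")
```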
I don't know if this behavior is intentional; I would have expected an error in that case.
## Expected behavior
I would expect no parameters to be randomly initialized, and the same keys to be unused as when running `RobertaModel.from_pretrained("roberta-base")`.
Here's a colab to reproduce the issue: https://colab.research.google.com/drive/12sDGObc3C1TxW9-k2j4MF69qgpM2kNkP?usp=sharing
Thanks for any help! | 07-06-2021 08:06:21 | 07-06-2021 08:06:21 | cc @patrickvonplaten @patil-suraj <|||||>Thank you for reporting this. Yes, the uploaded checkpoints seem to be wrong.
@patrickvonplaten we should verify and re-upload all Roberta flax checkpoints.<|||||>Oh oh this doesn't look good at all :-/ <|||||>I'll re-upload the roberta checkpoints. This definitely seems to be a bug<|||||>Ok this should be fixed! Thanks a lot for pointing it out @bminixhofer ! I also verified that `roberta-large` and other roberta models work fine.<|||||>Works for me too now. Thanks for the quick fix! |
transformers | 12,527 | closed | [Examples][Flax] AttributeError: 'DataTrainingArguments' object has no attribute 'test_file' | ## Description
While running run_summarization_flax.py with local files, `DataTrainingArguments` currently defines only two file arguments, one for the training file and one for the validation file, yet the script still validates `test_file`, which produces the following error:
```
Traceback (most recent call last):
File "transformers/examples/flax/summarization/run_summarization_flax.py", line 808, in <module>
main()
File "transformers/examples/flax/summarization/run_summarization_flax.py", line 352, in main
if data_args.test_file is not None:
AttributeError: 'DataTrainingArguments' object has no attribute 'test_file'
```
## Environment info
- `transformers` version: 4.9.0 (master branch)
- Platform: TPU VM
- Python version: 3.9
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@sgugger, @patil-suraj
### Possible Fix:
Either we can add a `test_file` argument to `DataTrainingArguments` or remove the test-file validation section: https://github.com/huggingface/transformers/blob/7d6285a921a23c06169e2d90c94faa0d92d00d78/examples/flax/summarization/run_summarization_flax.py#L352-L354
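A sketch of the first option, assuming the same `dataclasses.field` pattern the script already uses for `train_file` and `validation_file` (the help text here is illustrative):
```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DataTrainingArguments:
    # ... existing fields such as train_file and validation_file ...
    test_file: Optional[str] = field(
        default=None,
        metadata={"help": "An optional test data file (jsonlines or csv) to evaluate on."},
    )
```
Alternatively, the check in the script could simply be guarded, e.g. `if getattr(data_args, "test_file", None) is not None:`.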
| 07-06-2021 04:33:27 | 07-06-2021 04:33:27 |