repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 13,027 | closed | Get multiple results from Hugging Face pipeline library | I am using the simple transformers library to get auto-suggested text for my question based on a context.
It gives me a single suggestion; is there any way to get multiple results for the same input?
```python
from transformers import pipeline

text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is ? context: 42 is the answer to life, the universe and everything")
```
The "text2text-generation" model uses T5 model on its backend.
I tried something like:
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
input_ids = tokenizer("question: What is ? context: 42 is the answer to life, the universe and everything", return_tensors="pt").input_ids  # Batch size 1
outputs = model.generate(input_ids, do_sample=True, top_k=5)
tokenizer.decode(outputs[0])
```
Nothing works! | 08-06-2021 12:42:49 | 08-06-2021 12:42:49 | Hello! The `Text2TextGenerationPipeline` accepts any keyword arguments to be handled by the `generate` method that does the generation under the hood. You can check the input signature of that method here to see what arguments it accepts: [generate](https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate)
Namely, it accepts the `do_sample` argument. You can try it out:
```py
>>> from transformers import pipeline
>>> text2text_generator = pipeline("text2text-generation")
>>> text2text_generator("question: What is ? context: 42 is the answer to life, the universe and everything", do_sample=True, min_length=15)
[{'generated_text': '42 also is the answer to life, the universe and everything but the universe'}]
>>> text2text_generator("question: What is ? context: 42 is the answer to life, the universe and everything", do_sample=True, min_length=15)
[{'generated_text': '42 is the answer to life, the universe and everything but the universe'}]
>>> text2text_generator("question: What is ? context: 42 is the answer to life, the universe and everything", do_sample=True, min_length=15)
[{'generated_text': 'The answer to life, the universe and everything to everybody. 42 is the answer'}]
```<|||||>Great, that adds a lot of value to the output. Thanks @LysandreJik <|||||>Can I get multiple results in a single generation?
I think it may be a bug: `num_return_sequences` only takes effect in

```python
model_outputs = self.forward(model_inputs, **forward_params)
```

but in

```python
outputs = self.postprocess(model_outputs, **postprocess_params)
```

the decoding always returns a single result:

```python
record = {
    f"{self.return_name}_text": self.tokenizer.decode(
        model_outputs["output_ids"][0],
        skip_special_tokens=True,
        clean_up_tokenization_spaces=clean_up_tokenization_spaces,
    )
}
```
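A possible workaround for now (a sketch only, not a fix to the pipeline itself; it assumes a seq2seq checkpoint such as `t5-small`) is to call `generate` directly with `num_return_sequences` and decode every returned sequence:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer(
    "question: What is ? context: 42 is the answer to life, the universe and everything",
    return_tensors="pt",
).input_ids

# Sample several candidates in a single generate call ...
outputs = model.generate(input_ids, do_sample=True, top_k=5, num_return_sequences=3)

# ... and decode every returned sequence, not just outputs[0].
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
``` |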
transformers | 13,026 | closed | Update model configs - Allow setters for common properties | # Update model configs - Allow setters for common properties
Not all models use the same naming for config values; e.g. `hidden_size` is called `n_embd` in GPT2Config. So far, getters had been implemented in the config classes so that a GPT2Config can be read via `config.hidden_size`.
But the setters were missing, so this code currently fails:
```python
from transformers import GPT2Config
config = GPT2Config()
config.hidden_size = 4 # Fails
config = GPT2Config(hidden_size =4) # Fails
```
## Changes
This PR adds an `attribute_map` to the config classes that maps the config parameters. For GPT2, this map looks like this:
```python
attribute_map = {"hidden_size": "n_embd",
"max_position_embeddings": "n_positions",
"num_attention_heads": "n_head",
"num_hidden_layers": "n_layer"
}
```
The `PretrainedConfig` class overrides `__setattr__` and `__getattribute__` to check for mappings in the `attribute_map`:
```python
def __setattr__(self, key, value):
if key in super().__getattribute__('attribute_map'):
key = super().__getattribute__('attribute_map')[key]
super().__setattr__(key, value)
def __getattribute__(self, key):
if key != 'attribute_map' and key in super().__getattribute__('attribute_map'):
key = super().__getattribute__('attribute_map')[key]
return super().__getattribute__(key)
```
## Advantages
- Setters work, i.e. you can use `config.hidden_size = 4` and `GPT2Config(hidden_size=4)`
- No need to write individual getter or setter methods in the config classes; they are derived from the `attribute_map`
## Detailed changes
- `PretrainedConfig`: Add `__setattr__` and `__getattribute__` methods. Added docstring for `attribute_map`
- `GPT2Config`: Add attribute map, remove old getters
- `test_configuration_common.py`: Update `create_and_test_config_common_properties` method so that it tests that setters exist and work
## ~~Work in Progress~~
~~So far I only updated the GPT2Config to get your feedback. Unit-Tests for other config classes that have not yet been updated (i.e. don't provide setters for the common fields) like the GPTNeo config class will fail.~~
~~Once the design of the solution is approved, I will update all other config classes.~~
Update: All config classes updated
## Fixes
- #12907
- #12183
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger @LysandreJik @NielsRogge
## Code to test the change
Besides the unit tests, you can use this code to test the changes quickly:
```python
from transformers import GPT2Config
config = GPT2Config()
config.hidden_size = 4
print("Hidden size", config.hidden_size, config.n_embd)
config.n_positions = 65
print("n_positions", config.max_position_embeddings, config.n_positions)
config.max_position_embeddings = 123
print("n_positions", config.max_position_embeddings, config.n_positions)
print("\n\n================\n\n")
## Note: conflicting arguments: hidden_size and n_embd are identical fields
# In that case, the synonym (hidden_size) will have higher priority
config = GPT2Config(hidden_size=4, n_embd=20, max_position_embeddings=80)
print("Hidden size", config.hidden_size, config.n_embd)
print("n_positions", config.max_position_embeddings, config.n_positions)
print("Export to json")
config.save_pretrained(".")
## Load config
print("Load from disc")
config = GPT2Config.from_pretrained('.')
print("Hidden size", config.hidden_size, config.n_embd)
print("n_positions", config.max_position_embeddings, config.n_positions)
assert config.hidden_size == config.n_embd
assert config.hidden_size == 4
assert config.max_position_embeddings == config.n_positions
assert config.max_position_embeddings == 80
``` | 08-06-2021 09:37:03 | 08-06-2021 09:37:03 | > The design looks good to me! I think we could have a few more common attributes, since we are in the process of adding them:
>
> * the vocab size (seems to be pretty consistent)
> * the embedding size
> * the inner size for the feed-forward layers
>
> Those on top of `max_position_embeddings` should all be included in `common_properties` so that we are sure they are common to each model.
My idea was to put this into an independent, new PR and to keep this PR focused on just changing the getters/setters.
My plan is to come up with a scheme for which attributes should be common. Here we can differentiate between model types: text (split into encoder-only and encoder-decoder), image, and audio.
I analyzed all 50+ config classes and these are the most common fields:
```
model_type 55
vocab_size 51
architectures 49
pad_token_id 42
max_position_embeddings 41
num_hidden_layers 40
initializer_range 36
eos_token_id 34
bos_token_id 32
hidden_size 32
layer_norm_eps 32
hidden_act 30
intermediate_size 30
num_attention_heads 29
hidden_dropout_prob 28
attention_probs_dropout_prob 26
transformers_version 25
type_vocab_size 23
attention_dropout 22
gradient_checkpointing 21
dropout 19
activation_dropout 18
d_model 17
init_std 17
activation_function 16
```
But as mentioned, I would put this in another PR.<|||||>Hi @sgugger @LysandreJik @patil-suraj @patrickvonplaten
I have also updated all other config classes so that they all use the `attribute_map`, meaning common properties (like `hidden_size`) can be set (`config.hidden_size = 123`) or passed as an argument (`MyConfigClass(hidden_size=123)`).
I kept the behavior of the config classes as is, i.e. no new getter methods were added; the config classes were just extended to allow setting the common properties.
If a setter method cannot be implemented for a class, an exception is raised:
https://github.com/huggingface/transformers/blob/c8973d1b5b2a498703e4308cba5056b5cbdaef12/src/transformers/models/funnel/configuration_funnel.py#L176
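For illustration, here is a hypothetical sketch of that pattern (class and attribute names are made up; the actual code is at the link above):

```python
class ExampleConfig:
    """Hypothetical sketch: a common attribute that can be read but not set directly."""

    def __init__(self, model_specific_value=8):
        self.model_specific_value = model_specific_value

    @property
    def common_attribute(self):
        return self.model_specific_value

    @common_attribute.setter
    def common_attribute(self, value):
        raise NotImplementedError(
            "Setting `common_attribute` directly is not supported; "
            "pass the model-specific constructor argument instead."
        )
```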
All unit tests are passing.
Would be happy if you could have a look at this PR.<|||||>@sgugger Will add a note to the docs
@patrickvonplaten Throwing an error is not easy.
`GPT2Config` defines `n_embd=768` in the `__init__` method, so:
`config = GPT2Config(hidden_size=4)`
and
`config = GPT2Config(hidden_size=4, n_embd=768)`
are identical calls of the constructor. We would expect the first call to work.
In order to throw an exception for method 2, we could do:
- Replace all default parameters with None, see if `hidden_size` is not set, then set `n_embd` to 768 => Major refactoring on all config classes would be needed with quite a lot of overhead. Further, default parameters would no longer be visible from the definition of the method.
- Check if `n_embd != hidden_size and n_embd != 768` => `config = GPT2Config(hidden_size=4, n_embd=8)` would throw an error, but `config = GPT2Config(hidden_size=4, n_embd=768)` would not raise an error (also not a nice solution). Also major refactoring would be needed as we would need to keep track of the default values for all parameters.
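To illustrate why the first option is heavy, here is a hypothetical sketch with `None` sentinels (illustrative only, not proposed code):

```python
# Hypothetical sketch: sentinel defaults make a real conflict detectable,
# but the actual default (768) no longer appears in the signature.
class ExampleConfig:
    def __init__(self, n_embd=None, hidden_size=None):
        if n_embd is not None and hidden_size is not None and n_embd != hidden_size:
            raise ValueError("Conflicting values passed for `n_embd` / `hidden_size`.")
        if hidden_size is not None:
            self.n_embd = hidden_size
        elif n_embd is not None:
            self.n_embd = n_embd
        else:
            self.n_embd = 768
```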
Do you have other ideas how this could be checked?<|||||>@sgugger
I updated the docs:
transformers/docs/source/main_classes/configuration.rst
And added a section on the common attributes. Please have a look.<|||||>Hi,
I just updated the PR with the newest commits from the master branch.
However, now the run_examples_torch fails in CircleCI:
```
==================================== ERRORS ====================================
______________ ERROR collecting examples/pytorch/test_examples.py ______________
ImportError while importing test module '/home/circleci/transformers/examples/pytorch/test_examples.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/usr/local/lib/python3.6/importlib/__init__.py:126: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
examples/pytorch/test_examples.py:51: in <module>
import run_image_classification
examples/pytorch/image-classification/run_image_classification.py:27: in <module>
from torchvision.transforms import (
E ModuleNotFoundError: No module named 'torchvision'
```
Not sure why this happens, as this PR is not touching run_image_classification.py
Is this an issue with CircleCI or with the specific unit test?<|||||>Hi @nreimers, it's not related to this PR. That test fails because `torchvision` is not installed on the CI (it is required by `run_image_classification.py`) for the examples tests. I've proposed a fix here #13438<|||||>Hi @patil-suraj
Thanks for the quick response.
What are the next steps for this PR? Wait until #13438 is merged and then, when all tests are passing, merging this PR?
Who will be merging this PR? Should I do it once all tests are passing?<|||||>The failed test is not related to this PR and all of us have approved this PR, so feel free to merge if everything is ready :) |
transformers | 13,025 | closed | MT5-large model on hub has wrong config | The MT5-large model [config](https://huggingface.co/google/mt5-large/blob/main/config.json) has the wrong `"architectures"` and `"tokenizer_class"` parameters:
```json
{
"architectures": [
"T5ForConditionalGeneration"
],
"tokenizer_class": "T5Tokenizer"
}
```
whereas it should use the MT5 architecture and tokenizer:
```json
{
"architectures": [
"MT5ForConditionalGeneration"
],
"tokenizer_class": "MT5Tokenizer"
}
```
@patrickvonplaten | 08-06-2021 09:00:40 | 08-06-2021 09:00:40 | great catch! Correcting it now<|||||>Reopening in 4.9.2 since it is difficult to read the model. The problem is either here:
https://github.com/huggingface/transformers/blob/v4.9.2/src/transformers/models/auto/tokenization_auto.py#L329 in`TOKENIZER_MAPPING` or here https://github.com/huggingface/transformers/blob/v4.9.2/src/transformers/models/mt5/__init__.py#L35
`MT5Tokenizer` is just `T5Tokenizer` so when we call `class.__name__` it reduces to `T5Tokenizer` and `MT5Tokenizer` is not in the list. The trick however is to pass `tokenizer_class=None` to `from_pretrained` and it reduces to `T5Tokenizer`:
```
AutoTokenizer.from_pretrained(
"google/mt5-large",
tokenizer_class=None,
)
```
Now, it works on master though.<|||||>@dkajtoch mt5-large tokenizer class is incorrect in config.json: https://huggingface.co/google/mt5-large/blob/main/config.json
It should be corrected as `"tokenizer_class": "T5Tokenizer",`.
Refer to mt5-base https://huggingface.co/google/mt5-base/blob/main/config.json and mt5-xl https://huggingface.co/google/mt5-xl/blob/main/config.json configs.<|||||>@dkajtoch interesting catch, but I do not understand why this problem is happening ?
Isn't `MT5Tokenizer` already in the AutoTokenizer list (as an alias for `T5Tokenizer`)? @patrickvonplaten <|||||>@devrimcavusoglu it is, but the class is not called `MT5Tokenizer`. It would have been if the authors had done something like this
```
class MT5Tokenizer(T5Tokenizer):
pass
```
Instead of `MT5Tokenizer = T5Tokenizer`
because `tokenizer_class_from_name ` matches tokenizer via reference to class name i.e. `c.__name__`<|||||>> @devrimcavusoglu it is but the class is not called `MT5Tokenizer`. It would have been if the authors did something like this
>
> ```
> class MT5Tokenizer(T5Tokenizer):
> pass
> ```
>
> Instead of `MT5Tokenizer = T5Tokenizer`
>
> because `tokenizer_class_from_name ` matches tokenizer via reference to class name i.e. `c.__name__`
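To see the point concretely, here is a standalone illustration (stand-in class names, independent of `transformers`): an alias keeps the original class's `__name__`, while a subclass gets its own:

```python
class Base:
    pass

Alias = Base              # a plain alias: the object is still the class `Base`

class Subclass(Base):     # a real subclass gets its own name
    pass

print(Alias.__name__)     # "Base" -> a lookup keyed on the alias name misses
print(Subclass.__name__)  # "Subclass"
```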
My mistake, I thought `MT5Tokenizer` was a class exactly like you said :sweat_smile: turns out I remember incorrectly. So the next step would be
1) create a class for `MT5Tokenizer` rather than a variable.
2) change `"tokenizer_class": "MT5Tokenizer"` as `"tokenizer_class": "T5Tokenizer"` in mt5 model configs.
I think (1) is more solid and nicer way. wdyt ? @patrickvonplaten @dkajtoch<|||||>Maybe the fix is necessary or maybe not since on master 4.10.0-dev0 it works by switching to T5Tokenizer. However, all previous versions will be broken so it is better to change the config back -> only large has `MT5Tokenizer` :P<|||||>@dkajtoch previous mt5-large config was incorrect. If you switch it back, it will work incorrectly.
You only need to fix tokenizer class in mt5-large config as `"tokenizer_class": "T5Tokenizer",`.<|||||>@fcakyon I see in history that not only tokenizer_class was changed 👍 Ok so just the tokenizer_class needs to be updated in config.json<|||||>@dkajtoch thats right 👍 <|||||>@dkajtoch @patrickvonplaten any ETA on the fix?<|||||>Sorry what's the problem here exactly?
```python
tok = AutoTokenizer.from_pretrained("google/mt5-large")
```
works fine when I try it.<|||||>`MT5Tokenizer` is an alias to `T5Tokenizer`, so it doesn't really matter which one we put in the config. For consistency, it's True that `T5Tokenizer` might make more sense<|||||>
@patrickvonplaten This happens at most in `4.9.2` and the current master version is ok.
<|||||>Gotcha! Thanks for clarifying! Updating the config now<|||||>Done! Sorry about that! |
transformers | 13,024 | closed | [Flax] Refactor gpt2 & bert example docs | # What does this PR do?
This PR mainly refactors the docs of the official Flax MLM, CLM examples. The CLM training script is also slightly changed for consistency with the MLM script. | 08-06-2021 08:43:16 | 08-06-2021 08:43:16 | |
transformers | 13,023 | closed | Disentangle auto modules from other modeling files | # What does this PR do?
This PR cleans up the auto modules to have them rely on string mappings and dynamically import the model when they are needed, instead of having a hard dependency on every modeling file.
There is no breaking changes are all the MAPPING classes are still present and will behave like regular dictionaries, just loading the objects as needed. On the internal tooling side, this allows us to remove the script that was extracting the names of the auto-mapping (since we have them now) and the file that stored them. | 08-06-2021 07:18:30 | 08-06-2021 07:18:30 | |
transformers | 13,022 | closed | GPT-J-6B | # What does this PR do?
Introduces the long awaited `GPT J` model class to HuggingFace! Concurrently with this PR being merged I will make a GPT J 6B checkpoint public on the EleutherAI HF page for people to use. The model has been evaluated as being within error tolerances of the GPT J 6B model we released in Jax two months ago.
@patil-suraj was very helpful in assisting me to understand HF philosophy and how to make this PR most in line with the rest of the codebase. Other than that, the major design consideration was to make the configs compatible with GPT-2 rather than GPT-Neo. GPT-Neo has some usability limitations due to its configs having names unrelated to GPT-2’s (see #12183 for details). Given those problems and my hope that GPT-Neo will have it’s configs updated in the future, it seemed like a clear choice to align GPT J with GPT-2.
Shout-outs to @finetuneanon, whose implementation this one is based on, as well as @kumuruz for assistance optimizing and debugging.
Supersedes #12243 #13010 #13022
Closes #12098
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
It was discussed in Slack with @patil-suraj
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
- gpt2: @patrickvonplaten, @LysandreJik, @patil-suraj | 08-06-2021 05:18:32 | 08-06-2021 05:18:32 | There are six failed tests.
Four of them relate to Flax and TF models which I did not add. I may have left some boilerplate code indicating the existence of such models by accident.
One of them relates to docstring issues. I’ll double check the docstrings, but these issues have no impact on the functionality of the model.
One of them appears to be a basic quality assurance check. The code says
```
assert 2 == 3
def test_answer():
> assert 1 + 1 == 3
E assert 2 == 3
test_sample.py:2: AssertionError
```
but I have no idea what this means. Assistence it advice would be appreciated.<|||||>Looks like a boilerplate test that never got filled out/removed<|||||>I don't see any test_sample.py in your branch. I think this file is not committed and probably some example test that somehow ended up in your local copy?<|||||>There will also be a tricky merge with the result of #13023, let us know if you need any help with that.<|||||>I'm glad y'all approve :)
I made many of the recommended changes, but have some stuff to take care of today. I'll work on addressing them all and tag you guys when it's ready for re-review.<|||||>I have been trying to work on this, but I have [a few](https://xkcd.com/1070/) lingering questions that have been slowing my progress.
1. Is there a certain motivation for [`Attention`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/openai/modeling_openai.py#L156-L157), [`GPT2Attention`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt2/modeling_gpt2.py#L150-L155), [`MLP`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/openai/modeling_openai.py#L237-L238), and [`GPT2MLP`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt2/modeling_gpt2.py#L275-L276) to use [`torch.nn.Conv1D`](https://pytorch.org/docs/1.9.0/generated/torch.nn.Conv1d.html)? [`GPTNeoSelfAttention`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L321-L324), [`GPTNeoLocalSelfAttention`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L390-L393) and [`GPTNeoMLP`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L521-L522) buck the trend by using [`torch.nn.Linear`](https://pytorch.org/docs/1.9.0/generated/torch.nn.Linear.html) instead. The last instance also maintains the inappropriate prefix of `c_` in the layer name (which I presume is to indicate that it is a convolutional layer).
2. [`GPT2Config`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt2/configuration_gpt2.py#L188-L202) and [`OpenAIGPTConfig`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/openai/configuration_openai.py#L162-L176) both alias four arguments of the constructor by defining four new properties. [`GPTNeoConfig`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt_neo/configuration_gpt_neo.py#L174-L180) (in its perpetual inconsistency) only maintains two of them while using names that are also not consistent with `OpenAIGPTConfig` and `GPT2Config`. Why maintain these redundant properties? If the intention is to use them to rename constructor arguments rather than to provide dual access, they could be simply renamed in the initial block of the constructor. Even worse, many values that are initialized from the config do not maintain continuality in their naming, a problem that spans all four models discussed here. (To demonstrate, trace how [`GPT2Config.n_embd` is aliased to `GPT2Config.hidden_size`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt2/configuration_gpt2.py#L192-L194) so that it can be [later accessed via `GPT2Config.hidden_size` and assigned to `GPT2Attention.embed_dim`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gpt2/modeling_gpt2.py#L138))
3. Are we maintaining [`GPTJForSequenceClassification`](https://github.com/huggingface/transformers/blob/24ac25a07ba80b9b7b6e396887305437478398ff/src/transformers/models/gptj/modeling_gptj.py#L781-L892) or not? @StellaAthena [removed the reference to it](https://github.com/huggingface/transformers/pull/13022/commits/4efbbeca8fe4674abf42d253b6dcdd70077cdebf) but @patil-suraj [restored that reference](https://github.com/huggingface/transformers/pull/13022/commits/24ac25a07ba80b9b7b6e396887305437478398ff). Defining the scope of this PR is likely a good idea to prevent counterproductive work.<|||||>Hi @EricHallahan
1. The original GPT2 used the conv1d layer instead of the `linear` (it's essentially linear but just keeps weights transposed). It's rather confusing why they chose that name, so we try not to use it anymore. The names are there just for historical reasons :D, Linear is well known and easy to understand and no transpose is needed when doing the computation.
2. The `hidden_size`, `max_position_embeddings`, `num_hidden_layers`, `num_attention_heads` are common attribute across all configs. They help enable some common tests for configs and models. Note that GPT2 and GPT were added before introducing these attributes so now they are aliased. Simply renaming the constructor argument is not an option since it'll break backward compatibility. If constructor arguments are renamed thousands of GPT2 models on the hub will fail to load since their config is already defined. As much as we would like to do that, it's not an option.
Also, note that when GPT2 was added `transformers` was a bit new and has evolved a lot since then so things and guidelines have changed a bit. In general, we try to use these new names whenever possible, but for this model, I think it's fine to use the `n_layer`, `n_head` etc for consistency with the GPT2 config since Leo at some point had mentioned that it's useful to have those to be able to swap models easily.
But so far there were no issues from the community about these names, so I fail to see what's the big problem here.
3. ` GPTJForSequenceClassification` was type hinted in the main init, if a model class is used somewhere in main init then the quality tests requires that such class should be tested, added to the auto model, and documented. So the tests were failing, so I decided to add it back since the class was already defined. But no strong opinion about it, feel free to completely remove it.
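To make point 1 concrete, here is a small self-contained check (plain tensors rather than the library classes) showing that the two layers compute the same thing, with the weight simply kept transposed:

```python
import torch

nx, nf = 4, 8
x = torch.randn(2, nx)

# A GPT-2-style Conv1D keeps its weight as (nx, nf) and computes x @ W + b ...
w = torch.randn(nx, nf)
b = torch.zeros(nf)
conv1d_out = x @ w + b

# ... while nn.Linear stores (out_features, in_features) and computes x @ W.T + b.
linear = torch.nn.Linear(nx, nf)
with torch.no_grad():
    linear.weight.copy_(w.T)
    linear.bias.copy_(b)

print(torch.allclose(conv1d_out, linear(x), atol=1e-6))  # True
```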
Hope this answers your question :) <|||||>@patil-suraj That perfectly answers my questions. I had been making the assumption that GPT-2 was the benchmark for how the model should be structured, but relaxing that assumption resolves that conflict.
I've been working up cleaning up the mess left from the Attention mixin and consolidating those classes to the unified `GPTJAttention`. I should get some sleep, but hopefully I'll get that pushed sometime tomorrow.<|||||>Glad to know! I agree that the AttentionMixin is rather confusing. Thanks for working on it. Apart from few comments above the PR is already in very good shape! <|||||>I have committed the suggested changes by @sgugger, or at least as many as I could before my phone interface started acting weird. I’m coming back from vacation tomorrow and can go over the PR for real when I get back.
@patil-suraj thanks for the info! This is quite helpful. I have no problem supporting sequence classification if the code supports it. I had removed it because I didn’t think we could support it without writing a bunch more code.
It looks like we are quite close to getting this merged! Thanks for all the help @EricHallahan @kurumuz <|||||>@StellaAthena thanks for your work on this PR! Can you share details on the GPU (or AWS instance type) you used for testing your code? I plan to run some benchmarking with this model after this PR is merged (hopefully soon!).<|||||>> @StellaAthena thanks for your work on this PR! Can you share details on the GPU (or AWS instance type) you used for testing your code? I plan to run some benchmarking with this model after this PR is merged (hopefully soon!).
The original model was trained on TPUs, and my testing of this PyTorch port has been on 8x A100 clusters<|||||>Okay, well if that's the case, I'd love to see stats backing the claim that
I misunderstand.
Last I checked, fp32 weights for a 2.7B model itself is ~10 GB of disk
space. So this 6B model you are creating a PR for will likely take more
than 16 GB of disk space. So I do not see how you "absolutely can perform
inference on a 16 GB V100 GPU" with this GPT-J 6B model. Please
advise/correct my understanding since I'm still very much a newbie. Thanks!
On Mon, Aug 16, 2021, 9:38 PM Stella Biderman ***@***.***>
wrote:
> ***@***.**** commented on this pull request.
> ------------------------------
>
> In src/transformers/models/gptj/modeling_gptj.py
> <https://github.com/huggingface/transformers/pull/13022#discussion_r690027756>
> :
>
> > + output_attentions (:obj:`bool`, `optional`):
> + Whether or not to return the attentions tensors of all attention layers. See ``attentions`` under returned
> + tensors for more detail.
> + output_hidden_states (:obj:`bool`, `optional`):
> + Whether or not to return the hidden states of all layers. See ``hidden_states`` under returned tensors for
> + more detail.
> + return_dict (:obj:`bool`, `optional`):
> + Whether or not to return a :class:`~transformers.file_utils.ModelOutput` instead of a plain tuple.
> +"""
> +
> +
> ***@***.***_start_docstrings(
> + "The bare gptj Model transformer outputting raw hidden-states without any specific head on top.",
> + GPTJ_START_DOCSTRING,
> +)
> +class GPTJModel(GPTJPreTrainedModel):
>
> Again, I fear you misunderstand. You absolutely can perform inference on a
> 16 GB V100 GPU. At no point did I say that you needed 32 GB of memory to
> use this model.
>
> —
> You are receiving this because you commented.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/13022#discussion_r690027756>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AA5MNWJ4WGL4LCRW3LVF7XTT5HRVPANCNFSM5BVGELLA>
> .
> Triage notifications on the go with GitHub Mobile for iOS
> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>
> or Android
> <https://play.google.com/store/apps/details?id=com.github.android&utm_campaign=notification-email>
> .
>
<|||||>> Ultimately it's up to you - if it turns out that someone intending to utilize your PR is budget-constrained, they'll just be forced to end up making those changes locally by copy-pasting stuff from modeling_gpt2.py.
> Okay, well if that's the case, I'd love to see stats backing the claim that I misunderstand. Last I checked, fp32 weights for a 2.7B model itself is ~10 GB of disk space. So this 6B model you are creating a PR for will likely take more than 16 GB of disk space. So I do not see how you "absolutely can perform inference on a 16 GB V100 GPU" with this GPT-J 6B model. Please advise/correct my understanding since I'm still very much a newbie. Thanks!
You don't use fp32 weights for inference, you use BF16 weights without optimizer states for inference. BF16 weights without optimizer states for this model come out to 9GB. If you check out [the source repo](https://github.com/kingoflolz/mesh-transformer-jax), you can find a link to download them. As I've already said, you can find better optimized code at that repo, and if you're on a budget constraint you should not be using the `transformers` library.
As @EricHallahan says, the reality of large models is that you need large compute to run them. There is a fundamental limit to what you can do with a model that's too big to fit in your GPU, and that limit is best surpassed by buying a better GPU. We are well aware that this limits the accessibility of these models, but that's how to the world works. You can find other, smaller models that we have released [on our HF page](https://huggingface.co/EleutherAI) if our 6B model is outside your budgetary resources. This codebase also does not require that you use 6B parameters. You are welcome to use it to train a smaller model as well.<|||||>Hi @sgugger and @patil-suraj,
This PR's description says:
> the major design consideration was to make the configs compatible with GPT-2
I do not see the model classes being compatible though. For instance, `GPT2Model` supports `parallelize` (a very straightforward feature), `GPTJModel` does not at the moment. Authors of this PR don't seem keen on supporting this feature as you can see from the previous comments (including their edit history), but this is not their library, this is a community library. And usability by the community that makes this library famous and successful should indeed trump efficiency.
So should there be an addition of `parallelize` to this PR? What are your thoughts on this topic?
Thanks!
EDIT: I am fully aware I can fine-tune this model with standard gradient partitioning (ZeRO Stage 2) and at reasonably high TFLOPs/GPU with the usual old tricks, which I am not asking about. IMO the `parallelize` feature is immensely useful for already-trained models at deployment-time since it sets up a simple pipeline (unlike a formal `PipelineModule` sub-class expected for pipeline-parallel training) at inference-time for most consumers of this library, who may not be very technically savvy.<|||||>@g-karthik: I can back up the claims that it can be run in fp16 mode on any GPU with 16G of VRAM, but don't expect any large batching capabilities.<|||||>> Hi @sgugger and @patil-suraj,
>
> This PR's description says:
>
> > the major design consideration was to make the configs compatible with GPT-2
>
> I do not see the model classes being compatible though. For instance, `GPT2Model` supports `parallelize` (a very straightforward feature), `GPTJModel` does not at the moment. Authors of this PR don't seem keen on supporting this feature as you can see from the previous comments (including their edit history), but this is not their library, this is a community library. And usability by the community that makes this library famous and successful should indeed trump efficiency.
>
> So should there be an addition of `parallelize` to this PR? What are your thoughts on this topic?
>
> Thanks!
>
> EDIT: I am fully aware I can fine-tune this model with standard gradient partitioning (ZeRO Stage 2) and at reasonably high TFLOPs/GPU with the usual old tricks, which I am not asking about. IMO the `parallelize` feature is immensely useful for already-trained models at deployment-time since it sets up a simple pipeline (unlike a formal `PipelineModule` sub-class expected for pipeline-parallel training) at inference-time for most consumers of this library, who may not be very technically savvy.
This is a very uncharitable way to represent my assertion that I do not feel comfortable implementing a highly experimental feature that I didn't even know existed until you brought it up. I have no objection to it being implemented, and even explicitly invited you to do so.
If the HF team is interested in integrating this functionality across all of the `transformer` classes I have no objection to that whatsoever. However currently it has been implemented for 2 of the 66 (counting GPT-J) model classes.<|||||>@StellaAthena
My message says "Authors of this PR don't seem keen on supporting this feature as you can see from the previous comments (including their edit history)", where authors is **plural**. Eric is a co-author on this PR since I can see he has commits on this PR. I presume he is part of Eleuther-AI, since he was a co-author + you chose to apologize on his behalf and **then** invited me to push commits to your fork to add support for `parallelize`.
Your assertion was that you were not comfortable supporting `parallelize`, and your co-author Eric jumped the gun without bothering to look at the specific `parallelize` feature I referenced and made a broad assertion that parallelization "will not be considered" (which he later edited out after I responded, but I paste verbatim the original comment):
> GPT-J 6B easily fits in a 16 GB GPU for inference or 32 GB GPU for tuning (at FP16). CPU inference, while slow, absolutely remains an option and it isn't hard to get a server with 32 GB of memory. And besides, just because GPT-J 6B is the only model that exists today doesn't mean that other models cannot be created in the future at different scales. This is generic model class PR, not a GPT-J 6B PR.
Look, we understand that you are concerned with accessibility of large models. EleutherAI expects that those looking to use our large language models will need to find beefy hardware to run them. It is an unfortunate result of scaling, but it is something we cannot help with. You can try to do fancy optimizations and swapping weights in and out of memory but that is incompatible with the transformers design philosophy and out of scope for this PR.
If you want parallelization today, use Mesh Transformer JAX. Parallelization is out of scope for this PR, and will not be considered.
Given he was a co-author of your PR and the above observations, I think saying the "authors don't seem keen on supporting this feature" was actually most charitable. But to avoid clubbing you with your co-author, I shall rephrase to: "one author wasn't comfortable supporting this feature, and another author was just plain rude w.r.t this feature and gave out a flat no".
Anyway, I think I'm done w.r.t. this line of discussion, and shall wait until I hear from someone at Hugging Face (@sgugger or @patil-suraj) since I respect their design choices greatly and want to know their opinions on `parallelize` in GPT-J akin to GPT-2.<|||||>> @g-karthik: I can back up the claims that it can be run in fp16 mode on any GPU with 16G of VRAM, but don't expect any large batching capabilities.
@oborchers thanks for backing up their updated claim! The original (deleted, but reproduced by me since it's on email) claim was:
> Again, I fear you misunderstand. You absolutely can perform inference on a 16 GB V100 GPU. At no point did I say that you needed 32 GB of memory to use this model.
This does not state FP16 or BF16, which is why I presume the claim was quickly deleted.
One clearly cannot fit this GPT-J 6B model in a 16 GB GPU unless they use FP16 or BF16.
BUT, any consumer of a library as huge as Hugging Face transformers would prefer having the choice of whether they can/want to use FP32 or not. With the `parallelize()` and `deparallelize()` methods (supported in GPT-2), it would be possible for consumers to directly use FP32 weights on a 16 GB GPU because the model is split into pipeline stages for inference.
So, the argument here is simple. You have `modeling_gpt2.py` that `modeling_gptj.py` is supposedly meant to be riffing off of design-wise. The latter, however, currently does not support `parallelize()` and `deparallelize()`. If such support were added, power would lie in the hands of the consumer on whether or not to use FP32/FP16/BF16 for the use-case of their choice.<|||||>> Think the PR can be merged very soon :-)
>
> We should probably try to focus on making the tests pass and then the only things that would be great to slightly adapt are:
>
> * Remove the GPTJAttentionMixin and GPTJAttention class
> * Force the generation logits to be in fp32 so that the model can give good results in fp16 :-)
>
> Thanks a lot for all the work on this already!
@EricHallahan is hoping to push the first change either today or tomorrow. Once he has, I'm expecting it'll be a couple minutes of work to fix the failing tests and ensure fp32 generation.<|||||>I see that `GPTJAttention().attn_dropout` and `GPTJAttention().masked_bias` are passed as parameters to `GPTJAttention()._attn()` rather than being referenced directly within `GPTJAttention()._attn()` like GPT-2. Similarly, `causal_mask` is calculated before `GPTJAttention()._attn()` and passed as a parameter while GPT-2 calculates it within `GPTJAttention()._attn()`. What does Hugging Face prefer?<|||||>The final failed test appears to be something technical about how the testing code was written and the removal of the `configs.rotary` argument. @patil-suraj, as you wrote most of the testing code could you take a look and see if you can spot how to fix it?
The traceback reads
```!bash
_______________ GPTJModelTest.test_gptj_model_past_large_inputs ________________
[gw0] linux -- Python 3.7.11 /usr/local/bin/python
self = <tests.test_modeling_gptj.GPTJModelTest testMethod=test_gptj_model_past_large_inputs>
def test_gptj_model_past_large_inputs(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_gptj_model_past_large_inputs(*config_and_inputs)
tests/test_modeling_gptj.py:386:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_gptj.py:277: in create_and_check_gptj_model_past_large_inputs
model = GPTJModel(config=config)
src/transformers/models/gptj/modeling_gptj.py:415: in __init__
self.h = nn.ModuleList([GPTJBlock(config, layer_id=i) for i in range(config.n_layer)])
src/transformers/models/gptj/modeling_gptj.py:415: in <listcomp>
self.h = nn.ModuleList([GPTJBlock(config, layer_id=i) for i in range(config.n_layer)])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GPTJBlock(
(ln_1): LayerNorm((32,), eps=1e-05, elementwise_affine=True)
)
config = GPTJConfig {
"activation_function": "gelu_new",
"attention_probs_dropout_prob": 0.0,
"attn_pdrop": 0.0,
"bos_t...ts": true,
"transformers_version": "4.10.0.dev0",
"type_vocab_size": 16,
"use_cache": true,
"vocab_size": 99
}
layer_id = 0
def __init__(self, config, layer_id):
super().__init__()
inner_dim = config.intermediate_size if config.intermediate_size is not None else 4 * config.n_embd
self.ln_1 = nn.LayerNorm(config.n_embd, eps=config.layer_norm_epsilon)
> self.attn = GPTJAttention(config, layer_id)
E TypeError: __init__() takes 2 positional arguments but 3 were given
src/transformers/models/gptj/modeling_gptj.py:275: TypeError
```<|||||>Only two more tests to fix :-)
```
FAILED tests/test_modeling_gptj.py::GPTJModelTest::test_gptj_model_att_mask_past
FAILED tests/test_modeling_gptj.py::GPTJModelTest::test_gptj_model_past_large_inputs
```
@StellaAthena @EricHallahan those tests are pretty `transformers` specific and can be quite complex to fix - let me know if you want me to go into the PR to take a look :-)<|||||>> Only two more tests to fix :-)
>
> ```
> FAILED tests/test_modeling_gptj.py::GPTJModelTest::test_gptj_model_att_mask_past
> FAILED tests/test_modeling_gptj.py::GPTJModelTest::test_gptj_model_past_large_inputs
> ```
>
> @StellaAthena @EricHallahan those tests are pretty `transformers` specific and can be quite complex to fix - let me know if you want me to go into the PR to take a look :-)
Yeah that would be great! I would love it if you could take a look at what we are missing.<|||||>Eureka! [The result of the calculation intended for `attention_mask` was placed into `global_attention_mask` instead.](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gptj/modeling_gptj.py#L482-L501) The code passes both of the failing tests in question after replacing that block of code with [the corresponding block from GPT-2](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gpt2/modeling_gpt2.py#L696-L713). I'll integrate the change in a little bit after I do a little more testing.<|||||>@patrickvonplaten @sgugger @patil-suraj Looks like Eric saved the day! Let us know if there’s anything you’d like changed before it goes live.<|||||>Easy there @StellaAthena, we still need to verify that the slow tests pass. However, I am optimistic that we should be able to have this merged soon.<|||||>> Easy there @StellaAthena, we still need to verify that the slow tests pass. However, I am optimistic that we should be able to have this merged soon.
I’m under the impression that they need to manually approve the slow tests, no? That’s the “1 workflow awaiting approval” right?<|||||>I just ran them myself and they passed for me (after specifying `use_auth_token` in every call to `.from_pretrained()`), but I don't know if they will pass here.<|||||>I have a few matters that I think we should discuss/resolve before we consider merging:
1. I'll ask again because I haven't received a response yet: I see that [`GPTJAttention().attn_dropout` and `GPTJAttention().masked_bias` are passed as parameters to `GPTJAttention()._attn()`](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L233-L234) rather than being [accessed directly within the method like GPT-2](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt2/modeling_gpt2.py#L187-L194). Similarly, [`causal_mask` is calculated before `GPTJAttention()._attn()` and passed as a parameter](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L225-L226) while [GPT-2 calculates it within `GPT2Attention()._attn()`](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt2/modeling_gpt2.py#L185-L186). What does Hugging Face prefer? I think we should adapt the GPT-J implementation to be more like the GPT-2 implementation in this respect.
2. To resolve [a warning regarding `torch.where` deprecating uint8 condition tensors](https://github.com/pytorch/pytorch/blob/d7d399f3dfc780f3e49bcffe45694fb04e5db637/aten/src/ATen/native/TensorCompare.cpp#L330), [GPT-2 casts to bool after slicing `bias`](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt2/modeling_gpt2.py#L186). I resolved the same warning by [casting the contents of the entire `bias` buffer at initialization](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L79-L81). This seems to work fine, but if we need to change this for some reason I have not foreseen please tell me.
3. [GPT-2 calculates the scale factor for the attention weights and applies them within `GPT2Attention()._attn()`](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gpt2/modeling_gpt2.py#L180-L181) when [a bool config variable is set](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gpt2/modeling_gpt2.py#L147). [GPT-J used to have a dedicated buffer to store the scaling value](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gptj/modeling_gptj.py#L91), but I modified this [to remove the unneeded buffer](). This made the model loader stop complaining about buffers that were not initialized from the checkpoint. This seems to work fine (and if it does I think we can remove [the check that ensures that `scale_attn` is initialized](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L152)), but if we need to change this for some reason I have not foreseen please tell me.
4. I note that the weights currently staged on Model Hub are stored in [half precision (binary16)](https://en.wikipedia.org/wiki/Half-precision_floating-point_format), while the original released checkpoint was [bfloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format). As this is an inherently lossy conversion unlike casting to [single precision (binary32)](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), I feel it important to ask the level that we should adhere to the [`transformers` philosophy of <q>[providing] state-of-the-art models with performances as close as possible to the original models</q>](https://huggingface.co/transformers/philosophy.html). I have been assured by @StellaAthena and @kingoflolz that the difference in downstream performance between binary16 and bfloat16/binary32 is minimal with GPT-J 6B (and the evaluations they presented to me support this claim), but if we assume that this implementation will be used for academic research at some point in the future it seems odd to be manipulating the original checkpoint in a way that could modify downstream performance.
The reason I bring this matter up here is firstly the fact that bfloat16 hardware is not as widespread/accessible as binary32 and binary16 hardware and secondly my assumption that a switch to a bfloat16 checkpoint would require changes in the implementation; If we decide that serving the checkpoint from Model Hub in bfloat16 is required to meet the goals of the `transformers` project, it is critical to ensure that it will be properly loaded/cast on platforms that do not support bfloat16 computation. I consider storing and serving the checkpoint in binary32 to be an unacceptable compromise to this conflict, as it would be double the size of the **11.7 GiB** binary16 checkpoint that is currently staged.
5. Would it be preferable to [include a tool that can convert the original checkpoint file to the Hugging Face format like GPT-Neo](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py), or is that something that is out of scope of this PR?
5. It is unclear to me if [casting `query` and `key` to single precision (binary32) in `GPTJAttention()._attn()` is actually preventing an overflow when running the model in half precision (binary16)](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L145-L147). We should verify that this is required before merging.<|||||>Great work on this!
> 6\. It is unclear to me if [casting `query` and `key` to single precision (binary32) in `GPTJAttention()._attn()` is actually preventing an overflow when running the model in half precision (binary16)](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L145-L147). We should verify that this is required before merging.
The cast is originally there from the gpt-neo implementation. Did not find any difference, other than speed, when I evaluated not casting to fp32, so it doesn't seem to be necessary. However, when running the model on certain GPUs (e.g. P100) in bf16 mode, doing the matmul fails, while the model remains basically operable in bf16 mode if the cast is kept.<|||||>> Great work on this!
>
> > 6. It is unclear to me if [casting `query` and `key` to single precision (binary32) in `GPTJAttention()._attn()` is actually preventing an overflow when running the model in half precision (binary16)](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L145-L147). We should verify that this is required before merging.
>
> The cast is originally there from the gpt-neo implementation. Did not find any difference, other than speed, when I evaluated not casting to fp32, so it doesn't seem to be necessary. However, when running the model on certain GPUs (e.g. P100) in bf16 mode, doing the matmul fails, while the model remains basically operable in bf16 mode if the cast is kept.
@finetuneanon does a P100 have hardware support for bf16? Does other bf16 code run on it?
> I note that the weights currently staged on Model Hub are stored in half precision (binary16), while the original released checkpoint was bfloat16. As this is an inherently lossy conversion unlike casting to single precision (binary32), I feel it important to ask the level that we should adhere to the transformers philosophy of [providing] state-of-the-art models with performances as close as possible to the original models. I have been assured by @StellaAthena and @kingoflolz that the difference in downstream performance between binary16 and bfloat16/binary32 is minimal with GPT-J 6B (and the evaluations they presented to me support this claim), but if we assume that this implementation will be used for academic research at some point in the future it seems odd to be manipulating the original checkpoint in a way that could modify downstream performance.
Here’s the table of evaluation results @EricHallahan is talking about. All evaluations were done using [EleutherAI’s Eval Harness](www.github.com/eleutherai/lm-eval-harness):
| Task | bf16 | fp32 |
| --- | --- | --- |
| Lambada | 0.683 | 0.697 |
| Winogrande | 0.648 | 0.653 |
| PiQA | 0.761 | 0.765 |
| HellaSwag | 0.661 | 0.661 |
> The reason I bring this matter up here is firstly the fact that bfloat16 hardware is not as widespread/accessible as binary32 and binary16 hardware and secondly my assumption that a switch to a bfloat16 checkpoint would require changes in the implementation; If we decide that serving the checkpoint from Model Hub in bfloat16 is required to meet the goals of the transformers project, it is critical to ensure that it will be properly loaded/cast on platforms that do not support bfloat16 computation. I consider storing and serving the checkpoint in binary32 to be an unacceptable compromise to this conflict, as it would be double the size of the 11.7 GiB binary16 checkpoint that is currently staged.
This is a HF philosophy question more than anything else, but for what it’s worth I would opt for both. My understanding is that the `transformers` library currently hides from the user the level of precision it is working with. I think that changing this so that user is able to control the precision being used is a very good idea. Due to HF’s no inheritance policy, it’s unclear to me how much work this would be to implement. I fear that every transformer needs to implement it individually though. We could sidestep this by implementing a precision flag that allows for fp16, bf16, and fp32 but doing so would likely incur significant technical debt.<|||||>> @finetuneanon does a P100 have hardware support for bf16? Does other bf16 code run on it?
It doesn't have hardware support, but it seems to get emulated in software for all operations other than the attn matmul, which causes a cublas error, from what I remember.
To allow picking the desired precision level as well as whether the model should get initialized directly on the GPU and whether to perform the matmul in fp32, I have added a few configuration parameters to my version somewhat recently:
* https://github.com/finetuneanon/transformers/blob/gpt-neo-localattention3-rp-b/src/transformers/models/gpt_neo/configuration_gpt_neo.py#L153-L155
* https://github.com/finetuneanon/transformers/blob/gpt-neo-localattention3-rp-b/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L138-L161
* https://github.com/finetuneanon/transformers/blob/gpt-neo-localattention3-rp-b/src/transformers/models/gpt_neo/modeling_gpt_neo.py#L212-L215
* Also: L601, L810, L812
This is probably not clean enough to be included here.<|||||>> This is a HF philosophy question more than anything else, but for what it’s worth I would opt for both. My understanding is that the `transformers` library currently hides from the user the level of precision it is working with. I think that changing this so that user is able to control the precision being used is a very good idea. Due to HF’s no inheritance policy, it’s unclear to me how much work this would be to implement. I fear that every transformer needs to implement it individually though. We could sidestep this by implementing a precision flag that allows for fp16, bf16, and fp32 but doing so would likely incur significant technical debt.
The better solution would be to detect what the hardware supports and adapt accordingly; I think that we should serve the weights in bfloat16 and then cast them at load to the appropriate type (as long as this does not result in a significant usability penalty on resource-constrained machines).
To elaborate, we should prefer to load the model at its native bfloat16 unless the hardware fails to support it. After that decision resolving which format to cast to is tricky and we have a few options:
- We can simply call the problem out of scope and let the end user cast to their preferred precision from bfloat16. This has issues with the "it just works" `transformers` mentality and would lead to problems when naively loading the model with a pipeline.
- We can cast the model to binary32 from bfloat16 and let the end user cast to their preferred precision (either binary16 or, in the case they need to override the detection, bfloat16). This has the issue of casting the weights to a larger type than they started as and hence potential OOM issues on resource constrained systems. We don't want to cast to binary32 without knowing the model will fit in memory, and ideally the heuristic should not cast to binary32 unless it is known to be the final type.
- We can detect if the hardware supports binary16 and then fallback on binary32 if that fails. This has the issue of the lossy conversion of the weights by default (which means that the end user may never know that they are working with a modified model that could be unsuitable for certain kinds of academic research). At first glance it may seem like the fallback to binary32 is another step that degrades the model, but because we are not loading the model until we decide on a type it isn't a problem here. This is the best solution for production use cases.
The issue with implementing something like these is that we don't know the device that the end user will be placing these models on, as models are placed on the device outside of the model constructor and `.from_pretrained()`. We can never know what features/formats are supported as they are out of scope of these methods. It would be possible to pass the intended device to the constructor but that would break with the "one consistent API" requirement. This seems to be a pretty tricky problem that is going to get more important to resolve as models continue to scale. @finetuneanon seems to have solved this, but not in a way that is acceptable to being integrated into `transformers`.<|||||>To assist in testing the code, I have made the EleutherAI GPT-J-6B checkpoint public. You can find it at https://huggingface.co/EleutherAI/gpt-j-6B, and the code should load from it by default.<|||||>> I think that we should serve the weights in bfloat16 and then cast them at load to the appropriate type (as long as this does not result in a significant usability penalty on resource-constrained machines).
As long as pytorch 1.9.1+ and CUDA 11.1+ are used and the attn matmul is cast to fp32, there should be no usability issues, but the model may be slower on GPUs that do not natively support bf16.
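For reference, a rough way to check whether a given NVIDIA GPU has native bf16 support (a sketch, not part of this PR) is to look at its CUDA compute capability — Ampere, i.e. 8.0 and newer, is what actually has native bf16 units:
```python
import torch

def has_native_bf16(device_index: int = 0) -> bool:
    # Ampere (compute capability 8.0) and newer GPUs have native bfloat16 hardware;
    # older cards such as the P100 or T4 can still run bf16 ops, but only via emulation.
    if not torch.cuda.is_available():
        return False
    major, _ = torch.cuda.get_device_capability(device_index)
    return major >= 8

print(has_native_bf16())
```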
Also, I do not have any evaluation handy, but I think I observed at some point that casting the weights to fp16 and back to bf16 had very similar results to running in bf16 directly, which might indicate that any runtime difference is mainly due to the precision of activations and not so much of the weights. If this could be confirmed, casting the model to fp16 by default might be acceptable as well, as it could be cast back? Overall, casting the weights to fp16 does not cause any overflows and about 7k underflows where parameters became 0 where they weren't before.<|||||>@StellaAthena Thanks a lot to you and everyone involved here for all the work you've put into this.
I think there's a weird bug that I have verified doesn't happen with say distilgpt2 and may have to do with this model/PR specifically.
Basically, if you save the model and just load it again and start generating it will output gibberish, nothing close to what the model would output if you were to just load "EleutherAI/gpt-j-6B". Again, doesn't happen with distilgpt2, same exact setting.
```
import transformers
from transformers import AutoTokenizer, AutoConfig
from transformers import AutoModelForCausalLM
from transformers import Trainer
model_checkpoint = "EleutherAI/gpt-j-6B"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
config = AutoConfig.from_pretrained(model_checkpoint)
model = AutoModelForCausalLM.from_pretrained(model_checkpoint, config=config)
trainer = Trainer(
    model=model
)
trainer.save_model("gpt-j-test")
```
Now if you load the model that you saved and try to generate you'll see gibberish:
```
model = AutoModelForCausalLM.from_pretrained("gpt-j-test", config=config)
input_str = "Ciara is a software engineer"
inputs = tokenizer.encode(input_str, return_tensors="pt")
generated_text = model.generate(inputs.cuda(), max_length=133, repetition_penalty=1)
print(tokenizer.decode(generated_text[0]))
```
This will output something like:
```
Ciara is a software engineer product masc plates Strategymus Cal nervesaha Goddess gift loGW Bear RobPacific Cros DAMBalt surgical biotech permissions boxes fuckin customized vary Near Problems boxes Near lob automated overturn Cadillac giveauldron givethread viral valley giveauldron give 156 KILLprodu giveBUTaid Credits customized Strategy Ethernet week vary lo disregocusWA coasts viral Ni Processorlaughchart ol concentration Bend Large rescue overturn ol cornerback Bend OccupyStonesrcStonesrcStone give Starsonna customizedStonesrc Rosenstein CBD ashesStoneoddyStoneordered ol fond particip Bert Marion IN Dys Spe refugeeLikeLAN turbines disregLOC mouldLikeothal TiffLike Methodsiorsesi streak Eclipse Fuck space olLike Methods adjustmentsonna disreg 156
```
The normal output when you load "EleutherAI/gpt-j-6B" is something like:
```
Ciara is a software engineer at Google. She is also a member of the Google Cloud Platform team, and is responsible for the development of the Google Cloud Platform’s Kubernetes service.
In this post, Ciara will share her experience of working on Kubernetes at Google, and how she got started with Kubernetes.
Ciara’s journey with Kubernetes
I started working with Kubernetes in 2015, and I’ve been working on it ever since. I’ve been working on Kubernetes at Google for the last two years.
```
<|||||>@emaconta Thanks for the report! We'll see if we can replicate this.<|||||>With saving:
> Ciara is a software engineerarlAndre library harvestedHEADrazelement 1899 camoufl regener photography reveal Mes- leukemia optional optional optional optional optional optional optionalusal Rex scissors sinkunker optional travers optional many optional stained optional Gi floor optionalrose arrangement optional Sugar WittNetMessage optionalJake optionalarov obtained optionalTorMissing optionalarov obtained optional bluesimmigrant obtained Slinventory Lantern optionalJake must columnist cannonput arrangementmbuds accurately cannon foot SlMonster cannon experiments testing arrangementarov obtaineddes Think BreweraezWalker obtainedarov obtained punk Ref McConnell HT symbolicavezbig liberals selections proves Witt Spread★★ ApplicantarovLocateddes selectionsMet ChickJesusMet focusdes selections recruits excessivelyMet objectmor vacanciesdes pubsheat enactCleanv Child
Without saving:
> Ciara is a software engineer at Google. She is also a member of the Google Cloud Platform team, and is responsible for the development of the Google Cloud Platform’s Kubernetes service.
>
> In this post, Ciara will share her experience of working on Kubernetes at Google, and how she got started with Kubernetes.
>
> Ciara’s journey with Kubernetes
>
> I started working with Kubernetes in 2015, and I’ve been working on it ever since. I’ve been working on Kubernetes at Google for the last two years.
Seems like a real issue. I'll look into it.<|||||>@emaconta It looks like `lm_head.weight` was blacklisted from being exported in checkpoints. Removing it from the blacklist looks to have solved the issue.
I meanwhile found out that attempting to load the model at binary16 (passing `torch_dtype=torch.half` to `.from_pretrained()`) crashes on CPU because [the construction of `GPTJAttention().scale_attn` utilizes `torch.sqrt()`](https://github.com/huggingface/transformers/blob/223bda109af8ceb36530fd6fae65daca48161516/src/transformers/models/gptj/modeling_gptj.py#L95). We'll have to fix that somehow.<|||||>I note that [`GPTJMLP()` still uses the names `c_fc` and `c_proj` (names that imply it uses convolutional layers when it uses `torch.nn.Linear`)](https://github.com/huggingface/transformers/blob/ad567a9e260eeea0c5e9878f0a2752f22719df19/src/transformers/models/gptj/modeling_gptj.py#L255-L256). We should change this for the purpose of readability.<|||||>@amantalion Have you tried loading the model at binary16/float16/half-precision? The Tensor Cores on the T4 do not include hardware for bfloat16. (bfloat16 computations are emulated on those cards.)<|||||>> Also, what's the speed that you are getting on your GPU that supports `bfloat16`?
A single A100 is finishing your benchmark in around 4.5 to 5 seconds, though that is a really poor point of comparison as an A100 has a TDP over three times larger than a T4 (250W >> 75W).
> It looks like it takes ~10.7 seconds to perform inference with GPU (T4 with 16GB VRAM) on the example that @emaconta provided. In comparison, it only takes 3.72 seconds with TPU (took from the sample Colab: https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb).
I wouldn't look into that comparison too deeply without understanding the numerous differences between the conditions in which each benchmark was held. The codebase (this port vs. Mesh Transformer JAX), the framework (PyTorch vs. JAX), the platform (T4 vs. TPU v2-8), and the architecture (unparallelized on a single device vs. model-parallel across eight devices) are all different. TPU v2-8s are quite fast, especially with a heavily TPU-optimized codebase like Mesh Transformer JAX. If you are expecting the performance that the Colab notebook provides out of a T4, you unfortunately need to adjust your expectations.<|||||>> I have a few matters that I think we should discuss/resolve before we consider merging:
>
> 1. I'll ask again because I haven't received a response yet: I see that [`GPTJAttention().attn_dropout` and `GPTJAttention().masked_bias` are passed as parameters to `GPTJAttention()._attn()`](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L233-L234) rather than being [accessed directly within the method like GPT-2](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt2/modeling_gpt2.py#L187-L194). Similarly, [`causal_mask` is calculated before `GPTJAttention()._attn()` and passed as a parameter](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L225-L226) while [GPT-2 calculates it within `GPT2Attention()._attn()`](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt2/modeling_gpt2.py#L185-L186). What does Hugging Face prefer? I think we should adapt the GPT-J implementation to be more like the GPT-2 implementation in this respect.
> 2. To resolve [a warning regarding `torch.where` deprecating uint8 condition tensors](https://github.com/pytorch/pytorch/blob/d7d399f3dfc780f3e49bcffe45694fb04e5db637/aten/src/ATen/native/TensorCompare.cpp#L330), [GPT-2 casts to bool after slicing `bias`](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt2/modeling_gpt2.py#L186). I resolved the same warning by [casting the contents of the entire `bias` buffer at initialization](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L79-L81). This seems to work fine, but if we need to change this for some reason I have not foreseen please tell me.
> 3. [GPT-2 calculates the scale factor for the attention weights and applies them within `GPT2Attention()._attn()`](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gpt2/modeling_gpt2.py#L180-L181) when [a bool config variable is set](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gpt2/modeling_gpt2.py#L147). [GPT-J used to have a dedicated buffer to store the scaling value](https://github.com/huggingface/transformers/blob/b6021cf0d9acc36fb96aaf3c7b457160f7f0b9d5/src/transformers/models/gptj/modeling_gptj.py#L91), but I modified this to remove the unneeded buffer. This made the model loader stop complaining about buffers that were not initialized from the checkpoint. This seems to work fine (and if it does I think we can remove [the check that ensures that `scale_attn` is initialized](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L152)), but if we need to change this for some reason I have not foreseen please tell me.
> 4. I note that the weights currently staged on Model Hub are stored in [half precision (binary16)](https://en.wikipedia.org/wiki/Half-precision_floating-point_format), while the original released checkpoint was [bfloat16](https://en.wikipedia.org/wiki/Bfloat16_floating-point_format). As this is an inherently lossy conversion unlike casting to [single precision (binary32)](https://en.wikipedia.org/wiki/Single-precision_floating-point_format), I feel it important to ask the level that we should adhere to the [`transformers` philosophy of [providing] state-of-the-art models with performances as close as possible to the original models](https://huggingface.co/transformers/philosophy.html). I have been assured by @StellaAthena and @kingoflolz that the difference in downstream performance between binary16 and bfloat16/binary32 is minimal with GPT-J 6B (and the evaluations they presented to me support this claim), but if we assume that this implementation will be used for academic research at some point in the future it seems odd to be manipulating the original checkpoint in a way that could modify downstream performance.
> The reason I bring this matter up here is firstly the fact that bfloat16 hardware is not as widespread/accessible as binary32 and binary16 hardware and secondly my assumption that a switch to a bfloat16 checkpoint would require changes in the implementation; If we decide that serving the checkpoint from Model Hub in bfloat16 is required to meet the goals of the `transformers` project, it is critical to ensure that it will be properly loaded/cast on platforms that do not support bfloat16 computation. I consider storing and serving the checkpoint in binary32 to be an unacceptable compromise to this conflict, as it would be double the size of the **11.7 GiB** binary16 checkpoint that is currently staged.
> 5. Would it be preferable to [include a tool that can convert the original checkpoint file to the Hugging Face format like GPT-Neo](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gpt_neo/convert_gpt_neo_mesh_tf_to_pytorch.py), or is that something that is out of scope of this PR?
> 6. It is unclear to me if [casting `query` and `key` to single precision (binary32) in `GPTJAttention()._attn()` is actually preventing an overflow when running the model in half precision (binary16)](https://github.com/huggingface/transformers/blob/d2c85a22abb7b09185363a67a270276465779182/src/transformers/models/gptj/modeling_gptj.py#L145-L147). We should verify that this is required before merging.
Hey @EricHallahan,
Thanks a lot for the very in-detail summary of the remaining TODO's. Here is my opinion:
1) Agree! I'll change this directly in the PR now (hope that's fine for you)
2) That's totally fine! The reason we created the mask in `torch.int8` for GPT2 was that back then we wanted to keep supporting PyTorch 1.1.0 which didn't support the `torch.bool` type. Now our minimum required version for torch is 1.3.0 -> so creating it directly in `torch.bool` is preferred. So this looks good to me!
3) Here we can remove the `if ...` check since the attention is always scaled -> I'll also adapt this in the PR directly.
4) That's a very important one here! We would prefer to actually store the weights in fp32 on the hub as all of our model weights are stored in fp32 and this is what the user expects. As you said, storing it in fp16 is not a good idea because of the lossy conversion and since we want to provide the models as similar as possible to the original implementation. The other option is to store them in bfloat16, which would be as close to the original implementation as possible, but the problem is that lots of hardware doesn't support bfloat16 yet in PyTorch and we would like the model to work out of the box, *e.g.* without the user having to know that `.to(torch.float32)` is required to run the model on CPU. Experienced users that are aware of bfloat16 can then cast the checkpoint to `bfloat16` without losing precision etc. Also, all of our "officially supported" models are stored in float32 on the hub so I'd prefer to not make an exception here (*e.g.* all the T5 weights are also stored in fp32 even though the official weights are bf16). Let me know what you think here!
5) Yes such a conversion file would be nice! IMO this can be done in a follow-up PR though (happy to help on this as well).
6) Also a very important point. After discussing with @LysandreJik a bit, I think we should keep hard-coded casts to fp32 in the modeling code if necessary so that the model can run in float16 at the moment. By necessary we mean that if a model requires some weights/operation to be cast to `float32` in order to prevent over-/underflowing issues in `float16` then we should keep it in float32. If the model doesn't require the `float32` cast to run in float16, then we can remove it. Another example where this is used is T5's special layer norm Module: https://github.com/huggingface/transformers/blob/f689743e7454b93f6cab4343026de03fa530bfb9/src/transformers/models/t5/modeling_t5.py#L242. We don't support bfloat16 officially in the library yet, so I would like to base the decision here first just on float32 vs. float16. Once `bfloat16` is officially supported, we will do a pass on all models and make sure they work correctly in bfloat16. Let me know what you think here!
7) RE: the naming of `c_proj` - I agree with you we should rename the variables here since there is no convolution layer anymore - I'll also adapt this in this PR <|||||>I have applied the changes as mentioned in 1.), 3.), 4.) and 6.), 7.) and also changed the official weights accordingly here: https://huggingface.co/EleutherAI/gpt-j-6B/commit/a16214f67cb321168930e17ee9a508794721bc3d . Note that the weights are in fp32 now which is why the model weights file is twice as big.
The output logits are also now "hard"-casted to `float32` to make sure generation works correctly in fp16 as stated by @patil-suraj.
Since the weights are stored now in fp32 on the hub, one needs to load the model with `torch_dtype=torch.float16` if there is not enough RAM on the GPU (I also left some notes in the docs about this).
```python
from transformers import AutoTokenizer, GPTJForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to("cuda")
def gen(input_str):
    input_ids = tokenizer(input_str, return_tensors="pt").input_ids.to(model.device)
    gen_out = model.generate(input_ids, do_sample=True)
    print(tokenizer.batch_decode(gen_out))
gen("Today is a nice")
```
Let me know if this is ok for you @EricHallahan @StellaAthena or if you would like me to revert a commit or whether we should do anything else before merging!
IMO, the PR is good to merge (cc @LysandreJik) <|||||>I fear there’s a little confusion about the model weights I want to clarify.
The original model training occurred in fp32. Because fp32 is so memory intensive, we created bf16 weights with the expectation that they would be used for inference. Since the transformers package puts a strong emphasis on accessibility, we felt that the fp32 and bf16 weights would not be appropriate for the HF model. The fp32 weights are too large to use on a consumer GPU, and many GPUs don’t support bf16 yet. Since bf16 isn’t commonly supported, I made the decision to upload the weights cast as fp16 to follow HF philosophy of valuing accessibility. When @patrickvonplaten moved the weights to fp32, he didn’t use the real fp32 weights: he took the fp16 weights and cast them as fp32. These weights, despite being listed as fp32, are no better than using fp16 and are really just padded with 11 GB of null.
As a result, there are four sets of possible weights:
1. True fp32 weights. These are inaccessible to most people due to their size, and will not even download on most consumer systems
2. bf16 weights: these are the current widespread best practices for working with a model like GPT-J, but most GPUs don’t support bf16 yet.
3. fp16 weights: these are widely accessible and have nearly identical performance to bf16 for inference in our testing.
4. The currently uploaded weights: these have the precision of fp16 while taking up the space of fp32. These should not be used.
I think it makes the most sense to have two models: `eleutherai/gpt-j-6B` and `eleutherai/gpt-j-6B-fp32` or something like that. People who have systems that have enough space for the fp32 weights can use them, but the overwhelming majority of people will instead use the smaller fp16 weights. fp16 weights can be losslessly cast to bf16 by people who have hardware support for it.<|||||>> I fear there’s a little confusion about the model weights I want to clarify.
>
> The original model training occurred in fp32. Because fp32 is so memory intensive, we created bf16 weights with the expectation that they would be used for inference. Since the transformers package puts a strong emphasis on accessibility, we felt that the fp32 and bf16 weights would not be appropriate for the HF model. The fp32 weights are too large to use on a consumer GPU, and many GPUs don’t support bf16 yet. Since bf16 isn’t commonly supported, I made the decision to upload the weights cast as fp16 to follow HF philosophy of valuing accessibility. When @patrickvonplaten moved the weights to fp32, he didn’t use the real fp32 weights: he took the fp16 weights and cast them as fp32. These weights, despite being listed as fp32, as no better than using fp16 and is really just padded with 11 GB of null.
>
> As a result, there are four sets of possible weights:
>
> 1. True fp32 weights. These are inaccessible to most people due to their size, and will not even download on most consumer systems
> 2. bf16 weights: these are the current widespread best practices for working with a model like GPT-J, but most GPUs don’t support bf16 yet.
> 3. fp16 weights: these are widely accessible and have nearly identical performance to bf16 for inference in our testing.
> 4. The currently uploaded weights: these have the precision of fp16 while taking up the space of fp32. These should not be used.
>
> I think it makes the most sense to have two models: `eleutherai/gpt-j-6B` and `eleutherai/gpt-j-6B-fp32` or something like that. People who have systems that have enough space for the fp32 weights can use them, but the overwhelming majority of people will instead use the smaller fp16 weights. fp16 weights can be losslessly cast to bf16 by people who have hardware support for it.
Thanks for your comment here @StellaAthena!
The current weights are indeed not ideal, as they are just padded fp16 weights.
Would it be ok for you however to upload the "true" fp32 weights? People can quite easily convert the weights from fp32 to fp16 under the hood via:
```python
model = GPTJForCausalLM.from_pretrained('...', torch_dtype=torch.float16)
```
and then move the casted weights to GPU. From our side this would be the preferred behavior - would that be ok for you?<|||||>@patrickvonplaten Can we do both fp32 and fp16 models? I care a lot about Google Colab because it's the way that most people have access to the necessary GPUs to use these models and the fp32 weights won't download into Colab's RAM.<|||||>I am not only concerned about Colab either: I expect that a common system configuration for deploying GPT-J 6B at home with `transformers` will be an RTX 3090 with 16 GiB of system memory. Attempting to load GPT-J 6B at single precision will OOM before it can be cast and loaded onto the GPU, despite it being possible to run GPT-J 6B on this system configuration otherwise.<|||||>We actually picked the size of GPT-J specifically to allow it to be loaded onto Google Colab or a RTX 3090 with enough room left over to fine-tune it at what we judged to be a reasonable rate. ~11 GiB is the largest you can make a model that will be functional on consumer GPUs under current technology. I really think that requiring people to download it in fp32 will be disastrous for adoption.<|||||>I share the above concerns about accessibility.
As an example use case, I use a finetuned GPT-J in my social media bot. For users who are curious about how the bot works, I have a [Colab inference demo](https://colab.research.google.com/drive/1pdRISGaypV8DwolvD3uv77vlmMX0xZNL?usp=sharing). The demo currently uses @finetuneanon’s fork and an fp16 checkpoint, the same checkpoint I use for inference in the bot itself. It works fine on an ordinary Colab instance with a T4 GPU and 12 GB system RAM.
Most of my bot’s curious users don’t have access to more specialized hardware. It’s great to be able to provide these people with direct access to such a powerful model.<|||||>I see! It's indeed very nice to be able to run the model in a Google Colab for which we need half-precision. I think we should opt then for two models.
What do you think about adding a `gpt-j-6b-fp16` checkpoint in float16 precision in addition to the `gpt-j-6b` checkpoint that will be kept in full fp32 precision?<|||||>> I see! It's indeed very nice to be able to run the model in a Google Colab for which we need half-precision. I think we should opt then for two models.
>
> What do you think about adding a `gpt-j-6b-fp16` checkpoint in float16 precision in addition to the `gpt-j-6b` checkpoint that will be kept in full fp32 precision?
I liked the idea of making the fp32 on model hub the real fp32 and not the bf16 padded to fp32, If EleutherAI is willing to do that.<|||||>> I see! It's indeed very nice to be able to run the model in a Google Colab for which we need half-precision. I think we should opt then for two models.
>
> What do you think about adding a `gpt-j-6b-fp16` checkpoint in float16 precision in addition to the `gpt-j-6b` checkpoint that will be kept in full fp32 precision?
I would prefer to do `gpt-j-6B` and `gpt-j-6B-fp32` for two reasons:
1. It makes more sense to me that the version with the simplest name be the most used version.
2. In practice this is a breaking change. Yes, it hasn’t been officially released yet, but this model has been in widespread use for a while. Since I made the checkpoint public it has been downloaded over 1500 times, and since the weights were changed to fp32 I have already fielded questions from five individuals and one company as to why their model suddenly stopped working.
More generally, I am surprised and confused by this proposal because it makes me feel like I don’t understand HF’s philosophy. But hey, it’s your package. I would be happy to do it either way – my main priority is having an fp16 version.
I understand that this is different here because the full model fp32 cannot be loaded, *e.g.* in a Google Colab though and that the model is particularly large -> therefore I think it's a good idea to also provide the lossy fp16 weights, but I don't think it justifies having the default weights being a compressed version of the true original weights.
In the end, it's totally up to you what is better as the model is under your organization. So we are happy to go along with having fp16 as the default if you prefer - it would however be great to also have the fp32 version exposed so that there is access to the full precision. My goal was mainly to speed-up the merging process here :-)
On a slightly different aspect, it might also be nicer to actually just have one "gpt-j-6b" model instead of two. We could use a git branch to differentiate between `fp16` and `fp32`. So with the git-lfs version control we could either save the fp16 or fp32 weights under a branch called `fp16` or `fp32` and then have the main branch be the other version.
Right now fp32 is the main version, so doing the following:
```python
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
```
will load the 23GB file
while
```python
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="fp16")
```
will load the fp16 12GB file that can be used in a Google colab because I added a branch `fp16` to the model here: https://huggingface.co/EleutherAI/gpt-j-6B/tree/fp16 . I think this is a bit nicer instead of having two different models on the hub, no?
=> So to summarize I think we have 4 options here:
1. `fp16` is the main model and we create a branch `fp32`:
```python
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") # <- for fp16
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="fp32") # <- for original full weights
```
2. `fp32` is the main model and we create a branch `fp16` (just leave the branch we have now)
```python
from transformers import GPTJForCausalLM
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") # <- for original full weights
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="fp16") # <- for fp16
```
That's more or less what we have now
3. Create two models with `EleutherAI/gpt-j-6B` being in fp16 and `EleutherAI/gpt-j-6B-fp32` being in fp32
4. Create two models with `EleutherAI/gpt-j-6B` being in fp32 and `EleutherAI/gpt-j-6B-fp16` being in fp16
Maybe you can decide and then I can help to upload the weights correctly. <|||||>Yes seconding what @patrickvonplaten said, @StellaAthena it's your model so it's your call!
Let us know which option you prefer <|||||>I just finished modifying the conversion script so that it can spit out `transformers` checkpoints at any precision from Mesh Transformer JAX checkpoints, so it should be a pretty painless process to upload newly processed weights when we decide what we want to do.<|||||>My opinion is that the only sensible way of maintaining multiple checkpoints of the same model is with multiple branches in a single model repo and exposing the choice to the end user via the revisions system.
I think my preferred configuration is to name the default branch *`float32`* for the single precision checkpoint and create a second branch called *`float16`* for the half precision checkpoint. (There is nothing stopping us from creating a third branch for *`bfloat16`*, but I expect demand for it to be very low and so is probably not worth the effort.) I think *`float32`* and *`float16`* are the obvious choice for the branch name pattern as [they mirror the `torch_dtype` of each model](https://huggingface.co/transformers/main_classes/configuration.html?highlight=torch_dtype). I also recommend committing freshly converted checkpoints to these branches to verify they are as clean as possible before we merge this PR.
We will need to make it abundantly clear that multiple branches exist within the model cards. I am thinking of placing an identifier directly under the first-level heading so that it is visible above the fold on Model Hub, plus a short paragraph with an explanation of when to use that branch over others (drafts below). I also suggest more detailed code examples than normal that specifically demonstrate how to mix the model `revision` and `torch_dtype` for the purpose of clarifying their usage and interaction.
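For example, such a snippet might look like the following (a sketch of the intended usage, mirroring the `revision` names proposed above):
```python
import torch
from transformers import GPTJForCausalLM

# Half precision checkpoint from the proposed `float16` branch, kept in float16:
model = GPTJForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", revision="float16", torch_dtype=torch.float16
)

# Single precision checkpoint from the proposed `float32` branch, cast down after loading if desired:
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", revision="float32")
model = model.half()
```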
<figure>
<dl>
<dt><em><code>float32</code></em></dt>
<dd>This checkpoint of GPT-J 6B is stored in <a href="https://en.wikipedia.org/wiki/Single-precision_floating-point_format">single precision</a> and is most suitable for academic and research applications that require downstream performance as close as possible to the original. This 23.4 GiB checkpoint can be readily cast to lower-precision formats such as half precision and bfloat16 after loading. Given that there is no statistically significant difference in downstream performance when GPT-J 6B is run with reduced precision, it is recommended to use the alternative half precision checkpoint in prototyping and production applications.</dd>
<dt><em><code>float16</code></em></dt>
<dd>This checkpoint of GPT-J 6B is stored in <a href="https://en.wikipedia.org/wiki/Half-precision_floating-point_format">half precision</a> and is most suitable for prototyping and production applications where speed and resource constraints are critical factors. This 11.7 GiB checkpoint can be readily cast to other floating-point formats for use on hardware that does not support half precision, a usage that saves both time and storage space over a higher-precision checkpoint. Half precision comes at the cost of slightly different performance in downstream tasks, and it is recommended to use the alternative single precision checkpoint in academic and research applications where this is not acceptable.</dd>
</dl>
</figure><|||||>I really like everything @EricHallahan said. And I think @patrickvonplaten's idea is brilliant: if I had known that was an option I would have proposed it.<|||||>Great! @StellaAthena @EricHallahan let me know if you need any help uploading the checkpoints to the branches! <|||||>@EricHallahan - could you send me a link to your updated conversion script? I'v tried using this one: https://github.com/kingoflolz/mesh-transformer-jax/pull/108 . The converted output is a dict with all layers saved as individual files. Is this the official conversion script? <|||||>Just a note that for organizing models outside the hugging face cache, it is more convenient to have subfolders or separate repos for different content, because git-lfs can be very slow filtering many gigabytes when switching branches. Not planning on arguing the point, just making sure the use-case is shared.<|||||>@patrickvonplaten The script currently within Mesh Transformer JAX builds a split checkpoint in the format used by the @finetuneanon fork of `transformers`, and I needed to heavily modify it to generate checkpoints in the HF format. @kingoflolz has already asked me to make a PR to update the script.<|||||>If we feel that the solution I outlined is the best solution, I can put that plan into action and update the repo on Model Hub. Maybe vote 👍/👎 on this?<|||||>@g-karthik I have ported over the experimental parallelization code from GPT-2 into GPT-J. I wouldn't personally recommend using it to anyone unless they need to, but it should work in a pinch when a better solution is unavailable.
(I note that this bears no resemblance to the implementation of model pararallelism in Mesh Transformer JAX and should not be thought as an equivalent implementation or replacement for that implementation.)<|||||>@EricHallahan: Thank you very much for the awesome work on the issue!! there is one thing to remark regarding the 16 VRAM configuration:
As far as I can tell, even with the floating point revision set correctly (and also when only the local files are fp16, local_files_only=True), the model will still be loaded in float32 until model.half() is called, thus requiring the 23G RAM to be available before the model.half() and before the model.to(device) is called.
By extension this means, that a text generation pipeline cannot be loaded with device=0 in a VRAM<23G setting, as .half() isn’t called automatically anywhere. In this case the model must be loaded, .halved(), and then passed to the pipeline via the argument.
Correct me if this observation is wrong. Is there any way of loading/moving the model iteratively to GPU so that the 23G RAM limitation can be circumvented, similar as done in @finetuneanon repository? (Probably out of scope for this very PR, but likely a problem for larger models in general in the future). Presumably this can be done using the state dict, but I‘m not deep enough into the inner working to judge this.
Also tagging @patrickvonplaten <|||||>@oborchers Yes, we have had multiple people test this via Colab and they have reported the same issue.
I have verified that choosing the `"float16"` revision loads the model at float32. I don't understand why it doesn't load the model at half precision, especially because [I explicitly set `"torch_dtype": "float16"` in the model config on the *`float16`* branch](https://huggingface.co/EleutherAI/gpt-j-6B/commit/b380a4785c3184468823bb0bd98dcd4453c9d604). Maybe I am interpreting the fuctionality of that config parameter wrong, but my understanding is that it explicitly tells the model loader to use the specified type as the default.
(I also want to make sure to point out that naively loading the model is not particularly useful at this time, as the Model Hub repo still has an extraneous *`main`* branch that I have been unable to remove and replace with the *`float32`* branch.)
The multi-part loading scheme used by the @finetuneanon fork was purposefully built to bypass the suboptimal way that `transformers` loads checkpoints so that resource-constrained systems could load GPT-Neo (and later GPT-J) without running out of memory. In order to meet the requirements for integration into `transformers` we had to adapt that code to instead use the existing single-file checkpoint format. It is up to the `transformers` maintainers to consider an alternative/optimized checkpoint loading pipeline, and I assume that such a system would need a separate PR considering the changes probably needed to `PretrainedModel`.<|||||>> @oborchers Yes, we have had multiple people test this via Colab and they have reported the same issue.
>
> I have verified that choosing the `"float16"` revision loads the model at float32. I don't understand why it doesn't load the model at half precision, especially because [I explicitly set `"torch_dtype": "float16"` in the model config on the _`float16`_ branch](https://huggingface.co/EleutherAI/gpt-j-6B/commit/b380a4785c3184468823bb0bd98dcd4453c9d604). Maybe I am interpreting the fuctionality of that config parameter wrong, but my understanding is that it explicitly tells the model loader to use the specified type as the default.
> (I also want to make sure to point out that naively loading the model is not particularly useful at this time, as the Model Hub repo still has an extraneous _`main`_ branch that I have been unable to remove and replace with the _`float32`_ branch.)
>
> The multi-part loading scheme used by the @finetuneanon fork was purposefully built to bypass the suboptimal way that `transformers` loads checkpoints so that resource-constrained systems could load GPT-Neo (and later GPT-J) without running out of memory. In order to meet the requirements for integration into `transformers` we had to adapt that code to instead use the existing single-file checkpoint format. It is up to the `transformers` maintainers to consider an alternative/optimized checkpoint loading pipeline, and I assume that such a system would need a separate PR considering the changes probably needed to `PretrainedModel`.
Thanks a lot for the detailed message here. What we currently do in `.from_pretrained(...)` is definitely suboptimal if one is sure that all the loaded parameters are correct and complete. What happens under-the-hood is that:
1. A random model with the correct configuration is instantiated meaning all layers are randomly initialized as defined in the config. Random initialization is always happening in fp32.
2. Then the `state_dict` is loaded (the correct weights)
3. All layers of the random model are compared to the "real" layers that can be found in the state dict
4. All weights of the layers that are present in the state dict are dropped and overwritten by the state dict
5. All layers are casted to the correct dtype (as defined by `torch_dtype`)
=> we have this logic mainly for models like BERT for which one would load the "base"-model and then add a randomly initialized head for the specific downstream task. It becomes quite clear however that this makes less sense for GPT-like models.
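To make those steps concrete, the current behavior is roughly equivalent to the following sketch (not the actual library code; `weights_path` is an assumed local copy of the checkpoint file):
```python
import torch
from transformers import AutoConfig, GPTJForCausalLM

weights_path = "path/to/pytorch_model.bin"  # assumption: a local copy of the checkpoint

config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
model = GPTJForCausalLM(config)                            # 1. random init, in fp32
state_dict = torch.load(weights_path, map_location="cpu")  # 2. load the real weights
model.load_state_dict(state_dict, strict=False)            # 3./4. overwrite the random layers
model = model.to(torch.float16)                            # 5. cast to the requested torch_dtype
```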
There is an open issue to solve this problem: https://github.com/huggingface/transformers/issues/12274 but I don't think it'll be that easy to solve.
If ok with you guys (@EricHallahan @StellaAthena) we would merge the PR and then try to fast-track the issue - what do you think?
<|||||>FYI: Model acceleration for GPT-J via deepspeed in the making: https://github.com/microsoft/DeepSpeed/issues/1332<|||||>## Important
We will merge GPT-J now to master. Note that at the moment GPT-J **cannot** be run on a free Google Colab GPU since loading the model weights in fp16 requires too much CPU RAM. At the moment one needs at least **26 GB** of CPU RAM in order to load GPT-J in fp16 precision. We are working on fixing the problem so that in a next step one can load GPT-J with just 12 GB of CPU RAM.<|||||>I feel the need to reiterate that there [remains a redundant *`main`* branch in Model Hub](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main) that is neither [the true single precision checkpoint found in *`float32`*](https://huggingface.co/EleutherAI/gpt-j-6B/tree/float32) nor [the half precision checkpoint found in *`float16`*](https://huggingface.co/EleutherAI/gpt-j-6B/tree/float16). This means that naive usage (i.e. not specifying `revision="float32"` or `revision="float16"`) will not download the proper checkpoints.<|||||>> reiterate
Thanks for letting me know - is it ok if I put the "correct fp32 weights" in the main branch for now? Or do you prefer "fp16"? Both are fine with us :-) Think we can't completely delete the "main" branch for now (cc @LysandreJik)<|||||>> Think we can't completely delete the "main" branch for now
That is my understanding.
> is it ok if I put the "correct fp32 weigts" in the main branch for now? Or do you prefer "fp16"? Both are fine with us :-)
Putting the single precision weights in *`main`* should be fine for now.<|||||>> > Think we can't completely delete the "main" branch for now
>
> That is my understanding.
>
> > is it ok if I put the "correct fp32 weigts" in the main branch for now? Or do you prefer "fp16"? Both are fine with us :-)
>
> Putting the single precision weights in _`main`_ should be fine for now.
+1 this<|||||>Ok great - just uploaded the correct weights to "main". You can see that the sha256 between "main": https://huggingface.co/EleutherAI/gpt-j-6B/blob/main/pytorch_model.bin and "float32" https://huggingface.co/EleutherAI/gpt-j-6B/blob/float32/pytorch_model.bin match now :-) |
transformers | 13,021 | closed | TypeError: __init__() got an unexpected keyword argument 'save_strategy' | ### Environment
- `transformers` version: 4.2.2
- Platform: Linux-4.15.0-151-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Not explicitly (but probably)
- Using distributed or parallel set-up in script?: Not explicitly (but probably)
### Who can help
@sgugger
### Details
I am using RoBERTa for seq classification, but that is not where my issue is coming from. My issue is coming from the Trainer API. Specifically, when I try to specify `save_strategy="epoch"` in `TrainingArguments`, I get the error message in the issue title. I tried updating to a more recent version of Transformers as per another issue, but that didn't work. I'm not sure what to do about it. | 08-06-2021 02:36:02 | 08-06-2021 02:36:02 | You probably did not properly install it. The environment above shows 4.2.2 and `save_strategy` was introduced later on.
You can check the version of Transformers executed by your script by adding
```
import transformers
print(transformers.__version__)
```
at the top of it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
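For reference, once a release that includes `save_strategy` is installed (the 4.2.2 shown above predates it), the argument from the title works as expected — a minimal sketch:
```python
from transformers import TrainingArguments

# Assumes a recent transformers release, e.g. after `pip install -U transformers`.
args = TrainingArguments(
    output_dir="out",
    save_strategy="epoch",
)
print(args.save_strategy)
```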
transformers | 13,020 | closed | RobertaForMaskedLM loss calculated wrong(?) | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.1 (or github main as of 2021.8.5)
- Platform: MacOS (or any)
- Python version: 3.9 (or any 3.x)
- PyTorch version (GPU?): 1.9.0 cpu
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
@LysandreJik, @sgugger
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): XLM-R/Roberta
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) MaskedLM
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. [Tutorial example here](https://huggingface.co/transformers/model_doc/xlmroberta.html#transformers.XLMRobertaForMaskedLM)
2. [Source code here](https://github.com/huggingface/transformers/blob/60e448c87eff29b166bf2821f5389056a72343e3/src/transformers/models/roberta/modeling_roberta.py#L1105)
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The logic of calculating the MLM loss seems wrong. Shouldn't CrossEntropyLoss only run on **masked tokens** rather than all tokens? I see no such masking operation here.
<!-- A clear and concise description of what you would expect to happen. -->
| 08-05-2021 23:18:32 | 08-05-2021 23:18:32 | The calculation is correct. However, it's the responsibility of the user to prepare the labels for the model, so you need to make sure you set the labels to -100 for positions where you don't want to incur a loss (as -100 is the `ignore_index` of PyTorch's `CrossEntropyLoss`).<|||||>Ah, that makes sense, thanks! I think you should update the tutorial page in the documentation, or put what you said somewhere in the docs?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
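To make the `-100` label convention from the discussion above concrete, here is a minimal sketch (it assumes both strings tokenize to the same length; during actual MLM training, `DataCollatorForLanguageModeling` prepares labels this way automatically):
```python
import torch
from transformers import RobertaForMaskedLM, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")
labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
# Only score the masked position; every other position is set to -100,
# the ignore_index of torch.nn.CrossEntropyLoss.
labels = torch.where(
    inputs.input_ids == tokenizer.mask_token_id, labels, torch.full_like(labels, -100)
)
loss = model(**inputs, labels=labels).loss
print(loss)
```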
transformers | 13,019 | closed | GPU Out of Memory when repeatedly running large models (`hyperparameter_search`) | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.1
- Platform: Linux-4.19.0-17-cloud-amd64-x86_64-with-debian-10.10
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0 (True)
- Using GPU in script?: yes (4 x GPUs)
- Using distributed or parallel set-up in script?: There are 4x GPU on this machine; I'm letting the `trainer` do its default thing here. I see that `trainer.is_model_parallel = False`.
### Who can help
Looks like @sgugger has some related activity in trainer...maybe he can point toward the right person to help?
## Information
Model I am using (Bert, XLNet ...): `distilbert-base-uncased`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I'm running fine-tuning for sentence classification using `distilbert-base-uncased`, using the code below. Training set is limited to 10k sentences with binary labels. Eval consists of 500 sentences.
2. Hyperparameter search runs fine for the first ~2 iterations, and then I reliably see a CUDA out-of-memory error `RuntimeError: CUDA out of memory...` (full error pasted at the bottom of this issue).
Looking at my wandb logs, I see that GPU memory is not freed between tuning runs.

(purple is run-0, gray is run-1, blue is run-2).
3. I think this is very closely related/possibly the same as the issue in #1742.
4. I have found that adding some additional lines within the `run_hp_search_optuna` fn to explicitly delete the model and de-allocate memory between runs seems to resolve the problem (see below).
### Code that produces the issue
Running the following code yields the error after ~2 hyperparameter tuning runs.
```python
## setup data
from datasets import DatasetDict
paths = {
    "train": train_file,
    "dev": dev_file,
    "test": test_file,
    "unlabeled": to_classify_file
}
raw_datasets = DatasetDict.from_json(paths)
## setup tokenizer
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
def tokenize_function(x):
    return tokenizer(x["sentence"], x["source_column"], padding="max_length", truncation=True)
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
tokenized_datasets.set_format("torch")
## setup model and metrics
import torch
import gc
from transformers import AutoModelForSequenceClassification
from datasets import load_metric
prec = load_metric("precision")
rec = load_metric("recall")
acc = load_metric("accuracy")
f1 = load_metric("f1")
def model_init():
    return AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2, return_dict=True)
def f_b(p, r, b):
    num = (1 + b**2) * p * r
    den = (b**2 * p) + r
    if den == 0:
        return 0.
    return num/den
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.argmax(axis=-1)
    result = {}
    for mtrc in [prec, rec, acc, f1]:
        mtrc_result = mtrc.compute(predictions=predictions, references=labels)
        result.update(mtrc_result)
    result["f0.5"] = f_b(result["precision"], result["recall"], 0.5)
    return result
def compute_objective(metrics):
    return metrics["eval_f0.5"]
## run hyperparam tuning
from transformers import Trainer, TrainingArguments
gpus_per_trial = 1
n_hyperparam_search_examples = 10000
training_args = TrainingArguments(
    "ls_classifier_distilbert_hyperparams",
    overwrite_output_dir=True,
    do_train=True,
    do_eval=True,
    num_train_epochs=2,
    evaluation_strategy="steps",
    eval_steps=250,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    warmup_steps=0,
    weight_decay=0.1,
    logging_dir="./logs",
    report_to="wandb",
    load_best_model_at_end=True
)
trainer = Trainer(
    model_init=model_init,
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=tokenized_datasets["train"].shuffle(seed=123).select(range(n_hyperparam_search_examples)),
    eval_dataset=tokenized_datasets["dev"],
    compute_metrics=compute_metrics
)
trainer.hyperparameter_search(
    backend="optuna",
    compute_objective=compute_objective,
    n_trials=4,
    direction="maximize",
)
```
### Updates to remedy the issue
If I re-write the `hyperparameter_search` fn with the following additions to `run_hp_search_optuna` (following advice in #1742), then the memory does appear to get de-allocated between tuning runs:
```python
import gc
import os

import torch
from transformers.trainer_utils import BestRun, HPSearchBackend, PREFIX_CHECKPOINT_DIR, default_hp_space

def run_hp_search_optuna(trainer, n_trials, direction, **kwargs):
    import optuna

    def _objective(trial, checkpoint_dir=None):
        checkpoint = None
        if checkpoint_dir:
            for subdir in os.listdir(checkpoint_dir):
                if subdir.startswith(PREFIX_CHECKPOINT_DIR):
                    checkpoint = os.path.join(checkpoint_dir, subdir)
        #################
        ## UPDATES START
        #################
        if not checkpoint:
            # free GPU memory held by the model from the previous trial
            del trainer.model
            gc.collect()
            torch.cuda.empty_cache()
        trainer.objective = None
        trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
        # If there hasn't been any evaluation during the training loop.
        if getattr(trainer, "objective", None) is None:
            metrics = trainer.evaluate()
            trainer.objective = trainer.compute_objective(metrics)
        return trainer.objective

    timeout = kwargs.pop("timeout", None)
    n_jobs = kwargs.pop("n_jobs", 1)
    study = optuna.create_study(direction=direction, **kwargs)
    study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs)
    best_trial = study.best_trial
    return BestRun(str(best_trial.number), best_trial.value, best_trial.params)

def hyperparameter_search(trainer, compute_objective, n_trials, direction, **kwargs):
    trainer.hp_search_backend = HPSearchBackend.OPTUNA
    trainer.hp_space = default_hp_space[HPSearchBackend.OPTUNA]
    trainer.hp_name = None
    trainer.compute_objective = compute_objective
    best_run = run_hp_search_optuna(trainer, n_trials, direction, **kwargs)
    trainer.hp_search_backend = None
    return best_run
```
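For completeness, the patched helper above is then called in place of `trainer.hyperparameter_search`, e.g.:
```python
best_run = hyperparameter_search(
    trainer,
    compute_objective=compute_objective,
    n_trials=4,
    direction="maximize",
)
print(best_run)
```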
### Full error / trace
```
[W 2021-08-05 17:21:10,456] Trial 2 failed because of the following error: RuntimeError('Caught RuntimeError in replica 0 on device 0.\nOriginal Traceback (most recent call last):\n File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker\n output = module(*input, **kwargs)\n File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl\n return forward_call(*input, **kwargs)\n File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 632, in forward\n return_dict=return_dict,\n File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl\n return forward_call(*input, **kwargs)\n File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 495, in forward\n return_dict=return_dict,\n File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl\n return forward_call(*input, **kwargs)\n File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 315, in forward\n x=hidden_state, attn_mask=attn_mask, head_mask=head_mask[i], output_attentions=output_attentions\n File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl\n return forward_call(*input, **kwargs)\n File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 264, in forward\n output_attentions=output_attentions,\n File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl\n return forward_call(*input, **kwargs)\n File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 192, in forward\n scores = torch.matmul(q, k.transpose(2, 3)) # (bs, n_heads, q_length, k_length)\nRuntimeError: CUDA out of memory. Tried to allocate 768.00 MiB (GPU 0; 14.76 GiB total capacity; 12.82 GiB already allocated; 727.75 MiB free; 12.93 GiB reserved in total by PyTorch)\n')
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/optuna/study/_optimize.py", line 213, in _run_trial
value_or_values = func(trial)
File "/opt/conda/lib/python3.7/site-packages/transformers/integrations.py", line 140, in _objective
trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1280, in train
tr_loss += self.training_step(model, inputs)
File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1773, in training_step
loss = self.compute_loss(model, inputs)
File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1805, in compute_loss
outputs = model(**inputs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply
output.reraise()
File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 632, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 495, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 315, in forward
x=hidden_state, attn_mask=attn_mask, head_mask=head_mask[i], output_attentions=output_attentions
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 264, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 192, in forward
scores = torch.matmul(q, k.transpose(2, 3)) # (bs, n_heads, q_length, k_length)
RuntimeError: CUDA out of memory. Tried to allocate 768.00 MiB (GPU 0; 14.76 GiB total capacity; 12.82 GiB already allocated; 727.75 MiB free; 12.93 GiB reserved in total by PyTorch)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_10884/1040859948.py in <module>
35 compute_objective=compute_objective,
36 n_trials=4,
---> 37 direction="maximize",
38 )
39 # trainer.is_model_parallel
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)
1698
1699 run_hp_search = run_hp_search_optuna if backend == HPSearchBackend.OPTUNA else run_hp_search_ray
-> 1700 best_run = run_hp_search(self, n_trials, direction, **kwargs)
1701
1702 self.hp_search_backend = None
/opt/conda/lib/python3.7/site-packages/transformers/integrations.py in run_hp_search_optuna(trainer, n_trials, direction, **kwargs)
148 n_jobs = kwargs.pop("n_jobs", 1)
149 study = optuna.create_study(direction=direction, **kwargs)
--> 150 study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs)
151 best_trial = study.best_trial
152 return BestRun(str(best_trial.number), best_trial.value, best_trial.params)
/opt/conda/lib/python3.7/site-packages/optuna/study/study.py in optimize(self, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
407 callbacks=callbacks,
408 gc_after_trial=gc_after_trial,
--> 409 show_progress_bar=show_progress_bar,
410 )
411
/opt/conda/lib/python3.7/site-packages/optuna/study/_optimize.py in _optimize(study, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
74 reseed_sampler_rng=False,
75 time_start=None,
---> 76 progress_bar=progress_bar,
77 )
78 else:
/opt/conda/lib/python3.7/site-packages/optuna/study/_optimize.py in _optimize_sequential(study, func, n_trials, timeout, catch, callbacks, gc_after_trial, reseed_sampler_rng, time_start, progress_bar)
161
162 try:
--> 163 trial = _run_trial(study, func, catch)
164 except Exception:
165 raise
/opt/conda/lib/python3.7/site-packages/optuna/study/_optimize.py in _run_trial(study, func, catch)
262
263 if state == TrialState.FAIL and func_err is not None and not isinstance(func_err, catch):
--> 264 raise func_err
265 return trial
266
/opt/conda/lib/python3.7/site-packages/optuna/study/_optimize.py in _run_trial(study, func, catch)
211
212 try:
--> 213 value_or_values = func(trial)
214 except exceptions.TrialPruned as e:
215 # TODO(mamu): Handle multi-objective cases.
/opt/conda/lib/python3.7/site-packages/transformers/integrations.py in _objective(trial, checkpoint_dir)
138 checkpoint = os.path.join(checkpoint_dir, subdir)
139 trainer.objective = None
--> 140 trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
141 # If there hasn't been any evaluation during the training loop.
142 if getattr(trainer, "objective", None) is None:
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1278 tr_loss += self.training_step(model, inputs)
1279 else:
-> 1280 tr_loss += self.training_step(model, inputs)
1281 self.current_flos += float(self.floating_point_ops(inputs))
1282
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs)
1771 loss = self.compute_loss(model, inputs)
1772 else:
-> 1773 loss = self.compute_loss(model, inputs)
1774
1775 if self.args.n_gpu > 1:
/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1803 else:
1804 labels = None
-> 1805 outputs = model(**inputs)
1806 # Save past state if it exists
1807 # TODO: this needs to be fixed and made cleaner later.
/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
166 return self.module(*inputs[0], **kwargs[0])
167 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 168 outputs = self.parallel_apply(replicas, inputs, kwargs)
169 return self.gather(outputs, self.output_device)
170
/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)
176
177 def parallel_apply(self, replicas, inputs, kwargs):
--> 178 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
179
180 def gather(self, outputs, output_device):
/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)
84 output = results[i]
85 if isinstance(output, ExceptionWrapper):
---> 86 output.reraise()
87 outputs.append(output)
88 return outputs
/opt/conda/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
423 # have message field
424 raise self.exc_type(message=msg)
--> 425 raise self.exc_type(msg)
426
427
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 632, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 495, in forward
return_dict=return_dict,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 315, in forward
x=hidden_state, attn_mask=attn_mask, head_mask=head_mask[i], output_attentions=output_attentions
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 264, in forward
output_attentions=output_attentions,
File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/distilbert/modeling_distilbert.py", line 192, in forward
scores = torch.matmul(q, k.transpose(2, 3)) # (bs, n_heads, q_length, k_length)
RuntimeError: CUDA out of memory. Tried to allocate 768.00 MiB (GPU 0; 14.76 GiB total capacity; 12.82 GiB already allocated; 727.75 MiB free; 12.93 GiB reserved in total by PyTorch)
``` | 08-05-2021 18:05:26 | 08-05-2021 18:05:26 | Thanks for the issue and the investigation. It looks like you have found the right fix, would you mind making a PR with it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm experiencing the exact same problem. Sadly, the suggested solution doesn't work for me. At first I had the impression that the OutOfMemoryError shows up a bit later now (sometimes after 6–8 instead of 2 iterations), but that might be a coincidence.
I'm using Python 3.10.11, PyTorch 2.0.1, 1 GPU with 24 GiB GPU Memory, Platform: Linux (Ubuntu 20.04.1) with x86_64 architecture on AWS.<|||||>I too am experiencing the same error. Memory increases at every parameter change until an OOM is reached.

|
transformers | 13,018 | closed | Unable to resume training from checkpoint on TPU v3-8 | I'm facing a similar issue as #11326. When trying to resume training from checkpoint on TPUs, it crashes with error message `ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group`
## Environment info
- `transformers` version: 4.10.0.dev0
- Platform: Linux-4.19.0-17-cloud-amd64-x86_64-with-debian-10.10
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No, TPU v3 on GCP
- Using distributed or parallel set-up in script?: yes, `xla_spawn`
### Who can help
@sgugger
## Information
Model I am using: RoBERTa model
The problem arises when using:
[x] my own modified scripts:
A barely modified version of `run_mlm.py` (check [here](https://gist.github.com/finiteautomata/bef480d508d12e2028fdeae19a92b350))
## To reproduce
Steps to reproduce the behavior:
1. Run `python xla_spawn.py run_mlm.py config.json`
2. Save checkpoint
3. Run again `python xla_spawn.py run_mlm.py config.json` (with `resume_from_checkpoint` set to `true`)
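For context, the resume logic in `run_mlm.py`-style scripts boils down to something like this simplified sketch (not the exact script):
```python
from transformers import Trainer
from transformers.trainer_utils import get_last_checkpoint

def train_with_resume(trainer: Trainer, output_dir: str, resume_from_checkpoint=None):
    # fall back to the newest checkpoint-* folder in output_dir, as the example scripts do
    checkpoint = resume_from_checkpoint or get_last_checkpoint(output_dir)
    return trainer.train(resume_from_checkpoint=checkpoint)
```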
```config.json
{
"train_dir": "data/tweets/train",
"eval_dir": "data/tweets/test",
"pad_to_max_length": true,
"max_seq_length": 128,
"tokenize_on_the_fly": true,
"do_train": true,
"do_eval": true,
"seed": 123456,
"max_steps": 225000,
"eval_steps": 6000,
"save_steps": 1500,
"max_eval_samples": 150000,
"logging_steps": 200,
"logging_strategy": "steps",
"logging_dir": "./logs/",
"evaluation_strategy": "steps",
"config_name": "models/twerto-base-uncased",
"tokenizer_name": "models/twerto-base-uncased",
"output_dir": "models/twerto-base-uncased-trained",
"tokenization_batch_size": 81920,
"weight_decay": 0.01,
"adam_beta1": 0.9,
"adam_beta2": 0.98,
"adam_epsilon": 1e-6,
"learning_rate": 6e-4,
"max_grad_norm": 0,
"warmup_ratio": 0.06,
"resume_from_checkpoint": true,
"ignore_data_skip": true,
"per_device_train_batch_size": 128,
"per_device_eval_batch_size": 128,
"gradient_accumulation_steps": 4
}
```
### Error trace
```python
[INFO|trainer.py:1053] 2021-08-05 14:43:49,852 >> Loading model from models/twerto-base-uncased-trained/checkpoint-21000).
Traceback (most recent call last):
File "bin/run_mlm.py", line 507, in <module>
main(time.time())
File "bin/run_mlm.py", line 449, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/jmperez/.cache/pypoetry/virtualenvs/finetune-vs-scratch-gHiQbun3-py3.7/lib/python3.7/site-packages/transformers/trainer.py", line 1153, in train
self._load_optimizer_and_scheduler(resume_from_checkpoint)
File "/home/jmperez/.cache/pypoetry/virtualenvs/finetune-vs-scratch-gHiQbun3-py3.7/lib/python3.7/site-packages/transformers/trainer.py", line 1612, in _load_optimizer_and_scheduler
self.optimizer.load_state_dict(optimizer_state)
File "/home/jmperez/.cache/pypoetry/virtualenvs/finetune-vs-scratch-gHiQbun3-py3.7/lib/python3.7/site-packages/torch/optim/optimizer.py", line 145, in load_state_dict
raise ValueError("loaded state dict contains a parameter group "
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
```
## Expected behavior
Training resumes from checkpoint
| 08-05-2021 14:59:12 | 08-05-2021 14:59:12 | It looks like you are not using the `run_mlm` script but a modified version of it, as there are parameters you are passing that are not in this script. Could you share your modified version?<|||||>Sure. This is the modified version => https://gist.github.com/finiteautomata/cb1fba94202c1535d2a516eef2215baf
The main changes are an extra seed argument and the use of a custom `IterableDataset`. Running without `xla_spawn.py` seems to yield the same error
This is the model configuration (`models/twerto-base-uncased/config.json`)
```json
{
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 130,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.9.1",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 30000
}
```
Update: you can repeat this file a couple of times and change `train_dir`/`test_dir` to point to the directory containing it
https://gist.github.com/finiteautomata/38bf8893ad0035e7001653a91a5f7ec3<|||||>After some digging, I gathered some extra information of the crash. It seems that the first saved params (`saved_groups[0]["params"]`) contains 77 elements, while the new optimizer has 76. This raises the exception
```python
1149 if delay_optimizer_creation:
1150 self.create_optimizer_and_scheduler(num_training_steps=max_steps)
1151
1152 # Check if saved optimizer or scheduler states exist
-> 1153 self._load_optimizer_and_scheduler(resume_from_checkpoint)
1154
1155 # important: at this point:
1156 # self.model is the Transformers Model
1157 # self.model_wrapped is DDP(Transformers Model), Deepspeed(Transformers Model), etc.
1158
1159 # Train!
1160 num_examples = (
1161 self.num_examples(train_dataloader) if train_dataset_is_sized else total_train_batch_size * args.max_steps
1162 )
/home/jmperez/.cache/pypoetry/virtualenvs/finetune-vs-scratch-gHiQbun3-py3.7/lib/python3.7/site-packages/transformers/trainer.py(1612)_load_optimizer_and_scheduler()
1602 if is_torch_tpu_available():
1603 # On TPU we have to take some extra precautions to properly load the states on the right device.
1604 optimizer_state = torch.load(os.path.join(checkpoint, "optimizer.pt"), map_location="cpu")
1605 with warnings.catch_warnings(record=True) as caught_warnings:
1606 lr_scheduler_state = torch.load(os.path.join(checkpoint, "scheduler.pt"), map_location="cpu")
1607 reissue_pt_warnings(caught_warnings)
1608
1609 xm.send_cpu_data_to_device(optimizer_state, self.args.device)
1610 xm.send_cpu_data_to_device(lr_scheduler_state, self.args.device)
1611 import ipdb; ipdb.set_trace()
-> 1612 self.optimizer.load_state_dict(optimizer_state)
1613 self.lr_scheduler.load_state_dict(lr_scheduler_state)
1614 else:
1615 map_location = "cpu" if is_sagemaker_mp_enabled() else self.args.device
1616 self.optimizer.load_state_dict(
1617 torch.load(os.path.join(checkpoint, "optimizer.pt"), map_location=map_location)
1618 )
1619 with warnings.catch_warnings(record=True) as caught_warnings:
1620 self.lr_scheduler.load_state_dict(torch.load(os.path.join(checkpoint, "scheduler.pt")))
1621 reissue_pt_warnings(caught_warnings)
> /home/jmperez/.cache/pypoetry/virtualenvs/finetune-vs-scratch-gHiQbun3-py3.7/lib/python3.7/site-packages/torch/optim/optimizer.py(144)load_state_dict()
134 state_dict = deepcopy(state_dict)
135 # Validate the state_dict
136 groups = self.param_groups
137 saved_groups = state_dict['param_groups']
138
139 if len(groups) != len(saved_groups):
140 raise ValueError("loaded state dict has a different number of "
141 "parameter groups")
142 param_lens = (len(g['params']) for g in groups)
143 saved_lens = (len(g['params']) for g in saved_groups)
--> 144 if any(p_len != s_len for p_len, s_len in zip(param_lens, saved_lens)):
145 raise ValueError("loaded state dict contains a parameter group "
146 "that doesn't match the size of optimizer's group")
147
148 # Update the state
149 id_map = {old_id: p for old_id, p in
150 zip(chain.from_iterable((g['params'] for g in saved_groups)),
151 chain.from_iterable((g['params'] for g in groups)))}
152
153 def cast(param, value):
ipdb> len(saved_groups[1]["params"])
127
ipdb> len(groups[1]["params"])
127
ipdb> len(saved_groups[0]["params"])
77
ipdb> len(groups[0]["params"])
76
ipdb> saved_groups[0]["params"]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76]
ipdb> type(groups[0]["params"])
<class 'list'>
ipdb> groups[0]["params"][0]
Parameter containing:
tensor([[ 0.0166, 0.0198, 0.0155, ..., 0.0200, -0.0159, 0.0022],
[ 0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ 0.0469, 0.0027, -0.0038, ..., 0.0280, 0.0718, 0.0199],
...,
[ 0.0048, 0.0189, -0.0068, ..., -0.0642, -0.0060, 0.0320],
[-0.0138, -0.0080, 0.0119, ..., 0.0585, -0.0214, -0.0042],
[ 0.0244, 0.0121, -0.0498, ..., -0.0162, -0.0110, -0.0159]],
device='xla:1', requires_grad=True)
```<|||||>What's the architecture used? It could be a model that adds some parameters during training for some reason (`twerto-base-uncased-trained` does not help me ;-) )<|||||>It is a `RobertaForMaskedLM`
I changed and re-ran everything, updating the `transformers_version` (I noticed there was a mismatch between the environment version and the one in the config file) and adding the architecture, with no success.
```json
{
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"classifier_dropout": null,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 130,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"torch_dtype": "float32",
"transformers_version": "4.10.0.dev0",
"type_vocab_size": 1,
"use_cache": true,
"vocab_size": 30000
}
```<|||||>Ok, it seems that `RoBERTa` is the problem here. Changing
```json
{
"architectures": [
"BertForMaskedLM"
],
# ...
"model_type": "bert",
}
```
enables checkpoint recovery.
Is this problem in my config or is this a bug?<|||||>Good to know it's specific to Roberta! I think it may be due to some parameter being created dynamically during training. Will investigate more tomorrow.<|||||>More digging: when trying to reload the checkpoint, it seems that the missing parameter name (that is, the one that the new optimizer is not willing to load) is `['lm_head.decoder.weight']`. <|||||>A Colab notebook reproducing the error on TPU, without any custom script
https://colab.research.google.com/drive/1GvOktm36m3Q43KWLv681QU8VydubOTAQ?usp=sharing
This notebook is essentially the same but uses GPUs, and it works:
https://colab.research.google.com/drive/1GMUgpSNIAdGTk9mOk6ua5pACj-Qlgs9v?usp=sharing
So the problem is `RoBERTa`+TPUs<|||||>I have taken a deep dive into this issue, and it made me discover that all the weight tying in Transformers was thrown away the moment the model was placed on an XLA device, which is why your state dict contains more tensors than your model expects.
#13030 should fix the issue.<|||||>Great work @sgugger. I can confirm that the notebook in the last comment now reloads the checkpoint.
Just one extra question: should I use a saved checkpoint with the previous code or is it now useless? I'm not sure if there was a problem during training too<|||||>You should definitely start from scratch (sorry) as your previous trainings don't have the proper weights for the decoder (they are not saved since they are supposed to be the same as the embeddings so you can't even retrieve them)<|||||>Great! Thanks again |
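For reference, the tying being described can be checked directly. An illustrative snippet (CPU only; on an XLA device this identity is precisely what was getting lost before the fix):
```python
from transformers import RobertaForMaskedLM

model = RobertaForMaskedLM.from_pretrained("roberta-base")
# the MLM decoder weight is meant to be the very same tensor object as the input embeddings
print(model.lm_head.decoder.weight is model.roberta.embeddings.word_embeddings.weight)  # True
```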
transformers | 13,017 | closed | Fix VisualBert Embeddings | # What does this PR do?
This PR addresses the issue mentioned in #13001. The `self.input_embeds` has been replaced with `self.position_ids` as suggested by @NielsRogge. | 08-05-2021 14:50:30 | 08-05-2021 14:50:30 | Did you verify that it fixes the error mentioned in the issue?
i.e. does the following work:
```
import torch
from transformers import BertTokenizer, VisualBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = VisualBertModel.from_pretrained('uclanlp/visualbert-vqa-coco-pre')
inputs = tokenizer("The capital of France is Paris.", return_tensors="pt")
visual_embeds = torch.zeros((1,36,2048)) #example of ROI features
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long) #example
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update({
"visual_embeds": visual_embeds,
"visual_token_type_ids": visual_token_type_ids,
"visual_attention_mask": visual_attention_mask
})
outputs = model(**inputs)
```<|||||>@NielsRogge
Yes. I tried it for the example and it works fine.<|||||>@NielsRogge Does this look okay?<|||||>I don't understand |
transformers | 13,016 | closed | FX submodule naming fix | This PR is related to HFTracer, the class responsible for allowing torch.fx symbolic tracing on transformers models.
It enhances the way dynamically inserted modules are named, making the name of the submodule inserted into the parent more explicit and closer to what the submodule represents. It also solves issues related to the way submodules were inserted: instead of using `setattr`, `nn.Module.add_module()` is used. | 08-05-2021 13:55:34 | 08-05-2021 13:55:34 | 
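A minimal illustration of the difference described above (hypothetical module names, not taken from the PR):
```python
import torch.nn as nn

parent = nn.Module()

# plain setattr also registers a child module, but the attribute name is all you get
setattr(parent, "layer_0", nn.Linear(4, 4))

# add_module makes the registration explicit and keeps the chosen, descriptive name under control
parent.add_module("wrapped_linear_leaf", nn.Linear(4, 4))

print([name for name, _ in parent.named_modules()])  # ['', 'layer_0', 'wrapped_linear_leaf']
```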
transformers | 13,015 | closed | Fix TYPE_CHECKING not imported | # What does this PR do?
Fixes the omitted import of `TYPE_CHECKING` in the xlm_prophetnet model's `__init__.py`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| 08-05-2021 13:46:38 | 08-05-2021 13:46:38 | Actually the whole init needs to be rewritten to be the same as other models, it was somehow missed when we converted all models. Would like to amend your PR in that direction?<|||||>Your rebase has introduced many file changes in the diff that make the PR unreadable. Once you're satisfied with your branch, could you close this PR and open a new one? This should make the diff better.<|||||>Sure thing. |
transformers | 13,014 | closed | T5 with past ONNX export | This PR enables the export of T5 with past keys and values to ONNX.
It also enhances the ONNX export when using past keys and values by making the inputs and outputs names for past_key_values more explicit and easy to understand. | 08-05-2021 12:33:33 | 08-05-2021 12:33:33 | |
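As an illustration of what more explicit names can look like, a purely hypothetical sketch (the exact strings are defined by the ONNX config in the PR):
```python
# hypothetical helper producing flattened, self-describing names for past key/values inputs
def past_key_values_names(num_layers: int):
    names = []
    for i in range(num_layers):
        for side in ("decoder", "encoder"):
            for kind in ("key", "value"):
                names.append(f"past_key_values.{i}.{side}.{kind}")
    return names

print(past_key_values_names(1))
```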
transformers | 13,013 | closed | Update generate method - Fix floor_divide warning | # What does this PR do?
Starting with PyTorch 1.9, a HF translation model (or any generation model) gives the following warning message:
```
/home/reimers/miniconda3/envs/easynmt/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /opt/conda/conda-bld/pytorch_1623448234945/work/aten/src/ATen/native/BinaryOps.cpp:467.)
return torch.floor_divide(self, other)
```
Here is the code that produces this warning:
```
import warnings
warnings.filterwarnings("error") #Turn warning into an exception for traceback
from transformers import MarianTokenizer, MarianMTModel
model_name = 'Helsinki-NLP/opus-mt-de-en'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
model.eval()
inputs = tokenizer(["Hallo Welt"], return_tensors="pt")
translated = model.generate(**inputs, num_beams=3)
print(translated)
```
Here is the responsible line:
https://github.com/huggingface/transformers/blob/a6d62aaba01ce4ff1b2ee8705bf113904672c345/src/transformers/generation_utils.py#L1838
The // operator is translated to floor_divide, which is deprecated starting PyTorch 1.9:
https://pytorch.org/docs/stable/generated/torch.floor_divide.html
We replace this line with:
```
next_indices = (next_tokens/vocab_size).long()
```
which is compatible with any PyTorch version and yields identical results to `next_tokens // vocab_size`.
Here is a test to show this:
```
import random
import torch
for _ in range(100):
a = torch.tensor([random.randint(1, 1000)])
b = torch.tensor([random.randint(1, 100)])
c = a // b
d = (a/b).long()
assert torch.equal(c,d)
```
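A version-guarded variant is also possible (illustrative only, not part of this PR): use `rounding_mode='floor'` where it exists and fall back to the truncating division otherwise, which is safe here because both operands are non-negative:
```python
import torch

def floor_div(a: torch.Tensor, b: int) -> torch.Tensor:
    try:
        return torch.div(a, b, rounding_mode="floor")  # PyTorch >= 1.8
    except TypeError:
        return (a / b).long()  # older releases; equivalent for non-negative values
```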
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger
| 08-05-2021 12:04:57 | 08-05-2021 12:04:57 | According to the document of [`torch.div`,](https://pytorch.org/docs/stable/generated/torch.div.html#torch.div) it is more suitable to change `next_indices = next_tokens // vocab_size` to `next_indices = torch.div(next_tokens, vocab_size, rounding_mode='floor')`.
`"floor"` - rounds the results of the division down. Equivalent to floor division in Python (the // operator)<|||||>> According to the document of [`torch.div`,](https://pytorch.org/docs/stable/generated/torch.div.html#torch.div) it is more suitable to change `next_indices = next_tokens // vocab_size` to `next_indices = torch.div(next_tokens, vocab_size, rounding_mode='floor')`.
>
> `"floor"` - rounds the results of the division down. Equivalent to floor division in Python (the // operator)
`rounding_mode` was only introduced in PyTorch 1.8. Using this method would break with any PyTorch version before 1.8. `(next_tokens/vocab_size).long()` is compatible with any PyTorch version.<|||||>This makes sense!
transformers | 13,012 | closed | [Flax T5] Speed up t5 training | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR makes sure that no jax functionality is used in the preprocessing to make sure the TPU is not unnecessarily blocked. This small change leads to a **5x** speed-up in training T5.
🚨🚨 **Note**: It is extremely important to verify that no DeviceArrays are created during the data preprocessing, to make sure that the training step can run asynchronously on TPU while the preprocessing runs on CPU. A good rule is to make sure that in the training loop only the function `p_train_step` uses JAX/Flax code and all other functions run on CPU. Other relevant links: https://jax.readthedocs.io/en/latest/async_dispatch.html#async-dispatch 🚨🚨
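A sketch of what this looks like in a training loop (illustrative, not the actual example script; `p_train_step` is assumed to be the pmapped train step and `dataset` a dict of NumPy arrays):
```python
import numpy as np
from flax.training.common_utils import shard

def train_epoch(state, dataset, batch_indices, p_train_step):
    for batch_idx in batch_indices:
        # build the batch with NumPy on the host so no DeviceArray is created during preprocessing
        batch = {k: np.asarray(v[batch_idx]) for k, v in dataset.items()}
        batch = shard(batch)  # split across local devices right before the pmapped call
        state, train_metric = p_train_step(state, batch)  # the only JAX call in the loop
    return state
```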
| 08-05-2021 10:45:57 | 08-05-2021 10:45:57 | The PR is tested here: https://huggingface.co/patrickvonplaten/t5-base-norwegian/tensorboard (check train loss graph which shows that time is reduced to < 5h now) |
transformers | 13,011 | closed | The traced Encoder of LEDForConditionalGeneration does not allow dynamic batching | We traced the encoder of LEDForConditionalGeneration using TorchScript and passed a different batch size to the traced encoder as follows.
```
import torch
from transformers import LEDForConditionalGeneration
class WrappedModel(torch.nn.Module):
def __init__(self):
super(WrappedModel, self).__init__()
self.model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384", torchscript=True).led.encoder
def forward(self, data):
return self.model(data)
example = torch.zeros((1,128), dtype=torch.long)+ 10 # bsz , seqlen
pt_model = WrappedModel().eval()
traced_script_module = torch.jit.trace(pt_model, example)
example_dynamic_batch = torch.zeros((4,128), dtype=torch.long) # bsz , seqlen
traced_script_module(example_dynamic_batch)
```
Being able to vary the batch size during deployment is necessary for dynamic batching to work (for instance, when using Triton inference server).
Passing a different batch size than the one used during tracing results in the following error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-4-5e85f8f70ee7> in <module>
10 pt_model = WrappedModel().eval()
11 traced_script_module = torch.jit.trace(pt_model, example)
---> 12 traced_script_module(example.repeat(4,1))
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/transformers/models/led/modeling_led.py(447): _sliding_chunks_query_key_matmul
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/transformers/models/led/modeling_led.py(202): forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/transformers/models/led/modeling_led.py(725): forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/transformers/models/led/modeling_led.py(914): forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/transformers/models/led/modeling_led.py(1838): forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
<ipython-input-4-5e85f8f70ee7>(7): forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(709): _slow_forward
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/nn/modules/module.py(725): _call_impl
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/jit/_trace.py(940): trace_module
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/torch/jit/_trace.py(742): trace
<ipython-input-4-5e85f8f70ee7>(11): <module>
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3441): run_code
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3361): run_ast_nodes
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/IPython/core/interactiveshell.py(3170): run_cell_async
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/IPython/core/async_helpers.py(68): _pseudo_sync_runner
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/IPython/core/interactiveshell.py(2944): _run_cell
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/IPython/core/interactiveshell.py(2899): run_cell
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/ipykernel/zmqshell.py(539): run_cell
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/ipykernel/ipkernel.py(302): do_execute
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/gen.py(234): wrapper
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/ipykernel/kernelbase.py(538): execute_request
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/gen.py(234): wrapper
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/ipykernel/kernelbase.py(261): dispatch_shell
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/gen.py(234): wrapper
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/ipykernel/kernelbase.py(358): process_one
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/gen.py(775): run
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/gen.py(814): inner
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/ioloop.py(741): _run_callback
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/ioloop.py(688): <lambda>
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/asyncio/events.py(88): _run
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/asyncio/base_events.py(1758): _run_once
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/asyncio/base_events.py(523): run_forever
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/tornado/platform/asyncio.py(199): start
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/ipykernel/kernelapp.py(619): start
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/traitlets/config/application.py(845): launch_instance
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/site-packages/ipykernel_launcher.py(16): <module>
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/runpy.py(85): _run_code
/dccstor/gpandey11/anaconda3/envs/test/lib/python3.7/runpy.py(193): _run_module_as_main
RuntimeError: shape '[12, 1, 512, 513]' is invalid for input of size 12607488
```
Dynamic batching works fine with the BERT model. For example, the following code gives the correct output.
```
import torch
from transformers import BertForSequenceClassification
class WrappedModel(torch.nn.Module):
def __init__(self):
super(WrappedModel, self).__init__()
self.model = BertForSequenceClassification.from_pretrained('bert-base-uncased', torchscript=True)
def forward(self, data):
return self.model(data)
example = torch.zeros((1,128), dtype=torch.long)+ 10 # bsz , seqlen
pt_model = WrappedModel().eval()
traced_script_module = torch.jit.trace(pt_model, example)
example_dynamic_batch = torch.zeros((4,128), dtype=torch.long) # bsz , seqlen
traced_script_module(example_dynamic_batch)
```
## Environment
```
- `transformers` version: 4.3.2
- Platform: Linux-4.18.0-240.22.1.el8_3.x86_64-x86_64-with-redhat-8.3-Ootpa
- Python version: 3.7.0
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
```
| 08-05-2021 09:11:56 | 08-05-2021 09:11:56 | Hey @gauravpandeyamu,
This seems to be a rather edge-casy and difficult error to debug! I'm not sure I'll manage to find the time to look into it. As a first step, could you try using current master instead of Transformers 4.3.2 to see if it changes anything in the error message?<|||||>@patrickvonplaten Sure, I will try it today and let you know. Thanks<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,010 | closed | GPT-J | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 08-05-2021 08:40:34 | 08-05-2021 08:40:34 | |
transformers | 13,009 | closed | Problem saving tf wav2vec in savedmodel format | 
this is my code | 08-05-2021 07:29:29 | 08-05-2021 07:29:29 | Hey @ahmed451 - could you please add a code snippet to reproduce the error instead of a screenshot? Thanks!<|||||>sure
```
from transformers import TFWav2Vec2ForCTC
model = TFWav2Vec2ForCTC.from_pretrained('patrickvonplaten/wav2vec2-base-timit-demo', from_pt=True)
model.save_pretrained("/content/test", saved_model=True)
```<|||||>I've run your code snippet in the following environment:
```
- `transformers` version: 4.10.0.dev0
- Platform: Linux-4.15.0-112-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- Tensorflow version (GPU?): 2.3.0 (False)
```
and to me it looks like this is a problem coming from Tensorflow directly. I.E. the error output is:
```
UnimplementedError: The Conv2D op currently does not support grouped convolutions on the CPU. A grouped convolution was attempted to be run because the input depth of 768 does not match the filter input depth of 48 [Op:Conv2D]
```
Also gently pinging @will-rice here in case he has seen something like this before :-)<|||||>This is/was a TensorFlow limitation, but according to [this](https://github.com/tensorflow/tensorflow/issues/29005) `2.6` may have solved it. First I would try upgrading to `2.6` or the latest nightly. Another option could be the [workaround](https://github.com/tensorflow/tensorflow/issues/40044) for this problem in TFLite. I will say the workaround is slower though.<|||||>@patrickvonplaten How do I install the transformers 4.10.0 version?<|||||>It's current master :-) So `!pip install git+https://github.com/huggingface/transformers.git@master` should do<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,008 | closed | [Flax Encoder Decoder] Make Flax GPT2 working with cross attention | # What does this PR do?
The current Flax GPT2 doesn't support cross attention, while PyTorch's GPT2 does. This PR adds cross attention to Flax GPT2, closely following the code in PyTorch's GPT2 and Flax's Bart models.
However, I add one more thing, which is the projection from the encoder's last hidden state to the dimension size of the decoder's hidden states. I think this is useful when we want to combine GPT2 with different pretrained encoders (in particular, image encoders like ViT or CLIPVision).
```
project_encoder = getattr(self.config, "project_encoder", None)
if project_encoder:
encoder_hidden_states = self.encoder_projection_ln(encoder_hidden_states)
feed_forward_hidden_states = self.encoder_projection_mlp(
encoder_hidden_states, deterministic=deterministic
)
# residual connection
encoder_hidden_states = feed_forward_hidden_states
```
If HuggingFace thinks it is better not to include this (so it would be more identical to PyTorch's version), I will remove it.
Finally, is there any documentation file I need to edit for this change? If so, could you point me to which file(s), please?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@patil-suraj
| 08-05-2021 07:05:19 | 08-05-2021 07:05:19 | This is a great PR @ydshieh! Thanks a lot for working on this! :-) The PR looks great - that's exactly how I would have implemented it as well.
It would be great if you could remove the encoder->decoder projection layer in a first PR to make it consistent with PyTorch. Also we will probably have to add a `FlaxEncoderDecoder` model architecture file in addition to this to showcase how GPT2 can be used with cross attention and to test this new feature.
The `FlaxEncoderDecoder` model should look very similar to the PyTorch implementation: https://github.com/huggingface/transformers/blob/master/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py . We can also get some inspiration from https://github.com/gchhablani/multilingual-image-captioning/blob/main/models/flax_clip_vision_mbart/modeling_clip_vision_mbart.py . We'll have to make it more general though cc @gchhablani @bhadreshpsavani
=> It's important that we test newly added features (such as GPT2's cross attention layer) so I think we'll have to add `modeling_flax_encoder_decoder.py` right away. This will definitely require some more work. If you are interested in giving it a shot @ydshieh that would be great - otherwise I can also continue this PR next week :-) <|||||>@patrickvonplaten Thanks for the feedback, I will remove the encoder->decoder projection layer.
Yes, I would like to work on `FlaxEncoderDecoder`, it is a great learning chance. If I understand correctly, you prefer `FlaxEncoderDecoder` being included in this PR, rather than in a separate PR, right?<|||||>Excatly let's include it in this PR so that we can then also add a first test for it with GPT2, like this one for PyTorch: https://github.com/huggingface/transformers/blob/9870093f7b31bf774fe6bdfeed5e08f0d4649b07/tests/test_modeling_encoder_decoder.py#L721<|||||>@ydshieh This is great! Do let me know if I can help in any way.<|||||>Hi, @patrickvonplaten
Here is my first attempt to `FlaxEncoderDecoderModel`. However, I have 3 main questions - when you have time, could you give some suggestions for them, please?
1. The `__call__/encode/decode` methods in Flax models (and modules) don't seem to have `**kwargs`, at least, not in `FlaxBartModel` code.
The current version of `FlaxEncoderDecoderModel` doesn't have a `token_type_ids` parameter, and might have problems when the decoder module is `FlaxBertModule`, because it requires a `token_type_ids` argument.
Do you have a better idea to deal with the `token_type_ids` parameter?
- Try to add it explicitly in the methods' parameters, like `position_ids`?
- Or is there a good way to use `**kwargs` in this case?
2. In `self.__call__()`, when `decoder_input_ids` is `None`, we use `shift_tokens_right()` and it requires `decoder_start_token_id`.
However, `self.config` (EncoderDecoderConfig), or even `self.config.decoder`, might not have `decoder_start_token_id` defined.
- Should we try to add `decoder_start_token_id` in `self.from_encoder_decoder_pretrained()`, using similar logic in `generation_utils._get_decoder_start_token_id()`?
- Or we just leave the users to specify it (when it is not already in the config)?
3. In `modeling_encoder_decoder.EncoderDecoderModel.prepare_inputs_for_generation()`, we use the decoder model's
`prepare_inputs_for_generation()`:
decoder_inputs = self.decoder.prepare_inputs_for_generation(decoder_input_ids, ...)
However, in Flax's version, we only have the decoder module, not the
decoder model. Is the current `FlaxEncoderDecoderModel.prepare_inputs_for_generation()` implementation OK?
There are 5 other comments starting with "# Q: ". It would be great if you could also give some feedback on them, but they are less important.<|||||>Hey @ydshieh,
The PR already seems to be in a great shape - awesome job! Thanks a lot for being so clear about your questions - I answered most of them in the comments above.
In short:
- let's just remove `token_type_ids` for FlaxEncoderDecoder for now
- `decoder_input_ids` should never be generated from `input_ids` here, the user should be forced to pass them
- we should define a `decode` function and `prepare_inputs_for_generation` similar to how it's done for `FlaxBart`
- The goal of this PR should really be to enable tests like those: https://github.com/huggingface/transformers/blob/e46ad22cd6cb28f78f4d9b6314e7581d8fd97dc5/tests/test_modeling_encoder_decoder.py#L721
Note that this PR won't (yet) enable generic ImageToText but just generic TextToText with GPT2 as the decoder. In a follow-up PR we will then define a new `FlaxImageEncoderDecoder` class specifically for ImageToText. However it's the much better approach in my opinion to start with TextToText (as you're doing it here) where we can more or less translate most of the code from PyTorch.
Please let me know if anything is unclear! I'm more than happy to also take a deeper look if you're stuck somewhere :-)<|||||>Hey @patrickvonplaten Thanks for all the feedback. I will continue the remaining work, including the test script as you mentioned.
Since we decided not to consider `token_type_ids` for now, I will need to change the example in the model file from `bert2gpt2 = ...` to `gpt2togpt2 = ...`, otherwise the example won't run (it can't even initialize the model). I tested locally with
```
FlaxEncoderDecoderModel.from_encoder_decoder_pretrained('gpt2', 'gpt2')
```
and it at least can run `__call__`. Unless you have other ideas for a pair for the example, I am going for it :)
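For reference, the kind of smoke test meant here, written against the API discussed in this PR (illustrative, names may still shift):
```python
from transformers import FlaxEncoderDecoderModel, GPT2Tokenizer

model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("gpt2", "gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

inputs = tokenizer("Hello world", return_tensors="np")
# decoder_input_ids must be passed explicitly since they are not derived from input_ids
outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=inputs["input_ids"])
print(outputs.logits.shape)
```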
<|||||>Hi @patrickvonplaten , I have made FlaxEncoderDecoder available to the library. It remains to add the test file :)
<|||||>Hi, @patrickvonplaten , I finished the work by adding the test file, which is copied from `test_modeling_encoder_decoder.py` and modified. A few tests have been removed, for example:
- The part related to `shared_weights` (tie encoder decoder): I can't find something similar to the following for Flax
https://github.com/huggingface/transformers/blob/a13c8145bc2810e3f0a52da22ae6a6366587a41b/src/transformers/modeling_utils.py#L602 so currently, `FlaxEncoderDecoderModel` doesn't deal with `tie_encoder_decoder`.
- The part related to `EncoderDecoderModel(encoder=encoder_model, decoder=decoder_model)`, because in Flax version, model's `__init__` doesn't accept models as arguments.
Let me know if there is anything missing or to be changed :)
## Updates
- Current `GPT2_INPUTS_DOCSTRING` in `modeling_gpt2.py` doesn't include `encoder_hidden_states` & `encoder_attention_mask`. (and same for the new Flax's version)
https://github.com/huggingface/transformers/blob/a13c8145bc2810e3f0a52da22ae6a6366587a41b/src/transformers/models/gpt2/modeling_gpt2.py#L456
Is it OK to include a fix for this in this PR?
- Current `GPT2Model` doesn't return `all_cross_attentions` when outputting tuple:
https://github.com/huggingface/transformers/blob/a13c8145bc2810e3f0a52da22ae6a6366587a41b/src/transformers/models/gpt2/modeling_gpt2.py#L825
I included a fix for this issue in this PR.
- There is another issue in `GPT2Model`:
https://github.com/huggingface/transformers/blob/a13c8145bc2810e3f0a52da22ae6a6366587a41b/src/transformers/models/gpt2/modeling_gpt2.py#L808
```
if self.config.add_cross_attention:
all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],)
```
This causes an exception in the following example. In Bart, the condition is `if encoder_hidden_states is not None:` (and this makes sense), so we can do the same for GPT2.
```
import torch
from transformers import GPT2Model, GPT2Config
config = GPT2Config.from_pretrained('gpt2', add_cross_attention=True)
model = GPT2Model.from_pretrained('gpt2', config=config)
o = model(input_ids=torch.tensor([[1, 2, 3]], dtype=torch.int32), output_hidden_states=True, output_attentions=True)
```
Here is the exception:
```
Traceback (most recent call last):
File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\tests.py", line 10, in <module>
o = model(input_ids=torch.tensor([[1, 2, 3]], dtype=torch.int32), output_hidden_states=True, output_attentions=True)
File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "c:\users\33611\desktop\projects\transformers-dev-2\transformers\src\transformers\models\gpt2\modeling_gpt2.py", line 809, in forward
all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],)
IndexError: tuple index out of range
```
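The fix being suggested amounts to also guarding on the encoder states, e.g. this sketch of the proposed condition (not the final diff):
```python
# only collect cross attentions when cross attention was actually run
if self.config.add_cross_attention and encoder_hidden_states is not None:
    all_cross_attentions = all_cross_attentions + (outputs[3 if use_cache else 2],)
```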
<|||||>Hi, @patrickvonplaten , I saw you pushed a new FlaxBertModel to make `token_type_ids` . That's great -> I will change to Bert2GPT2 later as you suggested. Thanks<|||||>Hey @ydshieh,
This PR is already in a very good shape. I'm very impressed by how well you implemented this class! The `EncoderDecoderModel` is one of the most difficult classes to implement.
I've added a Bert2GPT2 test that verifies that your PR works correctly (it does - great job ;-)). I think the only thing left to do now is to change the examples and tests from `"GPT2toGPT2"` to `"BERT2GPT2"` and then we can merge this one :-)<|||||>Hi @patrickvonplaten, I changed all remaining examples & tests to bert2gpt2, and renamed `EncoderDecoderModelTest` to `FlaxEncoderDecoderModelTest`. The only remark is: `FlaxEncoderDecoderModel` doesn't handle `position_ids` and `token_type_ids`, because that depends on each encoder/decoder model (module, actually), and it seems to me we don't pass `**kwargs` to `module.apply`. (It would be great if you can say something about this - I am not sure, just my observation.)
Other than this, I think the task is done :)<|||||>@ydshieh amazing job on adding the Flax encoder decoder class! This lays the foundation for the `FlaxVisionEncoderDecoder` framework :-)
I'm currently working on adding a `SpeechEncoderDecoder` model here: https://github.com/huggingface/transformers/blob/19106d1c5548b3083c1d5ced667de6854367f1e0/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py - the `FlaxVisionEncoderDecoder` would be added in a similar spirit. If you would be interested we could try to add this class in a follow-up PR :-) <|||||>@patrickvonplaten Sure, I would like to continue with it. Actually, I just finished `TFEncoderDecoderModel` and add cross attention to some TF models (Bert/GPT2/Roberta/Electra). In particular, the test for `test_bert2gpt2_summarization` and `test_bert2bert_summarization` works in TF version now (after some bug fixes in the library though). I tested them locally with @slow disabled.
I need to finalize it, and will request a review (maybe for someone else? not sure if you work with TF)
I think the implementation for `VisionEncoderDecoder` will be straightforward, right? I mean basically, just change the parameters to pixel_values, and probably add some feature extraction part.
Here is a preview for `TFEncoderDecoderModel` :)
#13222
|
transformers | 13,007 | closed | Importing hides underlying error | ## Environment info
(Couldn't run due to bug this issue is about but did my best to fill it in)
- `transformers` version: 4.9.1
- Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.11
- PyTorch version (GPU?): 1.7.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
No person for general `transformers` issues is listed
## To reproduce
Steps to reproduce the behavior:
1. Run `pip install torch-scatter==2.0.6 -f https://pytorch-geometric.com/whl/torch-1.7.0+cpu.html` (install version that has a bug)
2. Use a machine with CUDA support.
3. Run `python -c 'from transformers import AutoModelForCausalLM'` to see the following output:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: cannot import name 'AutoModelForCausalLM' from 'transformers' (venv/lib/python3.7/site-packages/transformers/__init__.py)
```
4. Run `transformers-cli env` to see the following output:
```python
Traceback (most recent call last):
File "venv/lib/python3.7/site-packages/torch_scatter/__init__.py", line 14, in <module>
f'{library}_{suffix}', [osp.dirname(__file__)]).origin)
AttributeError: 'NoneType' object has no attribute 'origin'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "venv/bin/transformers-cli", line 5, in <module>
from transformers.commands.transformers_cli import main
File "venv/lib/python3.7/site-packages/transformers/commands/transformers_cli.py", line 23, in <module>
from .run import RunCommand
File "venv/lib/python3.7/site-packages/transformers/commands/run.py", line 17, in <module>
from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline
File "venv/lib/python3.7/site-packages/transformers/pipelines/__init__.py", line 30, in <module>
from .automatic_speech_recognition import AutomaticSpeechRecognitionPipeline
File "/venv/lib/python3.7/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 20, in <module>
from .base import Pipeline
File "venv/lib/python3.7/site-packages/transformers/pipelines/base.py", line 43, in <module>
from ..models.auto.modeling_auto import AutoModel
File "venv/lib/python3.7/site-packages/transformers/models/auto/modeling_auto.py", line 271, in <module>
from ..tapas.modeling_tapas import (
File "venv/lib/python3.7/site-packages/transformers/models/tapas/modeling_tapas.py", line 51, in <module>
from torch_scatter import scatter
File "venv/lib/python3.7/site-packages/torch_scatter/__init__.py", line 17, in <module>
raise AttributeError(e)
AttributeError: 'NoneType' object has no attribute 'origin'
```
(I edited the stack trace to remove the parts of the path outside the virtual environment for improved readability.)
I was fortunate to find this easy-to-create stack trace when writing up this issue. It was actually difficult to find out what the cause was: I had to track down the failing line in `transformers` (`from torch_scatter import scatter`) in a much more tedious way instead.
## Expected behavior
Regular importing of the form `from transformers import ...` should display the full stack trace of the underlying error to provide a usable error message for debugging.
I obviously do not expect this underlying error to be fixed, given that it's not part of `transformers`. However, given that a comment near the failure considers `torch_scatter` a `soft dependency`, it might be a good idea to emit a warning when the package fails to import instead of causing the entire `transformers` library to fail. I'm not using a model that uses `torch_scatter` in the first place, so it shouldn't be required this way. | 08-04-2021 22:12:29 | 08-04-2021 22:12:29 | I cannot reproduce your exact issue, but installing a version of torch-scatter with an incompatible CUDA version will indeed kill the runtime with an `OSError`. We could catch that error when defining the `_scatter_available` variable, which currently only looks at the package installation.
WDYT @sgugger @NielsRogge ?<|||||>Yes, this should be checked as easy as possible to yield an easier error message.
As for the initial issue, there is no try/except that are ignored on our side, so I'm afraid this is a more general problem with the Python import system. It's not the first time I see it hiding the underlying issues. If you have any idea of what we could do on our side to display those error messages, I'm all ears.<|||||>It's understandable that you can't get my specific problem reproduced. Therefore, I created a much simpler proof-of-concept:
1. Run `pip install transformers==4.9.1 torch==1.9.0`.
2. Run `python -c 'from transformers.file_utils import is_scatter_available; print(is_scatter_available())'` to confirm that `torch_scatter` is not found by `transformers`.
3. Create `setup.py`:
```python
from setuptools import setup
setup(name='torch_scatter')
```
3. Create `torch_scatter.py`:
```python
scatter = None
```
4. Run `pip install .`
5. Run `python -c 'from transformers.file_utils import is_scatter_available; print(is_scatter_available())'` to confirm that `torch_scatter` is being found by `transformers`.
6. Run `python -c 'from transformers import AutoModelForCausalLM'` to see there is no error thrown.
7. Add `None.origin` to a new line at the top of `torch_scatter.py`.
8. Run `python -c 'from transformers import AutoModelForCausalLM'` again to see the following error
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: cannot import name 'AutoModelForCausalLM' from 'transformers' (/Users/ahedges/.pyenv/versions/scatter/lib/python3.7/site-packages/transformers/__init__.py)
```
Given that I could see complete errors when using the wrong CUDA version of `torch_scatter`, I decided to try multiple exception-triggering statements on the first line of `torch_scatter.py`:
- `None.origin`: `ImportError: cannot import name 'AutoModelForCausalLM' from 'transformers'`
- `foo`: `NameError: name 'foo' is not defined`
- `raise RuntimeError("This is a message")`: `RuntimeError: This is a message`
- `[0][2]`: `IndexError: list index out of range`
- `0 / 0`: `ZeroDivisionError: division by zero`
All but the first had properly displayed errors and stack traces. This led me to believe that the specific issue isn't exceptions getting ignored but `AttributeError`s in particular.
I do not have a strong knowledge of `transformers`'s import system or the Python import system in general, but I used [`transformers/__init__.py`](https://github.com/huggingface/transformers/blob/v4.9.1/src/transformers/__init__.py) and [`transformers/file_utils.py`](https://github.com/huggingface/transformers/blob/v4.9.1/src/transformers/file_utils.py) to create a very simplified script that reproduces the problem. I made two files:
- `test.py`:
```python
import importlib
import os
import sys
from types import ModuleType
class _LazyModule(ModuleType):
def __init__(self, name, module_file, import_structure, extra_objects=None):
super().__init__(name)
self._modules = set(import_structure.keys())
self._class_to_module = {}
for key, values in import_structure.items():
for value in values:
self._class_to_module[value] = key
self.__all__ = list(import_structure.keys()) + sum(import_structure.values(), [])
self.__file__ = module_file
self.__path__ = [os.path.dirname(module_file)]
self._objects = {} if extra_objects is None else extra_objects
self._name = name
self._import_structure = import_structure
def __getattr__(self, name: str):
if name in self._objects:
return self._objects[name]
if name in self._modules:
value = self._get_module(name)
elif name in self._class_to_module.keys():
module = self._get_module(self._class_to_module[name])
value = getattr(module, name)
else:
raise AttributeError(f"module {self.__name__} has no attribute {name}")
setattr(self, name, value)
return value
def _get_module(self, module_name: str):
return importlib.import_module("." + module_name, self.__name__)
_import_structure = {"auto": ["AutoModel"]}
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
```
- `auto.py`:
```python
from torch_scatter import scatter
AutoModel = None
```
I could trigger the same kinds of errors that I got with `transformers` with `python -c 'from test import AutoModel'`. I then modified `_LazyModule.__getattr__()` to always `raise AttributeError()`, and I end up getting `ImportError: cannot import name 'AutoModel' from 'test'` with the following stack trace:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: cannot import name 'AutoModel' from 'test' (/Users/ahedges/Downloads/scatter_test/test.py)
```
Replacing the `AttributeError` with `RuntimeError` gets a more detailed stack trace:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 1032, in _handle_fromlist
File "/Users/ahedges/Downloads/scatter_test/test.py", line 24, in __getattr__
raise RuntimeError()
RuntimeError
```
If I replace `test.py` with the line `from auto import AutoModel`, then the `AttributeError` stack trace is displayed properly. This lends evidence to the fact that this bug is related to how `transformers` implements importing.
From these experiments, I think the problem is that the `transformers` importing machinery is specifically ignoring any `AttributeError`s while allowing others to propagate freely. Annoyingly, I can't find any mention of such behavior in the Python docs, so I can't tell if this is part of any official interface.
I'm unsure of how to better resolve this issue, though. It might make sense to modify `_LazyModule._get_module()` (the only part of the class that should be able to throw such an error without messing with the import machinery itself) to have better handling of `AttributeError`s. Maybe printing stack traces for them, but that could get annoying. Maybe embed it in a more general type, but I have no clue how that will interfere with the importing system.
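For instance, one purely illustrative option would be to re-raise inside `_get_module()` so the `AttributeError` never reaches the import machinery:
```python
def _get_module(self, module_name: str):
    try:
        return importlib.import_module("." + module_name, self.__name__)
    except AttributeError as e:
        # wrap it in a different exception type so the lazy module's __getattr__
        # cannot mistake it for a missing attribute on the module itself
        raise RuntimeError(f"Failed to import {self.__name__}.{module_name}: {e}") from e
```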
I apologize for the very long read, but I hope this helps.
<|||||>The reason is that `pip` can't get you the right version of `torch_scatter`. I added a `try`/`except` to prevent an instant kill and throw a clearer message instead. #13040<|||||>@JetRunner, your PR is for a different but related issue than the one I reported here.
Plus, I'd like to point out that `pip` can get you the right version of `torch_scatter`. You just need to install it using the appropriate wheel index with the `-f` option.<|||||>Thank you for the detailed analysis @aphedges
First, could you confirm if the problem appears again on master? It should not and you have an env setup for debugging so you should be able to see that quickly. I'll follow the steps of your reproducer later today if you can't confirm.
Then, from your deep analysis, it looks like there is a problem with the `AttributeError` in the import machinery, somewhere. We don't ignore them in the `_LazyModule` part, or any part of the Transformers library dedicated to imports, so my first thought is that it comes from Python itself, but I'll need to investigate more to be sure.<|||||>@sgugger, I can confirm that your commit 9870093 prevents this issue for me. The unclear error message will still be a problem for anyone importing from `modeling_tapas.py`, such as the following:
```python
$ python -c 'from transformers import TapasModel'
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: cannot import name 'TapasModel' from 'transformers' (/Users/ahedges/.pyenv/versions/scatter/lib/python3.7/site-packages/transformers/__init__.py)
```
However, this is much more limited in scope than before, which is very good. I don't use TAPAS, so I should be fine now.
I agree with you that the error getting lost seems to come from Python itself. I could not find `AttributeError`s being caught by importing in `transformers` during my investigation, but I unfortunately couldn't find any documentation of similar behavior in the official Python documentation, either. Part of the reason that this debugging was so difficult was because large portions of the stack were in Python internals that PyCharm's debugger couldn't reach. I'm not sure what `transformers` should (or can) do anything to deal with this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,006 | closed | [Flax] Correct pt to flax conversion if from base to head | # What does this PR do?
Currently loading a base model into a head model while using `from_pt=True` is broken in Flax.
E.g. the following fails:
```python
from transformers import RobertaModel, FlaxRobertaForMaskedLM, RobertaConfig
model = RobertaModel(RobertaConfig())
model.save_pretrained("./")
FlaxRobertaForMaskedLM.from_pretrained("./", from_pt=True)
```
It's not that trivial to correct, since the PT => Flax conversion requires some renaming that is a bit "hacky". To solve the problem, this PR now always checks whether the weight name is expected both with and without the base model prefix. If one of them is expected -> the weight name is changed accordingly.
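Roughly, the added check looks like this (illustrative sketch, not the exact code of the PR):
```python
# `expected_flax_keys` would be the flattened keys of the randomly initialized Flax params,
# `pt_key` a PyTorch weight name and `base_model_prefix` e.g. "roberta" (names here are illustrative).
flax_key = tuple(pt_key.split("."))
flax_key_with_prefix = (base_model_prefix,) + flax_key
if flax_key not in expected_flax_keys and flax_key_with_prefix in expected_flax_keys:
    # base checkpoint loaded into a head model -> prepend the base model prefix
    flax_key = flax_key_with_prefix
```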
Tests are added to ensure that all models will be correctly converted from PyTorch in the future.
| 08-04-2021 21:13:17 | 08-04-2021 21:13:17 | |
transformers | 13,005 | closed | HyperParameter search in sagemaker | - `transformers` version: 4.6.1 (higher is not supported on Sagemaker)
- Platform: Sagemaker
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- ray/raytune: @richardliaw, @amogkam
## Information
Model I am using GPT2-Medium
The problem arises when using:
I was following this guide https://huggingface.co/docs/sagemaker/train#prepare-a-transformers-fine-tuning-script
But I also wanted to do hyperparameter search https://huggingface.co/blog/ray-tune
I got everything to work on Google Colab, but on Amazon SageMaker I run into this error when using Ray Tune
```
/opt/conda/lib/python3.6/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
"update your install command.", FutureWarning)
2021-08-04 21:09:18,214#011INFO services.py:1247 -- View the Ray dashboard at #033[1m#033[32mhttp://127.0.0.1:8265#033[39m#033[22m
Traceback (most recent call last):
File "train.py", line 148, in <module>
best = hyperParamSearch_trainer.hyperparameter_search(direction="minimize", hp_space=my_hp_space_ray, n_trials =args.numTrials )
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1668, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/integrations.py", line 236, in run_hp_search_ray
**kwargs,
File "/opt/conda/lib/python3.6/site-packages/ray/tune/tune.py", line 297, in run
_ray_auto_init()
File "/opt/conda/lib/python3.6/site-packages/ray/tune/tune.py", line 670, in _ray_auto_init
ray.init()
File "/opt/conda/lib/python3.6/site-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/ray/worker.py", line 940, in init
hook()
File "/opt/conda/lib/python3.6/site-packages/ray/tune/registry.py", line 197, in flush
self.references[k] = ray.put(v)
File "/opt/conda/lib/python3.6/site-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/ray/worker.py", line 1597, in put
object_ref = worker.put_object(value)
File "/opt/conda/lib/python3.6/site-packages/ray/worker.py", line 287, in put_object
serialized_value = self.get_serialization_context().serialize(value)
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 331, in serialize
return self._serialize_to_msgpack(value)
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 311, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 271, in _serialize_to_pickle5
raise e
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 268, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/opt/conda/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/opt/conda/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
TypeError: can't pickle _thread.RLock objects
```
I also tried Optuna, but since there is no option for garbage collection, I always run into CUDA out of memory | 08-04-2021 21:10:49 | 08-04-2021 21:10:49 | Hey @MarcM0 this has been fixed in the later transformers versions.
Since you can't upgrade the version, can you use this workaround instead: https://github.com/huggingface/transformers/issues/11249#issuecomment-860144744<|||||>now I get this error
```
1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
"update your install command.", FutureWarning)
2021-08-04 21:42:50,092#011INFO services.py:1247 -- View the Ray dashboard at #033[1m#033[32mhttp://127.0.0.1:8265#033[39m#033[22m
Traceback (most recent call last):
File "train.py", line 149, in <module>
best = hyperParamSearch_trainer.hyperparameter_search(direction="minimize", hp_space=my_hp_space_ray, n_trials =args.numTrials )
File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1668, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/integrations.py", line 236, in run_hp_search_ray
**kwargs,
File "/opt/conda/lib/python3.6/site-packages/ray/tune/tune.py", line 297, in run
_ray_auto_init()
File "/opt/conda/lib/python3.6/site-packages/ray/tune/tune.py", line 670, in _ray_auto_init
ray.init()
File "/opt/conda/lib/python3.6/site-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/ray/worker.py", line 940, in init
hook()
File "/opt/conda/lib/python3.6/site-packages/ray/tune/registry.py", line 197, in flush
self.references[k] = ray.put(v)
File "/opt/conda/lib/python3.6/site-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/ray/worker.py", line 1597, in put
object_ref = worker.put_object(value)
File "/opt/conda/lib/python3.6/site-packages/ray/worker.py", line 287, in put_object
serialized_value = self.get_serialization_context().serialize(value)
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 331, in serialize
return self._serialize_to_msgpack(value)
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 311, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 271, in _serialize_to_pickle5
raise e
File "/opt/conda/lib/python3.6/site-packages/ray/serialization.py", line 268, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/opt/conda/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/opt/conda/lib/python3.6/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
File "pyarrow/io.pxi", line 1021, in pyarrow.lib.Buffer.__reduce_ex__
AttributeError: module 'pickle' has no attribute 'PickleBuffer'
```<|||||>what version of ray do you have? how was it installed?<|||||>Since I can't directly access the terminal of the computer where the train script is run, I put this line in my train script
`subprocess.check_call([sys.executable, "-m", "pip", "install", "ray[tune]==1.5.1"])`
<|||||>This seems to be an option as well but I can't find any documentation for how to do it
https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face

<|||||>@MarcM0 can you also do a `pip install pickle5`. I think that should do the trick.<|||||>That worked, thank you! |
transformers | 13,004 | closed | Create perplexity.rst | Updating the import for load_dataset
# What does this PR do?
Fixes the old way of loading datasets
## Who can review?
@patrickvonplaten
| 08-04-2021 19:59:07 | 08-04-2021 19:59:07 | |
transformers | 13,003 | closed | Not getting the same results with run_qa and run_qa_no_trainer scripts | ## Environment info
Just followed the default setup instructions in a new conda environment:
```shell
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install -r examples/pytorch/question_answering/requirements.txt
```
### Who can help
@sgugger @patil-suraj
## Information
Model I am using: https://huggingface.co/prajjwal1/bert-tiny
My goal is to run the finetuning example on the bert-tiny model and Squad dataset, with and without the Trainer class, and to obtain the same results.
## The problem
Running with the Trainer class with:
```shell
CUDA_VISIBLE_DEVICES=0 python run_qa.py \
--model_name_or_path prajjwal1/bert-tiny \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /trainer_squad/
```
ends with: `eval_exact_match = 31.3245` and `eval_f1 = 43.3655`.
Then, running the same setup but without the Trainer with:
```shell
CUDA_VISIBLE_DEVICES=0 python run_qa_no_trainer.py \
--model_name_or_path prajjwal1/bert-tiny \
--dataset_name squad \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /no_trainer_squad/
```
ends with: `Evaluation metrics: {'exact_match': 19.981078524124882, 'f1': 32.57782310536579}`.
It's interesting that I obtain the same results (`F1 = 49.73` and `EM = 48.6`) when I run with and without Trainer class, but with a different dataset: `--dataset_name squad_v2` and `--version_2_with_negative`. | 08-04-2021 19:53:27 | 08-04-2021 19:53:27 | The two scripts are different, they do not have the same defaults for the `seed` and even then, they do not randomize the data the same way. It's impossible for the two of them to give you the same results. <|||||>Do you have any suggestions on what to modify in the `run_qa_no_trainer.py` to get a bit better results (possibly closer to "the baseline" with `run_qa.py`)?
Based on the same results I got with the `squad_v2`, I assumed that both scripts are doing the same thing, and that the one without the Trainer just provides more flexibility to modify stuff in the train/eval loops. <|||||>On one GPU, you could maybe achieve the same results by passing the same seeds, making sure it's set at the right place, but that's a big maybe. Reproducibility is a hard enough problem with one script, so with two scripts that start with different assumptions and use different APIs, it's next to impossible.<|||||>Okay, got it! I then just misunderstood their roles in the examples project. Thanks for a quick reply. |
transformers | 13,002 | closed | TF CLM example fix typo | Fixes a one-line typo in the TF CLM example - it was still using `MODEL_FOR_MASKED_LM_MAPPING` | 08-04-2021 18:00:28 | 08-04-2021 18:00:28 | Also oop, this looks like it was based on a slightly older branch - it won't cause any problems, but the "Files changed" tab lists some changes in other files that are already merged - the only one actually affected is `run_clm.py`<|||||>There might be more such remnants from the MLM script, see https://github.com/huggingface/transformers/pull/14014
cc @Rocketknight1 |
transformers | 13,001 | closed | VisualBERT - ModuleAttributeError | ## Environment info
- `transformers` version: 4.9.1
- Platform: macOS-10.14.6-x86_64-i386-64bit
- Python version: 3.9.0
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.5.0-rc1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help
@gchhablani
## Information
I am using the recent VisualBERT model.
When passing the inputs to the model, a ModuleAttributeError occurs, because the class VisualBertEmbeddings internally accesses self.input_embeds but never sets that attribute (e.g., in `__init__`), hence the error.
`class VisualBertEmbeddings(nn.Module):`
(...)
`token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.input_embeds.device)`
(self.input_embeds does not exist before this line)
The problem arises when using:
* the official example scripts:
```
import torch
from transformers import BertTokenizer, VisualBertModel
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = VisualBertModel.from_pretrained('uclanlp/visualbert-vqa-coco-pre')
inputs = tokenizer("The capital of France is Paris.", return_tensors="pt")
visual_embeds = torch.zeros((1,36,2048)) #example of ROI features
visual_token_type_ids = torch.ones(visual_embeds.shape[:-1], dtype=torch.long) #example
visual_attention_mask = torch.ones(visual_embeds.shape[:-1], dtype=torch.float)
inputs.update({
    "visual_embeds": visual_embeds,
    "visual_token_type_ids": visual_token_type_ids,
    "visual_attention_mask": visual_attention_mask
})
outputs = model(**inputs)
```
## To reproduce
Steps to reproduce the behavior:
1. Follow the official example script
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/visual_bert/modeling_visual_bert.py", line 807, in forward
embedding_output = self.embeddings(
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/visual_bert/modeling_visual_bert.py", line 126, in forward
token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.input_embeds.device)
File "/usr/local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 778, in __getattr__
raise ModuleAttributeError("'{}' object has no attribute '{}'".format(
torch.nn.modules.module.ModuleAttributeError: 'VisualBertEmbeddings' object has no attribute 'input_embeds
```
## Expected behavior
The VisualBertEmbeddings class should define self.input_embeds before accessing it; otherwise VisualBert will not work, since this internal bug is always triggered.
Thank you in advance for your help!
| 08-04-2021 14:00:27 | 08-04-2021 14:00:27 | There seem to be a bug in the `VisualBertEmbeddings` class indeed. Mind opening a PR? You can probably just replace `self.input_embeds.device` by `self.position_ids.device`.<|||||>Great, thank you for looking and suggesting the fix! I would not mind, but I will go on vacation very soon, so I won’t be able to follow up on this topic. <|||||>@RitaRamo Thanks for trying it out. Sorry for the late response.
@NielsRogge Thanks for suggesting a fix. I'll do it asap. <|||||>Fixed in #13017 |
transformers | 13,000 | closed | Newly trained tokenizers not adding [CLS] and [SEP] tokens | ## Environment info
- `transformers` version: 4.10.0.dev0 (installed from source)
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic (Google Colab)
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
- `tokenizers` version: 0.10.3
### Who can help
@patrickvonplaten, @LysandreJik
## Information
Running into an issue with the newly trained tokenizers not being able to add the '[CLS]' and '[SEP]' special tokens, even after explicitly setting `add_special_tokens=True`.
The problem arises when using:
* [x] the official example scripts: `run_qa.py`
* [x] my own modified scripts: (see snippets below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQuAD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer
swahili_tokenizer = AutoTokenizer.from_pretrained("flax-community/bert-base-uncased-swahili")
swahili_tokenizer.tokenize('Si kila mwenye makucha simba.', add_special_tokens=True)
# Output:
['si', 'kila', 'mwenye', 'makucha', 'simba', '.']
# Expected:
['[CLS]', 'si', 'kila', 'mwenye', 'makucha', 'simba', '.', '[SEP]']
```
This is not only happening to this specific BERT tokenizer; the same was observed with RoBERTa tokenizers, and potentially other models as well.
This issue also crashes fine-tuning for QA with the official `run_qa.py` script.
For example,
```sh
cd transformers/examples/pytorch/question-answering
python run_qa.py \
--model_name_or_path 'flax-community/bert-base-uncased-swahili' \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--per_device_train_batch_size 4 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 128 \
--doc_stride 32 \
--output_dir /tmp/debug_squad/
```
... halts by throwing an exception:
```python
Traceback (most recent call last):
File "run_qa.py", line 645, in <module>
main()
File "run_qa.py", line 433, in main
desc="Running tokenizer on train dataset",
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1682, in map
desc=desc,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2020, in _map_single
offset=offset,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1906, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "run_qa.py", line 375, in prepare_train_features
cls_index = input_ids.index(tokenizer.cls_token_id)
ValueError: 2 is not in list
```
The script that was used to train the tokenizer could be found [here](https://huggingface.co/flax-community/bert-base-uncased-swahili/blob/main/train_tokenizer.py).
For more example, see this [colab notebook](https://colab.research.google.com/drive/1cjof6VJYwXIijwqW7kFcjo4IrjY08xkT?usp=sharing).
## Expected behavior
When setting `add_special_tokens=True` the tokenizer is expected to add `'[CLS]'` and `'[SEP]'` tokens.
Here is an old tokenizer that behaves as expected:
```python
from transformers import AutoTokenizer
bert_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert_tokenizer.tokenize('Si kila mwenye makucha simba.', add_special_tokens=True)
# Output:
['[CLS]', 'si', 'ki', '##la', 'mw', '##en', '##ye', 'ma', '##ku', '##cha', 'sim', '##ba', '.', '[SEP]']
```
Thank you! | 08-04-2021 12:04:40 | 08-04-2021 12:04:40 | Hello! Looking at your `train_tokenizer.py` file, I see no post-processor. Without a post-processor, the tokenizer is unaware of what tokens it should add after tokenizing.
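For a BERT-like tokenizer, attaching a post-processor before saving looks roughly like this (sketch; here `tokenizer` is the `tokenizers.Tokenizer` object from your training script, and you should adjust the special tokens to your vocabulary):
```python
from tokenizers.processors import TemplateProcessing

tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]",
    special_tokens=[
        ("[CLS]", tokenizer.token_to_id("[CLS]")),
        ("[SEP]", tokenizer.token_to_id("[SEP]")),
    ],
)
```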
See the quick tour of the tokenizers library here: https://huggingface.co/docs/tokenizers/python/latest/quicktour.html#post-processing<|||||>@LysandreJik, thank you for the answer. That was on point.
I trained a new tokenizer with pos_processor and it worked as expected. |
transformers | 12,999 | closed | pad_to_multiple_of added to DataCollatorForWholeWordMask | There is a small bug in `DataCollatorForWholeWordMask`: it has an argument `pad_to_multiple_of`, however, when doing `_collate_batch` inside `__call__` method, this argument is not provided. This commit adds the usage of the argument. | 08-04-2021 11:57:06 | 08-04-2021 11:57:06 | Have checked it, works fine, both `batch_input` and `batch_mask` are padded to a multiple of `pad_to_multiple_of` value.<|||||>Ok, will do that some of these days! |
transformers | 12,998 | closed | DataCollatorForWholeWordMask does not return attention_mask | Hi,
**DataCollatorForWholeWordMask** does not output `attention_mask`. According to the `__call__` method:
`return {"input_ids": inputs, "labels": labels}`.
Is there a peculiar motivation behind it or is a small bug? From where I see, when we do the pre-training, most instances will **not** be of the same length and applying _Attention_ for all the tokens (including the padding) may cause imprecise results. | 08-04-2021 11:06:27 | 08-04-2021 11:06:27 | Not sure where you are seeing this. `DataCollatorWithPadding` can also pad attention masks, as it applies `tokenizer.pad()` on the inputs as can be seen [here](https://github.com/huggingface/transformers/blob/c7faf2ccc05a095870c3c905d232179fb323797d/src/transformers/data/data_collator.py#L118).<|||||>@NielsRogge am sorry for confusing, meant **DataCollatorForWholeWordMask**, will correct that now<|||||>There is also a bug with `pad_to_multiple_of` argument: it is not passed to `_collate_batch` inside `__call__`. Have made a pull request https://github.com/huggingface/transformers/pull/12999 to add its usage.<|||||>That class should never have been merged as it is, it was a mistake on our side. It contains multiple bugs and only works for BERT models. It need to be rewritten from scratch to be model agnostic.<|||||>For me it works perfectly for any arbitrary model (checked it is doing everything correct), except for `attention_mask` generation and `pad_to_multiple_of` usage (both can be corrected manually though, however, it is always better not to invent a bicycle).<|||||>It cant' give good results on a tokenizer that is not like BERT, since it relies on the "##" to detect if something is inside a word.<|||||>Is there any progress on this issue? It seems that this issue still exists. |
transformers | 12,997 | closed | how to use class_weight in transformers.trainer | In TensorFlow, I can use class_weight for unbalanced data. Now I want to train the model through transformers.Trainer; how can I use class_weight with transformers.Trainer? These docs give an introduction to it:
1.https://huggingface.co/transformers/custom_datasets.html#fine-tuning-with-trainer
2.https://huggingface.co/transformers/main_classes/trainer.html#transformers.TFTrainer
Please take a look at this question. Thank you | 08-04-2021 10:15:48 | 08-04-2021 10:15:48 | For training related questions, refer to our [forum](https://discuss.huggingface.co). We like to keep Github issues for bugs/feature requests.
[This post](https://discuss.huggingface.co/t/how-can-i-use-class-weights-when-training/1067) for example will probably answer your question.
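The gist of the approach discussed there is to subclass `Trainer` and override `compute_loss`, roughly like this (sketch; `WeightedLossTrainer` and the `class_weights` tensor are placeholders you define yourself):
```python
from torch import nn
from transformers import Trainer


class WeightedLossTrainer(Trainer):
    def __init__(self, class_weights, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.class_weights = class_weights  # e.g. torch.tensor([1.0, 5.0])

    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # weighted cross-entropy instead of the model's default unweighted loss
        loss_fct = nn.CrossEntropyLoss(weight=self.class_weights.to(logits.device))
        loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```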
Therefore closing this issue. |
transformers | 12,996 | closed | Perceiver IO | # 🌟 New model addition
## Model description
Perceiver is a general architecture that works on many kinds of data, including images, video, audio, 3D point clouds, language and symbolic inputs, multimodal combinations, etc. Perceivers can handle new types of data with only minimal modifications. Perceivers process inputs using domain-agnostic Transformer-style attention. Unlike Transformers, Perceivers first map inputs to a small latent space where processing is cheap and doesn't depend on the input size. This makes it possible to build very deep networks even when using large inputs like images or videos.
Perceiver IO is a generalization of Perceiver to handle arbitrary outputs in addition to arbitrary inputs. The original Perceiver only produced a single classification label. In addition to classification labels, Perceiver IO can produce (for example) language, optical flow, and multimodal videos with audio. This is done using the same building blocks as the original Perceiver. The computational complexity of Perceiver IO is linear in the input and output size and the bulk of the processing occurs in the latent space, allowing us to process inputs and outputs that are much larger than can be handled by standard Transformers. This means, for example, Perceiver IO can do BERT-style masked language modeling directly using bytes instead of tokenized inputs.
https://arxiv.org/pdf/2107.14795.pdf
## Open source status
* [x] the model implementation is available: https://github.com/deepmind/deepmind-research/tree/master/perceiver (JAX)
* [x] the model weights are available: https://storage.googleapis.com/perceiver_io/language_perceiver_io_bytes.pickle pretrained masked language model (https://github.com/deepmind/deepmind-research/blob/master/perceiver/colabs/masked_language_modelling.ipynb)
* [x] who are the authors: **DeepMind** Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu,
David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, Olivier Hénaff,
Matthew M. Botvinick, Andrew Zisserman, Oriol Vinyals, João Carreira
| 08-04-2021 03:43:43 | 08-04-2021 03:43:43 | I want to do it unless someone else did it by September 8th. <|||||>@cronoik
I've implemented perceiver io on pytorch: [link](https://github.com/esceptico/perceiver-io)
Now we need to adapt it for Transformers :)
But I have not (yet) added positional Fourier encoding and multimodal decoder
<|||||>Don't forget about the `transformers-cli` tool for adding new models.
Edit: [link](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model)<|||||>@esceptico I am not interested in doing the job twice or in a race. If you're already working on it, I'll find something else. :)<|||||>@cronoik
I'm not working on adaptation of my implementation for Transformers yet
I mean that I will only be glad if you want to use my repository for this :)<|||||>HI all, I just wanted to know if this issue is in active development or is waiting for a developer to do it.<|||||>Hi @tonibagur, I believe @NielsRogge is currently working on it<|||||>Hi @LysandreJik, thanks for your reply. @NielsRogge I am interested in giving a try to the PerceiverIO model, if you need a tester don't hessitate to ask :)
Regards, |
transformers | 12,995 | closed | Option for `(Distributed)LengthGroupedSampler` to treat groups as a hard constraint | # 🚀 Feature request
An option for `(Distributed)LengthGroupedSampler` to treat groups as a hard constraint. I.e., all batches returned will have exactly the same length. (Some straggler batches will then have a smaller batch size.)
## Motivation
I asked a [question](https://discuss.huggingface.co/t/multiple-choice-with-variable-number-of-choices/8607) on the forums about using a classification model to do multiple choice with a variable number of choices.
The simplest implementation I can see using HF Transformers, though it's a bit of a hack, is to use `--group_by_length` and set `--length_column_name` to be the number of choices. That way, the `1` dimension, which tells the model the number of multiple choice options, is consistent throughout a batch.
This _almost_ works. The issue is that `(Distributed)LengthGroupedSampler` is a soft constraint, so some batches still end up with multiple "lengths" (choices).
During training, I check each batch in the collator, and simply throw away samples that don't have the same number of choices.
The issue is that during evaluation, I realized I can't skip over any examples. In comparing methods, we need to report results on the entire evaluation set.
I totally understand if you'd rather not add this feature for the purpose of what is, admittedly, a hack. If that's the case, I'd greatly appreciate any advice on how you'd run a multiple choice model with a variable number of choices! | 08-03-2021 19:02:42 | 08-03-2021 19:02:42 | This is definitely a hack and very narrow use case, so it's unlikely we will add this feature ;-).
For your purpose, I think you need to rewrite a new model head, starting from the code of `XxxForMultipleChoice`, where `Xxx` is the model you are using.<|||||>Oh interesting, I hadn't considered changing the model, since it's already agnostic to the number of answers. But I suppose it could do the final bit of data preprocessing in the model itself!
I'm a bit surprised that there are no multiple choice datasets with a variable number of answers. Maybe this is something we'll see NLP tackle soon :-) <|||||>After thinking about this a bit more, it seemed like a custom (batch) sampler was the most elegant way to accomplish this.
Since multiple choice models are already agnostic to the number of choices, we just need something to feed it batches where each batch has a consistent number of choices.
Here's an example implementation of a single-process batch sampler that accomplishes this. It simply groups by feature upon construction, then provides iterators that yield batches with a particular value of that feature in common.
```python
from collections import defaultdict
from typing import Any, Dict, Iterator, List

import datasets
from torch.utils.data import Sampler


class FeatureGroupedBatchSampler(Sampler):
"""Yields a batch of indices at a time, with the hard constraint that all indices
will have the same value for `feature`.
From pytorch docs:
"Mutually exclusive with batch_size, shuffle, sampler, and drop_last."
NOTE: shuffle, drop_last not yet implemented. Will if needed.
"""
dataset: datasets.Dataset
feature: str
batch_size: int
val2idxes: Dict[Any, List[int]]
num_batches: int
def __init__(
self, dataset: datasets.Dataset, feature: str, batch_size: int
) -> None:
if not isinstance(dataset, datasets.Dataset):
raise ValueError("`dataset` must be a (HuggingFace) datasets.Dataset")
if feature not in dataset.features:
raise ValueError(f"Feature '{feature}' must exist on dataset")
self.dataset = dataset
self.feature = feature
self.batch_size = batch_size
val2idxes = defaultdict(list)
for i, val in enumerate(dataset[self.feature]):
val2idxes[val].append(i)
self.val2idxes = val2idxes
# NOTE: Only the indices (dict's values) are ever used. Could remove features
# (dict's keys) entirely if desired and store as List[List[int]].
# Cache the number of batches so we don't need to recompute it on calls to
# __len__().
num_batches = 0
for idxes in self.val2idxes.values():
for start in range(0, len(idxes), self.batch_size):
num_batches += 1
self.num_batches = num_batches
def __iter__(self) -> Iterator[List[int]]:
"""Yields a batch of indicies at a time. Maximum of `self.batch_size`, but
always with identical values for `self.feature`.
"""
for idxes in self.val2idxes.values():
for start in range(0, len(idxes), self.batch_size):
yield idxes[start : start + self.batch_size]
def __len__(self) -> int:
"""At least the way HF Transformers uses this, it means number of *batches*,
not number of *instances* (since this is sent as a batch_sampler)."""
# return len(self.dataset) # this would be for num. instances
return self.num_batches # num. batches
```
I use the above during evaluation by overriding the`Trainer`'s `get_eval_dataloader()` and loading it, providing the "number of choices" column of the dataset as the feature to group by.
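Concretely, the override looks roughly like this (the class name and the "num_choices" column are just placeholders for my setup):
```python
from torch.utils.data import DataLoader
from transformers import Trainer


class ChoiceGroupedTrainer(Trainer):
    def get_eval_dataloader(self, eval_dataset=None):
        eval_dataset = eval_dataset if eval_dataset is not None else self.eval_dataset
        # group evaluation batches so every batch has a single number of choices
        batch_sampler = FeatureGroupedBatchSampler(
            eval_dataset, feature="num_choices", batch_size=self.args.eval_batch_size
        )
        return DataLoader(
            eval_dataset,
            batch_sampler=batch_sampler,
            collate_fn=self.data_collator,
            num_workers=self.args.dataloader_num_workers,
        )
```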
However, there's one issue: it doesn't seem like 🤗 Transformers supports a custom batch sampler. It expects the batch size to be known by the data loader, which is not the case if a custom batch sampler is provided.
https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L2172
This leads to a crash a few lines later:
https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L2217
I have a one-line fix that I'll propose in a PR, which is to simply use the observed batch size, which was calculated a few lines before:
https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/src/transformers/trainer.py#L2208
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,994 | closed | Add BEiT | # What does this PR do?
It adds [BEiT: BERT Pre-Training of Image Transformers](https://arxiv.org/abs/2106.08254) to the library. It's the first paper that enables self-supervised pre-trained Vision Transformers (ViTs) to outperform their supervised pre-training counterparts. As a picture says more than a thousand (or 16x16?) words, this is a good summary of the approach:
<img width="544" alt="Schermafbeelding 2021-08-03 om 17 26 19" src="https://user-images.githubusercontent.com/48327001/128042500-538a6fef-8d92-42b0-92f5-31d06ca6ae36.png">
The authors used OpenAI's [DALL-E](https://github.com/openai/DALL-E)'s encoder to map images to tokens, which the model then needs to predict based on masked patches. There are 3 models defined: `BEiTModel`, `BEiTForMaskedImageModeling` and `BEiTForImageClassification`.
This PR also cleans up some scripts from the library, namely those that defined id2label dicts for several datasets. I have removed `imagenet_classes.py` and `coco_classes.py` from the utils directory. Instead, id2label's are now defined on the hub in their [own repository](https://huggingface.co/datasets/huggingface/label-files). These can then be used in conversion scripts using the `huggingface_hub` library.
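For reference, fetching one of these mappings in a conversion script looks roughly like this (sketch; assumes a recent `huggingface_hub`, and the exact file name should be checked against the repo):
```python
import json

from huggingface_hub import hf_hub_download

repo_id = "huggingface/label-files"
filename = "imagenet-1k-id2label.json"  # file name is an assumption, check the repo contents
id2label = json.load(open(hf_hub_download(repo_id, filename, repo_type="dataset"), "r"))
id2label = {int(k): v for k, v in id2label.items()}
label2id = {v: k for k, v in id2label.items()}
```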
## To do
- [x] Add all checkpoints to the hub, under the "Microsoft" namespace. Perhaps discuss the model names, because for example `microsoft/beit_base_patch16_224_pt22k_ft22k_to_1k` is getting out of hand
- [ ] Would be cool to have a working colab for the `BEiTForMaskedImageModeling` model. For this, tagging one of the original authors: @donglixp
In a future PR, I also plan to add the semantic segmentation model, which obtains SOTA on Ade20k.
| 08-03-2021 15:32:37 | 08-03-2021 15:32:37 | I've uploaded all checkpoints to the hub: https://huggingface.co/models?search=microsoft/beit
I've renamed the checkpoints which are fine-tuned on ImageNet-1k (after being intermediately fine-tuned on ImageNet-22k) to be just `microsoft/beit-base-patch16-224`, etc.
@donglixp if you're interested, could you write model cards for these models? Model cards are READMEs that describe the models in detail. You can take inspiration from ViT's [model card](https://huggingface.co/google/vit-base-patch16-224).
Also, I do have a notebook for `BEiTForMaskedImageModeling`, but it's not working as expected. Could you please take a look? https://colab.research.google.com/drive/1Mjt-3jHw9HYMXECmSdDlbiG59ZAw-Z0T?usp=sharing<|||||>@NielsRogge great work, any news on the future PR, to add the semantic segmentation model and the pretrained Ade20k? Thanks!<|||||>@JStumpp say no more, it's added ;) |
transformers | 12,993 | closed | Global attention not recognised in longformer pretrained MLM model to get sentence vector? | ## Objective:
Fetching sentence embeddings with the **longformer** model, sentence by sentence, from the `<s>` token, by assigning `attention_mask[:, [0,-1]] = 2`, i.e. the `<s>` and `</s>` tokens will have the value 2.
- `transformers` **version:3.0.2**
- Platform:
- Python version: **Python 3.6.12 :: Anaconda, Inc.**
- PyTorch version (GPU?):**1.7.1**
- Tensorflow version (GPU?): **2.3.0**
- Using GPU in script?: **Yes**
- Using distributed or parallel set-up in script?: **parallel**
### Who can help
@patrickvonplaten
##Models:
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
## Information
Model I am using longformer:
The problem arises when using:
* my own modified scripts: (give details below)
The tasks I am working on is:
* my own task or dataset: (give details below)
## Code:
```
from transformers import LongformerModel, LongformerTokenizer
model = LongformerModel.from_pretrained('allenai/longformer-base-4096',output_hidden_states = True)
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
# Put the model in "evaluation" mode, meaning feed-forward operation.
model.eval()
```
```
text=["I like to play cricket"] #For this code I want to fetch embedding.
def sentence_bert():
list_of_emb=[]
for i in range(len(all_content)):
SAMPLE_TEXT = text[i] # long input document
print(tokenizer.encode(SAMPLE_TEXT,padding=True,add_special_tokens=True,max_length=20)) #,max_length=10
print(tokenizer.decode(tokenizer.encode(SAMPLE_TEXT)))
input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0)
attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention
**attention_mask[:, [0,-1]] = 2**
with torch.no_grad():
outputs = model(input_ids, attention_mask=attention_mask)
hidden_states = outputs[2]
token_embeddings = torch.stack(hidden_states, dim=0)
# Remove dimension 1, the "batches".
token_embeddings = torch.squeeze(token_embeddings, dim=1)
# Swap dimensions 0 and 1.
token_embeddings = token_embeddings.permute(1,0,2)
token_vecs_sum = []
# For each token in the sentence...
for token in token_embeddings:
#but preferrable is
sum_vec=torch.sum(token[-4:],dim=0)
# Use `sum_vec` to represent `token`.
token_vecs_sum.append(sum_vec)
h=0
for i in range(len(token_vecs_sum)):
h+=token_vecs_sum[i]
list_of_emb.append(h)
return list_of_emb
f=sentence_bert()
```
**Output**
```
length of string: 5
[0, 38, 101, 7, 310, 5630, 2]
`<s>` I like to play cricket `</s>`
input_ids: tensor([[ 0, 38, 101, 7, 310, 5630, 2]])
Number of layers: 13 (initial embeddings + 12 BERT layers)
Number of batches: 1
Number of tokens: 512
Number of hidden units: 768
```
## Doubts/Question:
1. When I set `attention_mask[:, [0,-1]] = 2` for global attention on the `<s>` token, it doesn't seem to work. I then take the `0th` token out of the `512` tokens from the last layer as the sentence embedding. Does that make sense?
2. Even after passing `max_length=20`, I see a tensor of size equal to the sentence length; shouldn't it ideally be padded to the max size?
3. Why do I see `Number of tokens: 512`? I think it should be based on the `sentence length`. When I pass one sentence of length `7` to get an embedding, shouldn't the hidden states have 7 tokens? Based on my sentence, what are those 512 tokens?
4. How can I reduce the number of tokens to the sentence length instead of 512? Every time I input a new sentence, it should pick up that length. Can we do this for `longformer`?
## Expected behavior
Document1: Embeddings
Document2: Embeddings
| 08-03-2021 15:02:36 | 08-03-2021 15:02:36 | Hey @pratikchhapolika,
Instead of setting values in attention_mask to 2 could you try using global_attention_mask instead?
Also see official docs here:
https://huggingface.co/transformers/master/model_doc/longformer.html#transformers.LongformerModel.forward<|||||>> Hey @pratikchhapolika,
>
> Instead of setting values in attention_mask to 2 could you try using global_attention_mask instead?
>
> Also see official docs here:
> https://huggingface.co/transformers/master/model_doc/longformer.html#transformers.LongformerModel.forward
You mean to say I should use this:
```python
global_attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention
global_attention_mask[:, [0,-1]] = 2
outputs = model(input_ids, global_attention_mask =global_attention_mask)
```
<|||||>Rather:
```python
global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention
global_attention_mask[:, [0,-1]] = 1
outputs = model(input_ids, global_attention_mask =global_attention_mask)
```
as shown in the example of [this](https://huggingface.co/transformers/master/model_doc/longformer.html#transformers.LongformerModel.forward) method ;-)
<|||||>> Rather:
>
> ```python
> global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device) # initialize to local attention
> global_attention_mask[:, [0,-1]] = 1
> outputs = model(input_ids, global_attention_mask =global_attention_mask)
> ```
>
> as shown in the example of [this](https://huggingface.co/transformers/master/model_doc/longformer.html#transformers.LongformerModel.forward) method ;-)
Can you pleas help me wit other questions as well and then I will close this issue? `Question 2, 3 and 4`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,992 | closed | I met an error when I use EncoderDecoderModel. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0
- Platform:
- Python version:
- PyTorch version (GPU?): 1.7.1 cuda 9.2
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help @patrickvonplaten, @patil-suraj
## Information
Model I am using: EncoderDecoderModel
When I use EncoderDecoderModel, my code is:
```python
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-large-uncased', 'gpt2')
model = model.cuda()
output = model(input_ids, input_mask, decoder_input_ids, decoder_input_mask, labels=labels)
```
I get an error like this:
```python
Traceback (most recent call last):
File "/home/jwli/ljw/study/test.py", line 68, in <module>
output = model(input_ids, input_mask, decoder_input_ids, decoder_input_mask, labels=labels)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 438, in forward
decoder_outputs = self.decoder(
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 941, in forward
transformer_outputs = self.transformer(
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 789, in forward
outputs = block(
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 339, in forward
cross_attn_outputs = self.crossattention(
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 239, in forward
key, value = self.c_attn(encoder_hidden_states).split(self.split_size, dim=2)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/jwli/anaconda3/envs/study/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1400, in forward
x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
RuntimeError: mat1 dim 1 must match mat2 dim 0
```
But when I change 'bert-large-uncased' to 'bert-base-uncased', the code can run normally.
Can you help me?@patrickvonplaten, @patil-suraj, @LysandreJik | 08-03-2021 14:45:28 | 08-03-2021 14:45:28 | As you can see on the error, it has to do with the cross attention: the `encoder_hidden_states` (which are coming from BERT-large-uncased) have a dimensionality of 1024 (which I know by looking at the `hidden_size` attribute of the [config file](https://huggingface.co/bert-large-uncased/blob/main/config.json) of bert-large-uncased). You can also check this by doing:
```
from transformers import BertConfig
config = BertConfig.from_pretrained('bert-large-uncased')
print(config.hidden_size)
```
or
```
from transformers import BertModel
model = BertModel.from_pretrained('bert-large-uncased')
print(model.config.hidden_size)
```
For the decoder, the `queries` have a dimensionality of 768 (again, you can see this by looking at the config file or using Python). There's a bit of inconsistency between the models, because for gpt2 the dimensionality is determined by the `n_embd` attribute (whereas it should ideally also be called `hidden_size`).
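One way to sidestep the mismatch entirely is to pair an encoder and a decoder whose widths already match, for example (a quick sketch; gpt2-medium's `n_embd` is 1024, the same as bert-large-uncased's `hidden_size`):
```python
from transformers import EncoderDecoderModel

# both checkpoints use a hidden size of 1024, so the cross-attention shapes line up
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-large-uncased", "gpt2-medium")
```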
Digging into the code, it turns out the error happens because the cross attention layer is defined as a `Conv1d` layer as can be seen [here](https://github.com/huggingface/transformers/blob/f064e0a43d05a6bb1eb81e65e700f8e0f4ab04f9/src/transformers/models/gpt2/modeling_gpt2.py#L151). The `in_channels` are defined as `2 * self.embed_dim` and the `out_channels` as `self.embed_dim`. So basically (2*768 = 1536, 768). However, one then applies this layer to the `encoder_hidden_states`, which have a dimensionality of 1024, so this will not work. You would have to update that line to:
`self.c_attn = Conv1D(2*1024, 1024)`<|||||>Thank you for your help! @NielsRogge
Actually, I knew the error was caused by the dimensions. The EncoderDecoderModel docs say "The EncoderDecoderModel can be used to initialize a sequence-to-sequence model with ***any pretrained autoencoding model as the encoder*** and any pretrained autoregressive model as the decoder.", so I thought it would handle the dimension-matching problem automatically.
Thank you for your help, I will follow your suggestions to modify my code!
I will close the issue. Thank you!<|||||>Hi, thanks for your answer @NielsRogge. I am trying to do the same for a gpt-2 model with n_embd = 1280, also using BertLarge as the encoder with hidden_size = 1024.
I saved my model and load it now by:
`model = AutoModelForSeq2SeqLM.from_pretrained(...)`
When I started to finetune my model, I reached the same error as OP reported.
I followed your advice afterwards, but this resulted in:
` size mismatch for decoder.transformer.h.0.crossattention.c_attn.weight: copying a param with shape torch.Size([1280, 2560]) from checkpoint, the shape in current model is torch.Size([1024, 2048]).
(many more of these line with h.x increasing. Removed them for readability)
`
Am I missing something here? It looks like the model does not accept the new dimension. Could you give me advice on how to solve that, and perhaps point out what I am missing here?
Thanks a lot!<|||||>Well, as I wrote my comment, the solution came to mind already: after changing the code you need to **recreate the model**. As the doc says: "Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization", cross-attention layers are automatically added at model creation. Don't try to fit randomly initialized weights into a wrong shape ;)
Sorry for taking your time |
transformers | 12,991 | closed | How is Bert fine-tuned on STS-B task? | Hi, I am new to NLP and trying to reproduce the fine-tuning results of BERT. However, the STS-B task troubles me: from what I understand, STS-B is a regression task, but BERT treats it as a classification task. I do not quite know the transformation between scores and labels in detail; is anybody willing to give me a hint? | 08-03-2021 14:05:12 | 08-03-2021 14:05:12 | Please ask those questions on the [forums](https://discuss.huggingface.co/). We keep the issues for bugs and feature requests only.<|||||>> Please ask those questions on the [forums](https://discuss.huggingface.co/). We keep the issues for bugs and feature requests only.
Thank you for your reply, I will post it on the forums |
transformers | 12,990 | closed | kindly adding some documentations on t5-v1_1-base"" | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Documentation: @sgugger
Hi
Could you kindly add some documentation on "t5-v1_1-base"? I tested the same code with the t5-base and t5-v1_1 versions; with t5-v1_1 I ran into a memory issue, which suggests the model size is different and larger. Also, the fast tokenizer for this model does not work. Could you kindly add documentation on these differences?
thanks a lot.
| 08-03-2021 12:31:33 | 08-03-2021 12:31:33 | There is no model named `"t5-v1_1-base"` so I'm not sure what you mean.<|||||>Yes there is, `google/t5-v1_1-base`. Normally, t5_v1_1 and regular t5 aren't that different. From its [model card](https://huggingface.co/google/t5-v1_1-base):
> Version 1.1
T5 Version 1.1 includes the following improvements compared to the original T5 model- GEGLU activation in feed-forward hidden layer, rather than ReLU - see here.
Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
Pre-trained on C4 only without mixing in the downstream tasks.
no parameter sharing between embedding and classifier layer
"xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger d_model and smaller num_heads and d_ff.
Note: T5 Version 1.1 was only pre-trained on C4 excluding any supervised training. Therefore, this model has to be fine-tuned before it is useable on a downstream task. Pretraining Dataset: C4
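Code-wise it loads just like t5-base, a minimal sketch (using the slow tokenizer here, since you mentioned the fast one giving you trouble):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("google/t5-v1_1-base")
model = T5ForConditionalGeneration.from_pretrained("google/t5-v1_1-base")
```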
So for the base-sized model, normally the memory requirements are the same as t5-base. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I have added documentation, see #13240. Therefore, closing. |
transformers | 12,989 | closed | Training hangs at the very start while using deepspeed | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.4.0
- base docker image: nvidia/cuda:10.1-cudnn7-devel-ubuntu18.04
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.2.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, using deepspeed
### Who can help
@stas00 for deepspeed
## Information
Model I am using: LayoutLM
I need to test my LayoutLM model by training it for only 1 epoch for testing purposes. However, training hangs at the very start without logging anything or returning an error message. When I disable deepspeed and launch my training with `python -m torch.distributed.launch` instead of `deepspeed --num_gpus={torch.cuda.device_count()} --num_nodes=1`, I manage to train for 1 epoch.
The task I am working on is:
* Token Classification
## To reproduce
I think it is a general issue, so training any model with deepspeed for only one epoch may result in a hanging process.
## Expected behavior
It should be possible to train a model for only 1 epoch so that no time is wasted while testing.
| 08-03-2021 12:09:30 | 08-03-2021 12:09:30 | Somehow I don't think this has anything to do with how many epochs you're training, at least I have never had a problem training with just one epoch. The problem most likely is elsewhere.
But I can't help you until you give me a way to reproduce your setup.
Ideally please use one of the existing examples, most likely you want this:
https://github.com/huggingface/transformers/tree/master/examples/pytorch/token-classification
1. - launch the example as explain in its README.md w/o deepspeed, than do the same with deepspeed,
2. - use a public dataset as given in the README.md of the example.
3. - try a public model first again from the README.md of the example.
and if it hangs please send the command line you were using after following the above 3 steps.
if it doesn't hang then try: `layoutlm` - then we know it's something specific to that particular model.
Thank you!<|||||>Meanwhile I also tested that layoutlm works with deepspeed. https://github.com/huggingface/transformers/pull/12695<|||||>Thank you @stas00 for your rapid response. I thought that it may be a general issue, that's why I didn't provide any example code. The code now I am working on is a confidential one, I will follow your advice and let you know afterward.<|||||>Also consider using these tools to diagnose the hanging:
- py-spy:
```
# trace a running python application - e.g. when it's hanging or very slow and you want to see the backtrace
pip install py-spy
# dumps traceback for each thread
sudo py-spy dump --pid PID # sudo may or may not be needed
```
- `faulthandler`
```
# make the traceback dumped periodically - every X seconds
import faulthandler
faulthandler.dump_traceback_later(20, repeat=True)
```<|||||>Thank you @stas00 for your suggestions to debug the issue. I have used both tools. FYI: I am using 2 GPUs and, they are stuck while initializing deepspeed. It is not happening every time but so frequently (50 percent of all my tries). Below you can see the outputs.
### Line 414 from `integrations.py`
```
model, optimizer, _, lr_scheduler = deepspeed.initialize(
args=SimpleNamespace(**ds_args), # expects an obj
model=model,
model_parameters=model_parameters,
config_params=config,
)
```
### This is from `py-spy` for pid 147
```
py-spy dump --pid 147
Process 147: /usr/bin/python -u nlp_ner_layoutlm/train_pipeline/training_step/training_script.py --local_rank=0 --local_example_folder /620a8e1a-2e53-4a9d-8205-61ee86e6453d/layoutlm_data --model_dir /mnt/pipeline/620a8e1a-2e53-4a9d-8205-61ee86e6453d/pytorch_model --batch_size 16 --weight_decay 0.0 --adam_epsilon 1e-08 --learning_rate 2e-05 --epochs 1 --seed 11046060 --tagging_scheme BILOU --profile_logs /mnt/pipeline/620a8e1a-2e53-4a9d-8205-61ee86e6453d/tensorboard_logs --patience 40 --gradient_accumulation_steps 1 --warmup_steps 300 --composite 1 --composite_loss_weight 0.5 --train_dataset_name train --validation_dataset_name validation --use_deepspeed 1 --consolidate 0 --incremental 1 --old_model_dir /mnt/pipeline/620a8e1a-2e53-4a9d-8205-61ee86e6453d/pytorch_model --base_model /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface --recursion_indexes [1, 2] --temperature 2.0
Python v3.8.0 (/usr/bin/python3.8)
Thread 147 (active): "MainThread"
barrier (torch/distributed/distributed_c10d.py:1967)
new_group (torch/distributed/distributed_c10d.py:2048)
_initialize_parameter_parallel_groups (deepspeed/runtime/zero/utils.py:20)
_configure_distributed_model (deepspeed/runtime/engine.py:578)
__init__ (deepspeed/runtime/engine.py:149)
initialize (deepspeed/__init__.py:120)
init_deepspeed (transformers/integrations.py:414)
train (composite_trainer.py:168)
train_model (nlp_ner_layoutlm/layoutlm/utils/training_utils.py:245)
<module> (training_script.py:65)
```
### This is from `py-spy` for pid 148
```
py-spy dump --pid 148
Process 148: /usr/bin/python -u nlp_ner_layoutlm/train_pipeline/training_step/training_script.py --local_rank=1 --local_example_folder /620a8e1a-2e53-4a9d-8205-61ee86e6453d/layoutlm_data --model_dir /mnt/pipeline/620a8e1a-2e53-4a9d-8205-61ee86e6453d/pytorch_model --batch_size 16 --weight_decay 0.0 --adam_epsilon 1e-08 --learning_rate 2e-05 --epochs 1 --seed 11046060 --tagging_scheme BILOU --profile_logs /mnt/pipeline/620a8e1a-2e53-4a9d-8205-61ee86e6453d/tensorboard_logs --patience 40 --gradient_accumulation_steps 1 --warmup_steps 300 --composite 1 --composite_loss_weight 0.5 --train_dataset_name train --validation_dataset_name validation --use_deepspeed 1 --consolidate 0 --incremental 1 --old_model_dir /mnt/pipeline/620a8e1a-2e53-4a9d-8205-61ee86e6453d/pytorch_model --base_model /mnt/pipeline/LAYOUTLM_PRE_TRAINED_MODEL/base-uncased-huggingface --recursion_indexes [1, 2] --temperature 2.0
Python v3.8.0 (/usr/bin/python3.8)
Thread 148 (active): "MainThread"
convert (torch/nn/modules/module.py:610)
_apply (torch/nn/modules/module.py:381)
_apply (torch/nn/modules/module.py:359)
_apply (torch/nn/modules/module.py:359)
_apply (torch/nn/modules/module.py:359)
_apply (torch/nn/modules/module.py:359)
_apply (torch/nn/modules/module.py:359)
_apply (torch/nn/modules/module.py:359)
_apply (torch/nn/modules/module.py:359)
to (torch/nn/modules/module.py:612)
_configure_distributed_model (deepspeed/runtime/engine.py:575)
__init__ (deepspeed/runtime/engine.py:149)
initialize (deepspeed/__init__.py:120)
init_deepspeed (transformers/integrations.py:414)
train (composite_trainer.py:168)
train_model (nlp_ner_layoutlm/layoutlm/utils/training_utils.py:245)
<module> (training_script.py:65)
```
### This is from `faulthandler`, it always logs below lines for every 20 seconds:
```
Timeout (0:00:20)!
Thread 0x00007f8be3d2b740 (most recent call first):
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 1967 in barrier
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/distributed_c10d.py", line 2048 in new_group
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/zero/utils.py", line 20 in _initialize_parameter_parallel_groups
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 578 in _configure_distributed_model
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149 in __init__
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120 in initialize
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414 in init_deepspeed
File "/app/nlp_ner_layoutlm/layoutlm/trainers/composite_trainer.py", line 168 in train
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 245 in train_model
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 65 in <module>
Timeout (0:00:20)!
Thread 0x00007fa11175f740 (most recent call first):
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 610 in convert
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 381 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 359 in _apply
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 612 in to
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 575 in _configure_distributed_model
File "/usr/local/lib/python3.8/dist-packages/deepspeed/runtime/engine.py", line 149 in __init__
File "/usr/local/lib/python3.8/dist-packages/deepspeed/__init__.py", line 120 in initialize
File "/usr/local/lib/python3.8/dist-packages/transformers/integrations.py", line 414 in init_deepspeed
File "/app/nlp_ner_layoutlm/layoutlm/trainers/composite_trainer.py", line 168 in train
File "/app/nlp_ner_layoutlm/layoutlm/utils/training_utils.py", line 245 in train_model
File "nlp_ner_layoutlm/train_pipeline/training_step/training_script.py", line 65 in <module>
```
In the meantime, I will try to adapt my code to share here to allow reproducability.
<|||||>So you have a syncing problem, the 2 gpus run `barrier` which ensures they arrived to the same point, but one of the gpus doesn't, and so the other is stuck waiting for it.
Are you by chance misconfiguring the launch command? Try to hardcode `2` here:
```
deepspeed --num_gpus={torch.cuda.device_count()} --num_nodes=1
```
could `{torch.cuda.device_count()` be returning a different number than 2?
i.e.:
```
deepspeed --num_gpus=2 --num_nodes=1
```
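You can also print what each rank actually sees right at startup, a quick sanity-check sketch:
```python
import os
import torch

# run this at the top of the training script in every process (illustrative sketch)
print(f"local_rank={os.environ.get('LOCAL_RANK', '?')} visible_gpus={torch.cuda.device_count()}")
```
If the two processes report different numbers, the launcher or `CUDA_VISIBLE_DEVICES` is the culprit.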
<|||||>Thanks @stas00 one more time, I have hard-coded the launch command. My training pipeline contains several training steps, and interestingly, the initial 4 training steps with the same configuration have succeeded but 5th step has hanged for some reason in the same way. I can't reproduce it easily, it happens in different steps in my pipeline.
I am still investigating the issue.<|||||>After hard-coding the `num_gpus`, I have followed 2 different approaches with deepspeed and w/o deepspeed. Later, I triggered 3 new pipelines (each has 6 training steps) per each approach.
all pipelines without deepspeed have succeeded.
2 of 3 pipelines with deepspeed hung and 1 of them succeeded.
🤷🏻♂️<|||||>For 2 days, I am triggering lots of trainings with the distributed setting, they didn't hang until now. I am convinced that my issue is related to `deepspeed`. Maybe `deepspeed` doesn't like my configuration :) But I can't go on debugging without much information about being stuck in barrier. I searched on web but can't find any useful info about it.<|||||>If you're able to reproduce the problem with something I can work with directly, I'd be happy to investigate this with you, @hasansalimkanmaz - perhaps you don't need to show us all of your confidential code but just the part where you start things - it should be pretty generic.
I'd start with your full app, and remove all code that appears **after** the hanging, - then you can prune it some more binary search-style reduction until you end up with a few lines of code that hang - then we will fix it quickly and most likely you will already see what the problem may be.<|||||>Thanks @stas00 for your kind help. Currently, I don't have time to dive into this issue as I manage to run in a distributed setting without deepspeed, it is not so urgent for now. On the other hand, I will be working on this issue in the next coming weeks. <|||||> I have a same problem. I run the Bert-Large pretrain with 4 nodes(32 GPUs). When I was debugging, I found that there seemed to be a problem with the training of the last batch. It seems that some Cuda streams are not synchronized, which seems to be related to the pre-compiled deepspeed transformer kernel. I installed deepspeed with DS_BUILD_OPS=1. <|||||>@HydraQYH, please file an Issue with https://github.com/microsoft/DeepSpeed as this is not an HF integration issue. Thank you!<|||||>> @HydraQYH, please file an Issue with https://github.com/microsoft/DeepSpeed as this is not an HF integration issue. Thank you!
Sorry for taking so long to reply to you, some urgent tasks need to be dealt with before. I have solved this problem. I use a distributed environment to run BERT model pre-training. I have 4 machines, each with 8 GPUs(32GB V100). I found that when the batch size read by some workers is not equal to the preset train_micro_batch_size_per_gpu, it will hang. Therefore, the problem may be caused by different workers with different batch sizes. This situation usually occurs at the end of an epoch, the data is not enough to fill a batch.<|||||>> @HydraQYH, please file an Issue with https://github.com/microsoft/DeepSpeed as this is not an HF integration issue. Thank you!
Sorry for taking so long to reply to you, some urgent tasks need to be dealt with before. I have solved this problem. I use a distributed environment to run BERT model pre-training. I have 4 machines, each with 8 GPUs(32GB V100). I found that when the batch size read by some workers is not equal to the preset train_micro_batch_size_per_gpu, it will hang. Therefore, the problem may be caused by different workers with different batch sizes. This situation usually occurs at the end of an epoch, the data is not enough to fill a batch.<|||||>Great to hear you have found the culprit, @HydraQYH!
By your description of it, a normal DDP would have had the same problem.
Do you have a solution on your side, or should `transformers` handle such circumstances? Note, that the Deepspeed integration doesn't touch on dataloading, and therefore it's a domain of `transformers` and not of Deepspeed.<|||||>@HydraQYH if you are done with the issue, Could you share the solution? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,988 | closed | [Flax] Correctly Add MT5 | # What does this PR do?
During the Flax sprint many teams weren't aware that mt5 models can be used with the `FlaxT5ForConditionalGeneration` class. This is mainly because the docs currently state that FlaxMT5 is not implemented: https://huggingface.co/transformers/index.html#supported-frameworks and because there are no docs on FlaxMT5, but for PyTorch & TF (https://huggingface.co/transformers/model_doc/mt5.html).
This PR adds a FlaxMT5 class analog to PT and TF and also adds official Flax weights to `mt5-base`, etc.: https://huggingface.co/google/mt5-base/commit/0b908f9e3c2fabccc4ab411b89838cecdd9ad499
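A minimal usage sketch of the new class (naming mirrors the existing PT/TF classes):
```python
from transformers import FlaxMT5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("google/mt5-base")
model = FlaxMT5ForConditionalGeneration.from_pretrained("google/mt5-base")
```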
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-03-2021 12:02:25 | 08-03-2021 12:02:25 | > Do we really need to add an extra class? Can't we just have the auto-mapping point to a FlaxT5Model?
Technically we don't need it, my main arguments are:
- Consistency with PyTorch & TF (people thought MT5 can't be used with Flax because https://huggingface.co/transformers/model_doc/mt5.html doesn't have Flax classes)
- Ability to provide meaningful examples for MT5 in Flax. T5 is not multi-lingual so the examples might be misleading for Flax |
transformers | 12,987 | closed | [Flax] Align jax flax device name | # What does this PR do?
After feedback from @skye we settled on using `jnp.ndarray` as the class to describe jax/flax tensors. This PR replaces all outdated occurrences of "jax_xla.DeviceArray" with "jnp.ndarray"
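For illustration, the annotations in the modeling files move roughly like this (names are illustrative):
```python
import jax.numpy as jnp

# old annotation style: jax_xla.DeviceArray
# new annotation style:
def embed(token_ids) -> jnp.ndarray:
    ...
```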
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-03-2021 11:31:24 | 08-03-2021 11:31:24 | |
transformers | 12,986 | closed | pylint error when using `transformers.AutoModelForSequenceClassification.from_pretrained(path)` | I am using transformers 4.9.1 from PyPI.
When using pylint on `transformers.AutoModelForSequenceClassification.from_pretrained(path)` I am getting this error:
`my_scipt.py:277:11: E1120: No value for argument 'pretrained_model_name_or_path' in unbound method call (no-value-for-parameter)`
If I change it to
`transformers.AutoModelForSequenceClassification.from_pretrained(pretrained_model_name_or_path=path)`
I am getting
`script.py:277:11: E1120: No value for argument 'cls' in unbound method call (no-value-for-parameter)`
Could you maybe fix this?
| 08-03-2021 11:13:26 | 08-03-2021 11:13:26 | This may be linked to the missing `classmethod` decorators that were fixed in #12927, could you try on a source install?<|||||>> This may be linked to the missing `classmethod` decorators that were fixed in #12927, could you try on a source install?
I made a source install and then I am getting no pylint error anymore.
So this is fixed in the main branch. Closing this.
Thanks! |
transformers | 12,985 | closed | The transferred onnx model is much bigger than the origin pytorch model | python -m transformers.onnx --model=facebook/bart-large /home/sysadmin/downlaod/onnx_models/bart-large
Pytorch version: 1.9.0
transformers version: 4.9.1
platform: centos 7
python version: 3.7
The original bart model is around 2GB, but the transferred bart-large model is more than 3GB. This could be because some shared weights are duplicated in the onnx model | 08-03-2021 10:50:08 | 08-03-2021 10:50:08 | Hi @leoozy,
Thanks for bringing this to our attention.
I don't have all the details about the machinery ONNX/ORT are using to export the weights, but you're certainly right about some shared buffers being copied over at multiple places.
May be @tianleiwu from ORT would have some more insights about this specific behaviour? <|||||>@mfuntowicz @tianleiwu I checked the converted bart.onnx model. There is a huge weight called shared.weight, which is the weights of embeding layers (size: seq_length x vocabulary_size)。The encoding and decoding process shared the weights but the shape used in the two processes are different. When encoding, the shape is (vocabulary_size X seq_length). When decoding, the shape is (seq_length X vocabulary size). So, the onnx file saved it duplicatedly because of the different shapes. Models excess 2GB have a lot of limits while being optimized using onnxruntime. <|||||>@leoozy, please try do_constant_folding=False in torch.onnx.export to see whether it could reduce the onnx model size.
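Roughly along these lines (a sketch only; the dummy inputs are illustrative):
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
dummy_inputs = tokenizer("hello world", return_tensors="pt")

torch.onnx.export(
    model,
    (dummy_inputs["input_ids"], dummy_inputs["attention_mask"]),
    "bart-large.onnx",
    input_names=["input_ids", "attention_mask"],
    do_constant_folding=False,  # the suggestion above: see if disabling folding shrinks the file
    opset_version=13,
)
```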
Updated: I tried in my machine, onnx model file size is 1.5 GB, pytorch model is about 1.0 GB. I also verified that removing duplicated weights in onnx model won't help (result is still around 1.5GB).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,984 | closed | convert_graph_to_onnx.convert broken for gpt-neo-x.xB since 4.5.0.dev0 | ## Environment info
- `transformers` version: 4.9.1
- Platform: Linux-4.15.0-151-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.4.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (V100)
- Using distributed or parallel set-up in script?: No
### Who can help
This issue is a follow-up of #9803. People tagged in previous issues of the same kind:
@mfuntowicz @LysandreJik @patrickvonplaten
@StellaAthena (because of EleutherAI)
## Information
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
When trying to run any of the ONNX export scripts on the `EleutherAI/gpt-neo-x.xb` models, they fail. The last version of `transformers` that I can trace this behavior back to is on commit [04ceee7d246dcb26e13fa6aa8e3b990d6a0bf289](https://github.com/huggingface/transformers/commit/04ceee7d246dcb26e13fa6aa8e3b990d6a0bf289). This is not the exact commit, but the last working one I know. This issue is also present in the current `4.9.1` tag, as well as the recently introduced custom configurations to export more easily.
## To reproduce
Running the following script with the current `4.9.1` tag fails (output below). Installing from the above mentioned commit results in a properly working export.
```
pip install -U git+git://github.com/huggingface/transformers.git@04ceee7d246dcb26e13fa6aa8e3b990d6a0bf289
```
Actual code to run:
```
from pathlib import Path
import torch
import transformers
from transformers import convert_graph_to_onnx
from transformers import pipeline
model_name = "EleutherAI/gpt-neo-1.3B"
model_pth = Path(f"gpt_neo/gpt_neo_13b.onnx")
model_pth.parent.mkdir(exist_ok=True, parents=True)
class GPTNeoSent(transformers.GPTNeoForCausalLM):
def __init__(self, config):
super().__init__(config)
self.sentence_embedding = torch.nn.Identity()
def forward(self, input_ids, attention_mask):
return self.sentence_embedding(
super().forward(input_ids, attention_mask=attention_mask).logits
)
model = GPTNeoSent(config=transformers.AutoConfig.from_pretrained(model_name)).from_pretrained(model_name)
nlp = pipeline(
"feature-extraction",
model=model,
tokenizer=model_name,
)
inputs = nlp.tokenizer(["hello my friends!"], return_tensors="pt")
with torch.no_grad():
(
input_names,
output_names,
dynamic_axes,
tokens,
) = convert_graph_to_onnx.infer_shapes(nlp, "pt")
ordered_input_names, model_args = convert_graph_to_onnx.ensure_valid_input(
nlp.model, tokens, input_names
)
if not model_pth.exists():
torch.onnx.export(
model,
(inputs["input_ids"], inputs["attention_mask"]),
f=model_pth.as_posix(),
input_names=input_names,
output_names=output_names,
dynamic_axes=dynamic_axes,
do_constant_folding=True,
use_external_data_format=True, # Needed because of model size
enable_onnx_checker=True,
opset_version=13,
)
```
Which runs into the following error:
```
Found input input_ids with shape: {0: 'batch', 1: 'sequence'}
Found input attention_mask with shape: {0: 'batch', 1: 'sequence'}
Found output output_0 with shape: {0: 'batch', 1: 'sequence'}
Ensuring inputs are in correct order
Generated inputs order: ['input_ids', 'attention_mask']
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py:779: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert batch_size > 0, "batch_size has to be defined and > 0"
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py:149: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
while seq_length % block_length != 0:
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py:220: UserWarning: ONNX export failed on Unfold because input size not accessible not supported
warnings.warn("ONNX export failed on " + op + " because " + msg + " not supported")
Traceback (most recent call last):
File "test.py", line 42, in <module>
torch.onnx.export(
File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/__init__.py", line 271, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py", line 88, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py", line 709, in _export
proto, export_map = graph._export_onnx(
RuntimeError: ONNX export failed: Couldn't export operator aten::unfold
Defined at:
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py(189): _look_back
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py(216): create_local_attention_mask
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py(801): forward
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/nn/modules/module.py(860): _slow_forward
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/nn/modules/module.py(887): _call_impl
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py(974): forward
test.py(18): forward
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/nn/modules/module.py(860): _slow_forward
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/nn/modules/module.py(887): _call_impl
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/jit/_trace.py(116): wrapper
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/jit/_trace.py(125): forward
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/nn/modules/module.py(889): _call_impl
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/jit/_trace.py(1139): _get_trace_graph
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py(380): _trace_and_get_graph_from_model
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py(420): _create_jit_graph
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py(457): _model_to_graph
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py(694): _export
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/utils.py(88): export
/home/oborchers/anaconda3/envs/dev/lib/python3.8/site-packages/torch/onnx/__init__.py(271): export
test.py(42): <module>
```
## Expected behavior
Export should work.
| 08-03-2021 10:28:23 | 08-03-2021 10:28:23 | Hi @oborchers,
Thanks for raising the issue.
GPT-Neo is not supported by `convert_graph_to_onnx.py` and even if the model was potentially successfully exported in the past, I would not be surprised if some axis definition would be wrong.
With the new package `transformers.onnx` we are working on initial support for GPT-Neo, please see the PR [here](https://github.com/huggingface/transformers/pull/12911).
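Once that lands, the export should look roughly like the following (a sketch, the exact options may still change while the PR is in flight):
```
python -m transformers.onnx --model=EleutherAI/gpt-neo-1.3B onnx/gpt-neo-1.3B/
```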
If you want to give it a try, we would love your feedback. <|||||>Hi @mfuntowicz,
thanks for doing a great re-implementation of the ONNX export functions, those are of great help 👍
I may have been a bit over eager in creating the issue, as well as changing state, as I am technically using a custom script as of now.
I went through a lot of hassle to actually re-create the onnx checkpoint to see if the most recent changes to ONNX actually have any effect on the model's performance, as back then the original torch version was way faster than the exported one. See here: https://github.com/microsoft/onnxruntime/issues/7238
So, even if this was exportable properly on your side, it would be almost unusable due to being much slower than the pytorch version, at least with the last exportable version (4.5.0.dev0). Let me try your code tomorrow to see if there are any differences in the results.<|||||>Thanks @oborchers for all the details.
We haven't run benchmark to properly say, we are just validating the outputs (with/without the past buffers) are matching the PyTorch outputs.
We would be very interested in supporting your efforts improving performance for GPT-Neo, so don't hesitate to ping us (@michaelbenayoun and myself).
Also, we can potentially look at what the offline optimizations provided by ORT can bring here.
Thanks 🤗<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,983 | closed | subclassing a torch.utils.data.Dataset object for a T5 model | # 🚀 Feature request
In the example on the HuggingFace website "[Fine-tuning with custom datasets](https://huggingface.co/transformers/master/custom_datasets.html)" it says that for a custom dataset:
"_Now, let’s turn our labels and encodings into a Dataset object. In PyTorch, this is done by subclassing a torch.utils.data.Dataset object and implementing __len__ and __getitem__. In TensorFlow, we pass our input encodings and labels to the from_tensor_slices constructor method. We put the data in this format so that the data can be easily batched such that each key in the batch encoding corresponds to a named parameter of the forward() method of the model we will train._"
So the example provided is for DistilBERT classification, but for a text-to-text model like T5 I got the error "RuntimeError: Could not infer dtype of tokenizers.Encoding" when calling the trainer.
So could you provide more documentation or links to guides to what needs to change for T5 and maybe for other models if needed?
## Motivation
Here is the code I had to change going from the DistilBERT example you provide to T5. I assume it works because the DataCollatorForSeq2Seq() takes care of expanding the labels/output encoding into the features needed by T5? (but I know very little, I am guessing, and I can't find any documentation that suggests this kind of change is needed).

## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 08-03-2021 09:41:32 | 08-03-2021 09:41:32 | This tutorial is out of date and will be rewritten soon. You should have a look at the [maintained examples](https://github.com/huggingface/transformers/tree/master/examples) or the [example notebooks](https://huggingface.co/transformers/notebooks.html) instead.<|||||>Thanks Sylvain, will do. |
transformers | 12,982 | closed | Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers, loading fine-tune models from local isn't working |

## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: Wav2Vec2
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 08-03-2021 03:23:27 | 08-03-2021 03:23:27 | Are you sure you saved your tokenizer in that folder with `tokenizer.save_pretrained`? What files are in this folder?<|||||>No, I didn't. I'm following this notebook https://huggingface.co/blog/fine-tune-wav2vec2-english it doesn't say save tokenizer. I have these files in my folder.

<|||||>Sorry, I meant the `processor`, not the `tokenizer`. You should save it if you want to be able to reload it with `from_pretrained`, or use the initial model to load the processor, since it's unlikely to have changed during your fine-tuning.<|||||>somehow saving the processor doesn't add tokenzier_config.json and special_tokens_map.json to the folder. I saved the tokenizer in the same folder, everything is working now. thank you for your support. |
transformers | 12,981 | closed | fix `Trainer.train(resume_from_checkpoint=False)` is causing an exception | fix with regression test for #12970 | 08-02-2021 16:23:41 | 08-02-2021 16:23:41 | All Tests green and ready for review. 👍<|||||>Thanks again! |
transformers | 12,980 | closed | tapas-base model is not predicting answers well. | Hi,
I was trying to get the answers for my own table. however results are not even reaching expectations. Please find the below code and output. actually, I am extracting tables from PDF,so here instead of providing pdf's i am providing excel sheet for you to test.
```
import pandas as pd
from transformers import TapasForQuestionAnswering,TapasTokenizer
import camelot
import os
import numpy as np
path='/home/jupyter/Projects/ExtractiveQnA/fastapi/knowledgebase/'
appended_data = []
for file in os.listdir(path):
print(file)
tables = camelot.read_pdf(path+file, pages="1-6")
if len(tables) != 0:
for i in range(len(tables)):
table = tables[0].df
table_clean = table.replace("", np.nan).dropna()
table_clean.rename(columns=table_clean.iloc[0], inplace=True)
table_clean.drop(table_clean.index[0], inplace=True)
table_clean.reset_index(drop=True, inplace=True)
appended_data.append(table_clean.astype("str"))
# appended_data=[df.set_index("Model") for df in appended_data]
final_tables=pd.concat(appended_data,axis=1)
final_tables = final_tables.loc[:,~final_tables.columns.duplicated()]
final_tables=final_tables.fillna("NA")
model_name = 'google/tapas-base'
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
queries='What is the service flow rate '
inputs = tokenizer(table=final_tables,
queries=queries,
padding='max_length',
return_tensors="pt")
outputs = model(**inputs)
predicted_answer_coordinates, = tokenizer.convert_logits_to_predictions(
inputs,
outputs.logits.detach(),
)
answers = []
for coordinates in predicted_answer_coordinates:
if len(coordinates) == 1:
# only a single cell:
answers.append(table.iat[coordinates[0]])
else:
# multiple cells
cell_values = []
for coordinate in coordinates:
cell_values.append(table.iat[coordinate])
answers.append(", ".join(cell_values))
print("")
for query, answer, in zip(queries, answers,):
print(query)
print("Predicted answer: " + answer)
```
and the answer I am getting is :
```
GXSHC40N.pdf
GXSF30V.pdf
GXMH31H.pdf
GXSH40V_GXSH45V.pdf
Some weights of TapasForQuestionAnswering were not initialized from the model checkpoint at google/tapas-base and are newly initialized: ['output_bias', 'column_output_bias', 'column_output_weights', 'output_weights']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Token indices sequence length is longer than the specified maximum sequence length for this model (567 > 512). Running this sequence through the model will result in indexing errors.
W
Predicted answer: Rated Capacity* (Grains@ Salt Dose), Total Water Used per Regeneration @ Maximum Salt Dose, Pressure Drop at Rated Service Flow (psig), Water Supply Maximum Hardness (gpg), Water Supply Maximum Clear Water Iron (ppm)***, Water Pressure Limits (minimum-maximum psi)****
```
Can you please help us with this?
[final_tables (2).zip](https://github.com/huggingface/transformers/files/6917872/final_tables.2.zip)
| 08-02-2021 15:05:40 | 08-02-2021 15:05:40 | You are initializing `TapasForQuestionAnswering` with randomly initialized classification heads, hence the predictions will be random. The warning also prints this:
```
Some weights of TapasForQuestionAnswering were not initialized from the model checkpoint at google/tapas-base and are newly initialized: ['output_bias', 'column_output_bias', 'column_output_weights', 'output_weights']
```
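A minimal fix for your snippet is to load a checkpoint that already has trained QA heads, something like:
```python
from transformers import TapasForQuestionAnswering, TapasTokenizer

model_name = "google/tapas-base-finetuned-wtq"  # fine-tuned for table question answering
model = TapasForQuestionAnswering.from_pretrained(model_name)
tokenizer = TapasTokenizer.from_pretrained(model_name)
```
(Also note the other warning in your output: your concatenated table is longer than 512 tokens, so part of it gets cut off; querying smaller per-document tables avoids relying on cells that were dropped.)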
Instead of initializing from `google/tapas-base`, you can initialize from any of the [checkpoints on the hub](https://huggingface.co/models?search=google/tapas) which have "finetuned" in their name, like `google/tapas-base-finetuned-wtq` for example. <|||||>Hey @NielsRogge ,
Thanks for your answer. Unfortunately, the same error appears even for google/tapas-base-finetuned-tabfact, and with google/tapas-large-finetuned-wtq the answer is wrong and not consistent across runs.
```
queries = 'What is the service flow rate value'
GXSHC40N.pdf
GXSF30V.pdf
GXMH31H.pdf
GXSH40V_GXSH45V.pdf
Token indices sequence length is longer than the specified maximum sequence length for this model (568 > 512). Running this sequence through the model will result in indexing errors.
W
Predicted answer: 57.56/1.11, 20-125
```
but ideally it should show 7.5<|||||>Can you provide a colab to reproduce?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,979 | closed | Documentation: Dataset to Model interface examples | # 🚀 Feature request
Add explicit examples to the custom dataset docs which use fully realized key-value pairs and link to the forward methods of the relevant models to emphasize the contract between Dataset `__getitem__()` output and model `forward()`.
## Motivation
This issue arises when you consider the interaction of Datasets and Models. They may be independently well documented but when trying to use them together there are gaps.
Example:
You want to fine tune T5 using `Trainer` using a custom `Dataset`.
The minimal T5 API a user needs to be aware of in this scenario is small but not well documented in the context of custom datasets. The `Dataset` must return an object in this format from its `__getitem__()` method.
```
return {
'input_ids': input_ids,
'attention_mask': attention_mask,
'labels': labels,
}
```
Those three keys are then passed by `Trainer` to T5 via its forward method. Arguably those parameters are the most important first interface you need to know about to train T5 (inside or outside of `Trainer`).
This *is* documented in the T5 docs; however, the key disconnect is that none of the examples in the custom dataset docs use an explicit, fully realized set of key-value pairs. The docs also don't emphasize that those return values *must match* the expected inputs to a model's `forward()` method. A minimal sketch of such a dataset is shown below.
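For illustration only, here is a minimal sketch of such a dataset for T5 fine-tuning with `Trainer` (the class name `MyT5Dataset` and the `texts`/`summaries` arguments are placeholders I made up, not something from the docs):
```python
from torch.utils.data import Dataset

class MyT5Dataset(Dataset):  # hypothetical example dataset
    def __init__(self, texts, summaries, tokenizer, max_length=128):
        # Tokenize inputs and targets up front; lists of token ids are fine,
        # the default data collator converts them to tensors.
        self.inputs = tokenizer(texts, truncation=True, padding="max_length", max_length=max_length)
        self.targets = tokenizer(summaries, truncation=True, padding="max_length", max_length=max_length)

    def __len__(self):
        return len(self.inputs["input_ids"])

    def __getitem__(self, idx):
        # The keys must match parameter names of T5ForConditionalGeneration.forward().
        # (In practice you would also replace pad token ids in `labels` with -100
        # so they are ignored by the loss.)
        return {
            "input_ids": self.inputs["input_ids"][idx],
            "attention_mask": self.inputs["attention_mask"][idx],
            "labels": self.targets["input_ids"][idx],
        }
```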
## Your contribution
I can add these if this gets a few thumbs up.
| 08-02-2021 14:30:32 | 08-02-2021 14:30:32 | The custom dataset doc page is outdated and will be rewritten soon, you should use the [examples scripts](https://github.com/huggingface/transformers/tree/master/examples) or [example notebooks](https://huggingface.co/transformers/notebooks.html) as a base to fine-tune models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,978 | closed | Validation and Evaluation not computed in run_qa.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.0.dev0
- Platform: ubuntu
- Python version:3.8
Models:
- roberta
Library:
- trainer: @sgugger
- pipelines: @LysandreJik
## Information
Model I am using is Roberta.
* While running run_qa.py on SQuAD 2.0 data, the training metrics and the validation/evaluation loss are missing.
* Only the training loss and the evaluation metrics are displayed after fine-tuning.
The task I am working on is:
* an official QuestionAnswer/Squad task
| 08-02-2021 11:43:32 | 08-02-2021 11:43:32 | What command did you use to run the script? Setting `--eval_strategy epoch` for instance will give you the evaluation every epoch.<|||||>@sgugger Yes.
At the time of evaluation [loss](https://github.com/huggingface/transformers/blob/75b8990d9068a2c6ef448c190f2595c17fbcb993/src/transformers/trainer.py#L2206) is empty<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,977 | closed | Control sequence length for Token Classification with Trainer | In the [new examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/token-classification) for running token classification training with pytorch and Trainer, there doesn't seem to be an option to control `max_seq_length`.
In the legacy version of the example, `DataTrainingArgs` has such an option, and for the example script without `Trainer` the option is also present.
Am I missing something? Or should I simply provide data samples pre-tokenized to my desired sequence length if I wish to use `Trainer`?
This was tested on v4.9.1. | 08-02-2021 09:12:00 | 08-02-2021 09:12:00 | There is an option to set the `max_seq_length`, introduced in #12929<|||||>That is it! Thank you. |
transformers | 12,976 | closed | Fix template for inputs docstrings | # What does this PR do?
The templates for the PyTorch models have a mistake in the input docstrings (the parenthesis should be inside the docstring, not in the format), and several models had the same mistake (I realized it while reviewing Splinter). This PR fixes all of those and cleans up a few problems I spotted at the same time:
- image models don't need a format because there is nothing to format in the input docstrings
- some models that had the correct template for the input docstrings also add the parenthesis in some of the formats. | 08-02-2021 06:56:46 | 08-02-2021 06:56:46 | |
transformers | 12,975 | closed | Place BigBirdTokenizer in sentencepiece-only objects | # What does this PR do?
As was pointed out in #12946, it was impossible to import `BigBirdTokenizer` without sentencepiece installed, which shouldn't be the case. This PR fixes that.
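For reference, a minimal sketch of the behavior this PR is meant to restore, in an environment without `sentencepiece` installed (my illustration, not code from the PR):
```python
# Importing the name should work even without sentencepiece installed;
# only instantiating/using the tokenizer actually requires sentencepiece.
from transformers import BigBirdTokenizer
```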
Fixes #12946 | 08-02-2021 06:01:49 | 08-02-2021 06:01:49 | |
transformers | 12,974 | closed | fix typo in example/text-classification README | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 08-01-2021 20:52:53 | 08-01-2021 20:52:53 | |
transformers | 12,973 | closed | Add retrieval model config | 08-01-2021 18:04:28 | 08-01-2021 18:04:28 | ||
transformers | 12,972 | closed | Deberta tf | # What does this PR do?
TFDeBERTa implementation
@patrickvonplaten, @LysandreJik
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-01-2021 12:03:21 | 08-01-2021 12:03:21 | As a result of #13023 , you will need to rebase your PR on master and solve the merge conflicts (basically, you will just need to re-add the models in the auto-mappings as strings). Let us know if you need any help with that.<|||||>Glad to see a tf version! Thank you!<|||||>@LysandreJik
Yes, I am interested in contributing the DeBERTa-v2 model also |
transformers | 12,971 | closed | [FLAX] Potential bug in CLM script when using text files | Hi,
I've seen the following bug when using the CLM script with FLAX in combination with text files, pretty much the same as reported on StackOverflow:
https://stackoverflow.com/questions/65145526/why-new-lines-arent-generated-with-my-fine-tuned-distilgpt2-model
The underlying problem is that newlines are removed, so the output of a fully trained model looks like:
```text
'Der Sinn des Lebens ist es, sich in ein und derselben Welt selbst niederzulassen und den anderen zum Leben auf der Grundlage dieses Modells zu berufen.Denn auch sie sind nur möglich, weil sie mit dem Göttlichen und mit dem göttlichen Willen in Verbindung stehen, die sich aus einer Welt der Liebe und des Friedens füreinander ergeben.Wir müssen die Freiheit der menschlichen und physischen Existenz verteidigen.Denn die Freiheit geht davon aus, dass Menschen nur existieren, weil sie in einem Zustand von Freiheit, Würde, Harmonie und'
```
So no newlines are generated. I've modified:
https://github.com/huggingface/transformers/blob/a4340d3b85fa8a902857d26d7870c53f82a4f666/examples/flax/language-modeling/run_clm_flax.py#L376
to
```python
output = tokenizer([example + "\n" for example in examples[text_column_name]])
```
and the output of the model now is:
```text
Mein Name ist Alexey.\nIch bin...\nOh, verdammt.\n- Was?\n- Ich kann mit dir nicht gut befreundet sein.\n- Sollen wir nicht?\n- Es ist ein bisschen komplizierter.\nUnd du solltest dir die Zähne putzen.\n- Hier war es noch nie.\n- Ich weiß.\nJetzt mal raus, bitte.\n- Das ist ja großartig.\n- Das ist es ja.\n- Wirklich?\n- Es muss nicht nur ein Spaß sein, das weiß ich doch.\nDas ist wirklich gut.\nJa.\nDie Leute mögen es, wenn du hier bist.\nGenau wie ich's tue.\n- Warum?\n- Ich wollte das Gefühl haben.\nEs ist nicht leicht, füreinander zu sorgen.\nAber das ist, was ich wollte.\nIch wollte mich bedanken, dass ich alles getan habe, was du auf die Beine gebracht hast.\nDu hast alles tun, was du wolltest.\nAber das
```
This is only a temporary workaround; the better option is probably to use the `keep_linebreaks` option of the dataset loader (when using text files), but I haven't tested it yet. This option was introduced in https://github.com/huggingface/datasets/pull/1913. A possible usage sketch is shown below.
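For illustration, a minimal sketch of how the option could be passed when loading plain text files (the file names are placeholders; this assumes a `datasets` version that exposes the `keep_linebreaks` parameter of the `text` loader):
```python
from datasets import load_dataset

# Keep the trailing "\n" of every line so the model learns to generate newlines.
raw_datasets = load_dataset(
    "text",
    data_files={"train": "train.txt", "validation": "validation.txt"},
    keep_linebreaks=True,
)
```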
/cc @patrickvonplaten | 08-01-2021 09:35:10 | 08-01-2021 09:35:10 | If I understand correctly this also applies to PyTorch's `run_clm.py` script. I am happy to add the `keep_linebreaks` parameter to both `run_clm.py` and `run_flax_clm.py` if `load_dataset("text")` is used.
@sgugger @lhoestq - what do you think? It seems like multiple people had this problem.
Also @lhoestq - maybe it's a good idea to add some documentation about `keep_linebreaks`, as it can't be found anywhere in the docs. Maybe here: https://huggingface.co/docs/datasets/loading_datasets.html#text-files ? <|||||>This applies to the TensorFlow script as well. I have no problem adding the `keep_linebreaks` parameter there.<|||||>Ok great! @stefan-it - would you maybe be interested in opening a PR to change the following files:
- https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py
- https://github.com/huggingface/transformers/blob/master/examples/tensorflow/language-modeling/run_clm.py
- https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
Also, we should probably open a PR to add docs to https://huggingface.co/docs/datasets/loading_datasets.html#text-files<|||||>And don't forget https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm_no_trainer.py<|||||>Hi,
I tested it with the `keep_linebreaks` parameter and the output of the model is then correct :hugs:
Yeah, I would like to open a PR for these changes, should I wait until #13024 is merged, @patrickvonplaten :thinking: <|||||>Actually, I think #13024 doesn't actually lead to a speed-up :D Feel free to open as soon as you want :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,970 | closed | `Trainer.train(resume_from_checkpoint=False)` is causing an exception | Since `resume_from_checkpoint` can be `str` and `bool` it should be possible to pass `False` to it.
But when `resume_from_checkpoint` is `False` it causes an exception here:
https://github.com/huggingface/transformers/blob/3d4b3bc3fd77e0e48e2364464ea90379f13bcf37/src/transformers/trainer.py#L1049-L1050
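For context, the exception can be triggered with a call like this (minimal sketch; `trainer` is assumed to be an already constructed `Trainer` instance):
```python
trainer.train(resume_from_checkpoint=False)  # currently raises the TypeError shown below
```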
```text
E TypeError: expected str, bytes or os.PathLike object, not bool
```
The simplest solution would be to do this at the beginning of the `train` function:
```python
resume_from_checkpoint = None if not resume_from_checkpoint else resume_from_checkpoint
```
If wanted I can provide a PR. | 07-31-2021 19:07:31 | 07-31-2021 19:07:31 | That seems like the right fix indeed. Please go ahead with a PR, thanks! :-) |
transformers | 12,969 | closed | Add tokenizer method to convert ids to tokens | Adds basic functionality to convert model output to a human-interpretable format for applications such as grammar checking with the T5 CoLA task. @thomwolf
Fixes #12967 | 07-31-2021 19:05:59 | 07-31-2021 19:05:59 | |
transformers | 12,968 | closed | 403 error in colab to download tokenizer | ```
647 class HTTPDefaultErrorHandler(BaseHandler):
648 def http_error_default(self, req, fp, code, msg, hdrs):
--> 649 raise HTTPError(req.full_url, code, msg, hdrs, fp)
650
651 class HTTPRedirectHandler(BaseHandler):
HTTPError: HTTP Error 403: rate limit exceeded
``` | 07-31-2021 18:50:12 | 07-31-2021 18:50:12 | Please follow the issue template, there is nothing we can do to help otherwise. |
transformers | 12,967 | closed | Unable to convert output to interpretable format | # 🚀 Feature request
There is no way to convert model outputs to a human-interpretable format, such as a list of token strings. Without this feature, the output has little practical use beyond benchmarking model performance, which does not require such a conversion.
## Motivation
I'm trying to use the T5 CoLA task to determine whether an input sentence is grammatical.
## Your contribution
See #12969
| 07-31-2021 18:46:13 | 07-31-2021 18:46:13 | |
transformers | 12,966 | closed | Workaround for training models with really big text files |
# 🚀 Feature request
Provide a workaround for `run_mlm.py` when working with big text files.
## Motivation
I'm trying to train a `RoBERTa` model with a lot of big text files (~ 50GB of text). When doing so, I'm facing two obstacles:
1. Tokenization beforehand creates a lot of cache storage (See #10204), so one has to resort to on-the-fly tokenization using `set_transform`
2. `datasets` is quite slow when working with really big text files (see https://github.com/huggingface/datasets/issues/2210, https://github.com/huggingface/datasets/issues/2252). Some fixes have been proposed but to my knowledge the issue persists
Can you provide an example of how to work around these two issues? I suppose that using a custom torch Dataset could (temporarily) fix this.
## Your contribution
Inspired by https://github.com/huggingface/transformers/issues/10278#issuecomment-805245903, I replaced the `datasets` class by this one. As far as I could see, time improves vs using `datasets` with `set_transform`, but I'm not really sure if it is optimal, particularly regarding parallelism (I'm running this script with `python xla_spawn.py`)
```python
from torch.utils.data import IterableDataset
class BatchProcessedDataset(IterableDataset):
def __init__(self, files, tokenizer, batch_size=4096, limit=-1):
self.files = files
self.batch_size = batch_size
self.tokenizer = tokenizer
self.limit = limit
def __iter__(self):
num_iter = 0
for file_path in self.files:
with open(file_path) as f:
next_batch = [x.strip("\n") for _, x in zip(range(self.batch_size), f)]
while next_batch:
tokenized_batch = self.tokenizer(next_batch, padding='max_length', truncation=True, return_special_tokens_mask=True)
for encoding in tokenized_batch.encodings:
if num_iter == self.limit:
return
yield {
"input_ids": encoding.ids,
"token_type_ids": encoding.type_ids,
"attention_mask": encoding.attention_mask,
"special_tokens_mask": encoding.special_tokens_mask
}
num_iter += 1
next_batch = [x.strip("\n") for _, x in zip(range(self.batch_size), f)]
``` | 07-31-2021 13:45:02 | 07-31-2021 13:45:02 | cc @sgugger @lhoestq<|||||>The cache issue should be mostly fixed, now that datasets stores the tokenized inputs with the right precision. If it's not, it should be discussed on the Datasets repo.
The second issue should also be discussed on the Datasets repo.
As mentioned on the main README, the examples provided here are just this: examples. You can adapt them to your use case (as you did) but we leave them as generic as possible on purpose.<|||||>Thanks @sgugger for your answer; it's true you can't just add every possible example there, and of course I don't intend to discuss `datasets` issues here.
I share a gist with a modified version of `run_mlm.py` in case anyone is facing the same problem.
https://gist.github.com/finiteautomata/bef480d508d12e2028fdeae19a92b350 |
transformers | 12,965 | closed | Bugs when fine tuning the gpt2 | Transformers Version: 4.8.2
Torch Version: 1.8.0
I am using the official script to fine tune the gpt2 on the csv files.
the script:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm_no_trainer.py
train and validation file makeup:
```
df_train_ft_aug.rename(columns={'content': 'text'}).sample(frac=1).to_csv(train_file, index=False)
df_train_ft_aug.rename(columns={'content': 'text'}).sample(frac=0.2).to_csv(validation_file, index=False)
```
My shell command:
```
python -u ./run_clm_no_trainer.py \
--num_train_epochs 7 \
--train_file './fintune_csvs/stsa_train_finetune.csv' \
--validation_file './fintune_csvs/stsa_test_finetune.csv' \
--model_name_or_path gpt2 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--output_dir "./finetune_gpt2_stsa" \
--preprocessing_num_workers 16 \
--block_size 256 --overwrite_cache True
```
where the csv files contain a column named 'text' used for fine-tuning the model.
However, I always get errors like the ones below, which point to mismatched sequence lengths in the dataloader:
> File "./run_clm_no_trainer.py", line 503, in <module>
> main()exts in chunks of 256 #12: 0%| | 0/1 [00:00<?, ?ba/s]
> File "./run_clm_no_trainer.py", line 480, in main
> for step, batch in enumerate(eval_dataloader):
> File "/usr/local/lib/python3.6/dist-packages/accelerate/data_loader.py", line 289, in __iter__
> for batch in super().__iter__():
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
> data = self._next_data()
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 557, in _next_data
> data = self._dataset_fetcher.fetch(index) # may raise StopIteration
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
> return self.collate_fn(data)
> File "/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py", line 80, in default_data_collator
> batch[k] = torch.tensor([f[k] for f in features])
> ValueError: expected sequence of length 256 at dim 1 (got 52)
Next time I run it, it returns the similar error:
> ValueError: expected sequence of length 168 at dim 1 (got 136)
Then I modified the input parameters of the tokenizer:
```
tokenizer.pad_token = tokenizer.eos_token
def tokenize_function(examples):
    return tokenizer(examples[text_column_name], padding=True, truncation=True)
```
This seems to fix the problem. However, the generated texts are quite short after this change.
Any suggestions?
| 07-31-2021 07:44:03 | 07-31-2021 07:44:03 | Pinging @sgugger<|||||>It's hard to investigate more without having the data. Adding padding when fine-tuning GPT-2 is a very bad idea when fine-tuning GPT-2, which does not have a padding token, and it shouldn't be necessary. Could you provide us with a reproducer that includes the data?<|||||>> It's hard to investigate more without having the data. Adding padding when fine-tuning GPT-2 is a very bad idea when fine-tuning GPT-2, which does not have a padding token, and it shouldn't be necessary. Could you provide us with a reproducer that includes the data?
Thanks for your suggestion. I will check my data to meet the default setting of fine-tuning.
By the way, should the eos_token, <endoftext>, be appended to the end of each sample? (the text column in the csv files)
@sgugger
<|||||>If it's not done by the tokenizer, yes it should.<|||||>> some people do deserve'right to be forgotten'– but law's power shouldn't rest...<|endoftext|>
> cyrus bus burns on way to no ; she surprises cat's meow crowd<|endoftext|>
> eu commission approves uk's carphone, dixons merger<|endoftext|>
> miley cyrus fan arrested<|endoftext|>
> rdio, crackle, vudu add chromecast support<|endoftext|>
> being a cynic linked to tripled risk of developing dementia, finland study suggests<|endoftext|>
> australia, japan strike trade deal<|endoftext|>
> record low teen birth rate not low enough, says cdc<|endoftext|>
> legendary house music dj frankie knuckles dies aged 59<|endoftext|>
> nhtsa closes tesla investigations : reuters<|endoftext|>
> brad pitt speaks out on premiere punching<|endoftext|>
> twitter's users are in asia, but its revenue is in the us<|endoftext|>
> new report questions effectiveness of flu drug tamiflu<|endoftext|>
> hilary duff talks " really difficult " split from mike comrie<|endoftext|>
> the top 10 reasons why'guardians of the galaxy'is awesome<|endoftext|>
> we had a blast at the planes : fire and rescue red carpet premiere!<|endoftext|>
> fcc extends neutrality comment deadline after site crashes<|endoftext|>
> olivia munn lives in a haunted house<|endoftext|>
> uk agency invests in vfx house to create virtual reality content<|endoftext|>
> of mice and men must die<|endoftext|>
> death toll in w. african ebola outbreak rises to 518<|endoftext|>
> cheaper gas, food push down producer prices<|endoftext|>
> tesla opens up patent portfolio to promote innovation in electronic car...<|endoftext|>
> useful android tips that you should know<|endoftext|>
> autism diagnoses on the rise<|endoftext|>
> u. s. stock futures rising ahead of testimony from fed chair<|endoftext|>
> blackberry z3 review<|endoftext|>
> update 1 - buffett's berkshire hathaway buys stake in verizon, adds to wal - mart<|endoftext|>
> st. luke's improves, but easton hospital falters in safety ratings<|endoftext|>
> drowsy driving is more common than you think<|endoftext|>
> republicans nab approval for '. gop'internet domain<|endoftext|>
> apple says sold well over 800 million mobile devices<|endoftext|>
> the dot view case for the one m8 is in htc's store for $ 50, not available for...<|endoftext|>
> physicians push for extension of medicaid reimbursement increase<|endoftext|>
> mobile fix : chinese ipos, first party data and iphone 6<|endoftext|>
> ranking the country's best and worst jobs<|endoftext|>
> nerdnews : marvel comics picks a woman to be the next thor<|endoftext|>
> men with eating disorders slow to get help, study shows<|endoftext|>
> apple eyeing beats electronics for $ 3. 2 bln<|endoftext|>
> measles update for the united states<|endoftext|>
> former'scandal'star arrested<|endoftext|>
> us economy shrank at steep 2. 9 percent rate<|endoftext|>
> white house : medicaid expansion would have covered 120k wisconsinites<|endoftext|>
> samsung galaxy k zoom goes official with 20. 7mp camera, 10x optical zoom<|endoftext|>
> asian stocks tumble on weak china, japan data<|endoftext|>
> killer virus boosts bacon prices<|endoftext|>
> e - cig industry awaits federal regs<|endoftext|>
> what would you do to get your cell phone back?<|endoftext|>
> dc circuit brings back rule limiting bank fees<|endoftext|>
> texas nuke site increases monitoring of containers<|endoftext|>
> 10 worst cities for spring allergies<|endoftext|>
> taxi drivers in europe protest over uber cab service<|endoftext|>
> taco bell fires second shot at mcdonald's<|endoftext|>
> a brand - new meteor shower could be spectacular tonight — here's how to...<|endoftext|>
> argentina debt default 101 : what's at stake? ( + video )<|endoftext|>
> wikipedia medical entries 90 % inaccurate<|endoftext|>
> selweski : april 15 may have marked the last tax day<|endoftext|>
> no real progress on child obesity, latest report says<|endoftext|>
> skin cancer rate increases in north east<|endoftext|>
> ambassador drives into history : hm kills india's oldest car<|endoftext|>
> super moon to brighten summer sky<|endoftext|>
> google inc ( nasdaq : goog ) beats apple inc. ( nasdaq : aapl ) in introducing...<|endoftext|>
> samsung galaxy s5 zoom gets fcc certification<|endoftext|>
> overdose death rates drop in states with medical marijuana laws<|endoftext|>
> japanese automakers recall 3 mn vehicles for airbag defect<|endoftext|>
> the white house has released the definitive report on climate change, and...<|endoftext|>
> bitcoin value and price in silk road auction : us marshals receive offers from...<|endoftext|>
> see christian hendricks, elisabeth moss & others before they were on " mad...<|endoftext|>
> bnp paribas nears up to usd9bn settlement with us authorities - source<|endoftext|>
> browns owner jimmy haslam won't be punished by nfl, per report<|endoftext|>
> kristin cavallari defends her choice not to vaccinate her child<|endoftext|>
> us manufacturing gaining on china, brazil and rest of world, study finds<|endoftext|>
> emma stone addresses weight criticisms in ( typically awesome ) fashion<|endoftext|>
> billions wasted on flu drug : researchers<|endoftext|>
> spacecraft crashes on moon to end mission<|endoftext|>
> chinese manufacturing reaches six - month high, official figures show<|endoftext|>
> sports day at greatham primary<|endoftext|>
> pluto's moon may have had an underground ocean<|endoftext|>
> starbucks'oprah - branded tea ; nyc's macaron day<|endoftext|>
> microsoft has unveiled the new nokia x2<|endoftext|>
> caught on tape : emt driver voguing<|endoftext|>
> ' deliver us from evil'is a genre hopping & highly entertaining piece of cinema<|endoftext|>
> mobile county : 12 new hiv cases reported in may alone, free testing offered<|endoftext|>
> roche, exelixis skin cancer drug delays tumor progression<|endoftext|>
> ntsb faults pilot'mismanagment'in asiana flight - ktbs. com - shreveport, la...<|endoftext|>
> new skype translator offers nearly real - time audio translation<|endoftext|>
> the grand budapest hotel is both a sly crime caper and a charming ode to old...<|endoftext|>
> driverless cars will be on uk roads by january 2015<|endoftext|>
> space giants join forces to battle spacex : this is how cheap space travel begins<|endoftext|>
> weekend report :'captain america'wins close fight with'rio 2 '<|endoftext|>
> sc business notebook, may 24<|endoftext|>
> 21st century fox confirms rejected bid for time warner<|endoftext|>
> usher bounces his head on nicki minaj's butt at the 2014 mtv vmas : gif<|endoftext|>
> apple opens os x beta testing to all users with new seed program<|endoftext|>
> anthrax discovered in beef in hungary<|endoftext|>
> iowa farmer chris soules is abc's next'bachelor'| the republic<|endoftext|>
> murdoch names son lachlan as vice president of media empire<|endoftext|>
> cdc reports first chikungunya case acquired in the united states ; disease...<|endoftext|>
> shailene woodley on being cut from amazing spider - man 2 : " was i awful? "<|endoftext|>
> justina pelletier heads home after judge ends state custody<|endoftext|>
> singer chris brown's dc assault trial is delayed for months ; judge says singer to...<|endoftext|>
> android wear : 5 things developers need to know<|endoftext|>
> micro machine macro funding<|endoftext|>
> fcc forced to push back comment deadline on net neutrality rules<|endoftext|>
> hgtv slammed for excluding anti - gay christian consumers from america's...<|endoftext|>
> ' mom mobiles'a shrinking category for automakers<|endoftext|>
> malaysia airlines considers re - branding itself<|endoftext|>
> review : 50 cent's " animal ambition "<|endoftext|>
> hump day unusual moment : little roger & the goosebumps “ stairway to...<|endoftext|>
> women happier at work than home, study finds<|endoftext|>
> awfully good : sharknado 2<|endoftext|>
> annie leibovitz axed kim and kanye west wedding gig at last minute<|endoftext|>
> former astrazeneca chief executive attacks pfizer deal<|endoftext|>
> private funeral for mick jagger's longtime girlfriend, l'wren scott, held in los...<|endoftext|>
> government allots p6. 8m for aquino's trip to myanmar<|endoftext|>
> ( click the phrases to see a list )<|endoftext|>
> the - dream arrested for felony assault on pregnant ex - girlfriend<|endoftext|>
> kanye west gives 20 - minute speech, says the kardashians are'the most...<|endoftext|>
> team clones stem cells from 75 - year - old's skin<|endoftext|>
> sober smartphone app aids boozers<|endoftext|>
> spread of polio is now a world health emergency, u. n. says<|endoftext|>
> ' true blood'recap : [ spoiler ] is killed off — shocking death<|endoftext|>
> how game - changing was game of thrones'big reveal?<|endoftext|>
> alcohol costs us $ 224bn a year<|endoftext|>
> bmw investing $ 1 billion in mexican assembly plant<|endoftext|>
> report finds st. johns county florida's healthiest county<|endoftext|>
> giant of the skies was like'a dragon '<|endoftext|>
> beyonce named as world's most powerful celebrity<|endoftext|><|||||>@sgugger Hello, I am trying to reproduce this error. The texts above are the samples used for fine-tuning GPT-2; they form the `text` column.
```
train_file = './fintune_csvs/{}_train_finetune_32_{}.csv'.format(args.dsn, seed)
validation_file = './fintune_csvs/{}_test_finetune_32_{}.csv'.format(args.dsn, seed)
ds.df_train['text'] = ds.df_train['content'] + tokenizer_gpt2.eos_token
ds.df_test['text'] = ds.df_test['content'] + tokenizer_gpt2.eos_token
ds.df_train[['text']].sample(frac=1).to_csv(train_file, index=False)
ds.df_test[['text']].sample(frac=1).to_csv(validation_file, index=False)
model_output_path = "./finetune_gpt2/{}_32_{}".format(args.dsn, seed)
os.system(
"CUDA_VISIBLE_DEVICES=1 python -u ./run_clm_no_trainer.py \
--num_train_epochs {} \
--train_file {} \
--validation_file {} \
--model_name_or_path gpt2 \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--output_dir {} \
--preprocessing_num_workers 16 --overwrite_cache True \
--block_size 256".format(args.ft_epochs, train_file, validation_file, model_output_path) )
```
`run_clm_no_trainer.py` is the official script from the transformers repo.
When I use another dataset, which has longer sentences than this one, there is no error and the fine-tuning process runs fine.
<|||||>I also tried sentiment analysis dataset, which also consists of relatively short sentences. The error came out too.<|||||>> Grouping texts in chunks of 256 #11: 100%|█████████████████████████████| 1/1 [00:00<00:00, 25.61ba/s]
> Grouping texts in chunks of 256 #12: 100%|█████████████████████████████| 1/1 [00:00<00:00, 28.63ba/s]
> Grouping texts in chunks of 256 #13: 100%|█████████████████████████████| 1/1 [00:00<00:00, 25.03ba/s]
> Grouping texts in chunks of 256 #14: 100%|█████████████████████████████| 1/1 [00:00<00:00, 23.64ba/s]
> Grouping texts in chunks of 256 #15: 100%|█████████████████████████████| 1/1 [00:00<00:00, 30.86ba/s]
> 08/20/2021 03:43:32 - INFO - __main__ - ***** Running training *****
> 08/20/2021 03:43:32 - INFO - __main__ - Num examples = 16
> 08/20/2021 03:43:32 - INFO - __main__ - Num Epochs = 1 | 0/1 [00:00<?, ?ba/s]
> 08/20/2021 03:43:32 - INFO - __main__ - Instantaneous batch size per device = 16
> 08/20/2021 03:43:32 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 16
> 08/20/2021 03:43:32 - INFO - __main__ - Gradient Accumulation steps = 1 | 0/1 [00:00<?, ?ba/s]
> 08/20/2021 03:43:32 - INFO - __main__ - Total optimization steps = 1
> 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last):
> File "./run_clm_no_trainer.py", line 503, in <module> | 0/1 [00:00<?, ?ba/s]
> main()
> File "./run_clm_no_trainer.py", line 463, in main | 0/1 [00:00<?, ?ba/s]
> for step, batch in enumerate(train_dataloader):
> File "/usr/local/lib/python3.6/dist-packages/accelerate/data_loader.py", line 289, in __iter__
> for batch in super().__iter__():
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
> data = self._next_data()
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 557, in _next_data
> data = self._dataset_fetcher.fetch(index) # may raise StopIteration
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
> return self.collate_fn(data)
> File "/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py", line 80, in default_data_collator
> batch[k] = torch.tensor([f[k] for f in features])
> ValueError: expected sequence of length 135 at dim 1 (got 112)<|||||>I tried another way of organising the training corpus, as a txt file:
```
with open (train_file, 'w') as f:
f.write(" {} ".format(tokenizer_gpt2.eos_token).join(ds.df_train['content'].tolist()))
with open (validation_file, 'w') as f:
f.write(" {} ".format(tokenizer_gpt2.eos_token).join(ds.df_test['content'].tolist()))
```
The same error occurs.
> 33%|███▎ | 1/3 [00:00<00:01, 1.33it/s]Traceback (most recent call last):
> File "./run_clm_no_trainer.py", line 483, in <module>
> main()
> File "./run_clm_no_trainer.py", line 460, in main
> for step, batch in enumerate(eval_dataloader):
> File "/usr/local/lib/python3.6/dist-packages/accelerate/data_loader.py", line 289, in __iter__
> for batch in super().__iter__():
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 517, in __next__
> data = self._next_data()
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 557, in _next_data
> data = self._dataset_fetcher.fetch(index) # may raise StopIteration
> File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
> return self.collate_fn(data)
> File "/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py", line 80, in default_data_collator
> batch[k] = torch.tensor([f[k] for f in features])
> ValueError: expected sequence of length 256 at dim 1 (got 117)<|||||>Yes, this all points to your corpus being too short to form a full batch. You should use a lower batch size or a lower block size. |
transformers | 12,964 | closed | Using `model.sample()` and increasing the `max_length` leads to CUDA OOM crash | ## Environment info
- `transformers` version: 4.6.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
I think @patrickvonplaten and @LysandreJik
## Information
Model I am using: GPT-Neo.
The problem arises when using:
* my own modified scripts: Simply using `.sample()` method with `GPTNeoForCausalLM`
The tasks I am working on is:
* my own task: Simple Text generation.
## To reproduce
Steps to reproduce the behavior:
1. Visit this colab and run GPU runtime: https://colab.research.google.com/drive/1VjVUrptwgUx3TxdlcVqPNwXyX6YJYSAK
2. Execute Runtime -> Run All
3. Note the `nvidia-smi` output.
4. In the last cell, increase the `max_length` from 30 to 350. And run again.
5. Even if the crash doesn't occur, check `nvidia-smi` again.
## Expected behavior
To not consume that much GPU memory. It is understood that some tasks may take some memory, but using **~10 GB** for an increase of 300 tokens is... something wrong. | 07-31-2021 07:25:52 | 07-31-2021 07:25:52 | Related to https://github.com/huggingface/transformers/issues/11320
Cc @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,963 | closed | Prevent `Trainer.evaluate()` crash when using only tensorboardX | # What does this PR do?
Fixes #12962
I did not write any tests because it seems like the logging integration callbacks have absolutely no testing at all, and I'm not creating a whole set of tests for a one-line fix.
## Who can review?
trainer: @sgugger
| 07-31-2021 02:53:48 | 07-31-2021 02:53:48 | Thanks for the fix! |
transformers | 12,962 | closed | `Trainer.evaluate()` crashes when using only tensorboardX | ## Environment info
- `transformers` version: 4.9.1
- Platform: Linux-3.10.0-1160.31.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.9
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes, but not relevant
- Using distributed or parallel set-up in script?: no
### Who can help
This might be a one-line fix, and I will be submitting a PR shortly. However, it might be a sign of a bigger problem, so I'm still tagging the person listed for the trainer, @sgugger.
## Information
Model I am using: `gpt2` (not model-specific issue, though)
The problem arises when using:
- [x] the official example scripts: (give details below)
The task I am working on is the one given in the example script.
## To reproduce
Steps to reproduce the behavior:
1. Create an environment with [`requirements.txt`](https://github.com/huggingface/transformers/blob/v4.9.1/examples/pytorch/language-modeling/requirements.txt) and `tensorboardX==2.4` installed but without tensorboard itself installed.
2. Run [`run_clm.py`](https://github.com/huggingface/transformers/blob/v4.9.1/examples/pytorch/language-modeling/run_clm.py) with the following script (based on [the example in the README](https://github.com/huggingface/transformers/blob/v4.9.1/examples/pytorch/language-modeling/README.md#gpt-2gpt-and-causal-language-modeling)):
```bash
time python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir output_dir \
--logging_dir output_dir/logs \
--logging_strategy epoch \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 2 \
--max_train_samples 16 \
--max_eval_samples 8 \
--report_to tensorboard
```
3. See the stack trace that was output:
```python
Traceback (most recent call last):
File "run_clm.py", line 515, in <module>
main()
File "run_clm.py", line 483, in main
metrics = trainer.evaluate()
File "venv/lib/python3.7/site-packages/transformers/trainer.py", line 2055, in evaluate
self.log(output.metrics)
File "venv/lib/python3.7/site-packages/transformers/trainer.py", line 1720, in log
self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)
File "venv/lib/python3.7/site-packages/transformers/trainer_callback.py", line 371, in on_log
return self.call_event("on_log", args, state, control, logs=logs)
File "venv/lib/python3.7/site-packages/transformers/trainer_callback.py", line 388, in call_event
**kwargs,
File "venv/lib/python3.7/site-packages/transformers/integrations.py", line 391, in on_log
self.tb_writer.add_scalar(k, v, state.global_step)
File "venv/lib/python3.7/site-packages/tensorboardX/writer.py", line 453, in add_scalar
self.comet_logger.log_metric(tag, display_name, scalar_value, global_step)
AttributeError: 'NoneType' object has no attribute 'log_metric'
```
(I edited the stack trace to remove the parts of the path outside the virtual environment for improved readability.)
## Expected behavior
The script should not crash.
## Notes
I figured out what is causing the crash. When training ends, `TensorBoardCallback.on_train_end()` is called, which runs `self.tb_writer.close()`, which sets `self.tb_writer.comet_logger` to `None`. When `TensorBoardCallback.on_log()` is called again during evaluation, `self.comet_logger` is called again, even though it's `None`. The bug appears to essentially be a use-after-free bug. This specific exception only happens when tensorboard is not installed because only tensorboardX uses `comet_logger`.
The solution is simple: set `self.tb_writer` to `None` immediately after the call to `self.tb_writer.close()`. When `TensorBoardCallback.on_log()` is called again during evaluation, the method detects that `self.tb_writer is None` and re-initializes it, which makes everything work, at least finishing without crashing. I will be releasing a PR with this fix very soon.
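For illustration, a sketch of what that one-line change inside `TensorBoardCallback` could look like (paraphrased from the description above, not copied from the actual PR):
```python
    # Sketch of the proposed fix inside TensorBoardCallback (transformers/integrations.py).
    def on_train_end(self, args, state, control, **kwargs):
        if self.tb_writer:
            self.tb_writer.close()
            self.tb_writer = None  # reset so on_log() re-creates the writer if it is called again
```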
However, given that more of these logging callbacks can be called during evaluation and some of them also have `on_train_end()` functions that close resources, there might be a bigger problem here involving the calling of logging integrations during evaluation. I don't know enough about them to determine that for myself, though.
| 07-31-2021 02:53:15 | 07-31-2021 02:53:15 | same problem<|||||>I simply update the transformers,then it works fine. It seems like the newest version has fixed this error of tensorboradX. |
transformers | 12,961 | closed | Use min version for huggingface-hub dependency | # What does this PR do?
This PR proposes to use a min version for the `huggingface_hub` dependency.
The reasoning behind this is that we're currently running into dependency conflicts between `autonlp` (which uses `transformers` v4.8.0) and `evaluate` which relies on `huggingface_hub` v0.0.15. If I am not mistaken, setting a min version will provide the flexibility for `pip` to figure out which one to pick.
I realise that `huggingface_hub` is under active development, so feel free to close this PR if there's a strong need to freeze the version explicitly in `transformers`.
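Concretely, the change amounts to relaxing the pin roughly like this (an illustrative sketch of the intended constraint, not the literal `setup.py` diff; the exact variable and context may differ):
```python
# Sketch of the relevant dependency specifier in setup.py
install_requires = [
    # before: "huggingface-hub==0.0.12"
    "huggingface-hub>=0.0.12",
]
```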
cc @abhishekkrthakur @SBrandeis
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 07-31-2021 01:12:57 | 07-31-2021 01:12:57 | Yes the exact pin has been set on purpose because `huggingface_hub` is not stable enough right now, and there might breaking changes in the future that would break older versions of Transformers.
We can accept an upgrade on the Transformers side (bumping to 0.0.15) but we will only switch to a minimum version when `huggingface_hub` is more mature.
Also, for something like this, no merge before @LysandreJik is back please, as he knows more than me (and may have a different opinion) :-)<|||||>Thanks for the clarification @sgugger! I totally understand the reasoning to pin the exact version, so will find a work around in the meantime.
I'll keep this PR open until @LysandreJik is back in case he wants to accept a bump to v0.0.15 😃 <|||||>i found a work around using the `/datasets` endpoint so happy to close this PR until `huggingface_hub` is more stable<|||||>Just checked, and it should be fine to bump to 0.0.15. Most of the `huggingface_hub` specifics in `transformers` is using the logic defined in `src/transformers/hf_api.py`, so close to nothing would be affected by that upgrade.
Feel free to upgrade and merge if all tests pass, I'll keep a close eye on the slow tests. |
transformers | 12,960 | closed | [Very WIP] Migrating ALL pipelines to new testing + fixes | # What does this PR do?
For now we just need to see the test times, to see how bad we are and how much we need to improve.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 07-30-2021 20:25:42 | 07-30-2021 20:25:42 | I'm curious how long it takes to run the whole suite - would it be possible to add a commit that impacts all pipelines to see how long that takes?<|||||>@LysandreJik That's the one.
<|||||>Or you mean hitting the `pipelines` files ?<|||||>Yes, I can push a commit that does that on your branch if you want!<|||||>Go ahead ! Not sure how to trigger it.
<|||||>Done in many smaller PRs. |
transformers | 12,959 | closed | huggingface-hub version conflict | `transformers` requires `huggingface-hub==0.0.12` in `setup.py` in the `master` branch.
`huggingface-hub` has just released version `0.0.15`.
Other projects that use `huggingface-hub`, such as `https://github.com/UKPLab/sentence-transformers`, are happy with the latest version of `huggingface-hub`.
Depending upon the order of installation, a version conflict may result. Here's a sample message from our project, `kgtk`, which requires both `transformers` and `sentence-transformers`:
```pkg_resources.ContextualVersionConflict: (huggingface-hub 0.0.15 (/opt/anaconda3/envs/kgtk-env/lib/python3.8/site-packages/huggingface_hub-0.0.15-py3.8.egg), Requirement.parse('huggingface-hub==0.0.12'), {'transformers'})```
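For anyone hitting this before the pin is relaxed, a quick way to see which version is actually active (a minimal sketch; nothing project-specific) is:
```python
# Minimal check: report the installed huggingface-hub version and whether it
# satisfies the ==0.0.12 pin that transformers declares.
import pkg_resources

hub = pkg_resources.get_distribution("huggingface-hub")
print(hub.version)  # e.g. 0.0.15, which violates the ==0.0.12 requirement
print(hub.version in pkg_resources.Requirement.parse("huggingface-hub==0.0.12"))
```
Downgrading `huggingface-hub` to 0.0.12 after everything else is installed should also silence the conflict, but only as a stopgap.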
| 07-30-2021 19:18:36 | 07-30-2021 19:18:36 | This is discussed in #12961 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,958 | open | Weird behavior with mBART-50 and Spanish | ## Environment info
- `transformers` version: 4.9.1
- Platform: Linux-5.4.0-1054-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
## Who can help
@patrickvonplaten
## Information
I am seeing weird behavior with mBART-50 and Spanish. Please look at the code below:
```
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
text = "http://www.ted.com/talks/stephen_palumbi_following_the_mercury_trail.html"
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
tokenizer.src_lang = "es_XX"
encoded = tokenizer(text, return_tensors="pt")
generated_tokens = model.generate(**encoded, forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"])
tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
The output is:
```
['(b) To continue to cooperate closely with the Special Rapporteur on extrajudicial, summary or arbitrary executions, the Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, the Special Rapporteur on the sale of children, child prostitution and child pornography, the Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, the Special Rapporteur on the sale of children, child prostitution and child pornography, the Special Rapporteur on the sale of children, child prostitution and child pornography, the Special Rapporteur on the sale of children, child prostitution and child pornography, the Special Rapporteur on violence against women, its causes and consequences, the Special Rapporteur on the sale of children, child prostitution and child pornography, the Special Rapporteur on the sale of children, child prostitution and child pornography, the Special']
```
However if I change the source language to french `tokenizer.src_lang = "fr_XX"` or any other language, I get the following output (which is what you expect):
```
['http://www.ted.com/talks/stephen_palumbi_following_the_mercury_trail.html']
```
The same behavior occurs with other texts as well (e.g., "888"). Do you know why it is unique to Spanish, and do you have any idea how to correct it?
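For what it's worth, constraining generation should at least cap the runaway output, though it does not explain the root cause. A sketch (only the extra `generate` arguments differ from the snippet above):
```python
# Possible mitigation (not a fix for the root cause): cap the length and forbid
# repeated n-grams so the decoder cannot loop on one phrase.
generated_tokens = model.generate(
    **encoded,
    forced_bos_token_id=tokenizer.lang_code_to_id["en_XX"],
    max_length=64,
    no_repeat_ngram_size=3,
    num_beams=5,
)
```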
Thanks!
| 07-30-2021 18:43:24 | 07-30-2021 18:43:24 | Pinging @patil-suraj too, and @mrm8488 might have played with that model in the past.<|||||>Any progress here? I've faced the exact same problem when attempting to translate from Spanish, although slightly different output:
```
The Committee recommends that the State party take all necessary measures to ensure that the right to adequate housing is guaranteed in the State party's next periodic report, and that the State party take all necessary measures to ensure that the right to adequate housing is guaranteed in its next periodic report.
```<|||||>@patil-suraj - could you take a look here? |
transformers | 12,957 | closed | 404 Error when loading pretrained model, after finetuning | I'm trying to load up a finetuned T5 model I've saved but I keep getting a 404 error.
I have my model saved in the same directory as my jupyter notebook, at ```textgen_models/textgen_model_shuffle_e10/```
It contains a config.json and a pytorch_model.bin.
"""
```
model_path = Path("textgen_models/textgen_model_shuffle_e10/")
mdl = T5ForConditionalGeneration.from_pretrained(model_path)
```
"""
I am getting the following 404 error, which tells me I am not specifying the path to the model properly, though I'm not sure what I'm doing incorrectly. Can someone help?
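For reference, this is what I believe the call should look like if the relative path were the issue (a sketch only, with the path made absolute):
```python
from pathlib import Path
from transformers import T5ForConditionalGeneration

# Resolve to an absolute path so the folder cannot be mistaken for a Hub model id.
model_dir = Path("textgen_models/textgen_model_shuffle_e10").resolve()
mdl = T5ForConditionalGeneration.from_pretrained(str(model_dir))
```
The full traceback from my original call is below: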
"""
```
404 Client Error: Not Found for url: https://huggingface.co/textgen_models%5Ctextgen_model_shuffle_e10/resolve/main/config.json
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
~\anaconda3\envs\imagesTemporal\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
511 # Load from URL or cache if already cached
--> 512 resolved_config_file = cached_path(
513 config_file,
~\anaconda3\envs\imagesTemporal\lib\site-packages\transformers\file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)
1362 # URL, so get it from the cache (downloading if necessary)
-> 1363 output_path = get_from_cache(
1364 url_or_filename,
~\anaconda3\envs\imagesTemporal\lib\site-packages\transformers\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, use_auth_token, local_files_only)
1533 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1534 r.raise_for_status()
1535 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
~\anaconda3\envs\imagesTemporal\lib\site-packages\requests\models.py in raise_for_status(self)
952 if http_error_msg:
--> 953 raise HTTPError(http_error_msg, response=self)
954
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/textgen_models%5Ctextgen_model_shuffle_e10/resolve/main/config.json
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_11592/172074313.py in <module>
1 model_path = Path("textgen_models/textgen_model_shuffle_e10/")
----> 2 mdl = T5ForConditionalGeneration.from_pretrained(pretrained_model_name_or_path = model_path)
~\anaconda3\envs\imagesTemporal\lib\site-packages\transformers\modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1181 if not isinstance(config, PretrainedConfig):
1182 config_path = config if config is not None else pretrained_model_name_or_path
-> 1183 config, model_kwargs = cls.config_class.from_pretrained(
1184 config_path,
1185 *model_args,
~\anaconda3\envs\imagesTemporal\lib\site-packages\transformers\configuration_utils.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
453
454 """
--> 455 config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
456 if "model_type" in config_dict and hasattr(cls, "model_type") and config_dict["model_type"] != cls.model_type:
457 logger.warn(
~\anaconda3\envs\imagesTemporal\lib\site-packages\transformers\configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
530 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
531 )
--> 532 raise EnvironmentError(msg)
533
534 except json.JSONDecodeError:
OSError: Can't load config for 'textgen_models\textgen_model_shuffle_e10'. Make sure that:
- 'textgen_models\textgen_model_shuffle_e10' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'textgen_models\textgen_model_shuffle_e10' is the correct path to a directory containing a config.json file
```
"""
| 07-30-2021 18:16:15 | 07-30-2021 18:16:15 | How did you save your model? Was it with the `same_pretrained` method? The error says it can't locate the `config.json` associated to the model, so double check you have that file in the folder you are loading from.<|||||>Thank you for the pointer. I was able to solve it with a full path input, so I think I was not specifying the relative path correctly. |
transformers | 12,956 | closed | Wav2Vec2 WER remains 1.00 and return blank transcriptions. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: transformers==4.4.0
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten @patil-suraj
## Information
Wav2Vec2 WER remains 1.00 no matter which dataset we use and also can see the same behaviour across multiple datasets.
Returns blank transcriptions when making predictions.
## To reproduce
Steps to reproduce the behavior:
1. Run the following colab notebook : https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 07-30-2021 17:37:09 | 07-30-2021 17:37:09 | 
<|||||>I see you are using transformers==4.4.0
there seems to be some updates to [wav2vec2 model ](https://github.com/huggingface/transformers/tree/master/src/transformers/models/wav2vec2) after that so maybe try with the latest release or pull from master<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>In case someone stumbles upon this issue while using a version between 4.9.0 and 4.10.dev: https://github.com/huggingface/transformers/pull/13512 |
transformers | 12,955 | closed | Add splinter | # What does this PR do?
[Splinter](https://arxiv.org/abs/2101.00438) implementation
@patil-suraj @LysandreJik @patrickvonplaten | 07-30-2021 16:03:58 | 07-30-2021 16:03:58 | Thanks a lot for the PR! Think we can merge this soon :-) Some points that I think will be important to adapt before merging are:
- Simplify the logic of `splinter_qass` vs `new_splinter_qass`. IMO there should only be one `splinter_class` class attribute and if this has to be reinitialized or set to 0 we could instead add a `reinit` function. I don't really understand why we need to identical `splinter_qass` and `new_spliter_qass` modules
- Make sure we don't have hardcoded id's such as 102 in the model
- Add a QA integration test to make sure the model works as expected<|||||>@patil-suraj @patrickvonplaten @sgugger
Any idea why this exception is raised when calling `make quality`?
```
Traceback (most recent call last):
File "/mnt/c/Program Files/JetBrains/PyCharm 2019.3.2/plugins/python/helpers/pydev/pydevd.py", line 1434, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/mnt/c/Program Files/JetBrains/PyCharm 2019.3.2/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/mnt/c/Users/ori/Desktop/Ori/Research/transformers_splinter/utils/check_copies.py", line 353, in <module>
check_copies(args.fix_and_overwrite)
File "/mnt/c/Users/ori/Desktop/Ori/Research/transformers_splinter/utils/check_copies.py", line 186, in check_copies
new_diffs = is_copy_consistent(filename, overwrite)
File "/mnt/c/Users/ori/Desktop/Ori/Research/transformers_splinter/utils/check_copies.py", line 164, in is_copy_consistent
theoretical_code = blackify(lines[start_index - 1] + theoretical_code)
File "/mnt/c/Users/ori/Desktop/Ori/Research/transformers_splinter/utils/check_copies.py", line 104, in blackify
result = black.format_str(code, mode=black.FileMode([black.TargetVersion.PY35], line_length=119))
File "/home/oriram/venv/transformers_splinter/lib/python3.7/site-packages/black/__init__.py", line 1063, in format_str
src_node = lib2to3_parse(src_contents.lstrip(), mode.target_versions)
File "/home/oriram/venv/transformers_splinter/lib/python3.7/site-packages/black/__init__.py", line 1171, in lib2to3_parse
raise exc from None
black.InvalidInput: Cannot parse: 15:4: def __init__(self, config, add_pooling_layer=True):
```
Didn't happen before. Tried to debug it, but couldn't understand the cause.
Also, it seems like it's not in any file related to Splinter, as I don't have the argument `add_pooling_layer`.<|||||>@sgugger
One idea I had in mind regarding the `self.splinter_qass` and `self.new_splinter_qass` was just to create two more checkpoints (so overall there will be 4 rather than 2):
```
tau/splinter-base-with-qass
tau/splinter-base
tau/splinter-large-with-qass
tau/splinter-large
```
In the ones without qass, I'll drop the weights from the state_dict at `pytorch_model.bin`.
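Concretely, I imagine producing the qass-less checkpoints with something like this (just a sketch):
```python
# Hypothetical: load the full state_dict and drop the QASS head weights
# before saving the checkpoint variant without QASS.
import torch

state_dict = torch.load("pytorch_model.bin", map_location="cpu")
filtered = {k: v for k, v in state_dict.items() if not k.startswith("splinter_qass")}
torch.save(filtered, "pytorch_model_no_qass.bin")
```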
This will also keep the message `All the weights of SplinterForQuestionAnswering were initialized from the model checkpoint at X` correct.
Does that sound OK?<|||||>I imagine there will be an associated config parameter to determine which layer use?
It looks like a good idea.<|||||>As a result of #13023 , you will need to rebase your PR on master and solve the merge conflicts (basically, you will just need to re-add the models/tokenizers and config in the auto-mappings as strings). Let us know if you need any help with that.<|||||>Hi @sgugger @patil-suraj @patrickvonplaten,
Sorry to bother you, but I couldn't find the reason as to why the `run_tests_tf` (and the two others) fail.
CircleCI doesn't provide much info..
Other than that and the rebase, I think everything is set to do the PR, took care of:
- Removing abstractions from Bert classes
- Separating the models into `splinter-base` and `splinter-base-qass` (and same for `large`), as well as removing all references to `config.initialize_new_qass` from the code
- Added an integration test
- Removed all unnecessary classes (`SplinterForMaskedLM` etc.)
- Fixed `Copy from` issues
etc.
Thanks!! <|||||>Thanks @sgugger for your quick response!
I think the tests are now failing due to the rebase issue, as the errors don't seem related to Splinter.
Any chance you can help with the rebase? Don't have any experience with that..
Also, I noticed that you made some changes to the structure of the `auto_..` classes..
Would really appreciate it :)<|||||>Hello @oriram, I just took care of the merge and the auto-classes<|||||>Many thanks @LysandreJik @sgugger!!
Are we ready to merge then?<|||||>Hi @LysandreJik :)
- Changed copyrights in Splinter's 4 files
- Regarding your comments on `SplinterTokenizer` - The difference stems from dealing with the special `[QUESTION]` token which is used for building question representations
Many thanks!<|||||>Great @LysandreJik!
Let's merge? :)<|||||>@patil-suraj @sgugger @LysandreJik @patrickvonplaten
Just wanted to say many thanks again for all your effort in this PR!!<|||||>@oriram - thanks a mille for your great PR! Let's try to promote Splinter so that people see its power for QA :-) |
transformers | 12,954 | closed | Fix typo in example of DPRReader | # What does this PR do?
Fix typo in example of DPRReader
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
| 07-30-2021 15:05:04 | 07-30-2021 15:05:04 | |
transformers | 12,953 | closed | Fix division by zero in NotebookProgressPar | # What does this PR do?
This PR fixes the bug reported in #12950. More precisely, the following snippet of code was failing with a division by zero error:
```py
from transformers.utils.notebook import NotebookProgressBar
pbar = NotebookProgressBar(total=1)
pbar.update(1)
pbar.update(1, force_update=True)
```
This PR fixes that by being a bit more defensive before dividing by a potential zero.
Fixes #12950 | 07-30-2021 13:18:50 | 07-30-2021 13:18:50 | |
transformers | 12,952 | closed | Add multilingual documentation support | This PR adds multilingual documentation support for incoming Chinese documentations. | 07-30-2021 11:46:36 | 07-30-2021 11:46:36 | You will need to run `make style` on your branch to fix the quality check. I'm looking at the result [here](https://247866-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html) but don't see anything changed. Is it normal?<|||||>@sgugger Yes, it's normal. This is just a PR to support multilingual docs but they aren't there yet. <|||||>Mmm, looks like you may have some wrong version on your side? A `pip install -e .[quality]` should fix this (but you will probably need to revert the changes in the pipeline tests as black doesn't undo the new lines it adds). |
transformers | 12,951 | closed | Add substep end callback method | As discussed in #12920 with @sgugger, a callback method after a gradient accumulation step is needed for some training techniques such as differentially private training with Opacus (see [Opacus - Docs: virtual_step](https://opacus.ai/api/privacy_engine.html?highlight=virtual_step#opacus.privacy_engine.PrivacyEngine.virtual_step)).
This PR extends `TrainerCallback` and `CallbackHandler` with a method `on_substep_end` which ought to be called during gradient accumulation after a training step is taken (i.e. loss and gradients computed) but no model parameters are updated.
| 07-30-2021 11:14:56 | 07-30-2021 11:14:56 | Thanks for your help. Yep, happy to add the callback as well. I'll tag you on a PR when I have something ready. |
transformers | 12,950 | closed | ZeroDivisionError in NotebookProgressBar.update with small dataset | I don't know the specifics, but during training (details below) NotebookProgressBar's update function got called with `force_update` is true, while no progress had been made (i.e. `value == self.start_value`). This leads directly to a ZeroDivisionError on line 151 in src/transformers/util/notebook.py.
## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.0-1051-azure-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger based on Git blame.
## Information
Model I am using (Bert, XLNet ...): Roberta-Base
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Managed to reproduce based on the token classification notebook ([here](https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb)).
Steps to reproduce the behavior from this notebook:
After loading the dataset, apply the following code to make small:
```python
n = 14
datasets["train"] = datasets["train"].filter(lambda x: int(x["id"]) < n)
datasets["validation"] = datasets["validation"].filter(lambda x: int(x["id"]) < n)
datasets["test"] = datasets["test"].filter(lambda x: int(x["id"]) < n)
```
Replace the TrainingArguments in "Fine-tuning the model" with:
```python
model_name = model_checkpoint.split("/")[-1]
args = TrainingArguments(
output_dir='./deletepls',
num_train_epochs=1,
per_device_train_batch_size=8,
per_device_eval_batch_size=4,
warmup_steps=500,
weight_decay=0.01,
learning_rate=5e-4,
logging_dir='./logs',
logging_steps=7,
)
```
After executing `trainer.train()` I get the ZeroDivisionError.
## Expected behavior
Some more descriptive error related to my logging steps, dataset size, or batch size. I'm still not sure what exactly causes this error.
| 07-30-2021 09:19:52 | 07-30-2021 09:19:52 | Thanks for reporting! I could reproduce and extracted a shorter reproducer (see the PR above). Fix is on its way :-)<|||||>Excellent! |
transformers | 12,949 | closed | [end2end rag] Slow speed when extending the external KB | Hi folks,
@shamanez Sorry to disturb you with some questions. They are mainly about the re-encode and re-index process, related to kb_encode_utils.py and finetune_rag.py.
First, when I extended your provided SQUAD-KB.csv to a file about five times larger, the re-indexing process became very slow, sometimes needing half an hour or even an hour to finish. I tried using faiss-gpu to speed up re-indexing, but it did not work well.
If you are interested at the above BUG, you can try using the small split of dpr_wiki to test: https://storage.googleapis.com/huggingface-nlp/datasets/wiki_dpr/psgs_w100.tsv.pkl
Second, I found that when I set index_gpus to more than 2, the prepare time for re-encoding became longer even to 5 or 10 minutes . I guess this cost mainly due to the I/O of load_dataset or spilt in def embed_update(ctx_encoder, total_processes, device, process_num, shard_dir, csv_path).
Overall, these problems only occur when extending the external knowledge corpus. It works well when using your provided small squad-kb.csv.
Feel free to give any suggestion on what you are intereted at, in the way as you like.
| 07-30-2021 09:13:39 | 07-30-2021 09:13:39 | Hi, Thanks a lot for trying out our model and pointing out these valuable facts. I checked your problem and didn't see any problem with GPU selection, but yeah the time can increase, dramatically if we use a large dataset with the current index. So please go through my answer below.
Actually, it is not a bug. If you only check the time taken to re-encode, you can always reduce it by using a lot more GPUs, because encoding process is embarrassingly parallel. In the encoding process, the model creates dataset splits using the HF dataset library and saves them into a disk. Then, those splits will only be merged when we need to start the re-indexing process. So this increase of time is not because of having more GPUs, but because of the re-indexing process. In re-indexing, we use FAISS and HNSW index. I had some long chats with FAISS people and they say usually HNSW index time is slow, and changes according to the number of vectors and status of the vectors (since we are changing the embeddings). This bug can be solved by using another index like IVF, which is very fast.
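If you want to experiment, a rough sketch of an IVF index (assuming faiss is installed; the sizes and nlist value are illustrative) looks like this:
```python
# IVF-Flat is typically much faster to (re)build than HNSW, at some cost in recall.
import faiss
import numpy as np

d = 768                                                    # embedding dimension
embeddings = np.random.rand(100_000, d).astype("float32")  # stand-in for passage vectors

quantizer = faiss.IndexFlatIP(d)
index = faiss.IndexIVFFlat(quantizer, d, 1024, faiss.METRIC_INNER_PRODUCT)
index.train(embeddings)   # IVF needs a training pass, unlike HNSW
index.add(embeddings)
```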
<|||||>Yes, it's just related to larger dataset. Once the process of re-encoding started, it goes fast. But your mentioned process of creating dataset splits, saving them into a disk and merging them from the disk indeed cost much time.
As for the re-index process, I agree that your said IVF and other hyperparameters of Faiss may helps. If you find what setting really works better, please let me know.<|||||>@Dopaminezsy
sure. I think you might need to do hyperparameters tuning. Anyways I trained a model where that the external KB consisted of 7.5 million passages. Although KB update time has increased it worked fine. Another thing is if you have access to enough computational power you can easily make the entire process much more efficient. When it comes to the indexing process, you can try completely neglecting it and using a greedy search during the training. I have noticed this method in REALM paper.
On Mon, Aug 2, 2021 at 8:52 PM Dopaminezsy ***@***.***> wrote:
> Yes, it's just related to larger dataset. Once the process of re-encoding
> started, it goes fast. But your mentioned process of creating dataset
> splits, saving them into a disk and merging them from the disk indeed cost
> much time.
>
> As for the re-index process, I agree that your said IVF and other
> hyperparameters of Faiss may helps. If you find what setting really works
> better, please let me know.
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/12949#issuecomment-890849702>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGRYJLGPI2VLXEYDL5TT2ZMFLANCNFSM5BIEELFA>
> .
>
--
[image: Augmented Human Lab] <http://www.ahlab.org/> [image: uni]
<https://www.auckland.ac.nz/en/abi.html>
Gayal Shamane
Ph.D. Candidate
Augmented Human Lab
Auckland Bioengineering Institute | The University of Auckland
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,948 | closed | BertForQuestionAnswering results do not match across multiple runs with the same input | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0.dev0
- Platform: Linux-4.15.0-126-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.7.0 (false)
- Tensorflow version (GPU?): 2.5.0 (false)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@Rocketknight1
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): BERT(TFBertForQuestionAnswering, BertForQuestionAnswering)
The problem arises when using:
--> my own modified script
here is my test script
```python
import numpy as np
import os
import tensorflow as tf
from transformers import BertTokenizer, TFBertForQuestionAnswering, AdamWeightDecay
model_name = "bert-base-uncased"
tokenizer = BertTokenizer.from_pretrained(model_name)
tf_model = TFBertForQuestionAnswering.from_pretrained(model_name)
question, text = "who was Jim Henson?", "Jim Henson was a puppet"
input_dict = tokenizer(question, text, return_tensors="tf")
base_output = tf_model({'input_ids':input_dict['input_ids'],
'attention_mask':input_dict['attention_mask'],
'token_type_ids':input_dict['token_type_ids']})
import tensorflow.keras.backend as k
tf.print(base_output.start_logits)
tf.print(base_output.end_logits)
start_logits = base_output.start_logits
end_logits = base_output.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
answer = ' '.join(all_tokens[tf.math.argmax(start_logits, 1)[0] : tf.math.argmax(end_logits, 1)[0]+1])
print("---------------------answer : ", answer)
# output
1 iteration : ---------------------answer : henson was
2 iteration : ---------------------answer :
3 iteration : ---------------------answer : [CLS] who was jim henson
```
same warning message below
```bash
All model checkpoint layers were used when initializing TFBertForQuestionAnswering.
Some layers of TFBertForQuestionAnswering were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['qa_outputs']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
The tasks I am working on is:
--> I used official example in this link : https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering
## To reproduce
Steps to reproduce the behavior:
1. copy this example https://huggingface.co/transformers/model_doc/bert.html#bertforquestionanswering
2. run python script multiple times with the same input value
3. check the result answer.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
here is start_logits and end_logits value
```python
[[0.146546721 0.337863982 -0.050462883 ... 0.21409747 0.230913743 0.0886169]]
[[-0.432590961 0.0157010294 -0.264513016 ... -0.262505233 -0.097313717 0.101949602]]
---------------------answer : who was him
[[0.239599198 -0.0761167854 -0.150168374 ... -0.329441965 -0.296196282 -0.43989116]]
[[-0.395110816 -0.316928446 -0.0174004361 ... -0.15449807 -0.0412646905 -0.340780914]]
---------------------answer : [CLS] who was jim henson
[[0.49121806 -0.028806597 0.371522099 ... 0.544696152 0.163530082 0.184236392]]
[[0.203870535 0.0572335199 -0.129730135 ... 0.0982186 0.130047619 0.0592225939]]
---------------------answer :
[[0.284656644 -0.252363682 -0.441064388 ... 0.0992026776 0.198949382 -0.0191452727]]
[[-0.0616797283 -0.0639260635 0.413451135 ... 0.396001071 0.16053389 0.245075911]]
---------------------answer : henson was
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
same result for start_logits and end_logits values when multiple runs the python script
and, I thought it was normal to see the following results in the correct answer.
---------------tvm answer : jim henson ? [SEP] jim henson was a puppet
[[-0.14654148 -0.20532154 -0.293788 -0.22902387 -0.0299019 -0.09931126
-0.02225712 -0.28276378 0.02211829 -0.19016735 -0.25408638 0.09656907
0.00328144]]
[[-0.63135976 0.25255007 0.4773104 0.62560356 0.6185883 0.07990392
-0.2211009 0.2174719 0.2831107 0.18743467 -0.03354458 0.08337761
-0.20905018]]
```
## Background
- I was doing a test to run TFBertForQuestionAnswering and BertForQuestionAnswering on TVM. But, TF and Pytorch model's output does not match when inputting the same input. What did I miss? Is there any other way to perform or check? | 07-30-2021 09:12:32 | 07-30-2021 09:12:32 | You are using the generic BERT checkpoint `bert-base-cased` for a question-answering task, which is why you get the warning telling you that some of the weights are randomly initialized (the weights of the question answering head). Since there is that part that is randomly initialized, you won't get the same results with two consecutive runs, or with PT vs TF.
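For example, loading a checkpoint that already has a trained QA head gives stable, repeatable outputs (a sketch; the model name is just one option):
```python
# The QA head here is fine-tuned, not randomly initialized, so repeated runs
# and the PyTorch/TensorFlow variants should agree.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-uncased-distilled-squad")
print(qa(question="Who was Jim Henson?", context="Jim Henson was a puppet"))
```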
You should use a checkpoint fine-tuned for question-answering, such as distilbert-base-uncased-distilled-squad. Complete list of available checkpoints is [here](https://huggingface.co/models?pipeline_tag=question-answering)<|||||>ok :) i got it. thank you for your explanation! |
transformers | 12,947 | closed | [FLAX] Minor fixes in LM example | Hi,
this PR introduces some fixes for getting the correct vocab size from the Tokenizers used in the FLAX example language modeling readme. | 07-30-2021 09:05:25 | 07-30-2021 09:05:25 | |
transformers | 12,946 | closed | ImportError: cannot import name 'BigBirdTokenizer' from 'transformers' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.1
- Platform: windows
- Python version: 3.9
- PyTorch version (GPU?): 1.9 (CPU)
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?:
## Information
Model I am using BigBird:
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import BigBirdTokenizer,BigBirdModel
print("hello")
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
No import error.
Importing **BigBirdTokenizerFast** works without a problem. | 07-30-2021 07:53:52 | 07-30-2021 07:53:52 | The sentencepiece library was missing. <|||||>`BigBirdTokenizer` requires a sentencepiece installation, but you should have had that error instead of an import error. This is because the `BigBirdTokenizer` was misplaced in the init, the PR linked above fixes it.<|||||>I sadly only got the import error, nothing else. An error indicating that sentencepiece is missing is definitely more helpful. Thanks for creating the PR<|||||>I installed sentencepiece but I got the same error:
```
!pip install --quiet sentencepiece
from transformers import BigBirdTokenizer
```
ImportError: cannot import name 'BigBirdTokenizer' from 'transformers' (/usr/local/lib/python3.7/dist-packages/transformers/__init__.py)<|||||>@MariamDundua what is the version of your transformers package?<|||||>Hi @zynos @sgugger . I'm using transformers 4.8.0 and have installed sentencepiece. But I'm having same cannot import name 'BigBirdTokenizer' issue. Thanks. <|||||>Make sure you use the latest version of Transformers. It should include a clearer error message if the import fails. |
transformers | 12,945 | closed | Transformers tokenizer pickling issue using hydra and submitit_slurm | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Linux-5.4.0-1051-aws-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
-
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): t5
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Run the script using command:
python hf_hydra.py hydra/launcher=submitit_slurm -m
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Code (hf_hydra.py):
import hydra
import logging
# from transformers import AutoTokenizer
import transformers
@hydra.main(config_path=None)
def main(cfg):
logger = logging.getLogger(__name__)
# tokenizer = AutoTokenizer.from_pretrained("t5-small")
tokenizer = transformers.T5Tokenizer.from_pretrained("t5-small")
logger.info(f"vocab size: {tokenizer.vocab_size}")
if __name__ == '__main__':
main()
Using AutoTokenizer works but using T5Tokenizer fails with the following error.
Traceback (most recent call last):
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
return func()
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/hydra/_internal/utils.py", line 376, in <lambda>
lambda: hydra.multirun(
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 139, in multirun
ret = sweeper.sweep(arguments=task_overrides)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/hydra/_internal/core_plugins/basic_sweeper.py", line 157, in sweep
results = self.launcher.launch(batch, initial_job_idx=initial_job_idx)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/hydra_plugins/hydra_submitit_launcher/submitit_launcher.py", line 145, in launch
jobs = executor.map_array(self, *zip(*job_params))
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/submitit/core/core.py", line 631, in map_array
return self._internal_process_submissions(submissions)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/submitit/auto/auto.py", line 213, in _internal_process_submissions
return self._executor._internal_process_submissions(delayed_submissions)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/submitit/slurm/slurm.py", line 313, in _internal_process_submissions
return super()._internal_process_submissions(delayed_submissions)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/submitit/core/core.py", line 749, in _internal_process_submissions
delayed.dump(pickle_path)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/submitit/core/utils.py", line 136, in dump
cloudpickle_dump(self, filepath)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/submitit/core/utils.py", line 240, in cloudpickle_dump
cloudpickle.dump(obj, ofile, pickle.HIGHEST_PROTOCOL)
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 55, in dump
CloudPickler(
File "/data/home/aghoshal/miniconda/lib/python3.8/site-packages/cloudpickle/cloudpickle_fast.py", line 563, in dump
return Pickler.dump(self, obj)
TypeError: cannot pickle '_LazyModule' object
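A possible workaround (a sketch, not verified) is to import the tokenizer class directly so the pickled job function does not capture the lazy `transformers` module object:
```python
import logging
import hydra
from transformers import T5Tokenizer  # a class reference pickles by name

@hydra.main(config_path=None)
def main(cfg):
    tokenizer = T5Tokenizer.from_pretrained("t5-small")
    logging.getLogger(__name__).info(f"vocab size: {tokenizer.vocab_size}")
```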
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Job should run and print the vocab size. | 07-29-2021 18:23:21 | 07-29-2021 18:23:21 | This has been solved in v4.9, you should upgrade to the latest version of Transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,944 | closed | run_mlm crashes with bookcorpus and --preprocessing_num_workers | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.0
- Platform: Linux-5.8.0-63-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@LysandreJik
Models:
- albert, bert, xlm: @LysandreJik @sgugger @patil-suraj
Library:
- trainer: @sgugger
- pipelines: @LysandreJik
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Trying to train BERT from scratch on wikipedia and bookcorpus using the run_mlm.py example.
As the dataset is large and I am using a strong machine (80 CPU cores 350GB RAM) I set the --preprocessing_num_workers flag to 64 to accelerate the preprocessing.
When running a wikipedia or squad for sanity check, everything works fine but with bookcorpus, after dataset mapping is supposedly completed (all three occurrences), it gets stuck on with the info:
`Spawning 64 processes `
for a while and crashes with
`BrokenPipeError: [Errno 32] Broken pipe`
This does not occur when dropping the --preprocessing_num_workers flag but then processing wiki + bookcorpus will take nearly two days.
I tried changing the transformer version or upgrading/downgrading the multiprocessing and dill packages and it didn't help
The problem arises when using:
* [ x] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
run:
`python transformers/examples/pytorch/language-modeling/run_mlm.py --output_dir transformers/trained_models/bert_base --dataset_name bookcorpus --model_type bert --preprocessing_num_workers 64 --tokenizer_name bert-base-uncased --do_train --do_eval --per_device_train_batch_size 16 --overwrite_output_dir --dataloader_num_workers 64 --max_steps 1000000 --learning_rate 1e-4 --warmup_steps 10000 --save_steps 25000 --adam_epsilon 1e-6 --adam_beta1 0.9 --adam_beta2 0.999 --weight_decay 0.0'
## Expected behavior
Training should begin as done properly when loading wiki and other datasets
Thanks is advance, | 07-29-2021 17:42:45 | 07-29-2021 17:42:45 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,943 | closed | Moving fill-mask pipeline to new testing scheme | # What does this PR do?
Changes the testing of fill-mask so we can test all supported architectures.
Turns out quite a bit are NOT testable (because reference tokenizers do not include
mask token, reformer is a bit tricky to handle too).
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 07-29-2021 17:39:47 | 07-29-2021 17:39:47 | @LysandreJik I think it' s ready for 2nd review to check that everything you raised is fixed. I'll go on to the next pipeline after that. |
transformers | 12,942 | closed | trainer is not reproducible | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.1
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
trainer: @sgugger
## Information
Model I am using T5-small model and I am testing the original run_translation.py codes [1] for reproducibility when we need to restart the codes from the previously saved checkpoints (I only have access to gpus for a short time and I need to restart the codes).
## To reproduce
Steps to reproduce the behavior:
1) Please kindly run this command:
```
python run_translation.py --model_name_or_path t5-small --do_train --do_eval --source_lang en --target_lang ro --source_prefix "translate English to Romanian: " --dataset_name wmt16 --dataset_config_name ro-en --output_dir /temp/jack/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate --max_steps 100 --eval_step 10 --evaluation_strategy steps --max_train_samples 100 --max_eval_samples 100 --save_total_limit 1 --load_best_model_at_end --metric_for_best_model bleu --greater_is_better true
```
then kindly break the codes in this points:
```
{'eval_loss': 1.3589547872543335, 'eval_bleu': 10.9552, 'eval_gen_len': 18.05, 'eval_runtime': 4.0518, 'eval_samples_per_second': 24.68, 'eval_steps_per_second': 6.17, 'epoch': 0.8}
20%|██████████████████████████████▍ | 20/100 [00:11<00:21, 3.70it/s[INFO|trainer.py:1919] 2021-07-29 17:22:43,852 >> Saving model checkpoint to /temp/jack/tst-translation/checkpoint-20
[INFO|configuration_utils.py:379] 2021-07-29 17:22:43,857 >> Configuration saved in /temp/jack/tst-translation/checkpoint-20/config.json
[INFO|modeling_utils.py:997] 2021-07-29 17:22:44,351 >> Model weights saved in /temp/jack/tst-translation/checkpoint-20/pytorch_model.bin
[INFO|tokenization_utils_base.py:2006] 2021-07-29 17:22:44,355 >> tokenizer config file saved in /temp/jack/tst-translation/checkpoint-20/tokenizer_config.json
[INFO|tokenization_utils_base.py:2012] 2021-07-29 17:22:44,357 >> Special tokens file saved in /temp/jack/tst-translation/checkpoint-20/special_tokens_map.json
29%|████████████████████████████████████████████ | 29/100 [00:14<00:22, 3.20it/s][INFO|trainer.py:2165] 2021-07-29 17:22:46,444 >> ***** Running Evaluation *****
[INFO|trainer.py:2167] 2021-07-29 17:22:46,444 >> Num examples = 100
[INFO|trainer.py:2170] 2021-07-29 17:22:46,444 >> Batch size = 4
```
break here please
```
{'eval_loss': 1.3670727014541626, 'eval_bleu': 10.9234, 'eval_gen_len': 18.01, 'eval_runtime': 3.9468, 'eval_samples_per_second': 25.337, 'eval_steps_per_second': 6.334, 'epoch': 2.4}
[INFO|trainer.py:1919] 2021-07-29 17:24:01,570 >> Saving model checkpoint to /temp/jack/tst-translation/checkpoint-60
[INFO|configuration_utils.py:379] 2021-07-29 17:24:01,576 >> Configuration saved in /temp/jack/tst-translation/checkpoint-60/config.json | 60/100 [00:23<00:11, 3.42it/s]
[INFO|modeling_utils.py:997] 2021-07-29 17:24:02,197 >> Model weights saved in /temp/jack/tst-translation/checkpoint-60/pytorch_model.bin
[INFO|tokenization_utils_base.py:2006] 2021-07-29 17:24:02,212 >> tokenizer config file saved in /temp/jack/tst-translation/checkpoint-60/tokenizer_config.json
[INFO|tokenization_utils_base.py:2012] 2021-07-29 17:24:02,218 >> Special tokens file saved in /temp/jack/tst-translation/checkpoint-60/special_tokens_map.json
[INFO|trainer.py:1995] 2021-07-29 17:24:03,216 >> Deleting older checkpoint [/temp/jack/tst-translation/checkpoint-50] due to args.save_total_limit
[INFO|trainer.py:2165] 2021-07-29 17:24:03,810 >> ***** Running Evaluation *****██████████████████████████████▉ | 69/100 [00:26<00:09, 3.37it/s]
[INFO|trainer.py:2167] 2021-07-29 17:24:03,810 >> Num examples = 100
[INFO|trainer.py:2170] 2021-07-29 17:24:03,810 >> Batch size = 4
```
break here please and then run the codes please from here till the end.
```
final train metrics
***** train metrics *****
epoch = 4.0
train_loss = 0.1368
train_runtime = 0:00:27.13
train_samples = 100
train_samples_per_second = 14.741
train_steps_per_second = 3.685
07/29/2021 17:25:08 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2165] 2021-07-29 17:25:08,774 >> ***** Running Evaluation *****
[INFO|trainer.py:2167] 2021-07-29 17:25:08,774 >> Num examples = 100
[INFO|trainer.py:2170] 2021-07-29 17:25:08,774 >> Batch size = 4
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:08<00:00, 2.92it/s]
***** eval metrics *****
epoch = 4.0
eval_bleu = 24.3863
eval_gen_len = 32.84
eval_loss = 1.3565
eval_runtime = 0:00:09.08
eval_samples = 100
eval_samples_per_second = 11.005
eval_steps_per_second = 2.751
```
the final metrics when running the codes without breaks:
```
***** train metrics *****
epoch = 4.0
train_loss = 0.3274
train_runtime = 0:01:04.19
train_samples = 100
train_samples_per_second = 6.231
train_steps_per_second = 1.558
07/29/2021 17:00:12 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2165] 2021-07-29 17:00:12,315 >> ***** Running Evaluation *****
[INFO|trainer.py:2167] 2021-07-29 17:00:12,315 >> Num examples = 100
[INFO|trainer.py:2170] 2021-07-29 17:00:12,315 >> Batch size = 4
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 25/25 [00:08<00:00, 2.97it/s]
***** eval metrics *****
epoch = 4.0
eval_bleu = 24.3863
eval_gen_len = 32.84
eval_loss = 1.3565
eval_runtime = 0:00:08.95
eval_samples = 100
eval_samples_per_second = 11.164
eval_steps_per_second = 2.791
```
the training loss between the two runs with and without break would be different.
I kindly appreciate having a look, this is required for me to be able to use the great huggingface codes. and I would like to appreciate a lot your great work and colleague on this second to none, great work you are doing. thanks a lot.
## Expected behavior
to see the same training loss when the user trains the codes without any break and when we train the codes with breaking in between. | 07-29-2021 15:46:39 | 07-29-2021 15:46:39 | The average training loss is indeed not saved and thus you will have a different one restarting from a checkpoint. It's also not a useful metric in most cases, which is why we don't bother. You will notice however that your eval BLEU is exactly the same, so the training yielded the same model at the end.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,941 | closed | OSError: Can't load config for 'bert-base-uncased' | ## Environment info
It happens on my local machine, on Colab, and for my colleagues as well.
- `transformers` version:
- Platform: Window, Colab
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1 (GPU yes)
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik It is to do with 'bert-base-uncased'
## Information
Hi, I suddenly started getting this error this afternoon. Everything had been fine for days. It happens on my local machine, on Colab, and for my colleagues as well. I can access this file in the browser at https://huggingface.co/bert-base-uncased/resolve/main/config.json with no problem. By the way, I am in Singapore. Any urgent help would be appreciated because I am rushing a project and am stuck on this.
Thanks

403 Client Error: Forbidden for url: https://huggingface.co/bert-base-uncased/resolve/main/config.json
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
505 use_auth_token=use_auth_token,
--> 506 user_agent=user_agent,
507 )
6 frames
HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/bert-base-uncased/resolve/main/config.json
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
516 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n"
517 )
--> 518 raise EnvironmentError(msg)
519
520 except json.JSONDecodeError:
OSError: Can't load config for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a config.json file | 07-29-2021 15:27:27 | 07-29-2021 15:27:27 | Was it just a fluke or is the issue still happening? On Colab I have no problem downloading that model.<|||||>@sgugger Hi it is still happening now. Not just me, many people I know of. I can access the config file from browser, but not through the code. Thanks<|||||>Still not okay online, but I managed to do it locally
git clone https://huggingface.co/bert-base-uncased
#model = AutoModelWithHeads.from_pretrained("bert-base-uncased")
model = AutoModelWithHeads.from_pretrained(BERT_LOCAL_PATH, local_files_only=True)
#tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained(BERT_LOCAL_PATH, local_files_only=True)
adapter_name = model2.load_adapter(localpath, config=config, model_name=BERT_LOCAL_PATH)<|||||>This, like #12940, is probably related to a change we've made on the infra side (cc @n1t0), which we'll partially revert. Please let us know if this still occurs.<|||||>@WinMinTun Could you share a small collab that reproduces the bug? I'd like to have a look at it.<|||||>With additional testing, I've found that this issue only occurs with adapter-tranformers, the AdapterHub.ml modified version of the transformers module. With the HuggingFace module, we can pull pretrained weights without issue.
Using adapter-transformers this is now working again from Google Colab, but is still failing locally and from servers running in AWS. Interestingly, with adapter-transformers I get a 403 even if I try to load a nonexistent model (e.g. fake-model-that-should-fail). I would expect this to fail with a 401, as there is no corresponding config.json on huggingface.co. The fact that it fails with a 403 seems to indicate that something in front of the web host is rejecting the request before the web host has a change to respond with a not found error.<|||||>Thanks so much @jason-weddington. This will help us pinpoint the issue. (@n1t0 @Pierrci)<|||||>I have the same problem, but it only happens when the model is private.

<|||||>Your token for `use_auth_token` is not the same as your API token. The easiest way to get it is to login with `!huggingface-cli login` and then just pass `use_auth_token=True`.<|||||>I think the problem is something else:

<|||||>Yes, I have come across this as well. I have tracked it down to this line
https://github.com/huggingface/transformers/blob/143738214cb83e471f3a43652617c8881370342c/src/transformers/pipelines/__init__.py#L422
It's because the `use_auth_token` has not been set up early enough in the model_kwargs. The line referenced above needs to be moved above instantiate config section.
<|||||>I've added a pull request to which I think will fix this issue. You can get round it for now by adding `use_auth_token` to the model_kwargs param when creating a pipeline e.g.:
`pipeline('zero-shot-classification', model=model, tokenizer=tokenizer, model_kwargs={'use_auth_token': True})`<|||||>Still getting the same error
Here is my code :
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("hemangjoshi37a/autotrain-ratnakar_600_sample_test-1427753567", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("hemangjoshi37a/autotrain-ratnakar_600_sample_test-1427753567", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
```
Error :
```
----> 3 model = AutoModelForTokenClassification.from_pretrained("hemangjoshi37a/autotrain-ratnakar_600_sample_test-1427753567", use_auth_token=True)
OSError: Can't load config for 'hemangjoshi37a/autotrain-ratnakar_600_sample_test-1427753567'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'hemangjoshi37a/autotrain-ratnakar_600_sample_test-1427753567' is the correct path to a directory containing a config.json file
```
I have transformers version : `4.21.3`
https://hjlabs.in<|||||>runnign this command and authenticating it solved issue: `huggingface-cli login`
https://hjlabs.in<|||||>I am facing the same problem in Kaggle too... How can I

resolve this issue ?<|||||>Hello, I had the same problem when using transformers - pipeline in the aws-sagemaker notebook.
I started to think it was the version or the network problem. But, after some local tests, this guess is wrong. So, I just debug the source code. I find that:

This will raise any error as EnviromentError. So, from experience, I solve it, by running this pip:
!pip install --upgrade jupyter
!pip install --upgrade ipywidgets
You guys can try it when meeting the problem in aws-notebook or colab!<|||||>
I am unable to solve this issues Since Morning .. i had been trying to Solve it ...
Im working on my Final Year Project .. can someone pls help me in it ...<|||||>Just ask chatGPT LOL...😂😂<|||||>I dont understand it ?? What do u mean ..
The Hugging Face Website is also not working ...<|||||>@VRDJ goto this website [chatGPT](chat.openai.com) and enter your error in the chatbox in this website and for the 99% you will get your solution there.<|||||>> Still not okay online, but I managed to do it locally
>
> git clone https://huggingface.co/bert-base-uncased
>
> #model = AutoModelWithHeads.from_pretrained("bert-base-uncased") model = AutoModelWithHeads.from_pretrained(BERT_LOCAL_PATH, local_files_only=True)
>
> #tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") tokenizer = AutoTokenizer.from_pretrained(BERT_LOCAL_PATH, local_files_only=True)
>
> adapter_name = model2.load_adapter(localpath, config=config, model_name=BERT_LOCAL_PATH)
-------------------
Hello! Thanks for your sharing. I wonder in
'tokenizer = AutoTokenizer.from_pretrained(BERT_LOCAL_PATH, local_files_only=True)',
which file does 'BERT_LOCAL_PATH' refer to specifically? Is it the path for the directory 'bert-base-uncased', or the 'pytorch_model.bin', or something else? |
transformers | 12,940 | closed | Starting today, I get an error downloading pre-trained models | ## Environment info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 2.1.1
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...):
roberta-base, but this is currently an issue with all models
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
- downloading pre-trained models is currently failing, this seems have have started just in the last day
## To reproduce
Steps to reproduce the behavior:
1. attempt to load any pre-trained model from HuggingFace (code below)
This code:
`generator = pipeline("text-generation", model="bert-base-uncased")`
Generates this error:
403 Client Error: Forbidden for url: https://huggingface.co/bert-base-uncased/resolve/main/config.json
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
...
OSError: Can't load config for 'bert-base-uncased'. Make sure that:
- 'bert-base-uncased' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'bert-base-uncased' is the correct path to a directory containing a config.json file
## Expected behavior
I expect the pre-trained model to be downloaded. This issue just started today.
| 07-29-2021 15:18:19 | 07-29-2021 15:18:19 | Hi @jason-weddington, are you calling those URLs from any particular workload or infrastructure?
The only reason I can see where you would get a 403 on this URL is if your usage triggers our infra's firewall. Would you mind contacting us at `expert-acceleration at huggingface.co` so we can take a look?<|||||>Thanks, I'll email you. I'm running this in a notebook on my desktop, using my home internet connection, but we're also seeing this in Google Colab. The issue just stated today.<|||||>This is working again, thanks for the help. |
transformers | 12,939 | closed | Fix from_pretrained with corrupted state_dict | # What does this PR do?
As we discovered in #12843, when a state dictionary contains keys for the body of the model that are not prefixed *and* keys for the head, the body is loaded but the head is ignored with no warning.
This PR fixes that by keeping track of the expected key that do not contain the prefix and erroring out if we load only the body of the model and there are some keys to load in that list of expected keys that do not contain the prefix. I chose the error since those kinds of state dictionaries should not exist, since `from_pretrained` or `torch.save(model.state_dict())` do not generate those. | 07-29-2021 15:13:24 | 07-29-2021 15:13:24 | The test caught something weird with `sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english` (another plus for this PR in my opinion!)
This model is used in the benchmark tests and in the zero shot pipeline but that model is beyond salvation: its weights have the names of BERT (in the keys) when it's a DistilBERT architecture, the number of labels of the config don't match the weights, the embedding size of the weights does not match the vocab size of the tokenzier or the embedding size in the config...
Loading it for now just results in a random model (silently) since none of the weights can be loaded.
To fix this, I created a new tiny random model following the same kind of config as `sshleifer/tiny-distilbert-base-uncased-finetuned-sst-2-english` (but not messed up) and stored it in `sgugger/tiny-distilbert-classification`.<|||||>I'll address @patrickvonplaten 's remarks regarding a more general refactor of the method to clean the code later on, merging this PR in the meantime. |
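Purely for illustration (not taken from the PR), this is the kind of mixed state dict the new check rejects: body keys without the model prefix sitting next to head keys.

```python
import torch

# Hypothetical, hand-built state dict: body weights lack the "bert." prefix, yet head weights are present.
state_dict = {
    "embeddings.word_embeddings.weight": torch.zeros(30522, 768),  # body key, no prefix
    "classifier.weight": torch.zeros(2, 768),                      # head key
}
# Neither `from_pretrained` nor `torch.save(model.state_dict())` produces this mix,
# which is why loading it now raises instead of silently dropping the head.
```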
transformers | 12,938 | closed | Add CpmTokenizerFast | # What does this PR do?
Add a fast version of `CpmTokenizer`
Fixes #12837 | 07-29-2021 14:03:59 | 07-29-2021 14:03:59 | > I don't think the fast tokenizer as it's written works for now, as the fast tokenizer do not call the `_tokenize` method.
Oops! It looks the old pull request isn't right. I'll take a closer look<|||||>@sgugger I've updated and tested it. It works fine - only needs to wait for the `tokenizer.json` to be uploaded.<|||||>Tokenizer file uploaded. Merging it. |
transformers | 12,937 | closed | Not able use TF Dataset on TPU when created via generator in Summarization example | ## Environment info
- `transformers` version: 4.9.1
- Platform: Kaggle/Colab
- Python version: 3.7.10
- Tensorflow version (GPU?): 2.4.1 / 2.5.1
### Who can help
@patil-suraj, @Rocketknight1
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (XSum)
* [ ] my own task or dataset: (give details below)
I am trying to replicate the summarization example available [here](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/summarization/run_summarization.py) on the XSum dataset using T5, but am facing an error when trying to use a TPU (it works on GPU).
## To reproduce
[Kaggle link](https://www.kaggle.com/rehanwild/tpu-tf-huggingface-error?scriptVersionId=69298817)
Error in TF 2.4.1:
```
---------------------------------------------------------------------------
UnavailableError Traceback (most recent call last)
<ipython-input-11-8513f78e8e35> in <module>
72 model.fit(tf_tokenized_train_ds,
73 validation_data=tf_tokenized_valid_ds,
---> 74 epochs=1,
75 )
76 #callbacks=[WandbCallback()])
/opt/conda/lib/python3.7/site-packages/wandb/integration/keras/keras.py in new_v2(*args, **kwargs)
122 for cbk in cbks:
123 set_wandb_attrs(cbk, val_data)
--> 124 return old_v2(*args, **kwargs)
125
126 training_arrays.orig_fit_loop = old_arrays
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1100 tmp_logs = self.train_function(iterator)
1101 if data_handler.should_sync:
-> 1102 context.async_wait()
1103 logs = tmp_logs # No error, now safe to assign to logs.
1104 end_step = step + data_handler.step_increment
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/context.py in async_wait()
2328 an error state.
2329 """
-> 2330 context().sync_executors()
2331
2332
/opt/conda/lib/python3.7/site-packages/tensorflow/python/eager/context.py in sync_executors(self)
643 """
644 if self._context_handle:
--> 645 pywrap_tfe.TFE_ContextSyncExecutors(self._context_handle)
646 else:
647 raise ValueError("Context is not initialized.")
UnavailableError: 9 root error(s) found.
(0) Unavailable: {{function_node __inference_train_function_49588}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1627548744.739596558","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":4143,"referenced_errors":[{"created":"@1627548744.739593083","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":398,"grpc_status":14}]}
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[cond_14/switch_pred/_200/_88]]
(1) Unavailable: {{function_node __inference_train_function_49588}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1627548744.739596558","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":4143,"referenced_errors":[{"created":"@1627548744.739593083","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":398,"grpc_status":14}]}
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[strided_slice_18/_288]]
(2) Unavailable: {{function_node __inference_train_function_49588}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1627548744.739596558","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":4143,"referenced_errors":[{"created":"@1627548744.739593083","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":398,"grpc_status":14}]}
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[tpu_compile_succeeded_assert/_1965840270157496994/_8/_335]]
(3) Unavailable: {{function_node __inference_train_function_49588}} failed to connect to all addresses
Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0:
:{"created":"@1627548744.739596558","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":4143,"referenced_errors":[{"created":"@1627548744.739593083","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/ext/filters/client_channel/lb_policy/pick_first/pick_first.cc","file_line":398,"grpc_status":14}]}
[[{{node MultiDeviceIteratorGetNextFromShard}}]]
[[RemoteCall]]
[[IteratorGetNextAsOptional]]
[[Pad_27/paddings/_218]]
(4) Unavailable: ... [truncated]
```
Error in TF 2.5.1:
```
NotFoundError: Op type not registered 'XlaSetDynamicDimensionSize' in binary running on n-f62ff7a1-w-0. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.
```
## Expected behavior
No such error
EDIT:
I found tensorflow/tensorflow#48268; although it has been closed, it does not seem to be completely solved, since I also found tensorflow/tensorflow#50980. I was not able to try TF 2.6.0-rc1 as it is not yet supported by transformers. Since this is an upstream bug, I think there should be a note in [run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/summarization/run_summarization.py) stating its incompatibility with TPU for the time being.
PS: Since, I have not ran the original script, I would like to know whether my above kaggle kernel is missing anything. I was able to run it on GPU. Only got the problem while using TPU. | 07-29-2021 10:56:37 | 07-29-2021 10:56:37 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @Rocketknight1 <|||||>Hi, I'm sorry for the slow response here! It does seem like an upstream bug, but we'll hopefully be supporting TF 2.6 in the next release. I'm also working on a refactor of the examples using a new data pipeline, so I'll test TPU training with this example when that's implemented to make sure it's working then.<|||||>> Hi, I'm sorry for the slow response here! It does seem like an upstream bug, but we'll hopefully be supporting TF 2.6 in the next release. I'm also working on a refactor of the examples using a new data pipeline, so I'll test TPU training with this example when that's implemented to make sure it's working then.
@Rocketknight1 Ohh alright. I will keep this issue open for now since it is not yet solved just incase someone needs it. Eagerly waiting for increased TensorFlow support. :smiley:<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,936 | closed | `PretrainedTokenizer.return_special_tokens` returns incorrect mask | ## Environment info
- `transformers` version: 4.9.1
- Platform: ubuntu
- Python version: 3.8
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
```python
tokenizer = transformers.AutoTokenizer.from_pretrained("bert-base-uncased")
text = "foo 雲 bar"
tokens=tokenizer.tokenize(text)
print("tokens : ", tokens)
inputs = tokenizer(text, return_special_tokens_mask=True)
print("mask : ", inputs["special_tokens_mask"])
print("mask from input ids : ", tokenizer.get_special_tokens_mask(inputs["input_ids"], already_has_special_tokens=True))
```
Output:
```
tokens : ['foo', '[UNK]', 'bar']
mask : [1, 0, 0, 0, 1] # [UNK] is ignored!
mask from input ids : [1, 0, 1, 0, 1]
```
## Expected behavior
`[UNK]` is a special token.
`get_special_tokens_mask` is consistent with `__call__`.
| 07-29-2021 06:06:03 | 07-29-2021 06:06:03 | Indeed, we have an error in the way the special tokens mask is computed here. See here for the slow tokenizer: https://github.com/huggingface/transformers/blob/3f44a66cb617c72efeef0c0b4201cbe2945d8edf/src/transformers/models/bert/tokenization_bert.py#L297-L299
This seems to also be the case for the fast tokenizer. Would you like to propose a fix? Pinging @SaulLu as it might be of interest to her.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@SaulLu do you have time to look at this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>**TL,DR :**
To come back on this issue, I would tend to think that in its current state this method (`get_special_tokens_mask`) and this argument (`return_special_tokens_mask` in `__call__`) is very useful.
Indeed, this behavior is common to all tokenizers (I checked all tokenizers listed in `AutoTokenizer`, I can share a code if you want to have a look) and from my point of view it allows identifying the special tokens that are added by the `add_special_tokens` argument in the `__call__` method (the unknown token is not included in them, see the details section below).
Nevertheless, I imagine that it is not something obvious at all and that we should perhaps see how it could be better explained in the documentation. Furthermore, we can think about creating a new method that would generate a mask that would also include the unknown token if needed.
What do you think about it ?
**Details:**
The unknown special token does indeed differ from other special tokens in that it is essential to the proper functioning of the tokenization algorithm and is therefore not an "add-on" or optional like all other special tokens. An "unknown" token will correspond to a part of the initial text.
By the way, the documentation of `get_special_tokens_mask` is `Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.` and the unknown token is not added by the `prepare_for_model` or `encode_plus` methods but by the heart of the tokenizer: the tokenization algorithm.
@tamuhey , could you share your use case where you need to identify the position of unknown tokens? That would be really useful to us :blush: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Re-opened this issue as I thought a fix needed to be done - but reading @SaulLu's answer I believe the current behavior is correct.
Please let us know if this is an issue to your workflow and we'll look into solutions.<|||||>Hello @LysandreJik, I also encountered the problem. I will use the example in this [issue](https://github.com/huggingface/transformers/issues/16938).
``` Python
import transformers
print(transformers.__version__)
tokenizer = transformers.AutoTokenizer.from_pretrained('roberta-base')
special_tokens_dict = {"additional_special_tokens": ["<test1>", "<test2>"]}
tokenizer.add_special_tokens(special_tokens_dict)
processed = tokenizer("this <test1> that <test2> this", return_special_tokens_mask=True)
tokens = tokenizer.convert_ids_to_tokens(processed.input_ids)
for i in range(len(processed.input_ids)):
print(f"{processed.input_ids[i]}\t{tokens[i]}\t{processed.special_tokens_mask[i]}")
```
``` Python
Returned output:
0 <s> 1
9226 this 0
1437 Ġ 0
50265 <test1> 0
14 Ġthat 0
1437 Ġ 0
50266 <test2> 0
42 Ġthis 0
2 </s> 1
Expected output:
0 <s> 1
9226 this 0
1437 Ġ 0
50265 <test1> 1
14 Ġthat 0
1437 Ġ 0
50266 <test2> 1
42 Ġthis 0
2 </s> 1
```
My goal is to train a RoBERTa model from scratch with two additional special tokens `<test1>` and `<test2>`.
For masked language modelling, I don't want customized special tokens to be masked during training. I used `tokenizer` and `DataCollatorForLanguageModeling`. I thought `special_tokens_mask` from tokenizer could [disable special token masking](https://github.com/huggingface/transformers/blob/v4.26.0/src/transformers/data/data_collator.py#L767) in `DataCollatorForLanguageModeling`.
``` Python
processed = tokenizer("this <test1> that <test2> this", return_special_tokens_mask=True)
```
But it didn't recognize `<test1>` and `<test2>`.
The workaround is
``` Python
processed = tokenizer("this <test1> that <test2> this")
processed['special_tokens_mask'] = tokenizer.get_special_tokens_mask(processed['input_ids'], already_has_special_tokens=True)
```
It works fine for me on one sentence, but it seems `get_special_tokens_mask` cannot encode in batch, unlike the default tokenizer.
Do you think it makes sense to modify the behaviour of `return_special_tokens_mask` or to create a new method?
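For what it's worth, a small sketch (my own, continuing the snippet above) of applying the same workaround to a batch by recomputing the mask per sequence:

```python
batch = tokenizer(["this <test1> that <test2> this", "just this"], padding=True)
batch["special_tokens_mask"] = [
    tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
    for ids in batch["input_ids"]
]
```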
|
transformers | 12,935 | closed | Better error message? `CUDA error: CUBLAS_STATUS_ALLOC_FAILED` | ## Environment info
- `transformers` version: 4.9.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes/no
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
## Information
I found that the error raised for an out-of-range index in the embedding is a little cryptic when using CUDA.
```
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
```
However, on CPU, the error message is understandable.
```
IndexError: index out of range in self
```
I just wondered whether this needs a better error message, or whether we should just leave it as is.
## To reproduce
### To get weird CUDA error:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
config = AutoConfig.from_pretrained("gpt2")
config.update({"output_hidden_states":True,
"hidden_dropout_prob": 0.0,
"layer_norm_eps": 1e-7})
gpt_model = AutoModel.from_pretrained('gpt2').cuda()
input_ids = torch.randint(0, 100_000, (4, 128)).cuda()
attention_mask = torch.randint(0, 1, (4, 128)).cuda()
outputs = gpt_model(input_ids=input_ids, attention_mask=attention_mask)
last_hidden_states = outputs.last_hidden_states
print(last_hidden_states.shape)
```
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-b5d926c8a3c3> in <module>()
10 attention_mask = torch.randint(0, 1, (4, 248)).cuda()
11
---> 12 outputs = gpt_model(input_ids=input_ids, attention_mask=attention_mask)
13 last_hidden_states = outputs.last_hidden_states
14 print(last_hidden_states.shape)
7 frames
/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py in forward(self, x)
1585 def forward(self, x):
1586 size_out = x.size()[:-1] + (self.nf,)
-> 1587 x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
1588 x = x.view(*size_out)
1589 return x
RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)`
```
### To get cpu error:
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch
config = AutoConfig.from_pretrained("gpt2")
config.update({"output_hidden_states":True,
"hidden_dropout_prob": 0.0,
"layer_norm_eps": 1e-7})
gpt_model = AutoModel.from_pretrained('gpt2')
input_ids = torch.randint(0, 100_000, (4, 128))
attention_mask = torch.randint(0, 1, (4, 128))
outputs = gpt_model(input_ids=input_ids, attention_mask=attention_mask)
last_hidden_states = outputs.last_hidden_states
print(last_hidden_states.shape)
```
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-3-262727302e1e> in <module>()
9 input_ids = torch.randint(0, 100_000, (4, 248))
10 attention_mask = torch.randint(0, 1, (4, 248))
---> 11 outputs = gpt_model(input_ids=input_ids, attention_mask=attention_mask)
12 last_hidden_states = outputs.hidden_states
13 print(last_hidden_states)
4 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
2041 # remove once script supports set_grad_enabled
2042 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2043 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
2044
2045
IndexError: index out of range in self
``` | 07-29-2021 04:26:47 | 07-29-2021 04:26:47 | That's something we can't solve I suppose, unfortunately. If you have a CUDA error like that, it's always advised to run your code on CPU as it provides a much more informative error message.<|||||>@NielsRogge
Thanks for the answer.
> it's always advised to run your code on CPU as it provides a much more informative error message.
Definitely agree on this. Closing this issue |
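As a small addendum, an illustrative sanity check (reusing the names from the snippet above) that surfaces the problem with a clear message before the GPU forward pass:

```python
# Check the token ids against the embedding size before calling the model.
vocab_size = gpt_model.config.vocab_size  # 50257 for gpt2
if input_ids.max().item() >= vocab_size:
    raise ValueError(
        f"input_ids contains id {input_ids.max().item()}, "
        f"but the embedding matrix only has {vocab_size} rows"
    )
```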
transformers | 12,934 | closed | [Wav2vec Pretrain] KeyError: ‘attention_mask’ | ## Environment info
- `transformers` version: 4.9.1
- Platform: Google Colab
- Python version: 3.7 & 3.8
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): N/A
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Models:
@patrickvonplaten
## Information
Model I am using Wav2vec Pretrain:
The problem arises when using:
https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_pretrain.py
The tasks I am working on is:
* [ ] an official wav2vec pretrain task: (give the name)
* [ ] my own task or dataset: (give details below)
Wav2vec on TIMIT
## To reproduce
Steps to reproduce the behavior:
python run_pretrain.py --output_dir="./wav2vec2-base" \
--num_train_epochs="3" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--gradient_accumulation_steps="2" \
--save_total_limit="3" \
--save_steps="500" \
--logging_steps="10" \
--learning_rate="5e-4" \
--weight_decay="0.01" \
--warmup_steps="3000" \
--model_name_or_path="facebook/wav2vec2-base" \
--dataset_name="timit_asr" \
--train_split_name="train" \
--preprocessing_num_workers="4" \
--max_duration_in_seconds="10.0" \
--group_by_length \
--verbose_logging \
## Expected behavior
***** Running training *****
Num examples = 185
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 64
Gradient Accumulation steps = 2
Total optimization steps = 9
0% 0/9 [00:00<?, ?it/s]Traceback (most recent call last):
File "wav2vec_pretrain.py", line 388, in <module>
main()
File "wav2vec_pretrain.py", line 384, in main
trainer.train()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1254, in train
for step, inputs in enumerate(epoch_iterator):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/dataloader.py", line 561, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "wav2vec_pretrain.py", line 176, in __call__
if batch["attention_mask"] is not None:
File "/usr/local/lib/python3.7/dist-packages/transformers/feature_extraction_utils.py", line 81, in __getitem__
return self.data[item]
KeyError: 'attention_mask'
Thank you very much!
 | 07-29-2021 01:30:36 | 07-29-2021 01:30:36 | I've assigned Patrick, but looking at the docs of Wav2Vec2, it says:
> Wav2Vec2 models that have set config.feat_extract_norm == "group", such as wav2vec2-base, have not been trained using attention_mask. For such models, input_values should simply be padded with 0 and no attention_mask should be passed.
> For Wav2Vec2 models that have set config.feat_extract_norm == "layer", such as wav2vec2-lv60, attention_mask should be passed for batched inference.
It seems like the pre-training script currently only supports models that are pre-trained using an attention mask, such as `patrickvonplaten/wav2vec2-base-libri-100h`.<|||||>@NielsRogge
Got it! It works well now. Thank you for your advice! <|||||>@NielsRogge The training process can start normally. But the loss doesn't decrease any more after ~300 steps. I have tried different datasets, including English and Chinese data. Could you help me check it? I appreciate it so much!
{'loss': 4.0485, 'learning_rate': 3.3333333333333335e-05, 'epoch': 0.07}
{'loss': 3.7386, 'learning_rate': 3.5000000000000004e-05, 'epoch': 0.07}
{'loss': 1.5081, 'learning_rate': 3.6666666666666666e-05, 'epoch': 0.07}
{'loss': 4.2322, 'learning_rate': 3.8333333333333334e-05, 'epoch': 0.08}
{'loss': 4.1046, 'learning_rate': 4e-05, 'epoch': 0.08}
{'loss': 3.2526, 'learning_rate': 4.1666666666666665e-05, 'epoch': 0.08}
{'loss': 1.5949, 'learning_rate': 4.3333333333333334e-05, 'epoch': 0.09}
{'loss': 0.0013, 'learning_rate': 4.4999999999999996e-05, 'epoch': 0.09}
{'loss': 0.0013, 'learning_rate': 4.666666666666667e-05, 'epoch': 0.09}
{'loss': 0.0013, 'learning_rate': 4.8333333333333334e-05, 'epoch': 0.1}
{'loss': 0.0013, 'learning_rate': 5e-05, 'epoch': 0.1}
{'loss': 0.0013, 'learning_rate': 5.1666666666666664e-05, 'epoch': 0.1}
{'loss': 0.0013, 'learning_rate': 5.333333333333334e-05, 'epoch': 0.11}
{'loss': 0.0013, 'learning_rate': 5.5e-05, 'epoch': 0.11}
4%|███▏ | 340/8922 [07:55<3:33:42, 1.49s/it]
{'loss': 0.0013, 'learning_rate': 5.6666666666666664e-05, 'epoch': 0.11}
4%|███▎ | 350/8922 [08:04<1:50:16, 1.30it/s]
{'loss': 0.0014, 'learning_rate': 5.833333333333333e-05, 'epoch': 0.12}
{'loss': 0.0013, 'learning_rate': 6e-05, 'epoch': 0.12}
4%|███▍ | 370/8922 [08:34<2:31:36, 1.06s/it]
{'loss': 0.0013, 'learning_rate': 6.166666666666667e-05, 'epoch': 0.12}
{'loss': 0.0013, 'learning_rate': 6.333333333333335e-05, 'epoch': 0.13}
{'loss': 0.0013, 'learning_rate': 6.500000000000001e-05, 'epoch': 0.13}
{'loss': 0.0013, 'learning_rate': 6.666666666666667e-05, 'epoch': 0.13}
{'loss': 0.0013, 'learning_rate': 6.833333333333333e-05, 'epoch': 0.14}
Btw, others have the same problem. Refer to https://discuss.huggingface.co/t/why-is-wav2vec-pretraining-loss-not-decreasing/8112<|||||>> @NielsRogge The training process can start normally. But the loss doesn't decrease any more after ~300 steps. I have tried different datasets, including English and Chinese data. Could you help me check it? I appreciate it so much!
>
> {'loss': 4.0485, 'learning_rate': 3.3333333333333335e-05, 'epoch': 0.07} {'loss': 3.7386, 'learning_rate': 3.5000000000000004e-05, 'epoch': 0.07} {'loss': 1.5081, 'learning_rate': 3.6666666666666666e-05, 'epoch': 0.07} {'loss': 4.2322, 'learning_rate': 3.8333333333333334e-05, 'epoch': 0.08} {'loss': 4.1046, 'learning_rate': 4e-05, 'epoch': 0.08} {'loss': 3.2526, 'learning_rate': 4.1666666666666665e-05, 'epoch': 0.08} {'loss': 1.5949, 'learning_rate': 4.3333333333333334e-05, 'epoch': 0.09} {'loss': 0.0013, 'learning_rate': 4.4999999999999996e-05, 'epoch': 0.09} {'loss': 0.0013, 'learning_rate': 4.666666666666667e-05, 'epoch': 0.09} {'loss': 0.0013, 'learning_rate': 4.8333333333333334e-05, 'epoch': 0.1}
>
> {'loss': 0.0013, 'learning_rate': 5e-05, 'epoch': 0.1} {'loss': 0.0013, 'learning_rate': 5.1666666666666664e-05, 'epoch': 0.1}
>
> {'loss': 0.0013, 'learning_rate': 5.333333333333334e-05, 'epoch': 0.11}
>
> {'loss': 0.0013, 'learning_rate': 5.5e-05, 'epoch': 0.11} 4%|███▏ | 340/8922 [07:55<3:33:42, 1.49s/it] {'loss': 0.0013, 'learning_rate': 5.6666666666666664e-05, 'epoch': 0.11} 4%|███▎ | 350/8922 [08:04<1:50:16, 1.30it/s] {'loss': 0.0014, 'learning_rate': 5.833333333333333e-05, 'epoch': 0.12} {'loss': 0.0013, 'learning_rate': 6e-05, 'epoch': 0.12} 4%|███▍ | 370/8922 [08:34<2:31:36, 1.06s/it] {'loss': 0.0013, 'learning_rate': 6.166666666666667e-05, 'epoch': 0.12} {'loss': 0.0013, 'learning_rate': 6.333333333333335e-05, 'epoch': 0.13} {'loss': 0.0013, 'learning_rate': 6.500000000000001e-05, 'epoch': 0.13} {'loss': 0.0013, 'learning_rate': 6.666666666666667e-05, 'epoch': 0.13} {'loss': 0.0013, 'learning_rate': 6.833333333333333e-05, 'epoch': 0.14}
>
> Btw, others have the same problem. Refer to https://discuss.huggingface.co/t/why-is-wav2vec-pretraining-loss-not-decreasing/8112
Hello, I’m facing the same problem pretraining my model from English base model. Have you solved it?<|||||>Hey guys,
I think this is a good example of how it looks like when the `"contrastive_loss"` function collapses and the training becomes useless. If you see an instant drop to `0.0013` this means that the training didn't work. I've seen this countless times in my tests and there is not a very easy fix IMO.
What seems to work best to counteract this is to do the following in this line:
https://github.com/huggingface/transformers/blob/4c99e553c152ce9b709d7c138379b0b126ed2fa1/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L327
Replace:
`mask_time_indices=mask_time_indices,` by `mask_time_indices=batch["sub_attention_mask"]`
This is known to be a more robust training that however seems to give slightly worse results.
Also, I think [speechbrain](https://speechbrain.github.io/) is working quite a bit on getting Wav2Vec2-Pretraining more robust and general; as far as I know those guys have done many more experiments with pretraining than I have, so it might be worth checking out their pretraining script as well.
cc @TParcollet
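To make the suggested change concrete, a rough sketch (the surrounding call follows the example script's variable names and is illustrative only; the point is the `mask_time_indices` swap):

```python
# Fragment of the training loop in run_wav2vec2_pretraining_no_trainer.py (sketch, not the exact code)
outputs = model(
    batch["input_values"],
    attention_mask=batch["attention_mask"],
    # mask_time_indices=mask_time_indices,           # original: randomly sampled spans
    mask_time_indices=batch["sub_attention_mask"],   # more robust variant described above
    sampled_negative_indices=batch["sampled_negative_indices"],
)
```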
<|||||>I'm hoping to find some time to again dive a bit deeper into wav2vec2 pretraining over the Christmas holidays and then make a comprehensive guide on how to pretrain wav2vec2 at some point. I'm really not sure though whether I'll find the time<|||||>> Hey guys,
>
> I think this is a good example of how it looks like when the `"contrastive_loss"` function collapses and the training becomes useless. If you see an instant drop to `0.0013` this means that the training didn't work. I've seen this countless times in my tests and there is not a very easy fix IMO.
>
> What seems to work best to counteract this is to do the following in this line:
>
> https://github.com/huggingface/transformers/blob/4c99e553c152ce9b709d7c138379b0b126ed2fa1/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L327
>
> Replace: `mask_time_indices=mask_time_indices,` by `mask_time_indices=batch["sub_attention_mask"]`
>
> This is known to be a more robust training that however seems to give slightly worse results.
>
> Also, I think [speechbrain](https://speechbrain.github.io/) is working quite a bit on getting Wav2Vec2-Pretraining more robust and general, as far as I know those guys have done much more experiements with pretraining than I have so it might be worth checking out their pretraining script as well.
>
> cc @TParcollet
Hi. The `%_mask_idx ` i got is so low, I wonder if you changed `mask_prob` in the configuration file from 0.05 to 0.5?<|||||>For passing the mask_prob should be around 0.65<|||||>FYI I ran into the same issue (missing attention_mask in pre-trained model) saving my model on a custom dataset from the greek emotion classification using wav2vec2 from this notebook:
https://colab.research.google.com/github/m3hrdadfi/soxan/blob/main/notebooks/Emotion_recognition_in_Greek_speech_using_Wav2Vec2.ipynb#scrollTo=n0HzBneBK84G
Changing the model to 'facebook/wav2vec2-large-960h-lv60-self' helped. |
transformers | 12,933 | closed | ONNX v2 raises an Exception when using PyTorch < 1.8.0 | 07-28-2021 21:48:13 | 07-28-2021 21:48:13 | @sgugger failing tests seem unrelated to this PR, let you check 👍🏻 |
|
transformers | 12,932 | closed | Error when trying `push_to_hub` for a fine-tuned model on Colab | ## Environment info
- `transformers` version: 4.9.1
- Platform: Colab
### Who can help
@sgugger
## To reproduce
Steps to reproduce the behavior. This is the code I'm running:
I first install the following packages:
```
! pip install transformers datasets
! sudo apt-get install git-lfs
```
Then I run the `! transformers-cli login` and successfully login and my token is saved at: `/root/.huggingface/token`
Then I run the following code:
```
from transformers import AutoModel
model = AutoModel.from_pretrained('/path/to/my/fine-tuned/model/on/my/google/drive')
model.push_to_hub("my-username/my-model-name")
```
Per @sgugger's suggestion, I also tried the following line but I'm getting the very error:
`model.push_to_hub("my-model-name")`
And this is the error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-84aee0bf13c0> in <module>()
4 model = AutoModel.from_pretrained(model_path)
5
----> 6 model.push_to_hub("my-username/my-model-name")
2 frames
/usr/local/lib/python3.7/dist-packages/huggingface_hub/repository.py in __init__(self, local_dir, clone_from, use_auth_token, git_user, git_email)
102 )
103 raise ValueError(
--> 104 "If not specifying `clone_from`, you need to pass Repository a valid git clone."
105 )
106
ValueError: If not specifying `clone_from`, you need to pass Repository a valid git clone.
```
## Expected behavior
To have my fine-tuned model uploaded to my private repo on Huggingface.
| 07-28-2021 19:01:29 | 07-28-2021 19:01:29 | Just tried on a fresh colab and could upload a model without any problem (as long as there is no "/" in the model ID). Do you already have a model with the same username maybe?
Note that you are missing the step `! git config --global user.email "your_email"` in the preparation.
Are you certain you do have the latest version of Transformers installed?<|||||>Thanks for the tips. Problem solved. I think it was because I created a repo and a model with the very name on the Hugging Face website (I thought there should already be a model with the name there if we want to push the model.) I removed the model with the same name and now it works! |
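For future readers, a minimal sketch of the flow that ended up working; the path is a placeholder, the repo name contains no "/", and no repo with that name exists on the Hub beforehand:

```python
# Colab cells run beforehand:
#   !pip install transformers
#   !sudo apt-get install git-lfs
#   !transformers-cli login
#   !git config --global user.email "you@example.com"
from transformers import AutoModel

model = AutoModel.from_pretrained("/content/drive/MyDrive/my-finetuned-model")  # placeholder path
model.push_to_hub("my-model-name")
```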
transformers | 12,931 | closed | How to fuse a copy mechanism into the GenerationMixin? | Hello, is there any way to directly fuse a copy mechanism into beam search, since the `beam_search` function receives `model_output.logits` rather than a probability distribution over the vocabulary?
https://github.com/huggingface/transformers/blob/72aee83ced5f31302c5e331d896412737287f976/src/transformers/generation_utils.py#L1801 | 07-28-2021 16:31:55 | 07-28-2021 16:31:55 | Pinging @patrickvonplaten and @patil-suraj <|||||>Hey @Hannibal046,
Could you clarify a bit what you mean by "copy-mechanism" ?
Maybe a code example of what you want to do?<|||||>Hello, I also found others discussing the `copy mechanism` in this [link](https://discuss.huggingface.co/t/copying-mechanism-for-transformer/5025).
BTW, could you please check another issue of mine about BART generation? It has confused me for a long time: https://github.com/huggingface/transformers/issues/12870. Thanks so much.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
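Not an official answer, but one generic way to approach this is to compute the mixed copy/generate distribution yourself and hand beam search log-scores. A self-contained sketch of that mixing step in plain PyTorch (all names here are made up):

```python
import torch

def mix_copy_distribution(vocab_logits, copy_attn, src_token_ids, p_gen):
    """Pointer-generator style mixing: p_gen * P_vocab + (1 - p_gen) * copy attention."""
    vocab_probs = torch.softmax(vocab_logits, dim=-1) * p_gen          # (batch, vocab)
    copy_probs = (1.0 - p_gen) * copy_attn                             # (batch, src_len)
    vocab_probs = vocab_probs.scatter_add(-1, src_token_ids, copy_probs)
    return torch.log(vocab_probs + 1e-9)                               # log-scores for beam search

scores = mix_copy_distribution(
    vocab_logits=torch.randn(2, 100),
    copy_attn=torch.softmax(torch.randn(2, 5), dim=-1),
    src_token_ids=torch.randint(0, 100, (2, 5)),
    p_gen=torch.rand(2, 1),
)
```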
transformers | 12,930 | closed | Print defaults when using --help for scripts | # What does this PR do?
This PR uses the solution suggested in #12924 to automatically print the default of each argument when using `--help` for a script. For instance, using `--help` on any of the examples would previously yield:
```
--push_to_hub [PUSH_TO_HUB]
Whether or not to upload the trained model to the
model hub after training.
```
before, and after this PR it will yield
```
--push_to_hub [PUSH_TO_HUB]
Whether or not to upload the trained model to the
model hub after training. (default: False)
```
Fixes #12924 | 07-28-2021 14:39:34 | 07-28-2021 14:39:34 | This is indeed a great addition! |
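For reference, the standard-library mechanism presumably behind this is `argparse.ArgumentDefaultsHelpFormatter`; a standalone sketch (not the actual diff):

```python
import argparse

# The formatter appends "(default: ...)" to every help string.
parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument(
    "--push_to_hub",
    action="store_true",
    help="Whether or not to upload the trained model to the model hub after training.",
)
parser.print_help()  # the help line now ends with "(default: False)"
```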
transformers | 12,929 | closed | Add option to set max_len in run_ner | # What does this PR do?
This PR adds an option to set the maximum sequence length in `run_ner`. As pointed out in #12817, this script does not have that option (but the TF version and `run_ner_no_trainer` both do). | 07-28-2021 13:29:32 | 07-28-2021 13:29:32 |
transformers | 12,928 | closed | Fix QA examples for roberta tokenizer | # What does this PR do?
https://github.com/huggingface/datasets/pull/2586 has changed the SQuAD dataset and no longer cleans the whitespace in questions. This in turn makes the tokenization fail for tokenizers that don't remove whitespace (like RoBERTa): some questions begin with lots of spaces, so the truncation strategy then fails because the question itself is longer than the max length (the infuriating example number 107709 of the training set, for instance).
For more context, see #12880
This PR addresses that by removing the whitespace on the left of questions. | 07-28-2021 13:18:57 | 07-28-2021 13:18:57 |
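A minimal sketch of the kind of preprocessing change described above (the column name is assumed; this is not the exact diff):

```python
def strip_question_whitespace(examples):
    # Leading whitespace otherwise makes some questions longer than max_seq_length
    # for tokenizers that keep whitespace, such as RoBERTa's byte-level BPE.
    examples["question"] = [q.lstrip() for q in examples["question"]]
    return examples

# e.g. with 🤗 Datasets: squad = squad.map(strip_question_whitespace, batched=True)
```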