repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 3,992 | closed | RuntimeError: Creating MTGP constants failed. at /opt/conda/conda-bld/pytorch_1549287501208/work/aten/src/THC/THCTensorRandom.cu:35 | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BertForSequenceClassification
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I am performing multi-class (50 classes) classification on the Project Gutenberg dataset. The max text length is 1620 so I set max length at 2048. I'm also padding the text with 0s.
```
MAX_LEN = 2048
def get_encodings(texts):
token_ids = []
attention_masks = []
for text in texts:
token_id = tokenizer.encode(text, add_special_tokens=True, max_length=2048)
token_ids.append(token_id)
return token_ids
def pad_encodings(encodings):
return pad_sequences(encodings, maxlen=MAX_LEN, dtype="long",
value=0, truncating="post", padding="post")
def get_attention_masks(padded_encodings):
attention_masks = []
for encoding in padded_encodings:
attention_mask = [int(token_id > 0) for token_id in encoding]
attention_masks.append(attention_mask)
return attention_masks
X_train = torch.tensor(train_encodings)
y_train = torch.tensor(train_df.author_id.values)
train_masks = torch.tensor(train_attention_masks)
X_test = torch.tensor(test_encodings)
y_test = torch.tensor(test_df.author_id.values)
test_masks = torch.tensor(test_attention_masks)
batch_size = 32
# Create the DataLoader for our training set.
train_data = TensorDataset(X_train, train_masks, y_train)
train_sampler = RandomSampler(train_data)
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)
validation_data = TensorDataset(X_test, test_masks, y_test)
validation_sampler = SequentialSampler(validation_data)
validation_dataloader = DataLoader(validation_data, sampler=validation_sampler, batch_size=batch_size)
```
My model is setup like so:
```
model = BertForSequenceClassification.from_pretrained(
"bert-base-uncased",
num_labels = 50,
output_attentions = False,
output_hidden_states = False,
)
```
During training however, the line indicated below is throwing the error:
```
for step, batch in enumerate(train_dataloader):
b_texts = batch[0].to(device)
b_attention_masks = batch[1].to(device)
b_authors = batch[2].to(device) <---- ERROR
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Ubuntu 16.04
- Python version: 1.0.0
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.1.0
- Using GPU in script?: Yes. P4000
- Using distributed or parallel set-up in script?: No
| 04-26-2020 20:12:45 | 04-26-2020 20:12:45 | |
transformers | 3,991 | closed | Fix the typos | Fix the typos of "[0.0, 1.0[" --> "[0.0, 1.0]". | 04-26-2020 19:20:37 | 04-26-2020 19:20:37 | That's not a typo :)
https://en.wikipedia.org/wiki/Interval_(mathematics)<|||||>Ohhh! I was not aware of this notation before (only [X, Y) was in my knowledge base). Thanks for the clarification. |
transformers | 3,990 | closed | Language generation not possible with roberta? | Hi,
I have a small question.
I fine-tuned a RoBERTa model and then tried to do generation, but I found that generation is only supported for a smaller collection of models, most of which are quite large.
Why is that? Is there another way to generate language with roberta?
Thanks in advance!
PS: the scripts I used were: `run_language_modeling.py` and `run_generation.py` | 04-26-2020 18:53:24 | 04-26-2020 18:53:24 | `Roberta` has not really been trained on generating text. Small models that work well for generation are `distilgpt2` and `gpt2` for example. |
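A minimal sketch of the suggestion above, using the text-generation pipeline with a small GPT-2 variant (the prompt string is just an example):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
print(generator("Hugging Face is a French company that", max_length=30, num_return_sequences=1))
```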
transformers | 3,989 | closed | Model cards for KoELECTRA | Hi:)
I've recently uploaded `KoELECTRA`, pretrained ELECTRA model for Korean:)
Thanks for supporting this great library and S3 storage:)
| 04-26-2020 17:15:25 | 04-26-2020 17:15:25 | That's great! Model pages:
https://huggingface.co/monologg/koelectra-base-generator
https://huggingface.co/monologg/koelectra-base-discriminator
cc'ing @LysandreJik and @clarkkev for information |
transformers | 3,988 | closed | [Trainer] Add more schedules to trainer | Adds all available schedules to the trainer and also adds beta1 and beta2 for Adam to the args.
@julien-c | 04-26-2020 16:38:21 | 04-26-2020 16:38:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=h1) Report
> Merging [#3988](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **decrease** coverage by `0.05%`.
> The diff coverage is `33.33%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3988 +/- ##
==========================================
- Coverage 78.45% 78.40% -0.06%
==========================================
Files 111 111
Lines 18521 18533 +12
==========================================
- Hits 14531 14530 -1
- Misses 3990 4003 +13
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `42.42% <9.09%> (-1.18%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `88.40% <100.00%> (+0.71%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `91.89% <0.00%> (-8.11%)` | :arrow_down: |
| [src/transformers/configuration\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3hsbmV0LnB5) | `93.87% <0.00%> (-2.05%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3988/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=footer). Last update [cb3c221...851b8f3](https://codecov.io/gh/huggingface/transformers/pull/3988?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I'd rather let the user pass an optimizer and a scheduler in the Trainer's `__init__` and only keep the default one (which was the one used in all the scripts) in the implementation. Otherwise there's really a combinatorial explosion of possibilities.
What do you think?<|||||>Yeah it's true that this could lead to a "combinatorial explosion" but I don't really mind that because:
1. If we set good default values (like the beta ones for Adam), and have a good description I don't feel like the user is affected by more and more parameters
2. I think if a user does want to change very special params (like the beta ones), then it's very convenient to be able to do it directly
What I already started in this PR, which can become messy, is that there are now parameters that are only relevant if other parameters are set in a certain way: `num_cycles` is only relevant if there is a cosine scheduler. And it's true that the trainer class could become messier this way as well.
In the end, for me the use case of the trainer class is more important I guess. If the idea is to have a very easy and fast way to train a model, then I would not mind introducing a bunch of parameters with good default values and description. If instead it should be a bit more "low-level" and the `run_language_modeling` script should wrap this class into a faster interface then it might be better to keep clean. <|||||>Closing this due to cafa6a9e29f3e99c67a1028f8ca779d439bc0689 |
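For reference, the direction suggested above is the one later versions of the library adopted: `Trainer` accepts an `optimizers=(optimizer, scheduler)` tuple. A sketch under that assumption, where `model`, `training_args` and `train_dataset` are placeholder variables:
```python
from transformers import AdamW, Trainer, get_cosine_schedule_with_warmup

optimizer = AdamW(model.parameters(), lr=5e-5, betas=(0.9, 0.999))
scheduler = get_cosine_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=10_000
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    optimizers=(optimizer, scheduler),  # hand over a custom optimizer/scheduler pair
)
trainer.train()
```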
transformers | 3,987 | closed | fix output_dir / tokenizer_name confusion | Fixing #3950
It seems that `output_dir` has been used in place of `tokenizer_name` in `run_xnli.py`. I have corrected this typo.
I am not that familiar with code style in this repo, in a lot of instances I used:
`args.tokenizer_name if args.tokenizer_name else args.model_name_or_path`
Instead, I believe I could have set `args.tokenizer_name = args.tokenizer_name if args.tokenizer_name else args.model_name_or_path` somewhere at the start of the script, so that the code could be cleaner below. | 04-26-2020 13:27:51 | 04-26-2020 13:27:51 | No, this isn't correct. This was consistent with other scripts (before they were ported to the new Trainer in #3800) where we always saved the model and its tokenizer before re-loading it for eval
So I think that code is currently right. (but again, should be re-written to Trainer pretty soon) |
transformers | 3,986 | closed | Run Multiple Choice failure with ImportError: cannot import name 'AutoModelForMultipleChoice' | # 🐛 Bug
## Information
Running the example returns the error below. The version from April 17 doesn't have this problem.
python ./examples/run_multiple_choice.py \
> --task_name swag \
> --model_name_or_path roberta-base \
> --do_train \
> --do_eval \
> --data_dir $SWAG_DIR \
> --learning_rate 5e-5 \
> --num_train_epochs 3 \
> --max_seq_length 80 \
> --output_dir models_bert/swag_base \
> --per_gpu_eval_batch_size=16 \
> --per_gpu_train_batch_size=16 \
> --gradient_accumulation_steps 2 \
> --overwrite_output
Traceback (most recent call last):
File "./examples/run_multiple_choice.py", line 26, in <module>
from transformers import (
ImportError: cannot import name 'AutoModelForMultipleChoice'
Model I am using (Bert, XLNet ...):
Language I am using the model on (English, Chinese ...):English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Follow the exact steps in README.md
1. Download the transformer with git clone
2. git clone https://github.com/rowanz/swagaf.git
3. export SWAG_DIR=/path/to/swag_data_dir
python ./examples/run_multiple_choice.py \
--task_name swag \
--model_name_or_path roberta-base \
--do_train \
--do_eval \
--data_dir $SWAG_DIR \
--learning_rate 5e-5 \
--num_train_epochs 3 \
--max_seq_length 80 \
--output_dir models_bert/swag_base \
--per_gpu_eval_batch_size=16 \
--per_gpu_train_batch_size=16 \
--gradient_accumulation_steps 2 \
--overwrite_output
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
***** Eval results *****
eval_acc = 0.8338998300509847
eval_loss = 0.44457291918821606
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: ubuntu 18.04
- Python version: 3.6.8
- PyTorch version (GPU?): 1.4.0 GPU
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No.
| 04-26-2020 13:23:05 | 04-26-2020 13:23:05 | You need to install transformers from source as specified in the README |
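A quick way to check whether the installed version already ships the class (a sketch; the released 2.8.0 wheel at the time did not include it, hence the advice to install from source):
```python
import transformers

print(transformers.__version__)
# After installing from source as described in the README, this import should succeed:
from transformers import AutoModelForMultipleChoice
```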
transformers | 3,985 | closed | Using the T5 model with huggingface's mask-fill pipeline | Does anyone know if it is possible to use the T5 model with Hugging Face's mask-fill pipeline? Below is how you can do it using the default model, but I can't seem to figure out how to do it using the T5 model specifically.
```
from transformers import pipeline
nlp_fill = pipeline('fill-mask')
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
```
Trying this for example raises the error "TypeError: must be str, not NoneType" because nlp_fill.tokenizer.mask_token is None.
```
nlp_fill = pipeline('fill-mask',model="t5-base", tokenizer="t5-base")
nlp_fill('Hugging Face is a French company based in ' + nlp_fill.tokenizer.mask_token)
```
Stack overflow [question](https://stackoverflow.com/questions/61408753/using-the-t5-model-with-huggingfaces-mask-fill-pipeline) | 04-26-2020 12:18:26 | 04-26-2020 12:18:26 | Correct me if I'm wrong @patrickvonplaten, but I don't think T5 is trained on masked language modeling (and does not have a mask token) so will not work with this pipeline.<|||||>Yeah, `T5` is not trained on the conventional "Bert-like" masked language modeling objective. It does a special encoder-decoder masked language modeling (see docs [here](https://huggingface.co/transformers/model_doc/t5.html#training)), but this is not really supported in combination with the `mask-fill` pipeline at the moment.<|||||>Hi @patrickvonplaten, is there any plan to support `T5` with the `mask-fill` pipeline in the near future?<|||||>`T5` is an encoder-decoder model so I don't really see it as a fitting model for the `mask-fill` task. <|||||>Could we use the following workaround?
* `<extra_id_0>` could be considered as a mask token
* Candidate sequences for the mask-token could be generated using a code, like:
```python
import torch
from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration
T5_PATH = 't5-base' # "t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My environment uses CPU
t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)
t5_config = T5Config.from_pretrained(T5_PATH)
t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)
# Input text
text = 'India is a <extra_id_0> of the world. </s>'
encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
input_ids = encoded['input_ids'].to(DEVICE)
# Generating 20 sequences with maximum length set to 5
outputs = t5_mlm.generate(input_ids=input_ids,
num_beams=200, num_return_sequences=20,
max_length=5)
_0_index = text.index('<extra_id_0>')
_result_prefix = text[:_0_index]
_result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>
def _filter(output, end_token='<extra_id_1>'):
# The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)
_txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)
if end_token in _txt:
_end_token_index = _txt.index(end_token)
return _result_prefix + _txt[:_end_token_index] + _result_suffix
else:
return _result_prefix + _txt + _result_suffix
results = list(map(_filter, outputs))
results
```
Output:
```
['India is a cornerstone of the world. </s>',
'India is a part of the world. </s>',
'India is a huge part of the world. </s>',
'India is a big part of the world. </s>',
'India is a beautiful part of the world. </s>',
'India is a very important part of the world. </s>',
'India is a part of the world. </s>',
'India is a unique part of the world. </s>',
'India is a part of the world. </s>',
'India is a part of the world. </s>',
'India is a beautiful country in of the world. </s>',
'India is a part of the of the world. </s>',
'India is a small part of the world. </s>',
'India is a part of the world. </s>',
'India is a part of the world. </s>',
'India is a country in the of the world. </s>',
'India is a large part of the world. </s>',
'India is a part of the world. </s>',
'India is a significant part of the world. </s>',
'India is a part of the world. </s>']
```<|||||>@girishponkiya Thanks for your example! Unfortunately, I can't reproduce your results. I get
```
['India is a _0> of the world. </s>',
'India is a β extra of the world. </s>',
'India is a India is of the world. </s>',
'India is a β extra_ of the world. </s>',
'India is a a of the world. </s>',
'India is a [extra_ of the world. </s>',
'India is a India is an of the world. </s>',
'India is a of the world of the world. </s>',
'India is a India. of the world. </s>',
'India is a is a of the world. </s>',
'India is a India β of the world. </s>',
'India is a Inde is of the world. </s>',
'India is a ] of the of the world. </s>',
'India is a . of the world. </s>',
'India is a _0 of the world. </s>',
'India is a is β of the world. </s>',
'India is a india is of the world. </s>',
'India is a India is the of the world. </s>',
'India is a -0> of the world. </s>',
'India is a β _ of the world. </s>']
```
Tried on CPU, GPU, 't5-base' and 't5-3b' β same thing.<|||||>Could you please mention the version of torch, transformers and tokenizers?
I used the followings:
* torch: 1.5.0+cu101
* transformers: 2.8.0
* tokenizers: 0.7.0
`tokenizers` in the latest version of `transformers` has a bug. Looking at your output, I believe you are using a buggy version of tokenizers. <|||||>@girishponkiya I'm using
```
transformers 2.9.0
tokenizers 0.7.0
torch 1.4.0
```
Tried tokenizers-0.5.2 transformers-2.8.0 βΒ now it works, thank you!<|||||>Thanks to @takahiro971. He pointed out this bug in #4021. <|||||>@girishponkiya thanks a lot for your above code. Your example works but if I run your above code instead with the text :
`text = "<extra_id_0> came to power after defeating Stalin"
`
I get the following error:
```
/usr/local/lib/python3.6/dist-packages/transformers/modeling_utils.py in _generate_beam_search(self, input_ids, cur_len, max_length, min_length, do_sample, early_stopping, temperature, top_k, top_p, repetition_penalty, no_repeat_ngram_size, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, decoder_start_token_id, batch_size, num_return_sequences, length_penalty, num_beams, vocab_size, encoder_outputs, attention_mask)
1354 # test that beam scores match previously calculated scores if not eos and batch_idx not done
1355 if eos_token_id is not None and all(
-> 1356 (token_id % vocab_size).item() is not eos_token_id for token_id in next_tokens[batch_idx]
1357 ):
1358 assert torch.all(
UnboundLocalError: local variable 'next_tokens' referenced before assignment
```
Any ideas of the cause?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Can I use multiple masking in a same sentence?<|||||>> Can I use multiple masking in a same sentence?
In this case, you need to set max_length higher or not set it at all.
This might produce more outputs than you need though.
<|||||>outputs = t5_mlm.generate(input_ids=input_ids,
num_beams=200, num_return_sequences=20,
max_length=5)
is very slow. How can I improve performance?<|||||>@jban-x3 I think you should try smaller values for `num_beams`<|||||>> Could we use the following workaround?
>
> * `<extra_id_0>` could be considered as a mask token
> * Candidate sequences for the mask-token could be generated using a code, like:
>
> ```python
> from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration
>
> T5_PATH = 't5-base' # "t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"
>
> DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU
>
> t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)
> t5_config = T5Config.from_pretrained(T5_PATH)
> t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)
>
> # Input text
> text = 'India is a <extra_id_0> of the world. </s>'
>
> encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
> input_ids = encoded['input_ids'].to(DEVICE)
>
> # Generaing 20 sequences with maximum length set to 5
> outputs = t5_mlm.generate(input_ids=input_ids,
> num_beams=200, num_return_sequences=20,
> max_length=5)
>
> _0_index = text.index('<extra_id_0>')
> _result_prefix = text[:_0_index]
> _result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>
>
> def _filter(output, end_token='<extra_id_1>'):
> # The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)
> _txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)
> if end_token in _txt:
> _end_token_index = _txt.index(end_token)
> return _result_prefix + _txt[:_end_token_index] + _result_suffix
> else:
> return _result_prefix + _txt + _result_suffix
>
> results = list(map(_filter, outputs))
> results
> ```
>
> Output:
>
> ```
> ['India is a cornerstone of the world. </s>',
> 'India is a part of the world. </s>',
> 'India is a huge part of the world. </s>',
> 'India is a big part of the world. </s>',
> 'India is a beautiful part of the world. </s>',
> 'India is a very important part of the world. </s>',
> 'India is a part of the world. </s>',
> 'India is a unique part of the world. </s>',
> 'India is a part of the world. </s>',
> 'India is a part of the world. </s>',
> 'India is a beautiful country in of the world. </s>',
> 'India is a part of the of the world. </s>',
> 'India is a small part of the world. </s>',
> 'India is a part of the world. </s>',
> 'India is a part of the world. </s>',
> 'India is a country in the of the world. </s>',
> 'India is a large part of the world. </s>',
> 'India is a part of the world. </s>',
> 'India is a significant part of the world. </s>',
> 'India is a part of the world. </s>']
> ```
Nice tool!
may I ask a question?
what if I have several masked places to predict?
```python
t5('India is a <extra_id_0> of the <extra_id_1>. </s>',max_length=5)
```
I got:
[{'generated_text': 'part world'}]
It seems only the first masked place is predicted.<|||||>> > Could we use the following workaround?
> >
> > * `<extra_id_0>` could be considered as a mask token
> > * Candidate sequences for the mask-token could be generated using a code, like:
> >
> > ```python
> > from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration
> >
> > T5_PATH = 't5-base' # "t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"
> >
> > DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU
> >
> > t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)
> > t5_config = T5Config.from_pretrained(T5_PATH)
> > t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)
> >
> > # Input text
> > text = 'India is a <extra_id_0> of the world. </s>'
> >
> > encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
> > input_ids = encoded['input_ids'].to(DEVICE)
> >
> > # Generaing 20 sequences with maximum length set to 5
> > outputs = t5_mlm.generate(input_ids=input_ids,
> > num_beams=200, num_return_sequences=20,
> > max_length=5)
> >
> > _0_index = text.index('<extra_id_0>')
> > _result_prefix = text[:_0_index]
> > _result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>
> >
> > def _filter(output, end_token='<extra_id_1>'):
> > # The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)
> > _txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)
> > if end_token in _txt:
> > _end_token_index = _txt.index(end_token)
> > return _result_prefix + _txt[:_end_token_index] + _result_suffix
> > else:
> > return _result_prefix + _txt + _result_suffix
> >
> > results = list(map(_filter, outputs))
> > results
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > Output:
> > ```
> > ['India is a cornerstone of the world. </s>',
> > 'India is a part of the world. </s>',
> > 'India is a huge part of the world. </s>',
> > 'India is a big part of the world. </s>',
> > 'India is a beautiful part of the world. </s>',
> > 'India is a very important part of the world. </s>',
> > 'India is a part of the world. </s>',
> > 'India is a unique part of the world. </s>',
> > 'India is a part of the world. </s>',
> > 'India is a part of the world. </s>',
> > 'India is a beautiful country in of the world. </s>',
> > 'India is a part of the of the world. </s>',
> > 'India is a small part of the world. </s>',
> > 'India is a part of the world. </s>',
> > 'India is a part of the world. </s>',
> > 'India is a country in the of the world. </s>',
> > 'India is a large part of the world. </s>',
> > 'India is a part of the world. </s>',
> > 'India is a significant part of the world. </s>',
> > 'India is a part of the world. </s>']
> > ```
>
> Nice tool! may I ask a question? what if I have several masked places to predict?
>
> ```python
> t5('India is a <extra_id_0> of the <extra_id_1>. </s>',max_length=5)
> ```
>
> I got: [{'generated_text': 'part world'}]
>
> It seems only the first masked place is predicted.
Seems like <extra_id_0> is filled with "part", and <extra_id_1> is filled with "world"?<|||||>Hi @beyondguo,
> may I ask a question?
> what if I have several masked places to predict?
> ```python
> t5('India` is a <extra_id_0> of the <extra_id_1>. </s>',max_length=5)
> ```
You need to modify the `_filter` function.
The (original) function in [my code](https://github.com/huggingface/transformers/issues/3985#issuecomment-622981083) extracts the word sequence between `<extra_id_0>` and `<extra_id_1>` from the output, and the word sequence is being used to replace the `<extra_id_0>` token in the input. In addition to it, you have to extract the word sequence between `<extra_id_1>` and `<extra_id_2>` to replace the `<extra_id_1>` token from the input. Now, as the system needs to generate more tokens, one needs to set a bigger value for the `max_length` argument of `t5_mlm.generate`.<|||||>How is it possible to use the workaround with target words like in the pipeline?
E.g. targets=["target1", "target2"] and get the probabilities definitely for those targets? |
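A sketch of the multi-sentinel post-processing described a few comments above (extending `_filter` to handle more than one `<extra_id_n>`); the helper and the example strings are illustrative, not library code:
```python
import re

def fill_sentinels(text, decoded):
    """Replace each <extra_id_n> in `text` with the span generated between
    <extra_id_n> and the next sentinel in the decoded model output."""
    spans = re.split(r"<extra_id_\d+>", decoded)[1:]
    for i, span in enumerate(spans):
        text = text.replace(f"<extra_id_{i}>", span.strip(), 1)
    return text

print(fill_sentinels("India is a <extra_id_0> of the <extra_id_1>.",
                     "<extra_id_0> part <extra_id_1> world <extra_id_2>"))
# -> 'India is a part of the world.'
```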
transformers | 3,984 | closed | why the accuracy is very low when we test on sentences one by one of the words input? | # ❓ Questions & Help
hi, friends:
There is a problem: we trained a good LM with GPT-2 and a sequence length of 30. When we batch-test during training with the 30-token sequence length, the accuracy can reach 90%. But when we test like this:
[1]->[2], [1,2]->[3], [1,2,3]->[4], where the left is the input and we hope the model predicts the right, we found the accuracy is very low, only 6%.
| 04-26-2020 10:21:15 | 04-26-2020 10:21:15 | when we were training, the accuracy count method is like this:
input: [0,1,2,3,4,5,...,29]
label: [1,2,3,4,5,6,...,30]
so we count the right hits in 30 seqs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
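For reference, a sketch of the teacher-forced accuracy computation described in the last comment (assuming a recent `transformers` version whose outputs expose `.logits`; `model` and `input_ids` are placeholders):
```python
import torch

with torch.no_grad():
    logits = model(input_ids).logits          # (batch, seq_len, vocab_size)
preds = logits.argmax(dim=-1)
# Logits at position t predict the token at position t + 1
accuracy = (preds[:, :-1] == input_ids[:, 1:]).float().mean()
```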
transformers | 3,983 | closed | MLM Loss not decreasing when pretraining Bert from scratch | I want to pretrain a BERT base model from scratch on the English Wikipedia dataset, since I haven't found a BookCorpus copy. The code I used was adapted from pytorch-pretrained-bert and Nvidia Megatron-LM. I can finetune BERT on SQuAD and get SOTA results. But for pretraining, the MLM loss stays around 7.3. I have no idea why this happens. Can someone offer some possible solutions? Great thanks! | 04-26-2020 10:08:34 | 04-26-2020 10:08:34 | |
transformers | 3,982 | closed | Can't install transformers from sources using poetry | # 🐛 Bug
When installing transformers from sources using poetry
```bash
poetry add git+https://github.com/huggingface/transformers.git
```
The following exception is thrown:
```bash
[InvalidRequirement]
Invalid requirement, parse error at "'extra =='"
```
This is a problem on poetry side. The issue here is just to cross-link the problem.
Ref: https://github.com/python-poetry/poetry/issues/2326
| 04-26-2020 10:01:50 | 04-26-2020 10:01:50 | In the meantime as a workaround, you can do the following:
1) Fork huggingface/transformers
2) On your fork remove this line https://github.com/huggingface/transformers/blob/97a375484c618496691982f62518130f294bb9a8/setup.py#L79
3) Install your fork with poetry<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,981 | closed | Improve split on token | This patch makes the code shorter and also solves a bug that is already suppressed [1]
old code:
split_on_token(tok = '[MASK]', text='')
> ['[MASK]']
This is a bug, because split_on_token shouldn't add token when the input doesn't have the token
[1] 21451ec (handle string with only whitespaces as empty, 2019-12-06)
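A simplified sketch of the expected behaviour (a hypothetical helper for illustration, not the library's actual implementation):
```python
def split_on_token(tok, text):
    """Split `text` around `tok`, keeping `tok` as its own piece."""
    if not text.strip():          # empty / whitespace-only input: nothing to split
        return []
    pieces = text.split(tok)
    result = []
    for i, piece in enumerate(pieces):
        if piece:
            result.append(piece)
        if i < len(pieces) - 1:   # re-insert the token between the surrounding pieces
            result.append(tok)
    return result

assert split_on_token("[MASK]", "") == []                          # the reported bug case
assert split_on_token("[MASK]", "a [MASK] b") == ["a ", "[MASK]", " b"]
```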
| 04-26-2020 09:46:25 | 04-26-2020 09:46:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=h1) Report
> Merging [#3981](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3981 +/- ##
==========================================
- Coverage 78.44% 78.42% -0.03%
==========================================
Files 111 111
Lines 18518 18513 -5
==========================================
- Hits 14527 14518 -9
- Misses 3991 3995 +4
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.78% <100.00%> (+0.05%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `91.94% <0.00%> (-0.83%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3981/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=footer). Last update [4e817ff...410c538](https://codecov.io/gh/huggingface/transformers/pull/3981?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,980 | closed | When I run `import transformers`, it reports an error. But I had no problem with pytorch-transformers before. Is it a TensorRT problem? | # ❓ Questions & Help
When I run `import transformers`, I get this:
2020-04-26 17:02:59.210145: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
*** Error in `/conda-torch/bin/python': double free or corruption (!prev): 0x00007faa0eac23c0 ***
Is there anything that can fix it? | 04-26-2020 09:13:55 | 04-26-2020 09:13:55 | |
transformers | 3,979 | closed | Add modelcard for Hate-speech-CNERG/dehatebert-mono-arabic model | 04-26-2020 03:11:09 | 04-26-2020 03:11:09 | ||
transformers | 3,978 | closed | Fix t5 doc typos | Read through the docs for T5 at `docs/source/model_doc/t5.rst` and found some typos. Hope this helps! | 04-26-2020 03:02:57 | 04-26-2020 03:02:57 | Great thanks you @enzoampil |
transformers | 3,977 | closed | xlnet large | If you train with batch size 8 at max length 512 with xlnet-base-cased, training goes well, with values such as loss, accuracy, precision, recall, and F1-score changing.
However, if you train with batch size 2 at max length 512 with xlnet-large-cased, there is almost no change even across several runs: precision is always 1.00000 and the other values stay fixed.
Is it because the batch size is too small? Help.
[epoch1]
Tr Loss | Vld Acc | Vld Loss | Vld Prec | Vld Reca | Vld F1
0.39480 | 0.87500 | 0.39765 | 1.00000 | 0.87500 | 0.93333
[epoch2]
Β Tr Loss | Vld Acc | Vld Loss | Vld Prec | Vld Reca | Vld F1
0.39772 | 0.87500 | 0.38215 | 1.00000 | 0.87500 | 0.93333 | 04-26-2020 02:52:36 | 04-26-2020 02:52:36 | Maybe the task is learning rate and batch size sensitive. You can try to decrease the learning rate when changing the batch size from 8 to 2. <|||||>> Maybe the task is learning rate and batch size sensitive. You can try to decrease the learning rate when changing the batch size from 8 to 2.
Using xlnet-large-cased with batch size 2 and learning rate = 1e-5 is the same symptom. Is there another way?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,976 | closed | Fixed Style Inconsistency | This fixes a style inconsistency in BertForSequenceClassification's constructor where 'config' is referenced using both the parameter 'config' and 'self.config' on the same line.
This line is the only instance in modeling_bert.py where the config parameter is referenced using self.config in a constructor. | 04-26-2020 02:05:45 | 04-26-2020 02:05:45 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=h1) Report
> Merging [#3976](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3976 +/- ##
==========================================
- Coverage 78.44% 78.43% -0.02%
==========================================
Files 111 111
Lines 18518 18518
==========================================
- Hits 14527 14525 -2
- Misses 3991 3993 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.40% <100.00%> (ΓΈ)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3976/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=footer). Last update [4e817ff...6e5ef43](https://codecov.io/gh/huggingface/transformers/pull/3976?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks @jtaylor351 ! |
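For illustration, a toy version of the kind of constructor being described (not a quote of the actual file):
```python
import torch.nn as nn

class ToyClassifierHead(nn.Module):
    def __init__(self, config):
        super().__init__()
        self.config = config
        # Consistent style: reference the constructor parameter throughout,
        # rather than mixing `config` and `self.config` on the same line.
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
```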
transformers | 3,975 | closed | Added support for pathlib.Path objects instead of string paths in from_pretrained() (resolves #3962) | Resolves issue/feature request #3962
This only covers string paths in `from_pretrained` (so vocab_file paths for `__init__` for tokenizers aren't covered). This may or may not be a feature that is desired (since it's fairly easy for the client to convert a path-like to a string). With the changes, you would be able to use `from_pretrained` with path-likes:
```
from pathlib import Path
from transformers import AutoModel, AutoTokenizer
my_path = Path("path/to/model_dir/")
AutoTokenizer.from_pretrained(my_path)
AutoModel.from_pretrained(my_path)
``` | 04-25-2020 22:42:25 | 04-25-2020 22:42:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=h1) Report
> Merging [#3975](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `85.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3975 +/- ##
=======================================
Coverage 78.44% 78.45%
=======================================
Files 111 111
Lines 18518 18534 +16
=======================================
+ Hits 14527 14541 +14
- Misses 3991 3993 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `40.57% <50.00%> (-0.10%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.68% <75.00%> (-0.19%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.57% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `87.80% <100.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.10% <100.00%> (+0.34%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.95% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.50% <100.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.73% <100.00%> (+0.01%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=footer). Last update [4e817ff...033d5f8](https://codecov.io/gh/huggingface/transformers/pull/3975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>On second thought, I probably wouldn't merge this. I think it's better to just let the user deal with this on their end.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,974 | closed | Weights from pretrained model not used in GPT2LMHeadModel | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): gpt2
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. trained gpt2 using run_language_modeling
2. using run_generation.py
`backend_1 | 04/25/2020 18:37:27 - INFO - transformers.modeling_utils - Weights from pretrained model not used in GPT2LMHeadModel: ['transformer.h.0.attn.masked_bias', 'transformer.h.1.attn.masked_bias', 'transformer.h.2.attn.masked_bias', 'transformer.h.3.attn.masked_bias', 'transformer.h.4.attn.masked_bias', 'transformer.h.5.attn.masked_bias', 'transformer.h.6.attn.masked_bias', 'transformer.h.7.attn.masked_bias', 'transformer.h.8.attn.masked_bias', 'transformer.h.9.attn.masked_bias', 'transformer.h.10.attn.masked_bias', 'transformer.h.11.attn.masked_bias']`
| 04-25-2020 18:44:57 | 04-25-2020 18:44:57 | @patrickvonplaten - could you help elaborate on this?<|||||>Yeah, we need to clean this logging. The weights are correctly loaded into the model as far as I know (we have integration tests for pretrained GPT2 models so the weights have to be correctly loaded). Just need to clean up the logging logic there.<|||||>Actually this logging info is fine. Those masked bias are saved values using the `register_buffer()` method and don't need to be loaded. |
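For context on that last comment, a toy sketch of how such a value ends up in a checkpoint via `register_buffer` (an illustrative module, not the library's actual attention class):
```python
import torch
import torch.nn as nn

class ToyAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # Buffers are saved in the state_dict but are not learnable parameters;
        # they are recreated at construction time, so a checkpoint entry that is
        # not loaded back is harmless.
        self.register_buffer("masked_bias", torch.tensor(-1e4))

module = ToyAttention()
print(list(module.state_dict().keys()))  # ['masked_bias']
```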
transformers | 3,973 | closed | Pytorch 1.5.0 | PyTorch 1.5.0 doesn't allow specifying a standard deviation of 0 in normal distributions. We were testing that our models initialized with a normal distribution with a mean and a standard deviation of 0 had all their parameters initialized to 0.
This allows us to verify that all weights in the model are indeed initialized according to the specified weights initializations.
We now specify a tiny value (1e-10) and round to the 9th decimal so that all these values are set to 0.
closes #3947
closes #3872 | 04-25-2020 17:33:14 | 04-25-2020 17:33:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=h1) Report
> Merging [#3973](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3973 +/- ##
==========================================
- Coverage 78.44% 78.43% -0.02%
==========================================
Files 111 111
Lines 18518 18518
==========================================
- Hits 14527 14525 -2
- Misses 3991 3993 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3973/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=footer). Last update [4e817ff...4bee514](https://codecov.io/gh/huggingface/transformers/pull/3973?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>~~I've seen one error with 1.5 and the new trainer interface (using ner) - I'll raise an issue for that :)~~ |
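A sketch of the check described in the PR text above (an illustrative layer and rounding, not the repository's actual test code):
```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 4)
# PyTorch 1.5 rejects std=0, so draw from a tiny std and round the noise away.
nn.init.normal_(layer.weight, mean=0.0, std=1e-10)
rounded = torch.round(layer.weight * 1e9) / 1e9   # round to the 9th decimal
assert torch.all(rounded == 0.0)
```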
transformers | 3,972 | closed | Sized Fill-in-the-blank or Multi Mask filling with T5 | In the Google T5 paper they mentioned :
For example, with the input, "I love peanut butter and _4_ sandwiches," the outputs looked like:
I love peanut butter and jelly, which is what makes good sandwiches.
How do I achieve Multi Mask filling with T5 in hugging face transformers?
Code samples please :)

| 04-25-2020 17:30:56 | 04-25-2020 17:30:56 | Did you discover any way of doing this?<|||||>@edmar Nothing solid yet! I Will update if I do.<|||||>Also looking for a solution
https://github.com/google-research/text-to-text-transfer-transformer/issues/133<|||||>@franz101 @edmar
The closest thing I could come up with :
```
from transformers import RobertaTokenizer, RobertaForMaskedLM
import torch
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaForMaskedLM.from_pretrained('roberta-base')
sentence = "Tom has fully <mask> <mask> <mask> illness."
token_ids = tokenizer.encode(sentence, return_tensors='pt')
# print(token_ids)
token_ids_tk = tokenizer.tokenize(sentence, return_tensors='pt')
print(token_ids_tk)
masked_position = (token_ids.squeeze() == tokenizer.mask_token_id).nonzero()
masked_pos = [mask.item() for mask in masked_position ]
print (masked_pos)
with torch.no_grad():
output = model(token_ids)
last_hidden_state = output[0].squeeze()
print ("\n\n")
print ("sentence : ",sentence)
print ("\n")
list_of_list =[]
for mask_index in masked_pos:
mask_hidden_state = last_hidden_state[mask_index]
idx = torch.topk(mask_hidden_state, k=5, dim=0)[1]
words = [tokenizer.decode(i.item()).strip() for i in idx]
list_of_list.append(words)
print (words)
best_guess = ""
for j in list_of_list:
best_guess = best_guess+" "+j[0]
print ("\nBest guess for fill in the blank :::",best_guess)
```
The output is :
['Tom', 'Ġhas', 'Ġfully', '<mask>', '<mask>', '<mask>', 'Ġillness', '.']
[4, 5, 6]
sentence : Tom has fully <mask> <mask> <mask> illness.
['recovered', 'returned', 'recover', 'healed', 'cleared']
['from', 'his', 'with', 'to', 'the']
['his', 'the', 'her', 'mental', 'this']
Best guess for fill in the blank ::: recovered from his<|||||>@ramsrigouthamg @edmar @franz101 Any update on how to do that ?<|||||>@Diego999 The above-provided answer is the best I have.
Google hasn't released the pretrained multi-mask fill model.<|||||>@ramsrigouthamg Thanks! This is also similar to BART architecture where they mask a span of text. A similar thread is available here https://github.com/huggingface/transformers/issues/4984 <|||||>@Diego999 There are few things that you can possibly explore. There is Spanbert https://huggingface.co/SpanBERT/spanbert-base-cased but I didn't explore on how to use it.
Then there is Google pegasus that is trained with sentences as masks. https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html<|||||>This example might be useful:
https://github.com/huggingface/transformers/issues/3985<|||||>@ramsrigouthamg Any luck with it yet on T5?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I think the `__4__` is a special token, and they use the same text to text framework (seq2seq, instead of masked-lm) to train this task.
And that's why the screenshot above says:
> train it to fill in the blank with **approximately** 4 words<|||||>is there any updates or simple example code for doing this? thanks!<|||||>```
from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration
T5_PATH = 't5-base' # "t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b"
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu') # My envirnment uses CPU
t5_tokenizer = T5Tokenizer.from_pretrained(T5_PATH)
t5_config = T5Config.from_pretrained(T5_PATH)
t5_mlm = T5ForConditionalGeneration.from_pretrained(T5_PATH, config=t5_config).to(DEVICE)
# Input text
text = 'India is a <extra_id_0> of the world. </s>'
encoded = t5_tokenizer.encode_plus(text, add_special_tokens=True, return_tensors='pt')
input_ids = encoded['input_ids'].to(DEVICE)
# Generaing 20 sequences with maximum length set to 5
outputs = t5_mlm.generate(input_ids=input_ids,
num_beams=200, num_return_sequences=20,
max_length=5)
_0_index = text.index('<extra_id_0>')
_result_prefix = text[:_0_index]
_result_suffix = text[_0_index+12:] # 12 is the length of <extra_id_0>
def _filter(output, end_token='<extra_id_1>'):
# The first token is <unk> (inidex at 0) and the second token is <extra_id_0> (indexed at 32099)
_txt = t5_tokenizer.decode(output[2:], skip_special_tokens=False, clean_up_tokenization_spaces=False)
if end_token in _txt:
_end_token_index = _txt.index(end_token)
return _result_prefix + _txt[:_end_token_index] + _result_suffix
else:
return _result_prefix + _txt + _result_suffix
results = list(map(_filter, outputs))
results
```<|||||>does the above work? |
transformers | 3,971 | closed | Problem with downloading the XLNetSequenceClassification pretrained xlnet-large-cased | I've run this before and it was fine. However, this time, i keep encountering this runtime error. In fact, when i tried to initialize model = XLNetModel.from_pretrained('xlnet-large-cased') - it also gives me the 'negative dimension' error.
`model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels = 2)
model.to(device)`
`---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-37-d6f698a3714b> in <module>()
----> 1 model = XLNetForSequenceClassification.from_pretrained("xlnet-large-cased", num_labels = 2)
2 model.to(device)
3 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in __init__(self, num_embeddings, embedding_dim, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse, _weight)
95 self.scale_grad_by_freq = scale_grad_by_freq
96 if _weight is None:
---> 97 self.weight = Parameter(torch.Tensor(num_embeddings, embedding_dim))
98 self.reset_parameters()
99 else:
RuntimeError: Trying to create tensor with negative dimension -1: [-1, 1024]` | 04-25-2020 17:08:40 | 04-25-2020 17:08:40 | Hey mate, I've encountered the same issue. How did you solve it?<|||||>I encounter this issue. How can I solve this?<|||||>the problem is when l import xlnet from pytorch_transformers. instead, you should be importing it from the module called 'transformers' |
transformers | 3,970 | closed | Fix GLUE TPU script | Temporary fix until we refactor this script to work with the trainer. | 04-25-2020 17:01:10 | 04-25-2020 17:01:10 | Pinging @jysohn23 to see if it fixed the issue.<|||||>Closing in favour of integrating TPU support for the trainer. |
transformers | 3,969 | closed | override weights name | Optional override of the weights name, relevant for cases where torch.save() is patched like in https://github.com/allegroai/trains | 04-25-2020 16:35:55 | 04-25-2020 16:35:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=h1) Report
> Merging [#3969](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `82.60%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3969 +/- ##
==========================================
+ Coverage 78.44% 78.45% +0.01%
==========================================
Files 111 111
Lines 18518 18527 +9
==========================================
+ Hits 14527 14536 +9
Misses 3991 3991
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.81% <80.00%> (+0.04%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.11% <84.61%> (+0.17%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3969/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=footer). Last update [4e817ff...56024ae](https://codecov.io/gh/huggingface/transformers/pull/3969?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,968 | closed | Remove boto3 dependency | Downloading a model from `s3://` urls was not documented anywhere and I suspect it doesn't work with our `from_pretrained` methods anyways.
Removing boto3 also kills its transitive dependencies (some of which are slow to support new versions of Python), contributing to a leaner library:
```
boto3-1.12.46
botocore-1.15.46
docutils-0.15.2
jmespath-0.9.5
python-dateutil-2.8.1
s3transfer-0.3.3
six-1.14.0
urllib3-1.25.9
``` | 04-25-2020 14:50:59 | 04-25-2020 14:50:59 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=h1) Report
> Merging [#3968](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4e817ff41885063e08bb3bcd63e5adfd835b9911&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3968 +/- ##
==========================================
+ Coverage 78.44% 78.51% +0.07%
==========================================
Files 111 111
Lines 18518 18486 -32
==========================================
- Hits 14527 14515 -12
+ Misses 3991 3971 -20
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `72.61% <100.00%> (+3.74%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3968/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=footer). Last update [4e817ff...b440a19](https://codecov.io/gh/huggingface/transformers/pull/3968?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,967 | closed | Proposal: saner num_labels in configs. | See https://github.com/guillaume-be/rust-bert/pull/21 for more context | 04-25-2020 13:55:19 | 04-25-2020 13:55:19 | I like this better as well. Probably won't be backwards compatible with users that have their config with `_num_labels` though.<|||||>Like it better as well! Agree with @LysandreJik that some configs might need to be fixed manually<|||||>Well, `_num_labels` (if present) should always be consistent with the length of `id2label`, no?<|||||>You're right, it should!<|||||>All right, merging this. @patrickvonplaten if you want to re-run your cleaning script at some point feel free to do it :) (let me know before) |
transformers | 3,966 | closed | Add CALBERT (Catalan ALBERT) base-uncased model card | Hi there!
I just uploaded a new ALBERT model pretrained on 4.3 GB of Catalan text. Thank you for these fantastic libraries and platform to make this a joy! | 04-25-2020 09:19:10 | 04-25-2020 09:19:10 | Looks good! Model page for [CALBERT](https://huggingface.co/codegram/calbert-base-uncased)<|||||>One additional tweak: you should add a
```
---
language: catalan
---
```
metadata block at the top of the model page. Thanks! |
transformers | 3,965 | closed | Remove hard-coded pad token id in distilbert and albert | Since the config now has a `pad_token_id` attribute, `padding_idx` is set to the value of `config.pad_token_id` in BertEmbedding. (PR #3793)
It seems that not only the `Bert` config but also the `DistilBert` and `Albert` configs have `pad_token_id`. ([Distilbert config](https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-uncased-config.json), [Albert config](https://s3.amazonaws.com/models.huggingface.co/bert/albert-base-v1-config.json))
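To make the change concrete, here is an illustrative sketch of the two forms (a dummy config is used for illustration only; this is not the literal diff):
```python
import torch.nn as nn

class DummyConfig:  # stand-in for the model config; e.g. a Korean BERT tokenizer uses pad id 1
    vocab_size, dim, pad_token_id = 30522, 768, 1

config = DummyConfig()

# Hard-coded padding index (the behaviour this PR removes):
old_embeddings = nn.Embedding(config.vocab_size, config.dim, padding_idx=0)

# Config-driven padding index (matching what BertEmbeddings already does after #3793):
new_embeddings = nn.Embedding(config.vocab_size, config.dim, padding_idx=config.pad_token_id)
```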
But in the Embedding classes of DistilBert and Albert, `padding_idx` is still hard-coded, so I've fixed those parts. | 04-25-2020 07:03:25 | 04-25-2020 07:03:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=h1) Report
> Merging [#3965](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/73d6a2f9019960c327f19689c1d9a6c0fba31d86&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3965 +/- ##
==========================================
- Coverage 78.45% 78.44% -0.02%
==========================================
Files 111 111
Lines 18518 18518
==========================================
- Hits 14528 14526 -2
- Misses 3990 3992 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `75.31% <100.00%> (ΓΈ)` | |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `98.15% <100.00%> (ΓΈ)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.49% <0.00%> (-0.37%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3965/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.94% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=footer). Last update [73d6a2f...9d92a30](https://codecov.io/gh/huggingface/transformers/pull/3965?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM<|||||>@LysandreJik @VictorSanh
Hi:) Can you please check this PR? The hard-coded padding causes an issue for Korean BERT (which uses `pad_token_id=1` and `unk_token_id=0`).
I hope this PR will be included in the next version of the transformers library:)<|||||>lgtm!<|||||>@julien-c
Can you merge this PR? Thank you so much:) |
transformers | 3,964 | closed | Error on dtype in modeling_bertabs.py file | # 🐛 Bug
```
Traceback (most recent call last):
  File "run_summarizationn.py", line 324, in <module>
    main()
  File "run_summarizationn.py", line 309, in main
    evaluate(args)
  File "run_summarizationn.py", line 84, in evaluate
    batch_data = predictor.translate_batch(batch)
  File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 797, in translate_batch
    return self._fast_translate_batch(batch, self.max_length, min_length=self.min_length)
  File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 844, in _fast_translate_batch
    dec_out, dec_states = self.model.decoder(decoder_input, src_features, dec_states, step=step)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 231, in forward
    step=step,
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/kaggle/working/transformers/examples/summarization/bertabs/modeling_bertabs.py", line 328, in forward
    dec_mask = torch.gt(tgt_pad_mask + self.mask[:, : tgt_pad_mask.size(1), : tgt_pad_mask.size(1)], 0)
RuntimeError: expected device cpu and dtype Byte but got device cpu and dtype Bool
```
## Information
Model I am using (Bert, XLNet ...): bertabs
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts - Yes
* [ ] my own modified scripts
The task I am working on is:
* [ ] an official GLUE/SQuAD task
* [x] my own task or dataset - my own task
| 04-25-2020 06:31:34 | 04-25-2020 06:31:34 | |
transformers | 3,963 | closed | run_language_modeling, RuntimeError: expected scalar type Half but found Float | # 🐛 Bug
## Information
CUDA: 10.1
Python 3.6.8 (default, Oct 7 2019, 12:59:55)
Installed transformers from source.
Trying to train **gpt2-medium**
Command Line:
```
export TRAIN_FILE=dataset/wiki.train.raw
export TEST_FILE=dataset/wiki.test.raw
python3 examples/run_language_modeling.py --fp16 \
--per_gpu_eval_batch_size 1 \
--output_dir=output \
--model_type=gpt2-medium \
--model_name_or_path=gpt2-medium \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--overwrite_output_dir
```
```
Traceback (most recent call last):
File "examples/run_language_modeling.py", line 284, in <module>
main()
File "examples/run_language_modeling.py", line 254, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 314, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 388, in _training_step
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 385, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 616, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 500, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 237, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 196, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 149, in _attn
w = torch.where(mask, w, self.masked_bias)
RuntimeError: expected scalar type Half but found Float
```
| 04-25-2020 06:06:35 | 04-25-2020 06:06:35 | What's your PyTorch version? Can you post the output of `transformers-cli env`
This does not seem specific to `examples/run_language_modeling.py` so I'm gonna ping @patrickvonplaten on this<|||||>@julien-c thanks for the prompt response.
- `transformers` version: 2.8.0
- Platform: Linux-4.14.138+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.3.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no<|||||>Hi @ankit-chadha,
I will take a look next week :-). Maybe @sshleifer - do you know something off the top of your head? This issue is also related: https://github.com/huggingface/transformers/issues/3676.<|||||>I have tried to replicate in torch 1.4, and `mask` and `masked_bias` are float16, and there is no error.
Would it be possible to upgrade torch to 1.4 and see if the problem persists @ankit-chadha ?
<|||||>@sshleifer @patrickvonplaten
- transformers version: 2.8.0
- Platform: Linux-4.14.138+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.0.0 (False)
```
Traceback (most recent call last):
File "examples/run_language_modeling.py", line 284, in <module>
main()
File "examples/run_language_modeling.py", line 254, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 314, in train
tr_loss += self._training_step(model, inputs, optimizer)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 388, in _training_step
outputs = model(**inputs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 152, in forward
outputs = self.parallel_apply(replicas, inputs, kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/data_parallel.py", line 162, in parallel_apply
return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 85, in parallel_apply
output.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 394, in reraise
raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 616, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 500, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 237, in forward
use_cache=use_cache,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 196, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 149, in _attn
w = torch.where(mask, w, self.masked_bias)
RuntimeError: expected scalar type Half but found Float
```
<|||||>Could you provide a small subset of your data so that I can reproduce?<|||||>@sshleifer - I am using the wiki.train.raw default dataset. |
transformers | 3,962 | closed | `BertTokenizer.from_pretrained()` not working with native Python `pathlib` module | Consider this code that downloads models and tokenizers to disk and then uses `BertTokenizer.from_pretrained` to load the tokenizer from disk.
**ISSUE:** `BertTokenizer.from_pretrained()` does not seem to be compatible with Python's native [pathlib](https://docs.python.org/3/library/pathlib.html) module.
```
# -*- coding: utf-8 -*-
"""
Created on: 25-04-2020
Author: MacwanJ
ISSUE:
BertTokenizer.from_pretrained() is not compatible with pathlib constructs
"""
from pathlib import Path
import os
from transformers import BertModel, BertTokenizer
# enables proper path resolves when this script is run in terminal
# or in an interactive Python shell/notebook
try:
file_location = Path(__file__).parent.resolve()
except NameError:
file_location = Path.cwd().resolve()
PROJ_DIR = file_location.parent
#####################################################################
# DOWNLOAD MODELS & TOKENIZERS
#####################################################################
model_name = 'bert-base-uncased'
if not os.path.exists(PROJ_DIR/'models'/model_name):
print(model_name,
'folder does not exist. Creating folder now',
'and proceeding to download and save model')
os.makedirs(PROJ_DIR/'models'/model_name)
else:
print(model_name,
'folder already exists. Proceeding to download and save model')
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)
print('Download complete. Proceeding to save to disk')
model.save_pretrained(PROJ_DIR/'models'/model_name)
tokenizer.save_pretrained(PROJ_DIR/'models'/model_name)
print('Model saving complete')
#####################################################################
# LOAD MODEL & TOKENIZER FROM DISK
#####################################################################
model_name = 'bert-base-uncased'
# Load pre-trained model tokenizer (vocabulary)
# !! DOES NOT WORK UNLESS I CONVERT THE PATHLIB OBJECT TO STRING!!
tokenizer = BertTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)
# Load pre-trained model weights
model = BertModel.from_pretrained(PROJ_DIR/'models'/model_name,
output_hidden_states=True)
```
Running:
`tokenizer = BertTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)`
Yields this error:
```
Traceback (most recent call last):
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\IPython\core\interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-94-eb0783e1626e>", line 1, in <module>
tokenizer = BertTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\transformers\tokenization_utils.py", line 393, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\transformers\tokenization_utils.py", line 424, in _from_pretrained
if os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\site-packages\transformers\file_utils.py", line 146, in is_remote_url
parsed = urlparse(url_or_filename)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 367, in urlparse
url, scheme, _coerce_result = _coerce_args(url, scheme)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 123, in _coerce_args
return _decode_args(args) + (_encode_result,)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 107, in _decode_args
return tuple(x.decode(encoding, errors) if x else '' for x in args)
File "C:\Users\macwanj\AppData\Local\miniconda3\envs\bert\lib\urllib\parse.py", line 107, in <genexpr>
return tuple(x.decode(encoding, errors) if x else '' for x in args)
AttributeError: 'WindowsPath' object has no attribute 'decode'
```
All the other functions and methods appear to be compatible with pathlib except for `BertTokenizer.from_pretrained()`
It would be good to ensure consistency with pathlib constructs across all functions because pathlib makes it easier to specify paths that work out of the box across operating systems.
- `transformers` version: 2.8.0
- Platform: Windows 10
- Python version: 3.7.0
- PyTorch version (GPU?): Non-GPU 1.3.1
- Tensorflow version (GPU?): 2.1.0 (non-GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-25-2020 06:05:02 | 04-25-2020 06:05:02 | Looking into the `BertTokenizer` code, it inherits the `from_pretrained` function so the real issue is coming from `PreTrainedTokenizer`. To confirm this, I tried loading the pretrained `roberta-base` tokenizer and reproduced the same error:
```
model_name = 'roberta-base'
tokenizer = AutoTokenizer.from_pretrained(PROJ_DIR/'models'/model_name)
model = AutoModel.from_pretrained(PROJ_DIR/'models'/model_name)
Output:
AttributeError: 'PosixPath' object has no attribute 'decode'
```
I actually also get the same error from the `AutoModel.from_pretrained()` line but oddly not when it's `BertModel.from_pretrained()`. Looking into the documentation for `from_pretrained` in `PreTrainedTokenizer` this [line](https://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/src/transformers/tokenization_utils.py#L825) doesn't suggest that a Path object is a supported input anyways. You could just stringify the Path before passing it in:
```
tokenizer = AutoTokenizer.from_pretrained(str(PROJ_DIR/'models'/model_name))
model = AutoModel.from_pretrained(str(PROJ_DIR/'models'/model_name))
```
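If the coercion were ever handled on the library side instead, a minimal sketch of the idea could look like this (the helper name is hypothetical, not an actual transformers function):
```python
import os
from pathlib import Path

def coerce_path(path_or_name):
    """Turn a pathlib.Path (or any os.PathLike) into str; leave model names/URLs untouched."""
    return os.fspath(path_or_name) if isinstance(path_or_name, os.PathLike) else path_or_name

# Both of these end up as plain strings before reaching from_pretrained():
print(coerce_path(Path.home() / "models" / "bert-base-uncased"))
print(coerce_path("bert-base-uncased"))
```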
<|||||>Yes, stringify works. I upgraded from transformers 2.1 (from conda-forge) to the latest yesterday and the older version of transformers seems to work with `pathlib`. This isn't a dealbreaker for sure, but many other mature Python libraries, such as pandas, scikit-learn etc. have consistent compatibility with pathlib so it would be a *nice-to-have* to see this consistency with transformers too across all functions and classes.<|||||>From `pandas.io.common` they use a function [`stringify_path`](https://github.com/pandas-dev/pandas/blob/77a0f19c53279f7b2bf86c9e33daae2030b16e51/pandas/io/common.py#L96-L126) to convert any possible path-like objects to a string. This really comes down to whether the conversion from`pathlib.Path` to `str` should be handled by transformers or on the client side @julien-c .<|||||> I think issue is resolved now. It should be closed now<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,961 | closed | There are some warnings when I used AdamW and pytorch1.5. | # 🐛 Bug
## Information
```
pytorch/torch/csrc/utils/python_arg_parser.cpp:756: UserWarning: This overload of add_ is deprecated:
    add_(Number alpha, Tensor other)
Consider using one of the following signatures instead:
    add_(Tensor other, *, Number alpha)
```
 | 04-25-2020 05:20:37 | 04-25-2020 05:20:37 | same here<|||||>This is not a bug; it's just that pytorch 1.5 displays warnings that it didn't display before.<|||||>It may be this file https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L96
You can modify the code on those `add`, `addcdiv`, `addcmul` and use `alpha` or `value` keyword argument to eliminate these warnings. (or maybe submit a PR if that works!)
See pytorch doc: https://pytorch.org/docs/stable/torch.html#torch.add
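To make that concrete, a minimal sketch of the keyword-argument form the warning asks for (variable names here are illustrative, not the literal lines from `optimization.py`):
```python
import torch

beta1, beta2, eps, step_size = 0.9, 0.999, 1e-8, 1e-3
grad = torch.randn(4)
exp_avg, exp_avg_sq, p = torch.zeros(4), torch.zeros(4), torch.randn(4)

# Deprecated positional form (what triggers the UserWarning on torch >= 1.5):
#   exp_avg.mul_(beta1).add_(1.0 - beta1, grad)
# Keyword-argument form suggested by the warning:
exp_avg.mul_(beta1).add_(grad, alpha=1.0 - beta1)
exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1.0 - beta2)
denom = exp_avg_sq.sqrt().add_(eps)
p.addcdiv_(exp_avg, denom, value=-step_size)
```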
pytorch/pytorch#32861<|||||>Hi is anyone submitting any PR for this issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,960 | closed | Question regarding glue examples | hi,
I have used run_glue.py for the QQP dataset and then tested the saved model on a small test set. I get a really high F1-score on the dev set but a lower F1-score on the test set. My question is: is the dev set in the glue examples used to set parameters, or is it a held-out set?
| 04-25-2020 02:58:42 | 04-25-2020 02:58:42 | I don't understand the question. The labels for the test sets for GLUE are privately-held, if that's what you're asking.
(Closing this as it's not really specific to this repo) |
transformers | 3,959 | closed | AttributeError: 'LambdaLR' object has no attribute 'get_last_lr' | I am trying to run the summarization example using BART and getting the following error.
```
tqdm_dict = {"loss": "{:.3f}".format(avg_loss), "lr": self.lr_scheduler.get_last_lr()[-1]}
AttributeError: 'LambdaLR' object has no attribute 'get_last_lr'
```
The error occurred at this [line](https://github.com/huggingface/transformers/blob/master/examples/transformer_base.py#L101). I am using PyTorch 1.3.
 | 04-25-2020 02:19:04 | 04-25-2020 02:19:04 | I encountered the same problem, have you solved it?<|||||>Upgrading to PyTorch 1.4 would solve that issue :)
Current `master` branch recommends that:
https://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/setup.py#L70<|||||>Thank you so much, I have solved it.<|||||>`get_last_lr` is introduced in pytorch 1.4.0 . Maybe you need to upgrade your pytorch.<|||||>If you really are stuck with PyTorch <= 1.3 please feel free to open a PR to fix this with backward compatibility<|||||>I upgraded pytorch to v1.5.0, but run_glue.py with Bert model failed. So, you may have to set it to 1.4.0 |
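For anyone stuck on an older PyTorch, a backward-compatible helper along those lines could look like this (a sketch, not code from the repo):
```python
def current_lr(scheduler):
    if hasattr(scheduler, "get_last_lr"):  # torch >= 1.4
        return scheduler.get_last_lr()[-1]
    return scheduler.get_lr()[-1]          # fallback for torch 1.3

# e.g. in the progress-bar dict: {"lr": current_lr(self.lr_scheduler)}
```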
transformers | 3,958 | closed | model.multiple_choice_head( ) function for Hugging Face GPT2 models | Hello,
I just have a quick question about the ```multiple_choice_head( )``` function for the Hugging Face GPT2 models.
when I execute the codes below, everything runs smoothly:
```python
# get pre-trained GPT2Model and GPT2DoubleHeadsModel
model_gpt2 = GPT2Model.from_pretrained('gpt2', output_hidden_states = True)
model_gpt2DoubleHeadsModel = GPT2DoubleHeadsModel.from_pretrained('gpt2')
# some parts of the codes are missing....
# trying to make a use of the gpt2DoubleHeadsModel.multiple_choice_head() function
hidden_states = model_gpt2(input_ids=input_ids, token_type_ids = token_type_ids)[2][1][:,:,:]
mc_logits = model_gpt2DoubleHeadsModel.multiple_choice_head(hidden_states).detach()
loss_fct = CrossEntropyLoss()
mc_loss = loss_fct(mc_logits.view(1,4), 1)
```
Here, I noticed that I do not need to specify the ```mc_token_ids``` when executing the line ```model_gpt2DoubleHeadsModel.multiple_choice_head(hidden_states)```.
My question is, does the ```multiple_choice_head()``` function from the line ```model_gpt2DoubleHeadsModel.multiple_choice_head(hidden_states)``` automatically take the
**last token** of my input text sequence as the ```cls_token``` (classification token) that is used by the ```mc_head``` to predict answers for the multiple-choice questions?
Thank you, | 04-25-2020 02:04:01 | 04-25-2020 02:04:01 | Method's description says this
_mc_token_ids (:obj:`torch.LongTensor` of shape :obj:`(batch_size, num_choices)`, `optional`, default to index of the last token of the input)
Index of the classification token in each input sequence.
Selected in the range ``[0, input_ids.size(-1) - 1[``._
So, the default is the last token of the input. |
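For completeness, a minimal sketch of passing `mc_token_ids` explicitly instead of relying on that default (illustrative only; padding with token id 0 is just for the example):
```python
import torch
from transformers import GPT2Tokenizer, GPT2DoubleHeadsModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

choices = ["Paris is the capital of France", "Paris is the capital of Germany"]
encoded = [tokenizer.encode(c) for c in choices]
max_len = max(len(e) for e in encoded)
input_ids = torch.tensor([[e + [0] * (max_len - len(e)) for e in encoded]])  # (1, num_choices, seq_len)

# Point the multiple-choice head at the last *real* token of each choice
mc_token_ids = torch.tensor([[len(e) - 1 for e in encoded]])  # (1, num_choices)

lm_logits, mc_logits = model(input_ids, mc_token_ids=mc_token_ids)[:2]
print(mc_logits.shape)  # torch.Size([1, 2])
```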
transformers | 3,957 | closed | Allow the creation of "entity groups" for NerPipeline #3548 | ### This pull request applies the entity group transformation by setting the parameter: group=True.
This was done by reflecting the transformation inside NerPipeline. This is similar to a previously closed [PR](https://github.com/huggingface/transformers/pull/3607), which I closed because I accidentally deleted my fork (apologies for my clumsiness).
Details of what I want to be able to do can be found in issue #3548.
cc @julien-c @mfuntowicz @petulla
Sample code:
```
# Install branch
# Make sure to restart runtime after installing if using Google Colab
!pip install -e git+git://github.com/enzoampil/transformers.git@add_index_to_ner_pipeline#egg=transformers
# Grouped NER
from transformers import pipeline
nlp = pipeline('ner', grouped_entities=True)
nlp("Enzo works at the Australian National University (AUN)")
# [{'entity_group': 'I-PER', 'score': 0.9968132972717285, 'word': 'Enzo'},
# {'entity_group': 'I-ORG', 'score': 0.9970400333404541, 'word': 'Australian National University'},
# {'entity_group': 'I-ORG', 'score': 0.9831967651844025, 'word': 'AUN'}]
# Ungrouped NER
nlp = pipeline('ner', grouped_entities=False)
nlp("Enzo works at the Australian National University (AUN)")
# [{'entity': 'I-PER', 'index': 1, 'score': 0.9983270168304443, 'word': 'En'},
# {'entity': 'I-PER', 'index': 2, 'score': 0.9952995777130127, 'word': '##zo'},
# {'entity': 'I-ORG', 'index': 6, 'score': 0.9984350204467773, 'word': 'Australian'},
# {'entity': 'I-ORG', 'index': 7, 'score': 0.9967807531356812, 'word': 'National'},
# {'entity': 'I-ORG', 'index': 8, 'score': 0.9959043264389038, 'word': 'University'},
# {'entity': 'I-ORG', 'index': 10, 'score': 0.9900023937225342, 'word': 'AU'},
# {'entity': 'I-ORG', 'index': 11, 'score': 0.9763911366462708, 'word': '##N'}]
```
Tutorial on how to do Entity Grouping w/ NerPipeline [here](https://colab.research.google.com/drive/1CVLP0n3Q5t5qiWpode7jyhUNZpmLg0mS)
I'm very keen to get feedback for the above, so please let me know if I should change anything, or perform additional steps to bring its quality to an acceptable level. | 04-25-2020 01:13:07 | 04-25-2020 01:13:07 | Was hoping this could get added to the next release! Would be very useful for me.<|||||>_Note for @LysandreJik and myself_: We need to rebase the PR before merging, I've made some changes to the way batching is handled on pipelines, I just want to be sure we're not missing anything through the unittests.
Otherwise LGTM π <|||||>@mfuntowicz I've changed the `group` parameter to `grouped_entities` and have added corresponding unit tests for the "grouped" `NerPipeline` for both pytorch and tensorflow: `test_ner_grouped` and `test_tf_ner_grouped`.
Please let me know if that parameter name is acceptable or if you want me to change it!
Also, I noticed the tests failing but for tests outside the scope of this PR.<|||||>If you can rebase the PR against master, we can check the unittests (some failures are related to some recent changes we made) and then merge into master π <|||||>@mfuntowicz Rebased and got the tests to pass :smile:<|||||>Perfect, thanks for your contribution @enzoampil :)<|||||>Very welcome @mfuntowicz ! Looking forward to contributing more π <|||||>Reminder that we also want to return char offsets here too <|||||>@julien-c good point, can work on this in a separate PR over the weekend :)<|||||>@enzoampil That was more of an internal reminder for e.g. @mfuntowicz as this will be mostly built-in when we turn the flip on fast tokenizers<|||||>thank you @enzoampil ! this PR helped a lot for a downstream NER task.
Would someone from hugging face (maybe @mfuntowicz) be able to update a README or document to reflect this change so that other folks can avoid some unnecessary work?<|||||>Yes, more doc would be awesome<|||||>For the cases of NER using BIO tagging. This algorithm of grouping entities will separate `B-Label` from `I-Label` which is not correct.<|||||>Could you provide an example where it fails so that we may adapt the algorithm? Thank you.<|||||>@LysandreJik
`token_classifier = pipeline("ner", model=model,aggregation_strategy='simple', tokenizer=tokenizer,grouped_entities=True)`
```
{'entity_group': 'B_name', 'score': 0.96226656, 'word': 'Pratik', 'start': 1141, 'end': 1149}
{'entity_group': 'I_name', 'score': 0.9272271, 'word': 'kr', 'start': 1150, 'end': 1157}
{'entity_group': 'L_name', 'score': 0.7290683, 'word': 'kumar', 'start': 1158, 'end': 1163}
```
Ideally, it should be grouped to just `name` ? How to achieve this? |
transformers | 3,956 | closed | RuntimeError: Error(s) in loading state_dict for BertForTokenClassification | Since about 1600 BST, 24 Apr 2020, loading ner pipeline gives RuntimeError:
```
from transformers import pipeline
nlp = pipeline("ner", ignore_labels=[])
```
```
Traceback (most recent call last):
  File "test.py", line 2, in <module>
    nlp = pipeline("ner", ignore_labels=[])
  File "/Users/xxx/Github/xxx/xxx/.venv/lib/python3.6/site-packages/transformers/pipelines.py", line 1091, in pipeline
    model = model_class.from_pretrained(model, config=config, **model_kwargs)
  File "/Users/xxx/Github/xxx/xxx/.venv/lib/python3.6/site-packages/transformers/modeling_auto.py", line 1086, in from_pretrained
    return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
  File "/Users/xxx/Github/xxx/xxx/.venv/lib/python3.6/site-packages/transformers/modeling_utils.py", line 558, in from_pretrained
    model.__class__.__name__, "\n\t".join(error_msgs)
RuntimeError: Error(s) in loading state_dict for BertForTokenClassification:
    size mismatch for classifier.weight: copying a param with shape torch.Size([9, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]).
    size mismatch for classifier.bias: copying a param with shape torch.Size([9]) from checkpoint, the shape in current model is torch.Size([2]).
```
- `transformers` version: 2.5.1
- Platform: macos 10.14.6
- Python version: 3.6.5
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 04-24-2020 22:10:10 | 04-24-2020 22:10:10 | An update: after upgrading to transformers 2.8.0, the reported issue is gone.
Nevertheless, could you share an explanation please? <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,955 | closed | Fix #3954 - GPT2 is not traceable | This PR replaces the `math.sqrt(...)` computation with `(...)**0.5` which allows the model to be correctly traced by `torch.jit.trace`. | 04-24-2020 21:14:14 | 04-24-2020 21:14:14 | Need to remove the `math` import to pass the Flake test.<|||||>Just deleted the math import to make the code quality check happy - Thanks a lot for the PR @jazzcook15 and for linking the issue @minimaxir |
transformers | 3,954 | closed | GPT2 is not fully torch.jit.trace-able | # π Bug
## Information
I'm trying to use PyTorch's tracing on a pre-trained GPT2 model and running into the following warning emitted from torch.jit.trace:
```
/opt/miniconda3/envs/py3/lib/python3.6/site-packages/transformers/modeling_gpt2.py:144: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
w = w / math.sqrt(v.size(-1))
```
Additionally, if I inspect the graph generated from the trace, I can see that the denominator in that division expression is a constant value, not one determined by the size of the tensor.
## To reproduce
The following snippet can be used to repro the problem (based on the example in the Quickstart guide)
```python
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').eval()
example_input = torch.tensor([ tokenizer.encode("The Manhattan bridge")])
traced_model = torch.jit.trace(model, example_input)
```
## Environment info
- `transformers` version: 2.8.0
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (False)
| 04-24-2020 21:02:22 | 04-24-2020 21:02:22 | see #3955 for a fix<|||||>Hi,
I have the same issue even after updating to transformers 4.1.0. The results from the traced model are poor when compared to the original model. Following are the warnings returned when I try to create a traced version of a fine-tuned GPT2 model(using torch.jit.trace()).
```
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py:168: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  w = w / (float(v.size(-1)) ** 0.5)
/usr/local/lib/python3.6/dist-packages/transformers/models/gpt2/modeling_gpt2.py:173: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  mask = self.bias[:, :, ns - nd : ns, :ns]
/usr/local/lib/python3.6/dist-packages/torch/jit/_trace.py:966: TracerWarning: Output nr 1. of the traced function does not match the corresponding output of the Python function. Detailed error:
With rtol=1e-05 and atol=1e-05, found 254695 element(s) (out of 27897075) whose difference(s) exceeded the margin of error (including 0 nan comparisons). The greatest difference was 5.1975250244140625e-05 (2.250950336456299 vs. 2.251002311706543), which occurred at index (0, 69, 37541).
  _module_class,
```
**Environment Info**
Python: 3.6.9
PyTorch: 1.7
transformers: 4.1.0
Any suggestions would be of great help.
Thanks.<|||||>It looks like subsequent changes to `modeling_gpt2.py` have added an explicit `float` cast in that calculation, and I think that's now what is causing the trace to be incorrect.<|||||>Should we have "slow" tests on this to avoid a regression @LysandreJik @sgugger ?<|||||>> It looks like subsequent changes to `modeling_gpt2.py` have added an explicit `float` cast in that calculation, and I think that's now what is causing the trace to be incorrect.
Thanks for your response. Is there any workaround for this? @jazzcook15 |
transformers | 3,953 | closed | Fix BERT example code for NSP and Multiple Choice | After #3790 was merged I noticed that the examples for BERT NSP and Multiple Choice were wrong, too.
`token_type_id` wasn't being correctly set but that's necessary for the multi sequence tasks (NSP, Multiple Choice, Question Answering). So I changed it to use `encode_plus` / `batch_encode_plus` which takes care of that.
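For anyone reading along, a quick illustration of why `encode_plus` helps here (this snippet is illustrative, not taken from the diff):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer.encode_plus("How old are you?", "I'm six years old.", return_tensors="pt")
print(enc["token_type_ids"])  # 0s over the first sequence, 1s over the second
```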
Am I correct in assuming that for Next Sentence Prediction the linear classifier isn't initialized with pretrained weights and has to be trained first? I didn't find any mention of it in the code. | 04-24-2020 19:09:46 | 04-24-2020 19:09:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=h1) Report
> Merging [#3953](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b290c32e1617fa74f44ccd8b83365fe764437be9&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3953 +/- ##
=======================================
Coverage 78.39% 78.40%
=======================================
Files 120 120
Lines 19932 19932
=======================================
+ Hits 15626 15627 +1
+ Misses 4306 4305 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.82% <ΓΈ> (ΓΈ)` | |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <ΓΈ> (ΓΈ)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <0.00%> (+0.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=footer). Last update [b290c32...47a8b4b](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=h1) Report
> Merging [#3953](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0f06210646a440509efa718b30d18322d6a830&el=desc) will **increase** coverage by `0.23%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3953 +/- ##
==========================================
+ Coverage 78.16% 78.40% +0.23%
==========================================
Files 120 120
Lines 20058 19932 -126
==========================================
- Hits 15679 15627 -52
+ Misses 4379 4305 -74
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.82% <ΓΈ> (ΓΈ)` | |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.54% <ΓΈ> (ΓΈ)` | |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `62.50% <0.00%> (-26.39%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `80.95% <0.00%> (-6.38%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `68.62% <0.00%> (-2.93%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `77.60% <0.00%> (-0.81%)` | :arrow_down: |
| [src/transformers/tokenization\_camembert.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY2FtZW1iZXJ0LnB5) | `36.25% <0.00%> (-0.79%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.22% <0.00%> (-0.57%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `79.24% <0.00%> (-0.39%)` | :arrow_down: |
| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/3953/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=footer). Last update [3e0f062...fa93925](https://codecov.io/gh/huggingface/transformers/pull/3953?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,952 | closed | BertForSequenceClassification is not optimum | In the documentation about TFBertModel it says:
```
hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):
tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
```
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
But then in the code of BertForSequenceClassification:
```
outputs = self.bert(inputs, **kwargs)
pooled_output = outputs[1]
pooled_output = self.dropout(pooled_output, training=kwargs.get("training", False))
logits = self.classifier(pooled_output)
outputs = (logits,) + outputs[2:] # add hidden states and attention if they are here
return outputs # logits, (hidden_states), (attentions)
```
Wouldn't it give better results to use `pooled_output = outputs[0][:,0]`? | 04-24-2020 18:44:42 | 04-24-2020 18:44:42 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,951 | closed | run_xnli doesn't execute | # 🐛 Bug
## Information
(a related issue is this: #3950 )
I am trying to run the `run_xnli.py` script from the [Examples documentation](https://huggingface.co/transformers/examples.html#xnli), but I am getting an error. I am trying to run the script without training.
## To reproduce
Steps to reproduce the behavior:
I follow the [documentation](https://huggingface.co/transformers/examples.html#xnli)
My args are the following:
```
export XNLI_DIR=/path/to/xnli
python run_xnli.py \
--model_type bert \
--model_name_or_path bert-base-multilingual-uncased \
--language de \
--train_language en \
--do_eval \
--data_dir $XNLI_DIR \
--per_gpu_train_batch_size 32 \
--learning_rate 5e-5 \
--num_train_epochs 1.0 \
--max_seq_length 128 \
--output_dir bert-base-multilingual-uncased \
--save_steps -1
```
I get the following error:
```
Traceback (most recent call last):
File "run_xnli.py", line 646, in <module>
main()
File "run_xnli.py", line 638, in main
result = evaluate(args, model, tokenizer, prefix=prefix)
File "run_xnli.py", line 290, in evaluate
outputs = model(**inputs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 1139, in forward
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 932, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2317, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/torch/nn/functional.py", line 2115, in nll_loss
ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
IndexError: Target 2 is out of bounds.
```
## Expected behavior
I am not sure, I'm afraid. I am expecting some sort of accuracy measure as shown in the Example.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-74-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 04-24-2020 17:53:18 | 04-24-2020 17:53:18 | |
transformers | 3,950 | closed | In run_xnli.py, output_dir seems to be used in place of tokenizer_name | # 🐛 Bug
## Information
I am trying to run the `run_xnli` example as found in the documentation. Unfortunately, I get a strange error were the script thinks the `output_dir` argument contains a model name.
It seems that `output_dir` has been used in place of `tokenizer_name` in some instances, such as this: `tokenizer = tokenizer_class.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)`
## To reproduce
Steps to reproduce the behavior:
1. Follow the example as found here: https://huggingface.co/transformers/examples.html#xnli
I get the following error:
```
Traceback (most recent call last):
File "run_xnli.py", line 646, in <module>
main()
File "run_xnli.py", line 624, in main
tokenizer = tokenizer_class.from_pretrained(args.output_dir, do_lower_case=args.do_lower_case)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 868, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 971, in _from_pretrained
list(cls.vocab_files_names.values()),
OSError: Model name '/tmp/debug_xnli/' was not found in tokenizers model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). We assumed '/tmp/debug_xnli/' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
``` | 04-24-2020 17:27:54 | 04-24-2020 17:27:54 | Hello! Could you show us with which arguments to do you run the script?<|||||>Hi!
I am using the arguments as found in the Doc Example, except it is the uncased MultiBert, and the epochs are set to 1.0.
```
export XNLI_DIR=/path/to/XNLI
python run_xnli.py \
--model_type bert \
--model_name_or_path bert-base-multilingual-uncased \
--language de \
--train_language en \
--do_train \
--do_eval \
--data_dir $XNLI_DIR \
--per_gpu_train_batch_size 32 \
--learning_rate 5e-5 \
--num_train_epochs 1.0 \
--max_seq_length 128 \
--output_dir /tmp/debug_xnli/ \
--save_steps -1
```<|||||>To be honest, someone should rewrite this script according to #3800
In case you want to do it @antmarakis (we can help) π<|||||>Hi!
I am not sure if I could rewrite it right now, but I can make a start and others can then take it from there? I have already fixed the issue proposed here, I will get on refactoring the script next week if nobody else picks this up :).<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,949 | closed | After reading the tutorial I can use the BertModel to extract word embeddings, but how do I use it to extract the sentence embedding? | # ❓ Questions & Help
## Details
**A link to original question on Stack Overflow**: | 04-24-2020 17:25:32 | 04-24-2020 17:25:32 | You need to do some sort of pooling technique to use the output of the bertmodel for sentence embedding. I found this resource to be extremely helpful in discussing the different types of pooling techniques to use with a bert model: https://github.com/hanxiao/bert-as-service#q-what-are-the-available-pooling-strategies<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
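To make the pooling suggestion above concrete, here is a minimal mean-pooling sketch (my own illustration, not code from the linked resource):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

enc = tokenizer.batch_encode_plus(
    ["This is a sentence.", "And here is another one."],
    pad_to_max_length=True, return_tensors="pt",
)
with torch.no_grad():
    last_hidden = model(enc["input_ids"], attention_mask=enc["attention_mask"])[0]

# Mean pooling over real tokens only (mask out the padding)
mask = enc["attention_mask"].unsqueeze(-1).float()
sentence_embeddings = (last_hidden * mask).sum(dim=1) / mask.sum(dim=1)  # (batch, hidden_size)
print(sentence_embeddings.shape)
```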
|
transformers | 3,948 | closed | Add Type Hints to modeling_utils.py Closes #3911 | Add Type Hints to methods in `modeling_utils.py`
Note: The coverage isn't 100%. Mostly skipped internal methods (and some I wasn't sure of). | 04-24-2020 17:06:05 | 04-24-2020 17:06:05 | I think this is good for merge, no? @julien-c <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=h1) Report
> Merging [#3948](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7b75aa9fa55bee577e2c7403301ed31103125a35&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3948 +/- ##
==========================================
- Coverage 78.39% 78.38% -0.02%
==========================================
Files 120 120
Lines 19925 19925
==========================================
- Hits 15620 15618 -2
- Misses 4305 4307 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <100.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=footer). Last update [7b75aa9...99848e8](https://codecov.io/gh/huggingface/transformers/pull/3948?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks @bglearning! |
transformers | 3,947 | closed | Many tests fails with PyTorch 1.5.0 | Several (56) tests fail on the newly released PyTorch 1.5.0. This is because the normal distribution can no longer accept a standard deviation of 0.
These errors should not reflect real-life usage, and, therefore, should not impact user experience too much. | 04-24-2020 16:07:32 | 04-24-2020 16:07:32 | |
transformers | 3,946 | closed | ImportError: cannot import name 'DefaultDataCollator' | # 🐛 Bug
## Information
Using "from transformers import DefaultDataCollator" raises an ImportError on my system. Unsure if this is really a bug or if I'm doing something stupid. Any help would be appreciated.
Model I am using (Bert, XLNet ...): N/A
Language I am using the model on (English, Chinese ...): N/A
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. from transformers import DefaultDataCollator
## Expected behavior
For the import to go through cleanly.
## Environment info
- `transformers` version: 2.8.0
- Platform: Ubuntu 18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0, No
- Tensorflow version (GPU?): -
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: -
| 04-24-2020 14:35:33 | 04-24-2020 14:35:33 | Try reinstalling using
```pip install git+https://github.com/huggingface/transformers```<|||||>That didn't work, sorry<|||||>Try uninstalling before or starting again from a clean venv<|||||>Have the same issue, did force install; uninstall & install, same problem. If I even search the https://github.com/huggingface/transformers repo I dont see any DefaultDataCollator definitions.<|||||>It looks like this PR https://github.com/huggingface/transformers/pull/5015 removed DefaultDataCollator .... and this is backward compat issue
<|||||>[This HuggingFace Tutorial](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section3_tf.ipynb) has this error.<|||||>It looks like the way to fix this is to now use `from transformers.data.data_collator import tf_default_data_collator`. Is there any way to update this in the tutorial?<|||||>I got an ImportError using `from transformers.data.data_collator import tf_default_data_collator`.
To me, the problem seems to happen only when TPUs are turned on.
When I try to run on CPU or GPU, I can access and use `DefaultDataCollator` without any problem, but TPUs always run into this error.<|||||>You can open transformers/data/data_collator.py and see that 'tf_default_data_collator' does not exist; what does exist is the function 'default_data_collator',
so you can switch to this import instead:
```python
from transformers.data.data_collator import default_data_collator
```
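(For anyone wiring this up: a minimal sketch of where that function usually goes — `model` and `train_dataset` are placeholders, not objects from this thread.)
```python
from transformers import Trainer, TrainingArguments
from transformers.data.data_collator import default_data_collator

# `model` and `train_dataset` are assumed to already exist in your script.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out"),
    train_dataset=train_dataset,
    data_collator=default_data_collator,
)
```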
|
transformers | 3,945 | closed | BertConfig.to_json_file('config.json') saves "num_labels" as "_num_labels" on Google Colab | # π Bug
There is a problem with config.to_json_file() on Google Colab.
## Information
The "num_labels" key pair is saved as "_num_labels" key in the config.json file produced by BertConfig.to_json_file('config.json').
When this file is subsequently read with BertConfig.from_json_file('config.json') the "_num_labels" is loaded along with a "num_labels" key with default value 2.
This results in an improper configuration of a model TFBertForSequenceClassification.from_pretrained().
Model I am using (Bert, XLNet ...): TFBertForSequenceClassification
Language I am using the model on (English, Chinese ...): bert-base-german-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
1. modify the number of labels of the default bert-base-german-cased config and save to file
``` python
from transformers import BertConfig, TFBertForSequenceClassification
NUM_LABELS = 3
config = BertConfig.from_pretrained('bert-base-german-cased')
config.num_labels = NUM_LABELS
config.to_json_file('config.json')
```
This is what config.json looks like, notice the "_num_labels" key:
```json
{
"_num_labels": 3,
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": null,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "bert",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000
}
```
2. read the saved config
``` python
config = BertConfig.from_json_file('config.json')
```
this is what the loaded config.json looks like:
```json
{
"_num_labels": 3,
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": null,
"decoder_start_token_id": null,
"do_sample": false,
"early_stopping": false,
"eos_token_id": null,
"finetuning_task": null,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "bert",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beams": 1,
"num_hidden_layers": 12,
"num_labels": 2,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"pad_token_id": 0,
"prefix": null,
"pruned_heads": {},
"repetition_penalty": 1.0,
"task_specific_params": null,
"temperature": 1.0,
"top_k": 50,
"top_p": 1.0,
"torchscript": false,
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 30000
}
```
3. the loaded config improperly configures the number of labels of a model:
```python
model = TFBertForSequenceClassification.from_pretrained(
'./model/bert_de/weights.h5', config=config
)
```
```
>>> model = TFBertForSequenceClassification.from_pretrained(
... './model/bert_de/weights.h5', config=config
... )
2020-04-24 16:20:44.590400: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 274, in from_pretrained
model.load_weights(resolved_archive_file, by_name=True)
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 234, in load_weights
return super(Model, self).load_weights(filepath, by_name, skip_mismatch)
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1220, in load_weights
f, self.layers, skip_mismatch=skip_mismatch)
File "/home/gontcharovd/anaconda3/envs/schwarz/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 777, in load_weights_from_hdf5_group_by_name
str(weight_values[i].shape) + '.')
ValueError: Layer #2 (named "classifier"), weight <tf.Variable 'tf_bert_for_sequence_classification/classifier/kernel:0' shape=(768, 2) dtype=float32, numpy=
array([[ 0.00399719, 0.01253725],
[ 0.00453608, -0.00098394],
[ 0.01605183, -0.02316079],
...,
[-0.0174976 , 0.00032987],
[-0.01292989, 0.00867058],
[-0.02766422, -0.00422863]], dtype=float32)> has shape (768, 2), but the saved weight has shape (768, 3).
```
I load the model with saved weights after transfer learning starting from bert-base-german-cased.
The weights were saved by ModelCheckpoint:
```python
mc = ModelCheckpoint('./model/bert_de/weights.h5', monitor='val_loss', mode='min',
verbose=1, save_best_only=True, save_weights_only=True)
```
## Expected behavior
The correct number of labels = 3 should be read from the config.json file and not the default 2.
```python
model_de = TFBertForSequenceClassification.from_pretrained(
'./model/bert_de/weights.h5', config=config_de
)
```
A hack for this problem is to specify the num_labels again after reading config.json:
```python
config = BertConfig.from_json_file('config.json')
config.num_labels = NUM_LABELS
mode = TFBertForSequenceClassification.from_pretrained(
'./model/bert_de/weights.h5', config=config
)
```
## Environment info
This happens on Google Colab. On my local machine I don't get the key "_num_labels". | 04-24-2020 14:22:54 | 04-24-2020 14:22:54 | Hello, I believe there's a mismatch between the versions on your computer and on Google colab. When running on master, saving the file as you did:
```py
from transformers import BertConfig
NUM_LABELS = 3
config = BertConfig.from_pretrained('bert-base-german-cased')
config.num_labels = NUM_LABELS
config.to_json_file('config.json')
```
when reloading it in the same environment and printing the number of labels:
```py
from transformers import BertConfig
config = BertConfig.from_pretrained('config.json')
print(config.num_labels) # 3
config = BertConfig.from_json_file("config.json")
print(config.num_labels) # 3
```
This happens since https://github.com/huggingface/transformers/pull/3147, which is necessary in order for the correct `id2label` to be instantiated. Please let me know if updating the version on your local machine does not solve the problem.<|||||>Hello, the transformers version on my computer was 2.1.1 and on Google Colab 2.8.0. Upgrading to the newest version solved the issue for me. Thanks! |
transformers | 3,944 | closed | Trainer: distributed eval | Tagging this as a Good First issue if anyone's interested. | 04-24-2020 13:18:21 | 04-24-2020 13:18:21 | Hi Julian, would you please provide more information on this issue?<|||||>In our Trainer, in `torch.distributed` mode, eval/predict are currently running on a single node, even though there's no fundamental reason for this to be the case.
Supporting would probably require writing a new instance of torch's [DistributedSampler](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py) maybe named `SequentialDistributedSampler` and then re-aggregating the inference results on a single node to perform the metrics computation.<|||||>Hey @julien-c! I'm new here and would like to work on this. Doesn't seem like anyone else is currently working on this.<|||||>Go for it @abhavk!<|||||>Hey @julien-c ! I've given some thought to what needs to be done after looking at the documentation and I see two ways to implement this -
1. Use an argument is_distributed in eval and predict functions and set that to true while calling from training loop in distributed setting. This does not mandate changes to the training loop to preserve current functionality (but the loop will need to be changed for actual implementation of distributed evaluation in the training loop).
2. Do the is_distributed check within the eval and predict functions. This will mandate changes to the training loop in order to preserve current functionality.
I'm personally leaning towards option 2 because that allows no explicit checks for distributed setting in the training loop - which seems cleaner, but there are still questions around how the training loop should change.
In either case, for the rest of the implementation my plan is to:
1. (as you have suggested) implement a slightly different version of the DistributedSampler (which does not repeat any indices while sampling over the dataset) and use that in the eval/predict functions.
2. After that call the prediction loop to calculate the outputs and use a function like dist.gather on the rank 0 node to collect outputs. I feel like this aggregating may need to be called within the prediction loop in order to allow for computation of custom metrics (which probably needs to be done on a single node), but correct me if I'm wrong here.
This is based on my current understanding so would be happy to reconsider the approach or implementation details. Thanks!<|||||>@abhavk I've written distributed eval in my own code and AFAIK it is not required to modify the sampler. The point of a distributed sampler in DDP is that each process (sampler + its dataloader) only receives a portion of the data. It is not the case that all processes process the same data!
Also see [the documentation](https://pytorch.org/docs/stable/data.html#torch.utils.data.distributed.DistributedSampler):
> Sampler that restricts data loading to a subset of the dataset.
>
> It is especially useful in conjunction with torch.nn.parallel.DistributedDataParallel. In such case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it.
That should be fine. What should be done, though is that to average well, all results for a given epoch (or step) should be saved (in memory) by each individual process. then, at the end of evaluation, all results should be gathered to a single process which can calculate the metrics over all results. So basically: you do the forward pass distributed, keep all predictions, and to average them you gather them on one GPU to do averaging/other computations.
So I think that the second point is indeed correct, but the first one is not necessary AFAIK.
If you need help or want to discuss it, let me know and mention me in this topic!<|||||>@BramVanroy Thanks for that input. I have gone through the documentation and implementation for DistributedSampler, and from [the code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py) I can see that when the number of datapoints in the dataset is not divisible by the number of processes, it is adding extra samples by repeating the first few indices in order to make it evenly divisible. So for example a dataset with 5 datapoints and being distributed across 2 processes, will actually have 6 samples and that will be done by repeating the first sample (if the data is not shuffled, otherwise a different random sample) in both processes.
While this may be fine for training, I don't think this is desirable/okay during the evaluation/prediction phase, but do let me know if that understanding is not correct (Tagging @julien-c here too). That is primarily why I was suggesting writing a different implementation for the Distributed Sampler, where even though some nodes might have one fewer datum, there is no repetition. <|||||>Actually now that I think about it, even if it is not ideal to repeat eval samples, the difference may not be worth writing a different sampler implementation, since if someone need to use distributed processing, it may be safe to assume that they have a fairly large eval dataset where the number of samples is >> number of processes. <|||||>> Actually now that I think about it, even if it is not ideal to repeat eval samples, the difference may not be worth writing a different sampler implementation, since if someone need to use distributed processing, it may be safe to assume that they have a fairly large eval dataset where the number of samples is >> number of processes.
Exactly. In addition, the DataLoaders's "drop_last" might be of help here, though I am not entirely sure how it would integrate with a distributed sampler.<|||||>I don't really agree with this. For evaluation results to be comparable you pretty much need to have the inference results for exactly all the samples.
Also we use the same code for eval and for prediction (the loop is called `_prediction_loop()`), and in the case of prediction, e.g. when predicting on the GLUE hidden test dataset you need to output all the predictions in order.
You probably could do this using torch's DistributedSampler with some hacks (adding dummy samples to be divisible by number of nodes and dropping them afterwards) and knowledge of its internal implementation (how to reorder samples at the end) but it's probably way more explicit to just implement a dedicated one. (torch's DistributedSampler is very few lines of code anyways)
I might take a stab at this today and will ping you here.
**Update**: edited slightly to separate eval and prediction cases<|||||>Yes, to be able to compare studies, that is true. For production-oriented systems and large eval datasets that does not matter as much.
Idea, subclass the DistributedSampler and add a property `self.dummy_idxs: List[List[int]]`. The __iter__ method of DistributedSampler can stay the same with the only addition that we keep track of the indices that are larger than the length of self.dataset for each batch. self.dummy_idxs then contains a list of a list for each batch that contains the position in that batch where an item was a dummy item.
I don't have time to work on this, though, but happy to think along.<|||||>Sure @julien-c. I've taken a first stab and this is really quite an easy fix - `total_size` in the `__init__` function just changes to the length of the dataset and the below piece of code within the `iter` function should be removed (with minor changes to the `num_samples` variable) -
```python
    # add extra samples to make it evenly divisible
    indices += indices[:(self.total_size - len(indices))]
    assert len(indices) == self.total_size
```
If this solution looks fine, can you tell me the proper way to add this to the repository when making the update - as in which folder would this piece go in (is it /utils)? <|||||>I took a stab at it in the PR above but would love if you could review and/or test it.<|||||>Added my comments @julien-c <|||||>Closed by #4243 |
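For readers following along, a rough sketch of the gather-based evaluation described earlier in this thread, assuming a `torch.distributed` process group is already initialized; it is illustrative only and not the implementation that landed in #4243:
```python
import torch
import torch.distributed as dist

def distributed_concat(tensor: torch.Tensor, num_total_examples: int) -> torch.Tensor:
    # Each process holds predictions for its own shard; all_gather collects every shard.
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor)
    concatenated = torch.cat(output_tensors, dim=0)
    # Drop the dummy samples a DistributedSampler may have appended to even out the shards.
    return concatenated[:num_total_examples]
```
Metrics can then be computed once on the gathered predictions (for example on the rank-0 process) instead of per shard.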
transformers | 3,943 | closed | How to input hidden state vectors from GPT2Model directly into mc_head of the GPT2DoubleHeads Model? | Hello,
I want to perform the following task:
**1.** _Feed in a text sequence to a GPT2Model (the GPT-2 model without output head);_
**2.** _Extract the hidden state vectors of the text sequence that are generated at each layer of the GPT2Model;_
**3.** _Input the hidden state vectors from each layer of the GPT2Model directly into the mc_head of the GPT2DoubleHeadsModel, and calculate the **mc_loss** that results from inputting the hidden state vectors._
I know that the code below can be used to input hidden state vectors into each individual layer of a HuggingFace GPT-2 model, but I am not sure whether a similar thing can be done with the mc_head of the GPT2DoubleHeadsModel:
```python
model.transformer.h[i](embed)
```
My question is:
Is there any way that I can input the hidden state vectors from each layer of the GPT2Model directly into the multiple choice head of the GPT2DoubleHeadsModel, and extract the **mc_loss** that results from it (i.e. is there any way that I can perform task **3**)?
Thank you, | 04-24-2020 12:17:47 | 04-24-2020 12:17:47 | |
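For later readers: `GPT2DoubleHeadsModel` exposes the multiple-choice head as `model.multiple_choice_head` (a `SequenceSummary`), so one way to sketch step 3 is to call it directly on hidden states you computed elsewhere and build the loss by hand. The tensors `hidden_states`, `mc_token_ids` and `mc_labels` below are assumed to come from your own pipeline; this is a sketch, not an officially supported path:
```python
import torch
from transformers import GPT2DoubleHeadsModel

model = GPT2DoubleHeadsModel.from_pretrained("gpt2")

# hidden_states: (batch, num_choices, seq_len, n_embd) taken from a GPT2Model layer
# mc_token_ids:  (batch, num_choices) position of the classification token per choice
# mc_labels:     (batch,) index of the correct choice
mc_logits = model.multiple_choice_head(hidden_states, mc_token_ids).squeeze(-1)
mc_loss = torch.nn.CrossEntropyLoss()(mc_logits.view(-1, mc_logits.size(-1)), mc_labels.view(-1))
```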
transformers | 3,942 | closed | XLNetLMHeadModel: target mapping with num_predict > 1 and labels not working | Hi,
While trying to fine tune the XLNetLMHeadModel, I ran into the following error message:
```
  File "[...]/lib/python3.8/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "[...]/lib/python3.8/site-packages/transformers/modeling_xlnet.py", line 1062, in forward
    loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
while calling the forward function like this:
```python
outputs = xlnet(input,                           # shape = (8, 50)
                attention_mask=attention_mask,   # shape = (8, 50)
                perm_mask=perm_mask,             # shape = (8, 50, 50)
                target_mapping=target_mapping,   # shape = (8, 2, 50)
                token_type_ids=sequence_ids,     # shape = (8, 50)
                labels=target)                   # shape = (8, 2)
```
According to the [docs](https://huggingface.co/transformers/model_doc/xlnet.html#transformers.XLNetLMHeadModel) the labels shape is expected to be (batch_size, num_predict). However using this shape causes the above mentioned error.
The error does not appear if we set num_predict to 1 and set labels.shape = torch.Size([8]).
The error also does not appear if we omit the labels and manually compute the loss | 04-24-2020 09:58:25 | 04-24-2020 09:58:25 | Hi @hannodje,
sorry to answer so late. Could you provide a complete code sample that can reproduce the error? One in which you set define all the variables `input`, `attention_mask`, `perm_mask`, `target_mapping`, `sequence_ids` and `target` and which defines which XLNetLMHeadModel you use. Ideally, I can copy / paste the code sample in a script without having to add code of my own to reproduce the error :-) It's kinda hard to reproduce the error otherwise. Thanks! |
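For anyone hitting the same error, the manual loss computation mentioned above can be sketched roughly as follows, reusing the tensors from the snippet in the issue (`target` has shape `(8, 2)`); `.reshape()` tolerates the non-contiguous layout that `.view()` complains about:
```python
import torch.nn.functional as F

logits = xlnet(input,
               attention_mask=attention_mask,
               perm_mask=perm_mask,
               target_mapping=target_mapping,
               token_type_ids=sequence_ids)[0]  # (batch, num_predict, vocab_size)

loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), target.reshape(-1))
```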
transformers | 3,941 | closed | Decoding output sequences from TFGPT2Model | # β Questions & Help
I asked this on SO without any luck.
https://stackoverflow.com/questions/61222878/how-can-you-decode-output-sequences-from-tfgpt2model
## Details
I'm trying to get generated text from the TFGPT2Model in the Transformers library. I can see the output tensor, but I'm not able to decode it. Is the tokenizer not compatible with the TF model for decoding?
```
import tensorflow as tf
from transformers import (
TFGPT2Model,
GPT2Tokenizer,
GPT2Config,
)
model_name = "gpt2-medium"
config = GPT2Config.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = TFGPT2Model.from_pretrained(model_name, config=config)
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute",
add_special_tokens=True))[None, :] # Batch size 1
outputs = model(input_ids)
print(outputs[0])
result = tokenizer.decode(outputs[0])
print(result)
```
The output is:
```
$ python run_tf_gpt2.py
2020-04-16 23:43:11.753181: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
2020-04-16 23:43:11.777487: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer_plugin.so.6
2020-04-16 23:43:27.617982: W tensorflow/python/util/util.cc:319] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-04-16 23:43:27.693316: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcuda.so.1
2020-04-16 23:43:27.824075: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA n
ode, so returning NUMA node zero
...
...
2020-04-16 23:43:38.149860: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10565 MB memory) -> physical GPU (device: 1, name: Tesla K80, pci bus id: 0000:25:00.0, compute capability: 3.7)
2020-04-16 23:43:38.150217: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-04-16 23:43:38.150913: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 10565 MB memory) -> physical GPU (device: 2, name: Tesla K80, pci bus id: 0000:26:00.0, compute capability: 3.7)
2020-04-16 23:43:44.438587: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10
tf.Tensor(
[[[ 0.671073 0.60760975 -0.10744217 ... -0.51132596 -0.3369941
0.23458953]
[ 0.6403012 0.00396247 0.7443729 ... 0.2058892 -0.43869907
0.2180479 ]
[ 0.5131284 -0.35192695 0.12285632 ... -0.30060387 -1.0279727
0.13515341]
[ 0.3083361 -0.05588413 1.0543617 ... -0.11589152 -1.0487361
0.05204075]
[ 0.70787597 -0.40516227 0.4160383 ... 0.44217822 -0.34975922
0.02535546]
[-0.03940453 -0.1243843 0.40204537 ... 0.04586177 -0.48230025
0.5768887 ]]], shape=(1, 6, 1024), dtype=float32)
Traceback (most recent call last):
File "run_tf_gpt2.py", line 19, in <module>
result = tokenizer.decode(outputs[0])
File "/home/.../transformers/src/transformers/tokenization_utils.py", line 1605, in decode
filtered_tokens = self.convert_ids_to_tokens(token_ids, skip_special_tokens=skip_special_tokens)
File "/home/.../transformers/src/transformers/tokenization_utils.py", line 1575, in convert_ids_to_tokens
index = int(index)
File "/home/.../venv/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py", line 853, in __int__
return int(self._numpy())
TypeError: only size-1 arrays can be converted to Python scalars
``` | 04-24-2020 07:10:57 | 04-24-2020 07:10:57 | Hi @xeb, sorry to reply so late. I think you are using the wrong model. Instead of using `TFGPT2Model` you should use GPT2 with a LM Head on top `GPT2LMHeadModel`. The code should work then :-) |
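(For completeness, a small sketch of what that looks like with the TF variant of the LM-head model on a recent version of the library — the sampling parameters are arbitrary:)
```python
import tensorflow as tf
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = TFGPT2LMHeadModel.from_pretrained("gpt2-medium")

input_ids = tf.constant([tokenizer.encode("Hello, my dog is cute")])
output_ids = model.generate(input_ids, max_length=40, do_sample=True, top_k=50)
print(tokenizer.decode(output_ids[0].numpy().tolist(), skip_special_tokens=True))
```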
transformers | 3,940 | closed | fix resize_token_embeddings to accept padding_idx for xlm-roberta models | When resize_token_embedding there is no option for padding_idx
which leads to a wrong embedding for XLM-RoBERTa models,
whose embeddings need padding_idx to be 1, not 0.
I fixed issue by adding option for padding_idx and considered padding_idx as None (which is 0 as default) for compatibility for other types of transformers models. | 04-24-2020 06:06:28 | 04-24-2020 06:06:28 | |
transformers | 3,939 | closed | Continue training args and tqdm in notebooks | Added a little more description to the metadata of training args for new scripts.
`--overwrite_output_dir` can be used to continue training from checkpoint if `--output_dir` points to a directory already having checkpoints.
This can help reduce confusion when migrating from the older script.
In Colab, tqdm prints every step on a new line,
so I updated the import to
`from tqdm.auto import tqdm` | 04-24-2020 04:50:13 | 04-24-2020 04:50:13 | Updates committed<|||||>Thanks! |
transformers | 3,938 | closed | [Benchmark] Parameter settings of XLM-R on NER tasks | # π₯ Benchmarking `transformers`
## Benchmark
`XLM-RoBerta` on CoNLL 03 English NER dataset
## Set-up
`model_name_path`: xlm-roberta-base
`max_steps`: 3000 (roughly 6 epochs)
`warmup_steps`: 300
`save_steps`: 200
`learning_rate`: 5e-5
`per_gpu_train_batch_size`: 8
`n_gpu`: 3 (Titan X)
## Results
91.017 (averaged results of five runs)
## Issues
As reported in original [paper](https://arxiv.org/pdf/1911.02116.pdf), XLM-R can achieve 92.25 on this English NER dataset. Since they do not share any parameter settings for fine-tuning, I have to adopt the above ones and the F1 score is about 91.017. Has anybody ever obtained similar results to those in this paper with appropriate settings?
| 04-24-2020 03:31:47 | 04-24-2020 03:31:47 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,937 | closed | What should I do if I want to change the padding idx in pytorch bert(Huggingface)? |
I find that in the RobertaModel class the padding idx is hard-coded to 1 (is that true?), which is different from my tokenizer and data. So what should I do to change it, or can I just keep using the model structure as is?
```python
class RobertaEmbeddings(BertEmbeddings):
    """
    Same as BertEmbeddings with a tiny tweak for positional embeddings indexing.
    """

    def __init__(self, config):
        super().__init__(config)
        self.padding_idx = 1
        self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=self.padding_idx)
        self.position_embeddings = nn.Embedding(
            config.max_position_embeddings, config.hidden_size, padding_idx=self.padding_idx
        )

    def forward(self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None):
        if position_ids is None:
            if input_ids is not None:
                # Create the position ids from the input token ids. Any padded tokens remain padded.
                position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device)
            else:
                position_ids = self.create_position_ids_from_inputs_embeds(inputs_embeds)

        return super().forward(
            input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
        )
```
 | 04-24-2020 00:17:25 | 04-24-2020 00:17:25 | Hi @R-craft,
That's actually a bug. Thanks for spotting it :-) Will open a PR. |
transformers | 3,936 | closed | Pytorch 1.5 DataParallel | # π Bug
## Information
Can't run forward in PyTorch 1.5.0, works fine in 1.4.0
Model I am using (Bert, XLNet ...): XLNet
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
Transformer + custom head + custom losses + differential learning rates, I don't think it matters.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Custom news classification
## To reproduce
Steps to reproduce the behavior:
1. Install PyTorch 1.5.0
2. Run forward on xlnet
3.
```
File "transformers/modeling_xlnet.py", line 761, in forward
dtype_float = next(self.parameters()).dtype
StopIteration
```
## Expected behavior
Runs forward
## Environment info
- `transformers` version: 2.8.0
- Platform: Ubuntu 18.04
- Python version: Anaconda 3.7
- PyTorch version (GPU?): 1.5, Yes
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
| 04-24-2020 00:09:36 | 04-24-2020 00:09:36 | I'm experiencing the same problem running `transformers/examples/run_language_modeling.py` with RoBERTa. Works well with PyTorch 1.4.0 tho.<|||||>Also happens with RoBERTa, but only in distributed mode (only tested with DataParallel for now)<|||||>@Rizhiy, do you mind putting a code example? I can't reproduce on `master` by doing an inference through the model. Thanks.<|||||>@LysandreJik I will try to put one together but it's a bit weird. I only observed it when I use a transformer model via lightning in DataParallel mode so far<|||||>Ah, got it
```python
import transformers
import torch
m = transformers.AutoModel.from_pretrained("roberta-base")
m.to("cuda:0")
k = torch.nn.DataParallel(m, device_ids=[0,1])
k.forward(m.dummy_inputs['input_ids'])
```
gives
```text
StopIteration: Caught StopIteration in replica 0 on device 0.
Original Traceback (most recent call last):
File "<snip>/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "<snip>/lib/python3.8/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/<snip>/lib/python3.8/site-packages/transformers/modeling_bert.py", line 707, in forward
attention_mask, input_shape, self.device
File "<snip>/lib/python3.8/site-packages/transformers/modeling_utils.py", line 113, in device
return next(self.parameters()).device
StopIteration
```
Using torch 1.5.0 and something like yesterday's transformers master<|||||>The same for me and Bert
```
transformers - 2.3.0
pytorch - 1.5.0
```<|||||>See also https://github.com/PyTorchLightning/pytorch-lightning/issues/1649<|||||>Hello Everybody, do we have an update on this?. Today i managed to gather some more data to train a RoBERTa Model from scratch, i have been running experiementes in Pytorch 1.4, and i found this bug today that updated to Pytorch 1.5.<|||||>Same problem here, running BERT.
```
torch==1.5.0
transformers==2.8.0
```
I'm running on GPUs, using `export CUDA_VISIBLE_DEVICES=5,6,7` before running (I have 8 1080TIs on this server).
```run_language_modeling.py --output_dir=models --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=Vol45.sample --mlm --save_steps-2000 --line_by_line --per_gpu_train_batch_size=8```
Vol45.sample is a .txt with one doc per line
EDIT: It seems to work if I downgrade pytorch to 1.4<|||||>Same here.
This might have to do with the first issue listed under Known Issues in the [pytorch changelog](https://github.com/pytorch/pytorch/releases) of version 1.5, i.e. the recent change in `torch.nn.parallel.replicate`<|||||>> Same problem here, running BERT.
>
> ```
> torch==1.5.0
> transformers==2.8.0
> ```
>
> I'm running on GPUs, using `export CUDA_VISIBLE_DEVICES=5,6,7` before running (I have 8 1080TIs on this server).
>
> `run_language_modeling.py --output_dir=models --model_type=bert --model_name_or_path=bert-base-uncased --do_train --train_data_file=Vol45.sample --mlm --save_steps-2000 --line_by_line --per_gpu_train_batch_size=8`
>
> Vol45.sample is a .txt with one doc per line
>
> EDIT: It seems to work if I downgrade pytorch to 1.4
Thanks. It also works for me!
```
torch==1.4.0
transformers==2.8.0
```
<|||||>The same issue: #4189 <|||||>Just to scope this bug a little bit better, all of you are using `torch.nn.DataParallel` (not `DistributedDataParallel` or single-GPU), correct?<|||||>> Just to scope this bug a little bit better, all of you are using `torch.nn.DataParallel` (not `DistributedDataParallel` or single-GPU), correct?
Sure, please use the following code to reproduce the error:
> import torch, transformers
> model = transformers.AutoModel.from_pretrained("bert-base-multilingual-cased")
> model = torch.nn.DataParallel(model)
> model = model.cuda()
> input = torch.ones([16, 10], dtype=torch.long)
> input = input.cuda()
> model(input)<|||||>> Just to scope this bug a little bit better, all of you are using `torch.nn.DataParallel` (not `DistributedDataParallel` or single-GPU), correct?
I was using the run_language_modeling.py script, which AFAIK uses torch.nn.DataParallel.<|||||>This seems to be due to https://github.com/pytorch/pytorch/pull/33907
Still looking for the most correct fix on our side.<|||||>Worked for me when downgraded to
torch==1.4.0<|||||>Can you guys take a look at https://github.com/huggingface/transformers/issues/4657 and suggest what environment I should use. I've tried several with no luck.<|||||>Can you install the repo from source and try again? There have been some issues with PyTorch upstream that Julien addressed here: #4300. So you can try with the latest master branch.<|||||>Can confirm that installing from source (2.10) solves the issue.<|||||>Hello!
Just for the record, this seems to be solved with the latest release of transformers (3.0.1 and pytorch 1.5.1, cuda 10.1). At least the provided MWE does not fail.
Best,
<|||||>> Hello!
>
> Just for the record, this seems to be solved with the latest release of transformers (3.0.1 and pytorch 1.5.1, cuda 10.1). At least the provided MWE does not fail.
>
> Best,
As per the previous comments: this was probably already fixed in 2.10.<|||||>Seems like there's an issue with DataParallel since 1.5, still no fix tho<|||||>> Seems like there's an issue with DataParallel since 1.5, still no fix tho
Yes, I use torch==1.8.0 and encountered this just now.<|||||>> > Seems like there's an issue with DataParallel since 1.5, still no fix tho
>
> Yes, I use torch==1.8.0 and encountered this just now.
For now you can only downgrade to 1.4.0 to escape the bug (it's been almost a year and the bug is still unfixed). Some people edit the model code (avoiding `next(self.parameters())`) for the same purpose.<|||||>This issue should have been fixed in https://github.com/huggingface/transformers/pull/4300, at least available in v3.0.1.
Could you open a new issue with the specific error and link it here so that we may see what's going wrong? Thank you. |
transformers | 3,935 | closed | How to feed hidden state vectors from one transformer directly into a layer of a different transformer | Hello,
I want to perform the following task:
**1.** _Feed in a text sequence to a GPT2Model (the GPT-2 model without output head);_
**2.** _Extract the hidden state vectors of the text sequence that are generated at each layer of the GPT2Model;_
**3.** _Feed the hidden state vectors from each layer of the GPT2Model as an input to the GPT2DoubleHeadsModel (assuming that the n_embd of the GPT2Model and the GPT2DoubleHeadsModel are equal), and calculate the **mc_loss** that results from inputting the hidden state vectors. My GPT2DoubleHeadsModel would only consist of (1 layer) + (the output heads)._
I know that the code below can be used to feed in hidden state vectors as inputs to each individual layer of a HuggingFace GPT-2 model:
```python
model.transformer.h[i](embed)
```
But the code above will only give me the output of an individual layer, so it wouldn't give me the mc_loss that I am looking for (since to calculating the mc_loss from the hidden state vectors, I would first need to feed in the hidden state vectors into the 1st layer of the GPT2DoubleHeadsModel, and somehow feed in the output of the layer as an input to the mc_head (multiple choice output head) of the GPT2DoubleHeadsModel)
My questions are:
1. Is there any way to perform the task 3 with the Hugging Face GPT-2 models?
2. Is there any way that I can access the GPT2DoubleHeadsModel's mc_head (multiple choice head), similar to the code that I wrote above?
Thank you, | 04-23-2020 23:32:56 | 04-23-2020 23:32:56 | Did you ever find the answer? |
transformers | 3,934 | closed | [qol] example scripts: parse args from .args file or JSON | You can either:
- pass the path to a json file as the unique argument
- automatically load args from a .args file that's a sibling of the entrypoint script.
```json
{
"model_name_or_path": "bert-base-cased",
"task_name": "mnli",
"data_dir": "./data/glue/MNLI",
"output_dir": "./models/mnli"
}
```
```
--model_name_or_path distilroberta-base
--task_name mnli
```
I use the .args method myself, but I think both are ok.
Tagging @jplu and @stefan-it who have expressed interest in this. | 04-23-2020 23:10:29 | 04-23-2020 23:10:29 | Thanks for adding this! Could you also integrate it into the `run_ner` example π€<|||||>Yes @stefan-it I'll add it to all (Trainer-updated) scripts.
Do you have a preference between the JSON approach and the args one? Or should we do both?<|||||>Perfect!!! it works like a charm. I think both would be nice.<|||||>Both would be nice, thanks!<|||||>Very nice!<|||||>@julien-c Would it be helpful when I open a PR that adds this to the `run_ner` example, or do you already have this feature on a branch π€<|||||>I do not have a branch that does that currently so feel free to do it (+ also run_language_modeling and run_multiple_choice if you're so inclined :)<|||||>@julien-c I am a bit late to the party, but I went over it quickly nevertheless. As my comments show, if the only function of `trim_suffix` is to remove the extension or replace it, it might be more readable to simply use the built-in `Path.with_extension` functionality.
If you agree I can do a PR for this.
Apart from that: great stuff. especially with a growing number of parameters and options, a config file is really welcome!<|||||>@BramVanroy Yes, I didn't know this feature, PR is welcome. |
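For reference, the JSON path goes through `HfArgumentParser`, roughly like this in the example scripts (`ModelArguments` and `DataTrainingArguments` stand for the dataclasses each script defines itself):
```python
import os
import sys

from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments))

if len(sys.argv) == 2 and sys.argv[1].endswith(".json"):
    # A single .json argument: read every field from that file.
    model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
else:
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
```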
transformers | 3,933 | closed | Add ALBERT to the Tensorflow to Pytorch model conversion cli | This PR adds ALBERT to the `convert` command of `transformers-cli` to allow the conversion of pre-trained models from Tensorflow to Pytorch. The documentation is updated to show how to run the conversion. | 04-23-2020 22:15:08 | 04-23-2020 22:15:08 | LGTM! |
transformers | 3,932 | closed | [ci] Load pretrained models into the default (long-lived) cache | There's an inconsistency right now where:
- we load some models into CACHE_DIR
- and some models in the default cache
- and often, in both for the same models
When running the RUN_SLOW tests, this takes a lot of disk space, time, and bandwidth.
I'd rather always use the default cache
| 04-23-2020 21:29:18 | 04-23-2020 21:29:18 | Sorry I missed this. This is cool! |
transformers | 3,931 | closed | Loading a TF pretrained model into BertForSequenceClassification module | Hi, there might be something I am doing wrong, but I cannot figure out what that is, then any help would be welcome.
After downloading a TF checkpoint (containing model.index, model.data, model.meta, config.json and vocab.txt files), I used it to perform pretraining using some more text, more relevant to the task I would have ahead. Pretraining was performed using the API from BERT's official github. This generated other model.index, model.data and model.meta files. I am now trying to load them into the BertForSequenceClassification module, using the `from_pretrained` method. I figured I should include a config instance as well, so I used that config.json file that was attached to that very first one TF checkpoint I mentioned. But when passing the index file to `from_pretrained`, I get the error:
`AttributeError: 'BertForSequenceClassification' object has no attribute 'bias'`
Any help would be much appreciated.
| 04-23-2020 19:16:43 | 04-23-2020 19:16:43 | Hi! You can't directly load an official TensorFlow checkpoint in the PyTorch model, you first need to convert it. You can use this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_bert_original_tf_checkpoint_to_pytorch.py) to convert it to a PyTorch checkpoint.
If you want to use our TensorFlow interface in order to do this, you would still need to use this script to convert it to our interface, and then use the `TFBertForSequenceClassification` while specifying the `from_pt` option (as the result would be a PyTorch checkpoint):
```py
model = TFBertForSequenceClassification.from_pretrained("directory", from_pt=True)
```<|||||> I had understood that parameter `from_tf` would take care of this, did I
get it wrong? Your suggestion worked though, thank you!
<|||||>`from_tf` specified we're loading a TensorFlow checkpoint that is already in the HuggingFace format (TF2), which is usually different than the original implementation (TF1).
The original implementation's checkpoints should first be converted in order to be usable.<|||||>> `from_tf` specified we're loading a TensorFlow checkpoint that is already in the HuggingFace format (TF2), which is usually different than the original implementation (TF1).
>
> The original implementation's checkpoints should first be converted in order to be usable.
how can I convert ckpt(TF1) to h5(TF2)?
I only found convert_bert_original_tf_checkpoint_to_pytorch.py<|||||>You could use the script you mention to convert the model to PyTorch; then this PyTorch checkpoint can be seamlessly loaded in a TensorFlow implementation, see comment above: https://github.com/huggingface/transformers/issues/3931#issuecomment-618629373
After that you can just do
```py
model.save_pretrained("here")
```
and you should have a `tf_model.h5` under `here`. |
transformers | 3,930 | closed | How can i finetune an encode-decoder combination? | Hi, I want to finetune an encode-decoder model to train on a parallel dataset(something like translation) and I'm not sure what should I do. I read [this](https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8) blog post but It didn't help. It doesn't really matter for me which encode or decoder I choose, I just need the power of a pre-trained transformer. | 04-23-2020 18:01:39 | 04-23-2020 18:01:39 | Well I found the summarization example, I'm going to read it and I'll close this issue if it helps. Would still be happy if you can help.<|||||>@Palipoor Can you please share the summarization example. I am looking for one.<|||||>> @Palipoor Can you please share the summarization example. I am looking for one.
Here it is:
https://github.com/huggingface/transformers/tree/master/examples/summarization
Although it didn't help me and I'm trying to finetune a T5.
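For anyone else landing here, a rough sketch of warm-starting a seq2seq model from two pretrained checkpoints with the `EncoderDecoderModel` wrapper — keyword names have shifted a bit across library versions, so treat this as illustrative rather than exact:
```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")

# The decoder needs to know how to start and pad generated sequences.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

src = tokenizer("input text goes here", return_tensors="pt").input_ids
tgt = tokenizer("target text goes here", return_tensors="pt").input_ids

loss = model(input_ids=src, labels=tgt).loss  # backpropagate this to fine-tune
```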
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,929 | closed | Add support for LayerNorm before residual connection in transformer | Recent work has shown that placing the layer norm before the residual connection in a transformer leads to better gradient propagation and more stable training. For example, [Learning Deep Transformer Models for Machine Translation](https://arxiv.org/abs/1906.01787) shows that when stacking more layers, the Pre-LN transformer performs better than the Post-LN transformer. A more detailed discussion is found in [On Layer Normalization in the Transformer Architecture](https://openreview.net/pdf?id=B1x8anVFPr), which aids understanding despite being rejected from ICLR 2020 for lack of novelty.
_In my own experiments pretraining from scratch with TFAlbert, I have found pre-layer normalization to have greater training stability and improved performance than the default post-layer normalization._
<img width="331" alt="Screen Shot 2020-04-23 at 11 37 36 AM" src="https://user-images.githubusercontent.com/4564897/80131074-e8f0d380-8556-11ea-9459-0d2c255c4428.png"> <img width="280" alt="Screen Shot 2020-04-23 at 11 41 05 AM" src="https://user-images.githubusercontent.com/4564897/80131507-95cb5080-8557-11ea-8fcf-7757c15ab604.png">
**This is not a final PR, just opening the discussion. Some caveats:**
- If approved, I can extend this to all transformer models, not just ALBERT.
- Loading an existing pretrained model with `config.pre_layer_norm = True` leads to poor performance. A separate pretrained model would need to be made available with this configuration option. I would be happy to do so for ALBERT, but this would be outside my pay grade for all the models. Since the current focus of the repo is on finetuning rather than pretraining, I understand if this isn't a complexity tradeoff you want to make.
What do you think about adding this configuration option?
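For readers skimming the thread, the two orderings being compared, reduced to a minimal sketch (`sublayer` stands in for either the self-attention or the feed-forward block; this is illustrative, not the ALBERT implementation in this PR):
```python
import torch.nn as nn

class PostLNBlock(nn.Module):
    """Original Transformer ordering: sublayer -> residual add -> LayerNorm."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return self.norm(x + self.sublayer(x))

class PreLNBlock(nn.Module):
    """Pre-LN ordering: LayerNorm -> sublayer -> residual add."""
    def __init__(self, d_model, sublayer):
        super().__init__()
        self.sublayer = sublayer
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x):
        return x + self.sublayer(self.norm(x))
```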
| 04-23-2020 17:54:01 | 04-23-2020 17:54:01 | Because I just stumbled upon it: a counterpoint in [Understanding the Difficulty of Training Transformers](https://arxiv.org/abs/2004.08249), where the author founds that while this tends to stabilize the convergence, it also results in worse performances for the runs that do converge.
(Still, nice PR, I was toying with the idea of doing it too :-)<|||||>> Because I just stumbled upon it: a counterpoint in [Understanding the Difficulty of Training Transformers](https://arxiv.org/abs/2004.08249), where the author founds that while this tends to stabilize the convergence, it also results in worse performances for the runs that do converge.
>
> (Still, nice PR, I was toying with the idea of doing it too :-)
Hi, author here (On Layer Normalization in the Transformer Architecture),
For Pre-LN, a slightly larger dropout rate is usually needed for median scale tasks, such as translation. This is not required for BERT. Pre-LN has larger expressiveness power than Post-LN and when the data scale is small, it is more likely to overfit.
When you use a larger dropout in translation tasks, Pre-LN is no worse than Post-LN. [There are three 'dropout' places in Transformer, For translation tasks, we apply 0.1 to all the three places which make Pre-LN faster to converge and achieve better/competitive performance].
You can reproduce the experiments in our paper.
<|||||>Oh, thanks for chiming in, I must read the paper again then!<|||||>Bumping this discussion. Any thoughts @LysandreJik @julien-c ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Not stale<|||||>> Not stale
I think the pre-ln version should be added to the branch. As far as I know, more and more works are developed based on this architecture. BTW, although our paper is rejected by ICLR, it is accepted by ICML:)<|||||>Bumping this, any thoughts @thomwolf ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Not stale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 3,928 | closed | Fix TFAlbertForSequenceClassification classifier dropout probability.β¦ | β¦ It was set to config.hidden_dropout_prob, but should be config.classifier_dropout_prob. | 04-23-2020 17:13:05 | 04-23-2020 17:13:05 | |
transformers | 3,927 | closed | β DistilBert test perplexity based on WikiText-2: ppl is too low? | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
I tried to fine-tune the pre-trained model and got a perplexity of 6.12 for the DistilBert model, which is much lower than the GPT-2 model (ppl: 18.34, ref: https://paperswithcode.com/sota/language-modelling-on-wikitext-2).
Does the DistilBert model really work that much better than GPT-2? Or is it just because the loss functions are different?
Here are the commands:
```
# 1) code:
git clone https://github.com/huggingface/transformers.git
# 2) Download dataset:
cd transformers/examples/
wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-raw-v1.zip
unzip wikitext-2-raw-v1.zip
# 3) Benchmart:
## distilbert:
export TRAIN_FILE=./wikitext-2-raw/wiki.train.raw
export TEST_FILE=./wikitext-2-raw/wiki.test.raw
CUDA_VISIBLE_DEVICES=6 python run_language_modeling.py \
--output_dir=output_distilbert \
--model_type=distilbert \
--model_name_or_path=distilbert-base-uncased \
--do_train \
--per_gpu_train_batch_size 15 \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--mlm
```
Result:
```
04/22/2020 13:25:33 - INFO - __main__ - ***** Running evaluation *****
04/22/2020 13:25:33 - INFO - __main__ - Num examples = 535
04/22/2020 13:25:33 - INFO - __main__ - Batch size = 4
Evaluating: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 134/134 [00:05<00:00, 24.02it/s]
04/22/2020 13:25:38 - INFO - __main__ - ***** Eval results *****
04/22/2020 13:25:38 - INFO - __main__ - perplexity = tensor(6.1200)
```
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**: | 04-23-2020 16:33:13 | 04-23-2020 16:33:13 | @VictorSanh can correct me if I'm wrong, but in general, the perplexity for masked language models like BERT is much lower than the perplexity for causal language models.
That's because the "MLM perplexity" isn't the actual perplexity (in the sense that it is the probability of a sentence, computed from the predicted next word), but rather the "masked perplexity" which is computed differently.
It isn't surprising to me that you obtain ~6 perplexity on WikitText-2 with DistilBERT.<|||||>Thank you! |
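(Side note for anyone comparing numbers: the value printed by the script is just the exponential of the mean masked-token cross-entropy, so the ~6.12 above corresponds to a loss of roughly 1.81 nats — a back-of-the-envelope illustration, not a logged value:)
```python
import math

masked_lm_loss = 1.81            # assumed mean masked-token cross-entropy, in nats
print(math.exp(masked_lm_loss))  # ~6.11, i.e. the "masked" perplexity reported above
```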
transformers | 3,926 | closed | Remove 50k limits bug | This is a bug needs to be removed | 04-23-2020 15:12:21 | 04-23-2020 15:12:21 | |
transformers | 3,925 | closed | Using the default trainer args | Hey guys!
Just looked through the transformer_base... why not also allow any of the trainer args to be used?
I imagine the best set up for you guys is:
- add default trainer args (since people can look into the lightning docs for details on this)
- add hf specific args (nlp related or whatever you guys need)
- add model specific args
But instead it looks like you only use a subset of the args
https://github.com/huggingface/transformers/blob/dd9d483d03962fea127f59661f3ae6156e7a91d2/examples/transformer_base.py#L275
Something like this should work:
```python
parser = ArgumentParser()
# enable all trainer args
parser = Trainer.add_argparse_args(parser)
# add the HF args
parser.add_argument("--some_hf_specific_thing", ...)
# add the model args
parser = GoodGAN.add_model_specific_args(parser)
# cook them all up :)
args = parser.parse_args()
```
Check here for more details:
https://pytorch-lightning.readthedocs.io/en/stable/hyperparameters.html#multiple-lightning-modules
@srush @nateraw | 04-23-2020 14:59:50 | 04-23-2020 14:59:50 | This sounds good. We originally were trying to reimplement exactly the other examples. But now we can have the lightning examples support a wider range of options.<|||||>@srush I can take care of this when I go to work on #4494 if that's alright with you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,924 | closed | Change uses of pow(x, 3) to pow(x, 3.0) to resolve #3873 | This minor pull request fixes #3873 by changing the type of the exponent parameter for the _torch.pow()_ call in _gelu_new()_ from integer to float.
| 04-23-2020 14:14:29 | 04-23-2020 14:14:29 | |
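For context, the activation in question looks roughly like this once the exponent is written as a float (paraphrased, not copied from the diff):
```python
import math
import torch

def gelu_new(x):
    # GPT-2-style tanh approximation of GELU; torch.pow is called with a float exponent,
    # which is the change this PR makes (see #3873 for the motivating issue).
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
```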
transformers | 3,923 | closed | Feat/add model card | Add model card for sentiment control model. | 04-23-2020 13:45:11 | 04-23-2020 13:45:11 | Thanks! [Model page](https://huggingface.co/lvwerra/gpt2-imdb-ctrl) |
transformers | 3,922 | closed | LineByLineTextDataset limits the total number of examples to 50000 documents | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi,
While running run_language_modeling.py with the --line_by_line argument, I found out that it only takes the first 50,000 documents.
https://github.com/huggingface/transformers/blob/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d/src/transformers/data/datasets/language_modeling.py#L93
Please confirm: is this a bug or intended behavior?
| 04-23-2020 12:54:35 | 04-23-2020 12:54:35 | It's a bug. Do you want to submit a PR to remove?<|||||>Yes that would be great<|||||>Closed by #3926 |
transformers | 3,921 | closed | New run_language_modeling.py does not save vocab.txt and tokenizer_config.json | # π Bug
Hi,
While running the new run_language_modeling.py, at the end it only saves the PyTorch model, config.json, and the training arguments.
Unlike before, it no longer stores vocab.txt and tokenizer_config.json.
I am not using any custom tokenizer, only the tokenizer provided by the model.
## Command Details I used
python run_language_modeling.py
--output_dir=bert-base-uncased-4
--overwrite_output_dir
--model_type=bert
--model_name_or_path="bert-base-uncased"
--do_train --train_data_file="./test.txt"
--per_gpu_train_batch_size 3
--mlm
--num_train_epochs 1
Model I am using (Bert, XLNet ...):
Bert-base-uncased
Language I am using the model on (English, Chinese ...):
English
## Expected behavior
It should also save vocab.txt and tokenizer_config.json
| 04-23-2020 12:11:58 | 04-23-2020 12:11:58 | Yes, you would now need to do it manually by calling `tokenizer.save_pretrained(training_args.output_dir)`
I'll keep this issue open to see if many people ask for it (we can re-add if it's the case). cc @LysandreJik <|||||>I think if an extra step is needed, it should say it in the guideline ;)
So perhaps auto save would be better.<|||||>Ok, I will re-add it by default |
transformers | 3,920 | closed | run_language_modeling.py line 251: checking if it is a directory | # β Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
Hi, I'm curious why you changed the code to check if model_args.model_name_or_path is a directory. I am trying to train OpenAI GPT in Google Colab, using the following arguments:
!python /content/transformers/examples/run_language_modeling.py
--output_dir="/content/gdrive/My Drive/output"
--overwrite_output_dir
--model_type=openai-gpt
--model_name_or_path=openai-gpt
--do_train
--train_data_file="/content/gdrive/My Drive/train.txt"
--do_eval
--eval_data_file="/content/gdrive/My Drive/dev.txt"
--num_train_epochs 5
--learning_rate 1.5e-4
--save_steps 1
--save_total_limit 5
but this line causes the following error:
Traceback (most recent call last):
File "/content/transformers/examples/run_language_modeling.py", line 283, in
main()
File "/content/transformers/examples/run_language_modeling.py", line 257, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 363, in train
self._rotate_checkpoints()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 458, in _rotate_checkpoints
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 443, in _sorted_checkpoints
regex_match = re.match(".*{}-([0-9]+)".format(checkpoint_prefix), path)
File "/usr/lib/python3.6/re.py", line 172, in match
return _compile(pattern, flags).match(string)
TypeError: expected string or bytes-like object
It seems that openai-gpt is not a directory, thus making model_path variable equal to None. Is this due to my implementation or can this part be fixed?
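From the traceback, `re.match` seems to receive something that is not a string (perhaps a `pathlib.Path`, or `None`). Purely as a guess on my side, the kind of defensive parsing I would expect is something like this (hypothetical sketch, not the library's actual code):

```python
import re
from pathlib import Path
from typing import Optional, Union


def checkpoint_step(path: Union[str, Path], checkpoint_prefix: str = "checkpoint") -> Optional[int]:
    """Hypothetical helper: extract the step number from a checkpoint path,
    accepting str or pathlib.Path (re.match itself only accepts strings)."""
    match = re.match(r".*{}-([0-9]+)".format(checkpoint_prefix), str(path))
    return int(match.group(1)) if match else None


print(checkpoint_step(Path("output/checkpoint-9000")))  # -> 9000
```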
**A link to original question on Stack Overflow**: | 04-23-2020 12:02:36 | 04-23-2020 12:02:36 | I also faced a similar issue with the arguments
--save_steps 200
--save_total_limit 3
Removing these arguments worked, but I need to know what the underlying issue is.<|||||>Did removing them work all the way through? Because there is a default value for save_steps, and I just set it to 1 to see if the bug still appears after some manual fixes... Looking at function names like rotate_checkpoints(), I feel like save_total_limit might be the key, since its default is set to None, but again I'm not sure....<|||||>Yes, removing them worked till the end for me. The total number of optimization steps for me was around 1600, so it did save the checkpoints 3 times and correctly loaded them as well. <|||||>Ah okay, removing --save_total_limit makes it work for me, but then it would start saving gigabytes of models without an upper limit. I think the directory checking might come from removing the should_continue argument, but it will require us to constantly update where we load the latest model from...?<|||||>I can confirm this is a bug. I'll push a fix on master shortly. |
transformers | 3,919 | closed | xlm-roberta (large/base) : run_language_modeling.py cannot starting training | Hi HuggingFace, thank you very much for your great contribution.
# β Questions & Help
My problem is: run_language_modeling.py takes an abnormally long time for `xlm-roberta-large & base` **_before training starts_**. It got stuck at the following step for 7 hours (so I eventually gave up):
`transformers.data.datasets.language_modeling - Creating features from dataset file at ./`
I have successfully run `gpt2-large` and `distilbert-base-multilingual-cased` using exactly the same command below (just changing the model), and they start training within just 2-3 minutes. At first I thought it was because of the large size of XLM-RoBERTa. However, since `gpt2-large` is of similar size, is there perhaps a problem with fine-tuning XLM-RoBERTa? (So maybe a bug in the current version.)
I also tried rerunning the same command on another machine, but got the same hang (which is not the case for `gpt2-large` and `distilbert-base-multilingual-cased`).
**Update:** the same thing happens with `xlm-roberta-base`.
## Command Details I used
Machine AWS p3.2xlarge (V100, 64GB Ram)
Training file size is around 60MB
!python transformers/examples/run_language_modeling.py \
--model_type=xlm-roberta \
--model_name_or_path=xlm-roberta-large \
--do_train \
--mlm \
--per_gpu_train_batch_size=1 \
--gradient_accumulation_steps=8 \
--train_data_file={TRAIN_FILE} \
--num_train_epochs=2 \
--block_size=225 \
--output_dir=output_lm \
--save_total_limit=1 \
--save_steps=10000 \
--cache_dir=output_lm \
--overwrite_cache \
--overwrite_output_dir | 04-23-2020 11:53:29 | 04-23-2020 11:53:29 | Have you tried launching a debugger to see exactly what takes a long time?
I would use vscode remote debugging.<|||||>I would guess that your tokenization process takes too long. If you're training a new LM from scratch, I would recommend using the fast [Tokenizers library](https://github.com/huggingface/tokenizers) written in Rust. You can initialize a new `ByteLevelBPETokenizer` instance in your `LineByLineTextDataset` class and `encode_batch` your text with it.<|||||>Thanks, you guys. I finally managed to fine-tune XLM-RoBERTa-Large, but I had to wait 11 hours before training started!
Since I did not want to train from scratch, I took a tip from @mfilipav and converted the pretrained tokenizer to a fast tokenizer (and since it's SentencePiece, I had to use `sentencepiece_extractor.py`), then set `use_fast = True` in `run_language_modeling.py`... However, since it was still 11 hours of waiting, maybe this doesn't help.
**UPDATE**: by adding the `--line_by_line` option, training starts very quickly. Closing the issue!<|||||>@ratthachat and how fast did it become after enabling `--line_by_line true`? I have been waiting for almost 1 hour. My training set size is 11 GB and here are my parameters:
`export TRAIN_FILE=/hdd/sifat/NLP/intent_classification/bert_train.txt
export TEST_FILE=/hdd/sifat/NLP/intent_classification/data_corpus/test.txt
python examples/run_language_modeling.py \
--output_dir ./bert_output \
--model_type=bert \
--model_name_or_path=bert-base-multilingual-cased \
--mlm \
--line_by_line true \
--do_train \
--train_data_file=$TRAIN_FILE \
--do_eval \
--eval_data_file=$TEST_FILE \
--learning_rate 1e-4 \
--num_train_epochs 3 \
--save_total_limit 2 \
--save_steps 2000 \
--per_gpu_train_batch_size 5 \
--evaluate_during_training \
--seed 42`<|||||>Zaowad, your training file is much bigger than mine so I guess 1 hour is not bad ;) You can also try fp16 option as well |
transformers | 3,918 | closed | change scheduler.get_last_lr to get_lr to avoid bug | Learning rate scheduler function for getting last lr is `get_lr` instead of `get_last_lr`. | 04-23-2020 08:31:23 | 04-23-2020 08:31:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=h1) Report
> Merging [#3918](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3918 +/- ##
==========================================
- Coverage 78.45% 78.45% -0.01%
==========================================
Files 111 111
Lines 18521 18521
==========================================
- Hits 14531 14530 -1
- Misses 3990 3991 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3918/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.59% <0.00%> (ΓΈ)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3918/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.59% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=footer). Last update [cb3c221...f42b609](https://codecov.io/gh/huggingface/transformers/pull/3918?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I donβt think thatβs right. Which version of PyTorch are you running?<|||||>> I donβt think thatβs right. Which version of PyTorch are you running?
Well, my version is 1.2.0. I checked the API for 1.4.0, and there the current code is right.
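For what it's worth, a version-agnostic shim would probably have been the safer change on my side (a sketch; torch >= 1.4 exposes `get_last_lr()`, while older releases only have `get_lr()`):

```python
# Sketch of a compatibility helper instead of swapping the call outright.
def last_lr(scheduler):
    if hasattr(scheduler, "get_last_lr"):  # PyTorch >= 1.4
        return scheduler.get_last_lr()
    return scheduler.get_lr()              # older PyTorch releases
```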
Sorry for the hasty PR. |
transformers | 3,917 | closed | Create README.md | 04-23-2020 07:03:29 | 04-23-2020 07:03:29 | Thanks! [Model page](https://huggingface.co/SparkBeyond/roberta-large-sts-b)<|||||>PS: you should upload an avatar for the SparkBeyond org here: https://huggingface.co/SparkBeyond |
|
transformers | 3,916 | closed | feat: add logging through Weights & Biases | Hi,
I've been using hugging-face for a while and I've been loving it. I think it could become a central reference for NLP problems and the new `Trainer` class made it so much easier to add new functionality.
I would like to contribute to solving a few limitations I have observed:
* some models don't have clear details on how they have been trained (hyper-parameters) and for which task, making it hard to reproduce their results (and even ensure those results were actually reached) -> ideally we would track & log their training process as well as config parameters used
* some new models keep on being requested/proposed without seeing their actual efficiency -> ideally we would be able to compare them to baselines
* I've seen issues where "better" models are offered to replace previous ones but there is no way to prove they are actually better -> maybe we can add a metrics table (linked to actual runs) to the model card
* it is hard to keep track of models and datasets
* overall I'd like to bring more traceability and transparency between users
This is why I propose to use [Weights & Biases](https://docs.wandb.com/), which is free for open source and will help solve those issues. Also, it is already part of PyTorch Lightning, so it will be easy to integrate with those scripts if extra setup is required.
This PR brings :
* logging through Weights & Biases when `wandb` is installed
* logging of evaluation metrics at the end of training
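Concretely, the integration boils down to calls of this shape (a rough sketch with dummy values, not the exact diff; the real code pulls the config from `TrainingArguments`, calls `wandb.watch` on the model, and logs inside the training loop):

```python
import wandb

# Rough sketch: start a run with the training config, then log scalars per step.
wandb.init(project="transformers-examples", config={"learning_rate": 5e-5, "num_train_epochs": 3})
for global_step, loss in enumerate([0.9, 0.7, 0.5], start=1):
    wandb.log({"loss": loss}, step=global_step)
```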
See example run on distilbert-base-cased -> [W&B run](https://app.wandb.ai/borisd13/transformers-examples/runs/3iuvnwam?workspace=user-borisd13)

If you navigate around, you will see that config parameters, metrics, gradients, compute resources, the actual code (including the git repo & command line used) and stdout have all been uploaded for full traceability and reproducibility. I can also easily upload the trained model, but maybe we could add an arg to make it optional in case users have a limited connection (as you prefer)?
I can also easily compare bert-base-uncased and distilbert-base-uncased from my [project page](https://app.wandb.ai/borisd13/transformers-examples).
*Note: I didn't try any fine-tuning and just used the same parameters on both in this example.*

This makes it easy to quickly see if a new model is actually interesting.
Finally, this integration let us create [cool reports](https://app.wandb.ai/stacey/keras_finetune/reports/Curriculum-Learning-in-Nature--Vmlldzo1MjcxNw) where graphs directly pool data and are interactive.
If you like it, then I'd love to add other features in later PR's:
* log trained model -> I'll just need access to the paths of the relevant files so that I don't upload the entire output directory
* add [W&B init args](https://docs.wandb.com/library/init) as parameters based on what can be useful for hugging-face
* log all command line args -> I need to access the parser so that I can also log parameters such as `model_path_or_name` and `max_seq_length` (for the run_glue script), which will be convenient for creating custom summary tables and comparing models (or fine-tuning)
* maybe log called script (unless `finetuning_taks` is enough?)
* in addition to metrics, track model size & inference speed which can help users choose the model they want
* use artifacts (W&B version control system) to track models, tokenizers & datasets -> central repository that will let us see easily that runs are based on same dataset or same pre-trained model + see all trained tasks associated to one pre-trained model
* log prediction samples (based on the task)
* create a central repo on W&B (entity:Β hugging-face) to have access to other's people runs
* integrate model card issue template so that we always add a link to a run as well as performance metrics
* add [sweeps](https://docs.wandb.com/sweeps) -> make it easy to optimize hyper-parameters
* we could add a hook to run automatically on all tasks any new model added (and link the run to the model cards)
Please let me know your thoughts and feel free to contact me or Weights & Biases directly to discuss this integration. | 04-23-2020 06:20:53 | 04-23-2020 06:20:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=h1) Report
> Merging [#3916](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `38.46%`.
[](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3916 +/- ##
==========================================
- Coverage 78.45% 78.43% -0.02%
==========================================
Files 111 111
Lines 18521 18533 +12
==========================================
+ Hits 14531 14537 +6
- Misses 3990 3996 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/3916/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `43.52% <38.46%> (-0.08%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3916/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.92% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=footer). Last update [cb3c221...1330605](https://codecov.io/gh/huggingface/transformers/pull/3916?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hi,
I'm just checking if there's anything you would like me to change on this PR.
Ideally I would also like to be able to pass a few extra optional kwargs (saving model, project nameβ¦) but the integration can go as deep as you want based on the project philosophy.
Also, feel free to ask me to test other use cases for this integration. I want to write a post with the W&B folks after that and compare models, so I'd be glad to do it on something you find most useful!
The idea is mainly to give more traceability to models and a convenient way to compare and optimize models.<|||||>Will take a look very soon @borisdayma! (can i ping you on the W&B Slack or is there a better channel?)<|||||>Perfect, you can ping me on the W&B slack!<|||||>That's great @julien-c !
The next big piece that could be useful is to log the trained model weights on W&B.
I am not sure it's a good idea to do it automatically. Would there be an easy way to add an arg for it `save_model`?
Otherwise if you think it makes more sense (like for `watch` method), people can call `wandb.save` at the end of their script separately.
I'll suggest to add it in wandb docs but do you think there would be a good place to document how to use `wandb` within hugging-face docs?<|||||>I'll add some doc about wandb to our examples' README.md<|||||>Awesome, let me know if I can be of any further help! |
transformers | 3,915 | closed | New run_language_modeling.py continuing training | I tried using the new language modeling script to train a model from scratch.
I was training a model with the old script. When I tried continuing with the new one, there is no `--should_continue` option.
Does `--overwrite_output_dir` make it train from scratch if I use the same directory for `--model_name_or_path` and `--output_dir`?
So, I am continuing training with old script. | 04-23-2020 06:01:17 | 04-23-2020 06:01:17 | Yes, we removed the `--should_continue` flag. You can just use your latest checkpoint as `--model_name_or_path`
Let us know if this helps<|||||>Yup, the new script is really clean and understandable. To continue training, should I use `--overwrite_output_dir`? Because when I point to the last checkpoint directory using `--model_name_or_path`, it asks me for `--output_dir`, which must be empty, so I had to switch between two directories. <|||||>Yes, you would use `--overwrite_output_dir` (it was implicitly added before)
You would do:
```
--model_name_or_path ./model_name/checkpoint-9000
--output_dir ./model_name
--overwrite_output_dir
```
If this is a frequent request we could add it back but I feel like it was more confusing than anything.<|||||>Yes would like to get this feature back. <|||||>I am using [TrainingArguments](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments) but couldn't continue training from last checkpoint. It is starting from step 0. Any suggestion?
```
training_args = TrainingArguments(
output_dir=last_checkpoint,
overwrite_output_dir=True,
...
)
trainer = Trainer( model=model, #roberta config model
args=training_args,
...
)
```
e.g., `last_checkpoint="/saved_model/checkpoint-20000/"`
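For what it's worth, the run_language_modeling example also passes the checkpoint path to `train()`, so I suspect (not verified) that resuming needs something like the following, using the `trainer` and `last_checkpoint` from my snippet above:

```python
# Guess on my side: hand the checkpoint path to train() as well, so the
# optimizer/scheduler state and global step are restored, not just the weights.
trainer.train(model_path=last_checkpoint)
```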
@julien-c |
transformers | 3,914 | closed | Unknown Device when training GPT2 with TPUs in Colab | # π Bug
## Information
Model I am using (Bert, XLNet ...): DialoGPT2-small from microsoft
Language I am using the model on (English, Chinese ...): Spanish Conversations
The problem arises when using: Pytorch's XLA library for trying to train a GPT2 model on google colab TPUS
* [ ] the official example scripts: (give details below)
* [ x ] my own modified scripts: (give details below)
I adapted a TPU training script for the RoBERTa model to attempt to work with a GPT2 model
https://colab.research.google.com/drive/1LTH0LpHxWQYEy9U7vBWTk4-4sLo7YF5B
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x ] my own task or dataset: Multi Turn Dialog
## To reproduce
Steps to reproduce the behavior:
1. Run the following Colab Notebook: https://colab.research.google.com/drive/1LTH0LpHxWQYEy9U7vBWTk4-4sLo7YF5B
2. Make sure the Runtime is set to be a TPU
```
Exception in device=TPU:7: Unknown device
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 484, in forward
hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i]
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-17-4d2a1ccbaa5f>", line 3, in _mp_fn
a = run(trn_df, val_df, model, tokenizer, args)
File "<ipython-input-16-d526a8f464d8>", line 81, in run
scheduler
File "<ipython-input-11-be3e41b46d25>", line 10, in train_fn
outputs = model(inputs, labels = labels)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 599, in forward
inputs_embeds=inputs_embeds,
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 484, in forward
hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i]
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 231, in forward
m = self.mlp(self.ln_2(x))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_gpt2.py", line 210, in forward
h = self.act(self.c_fc(x))
RuntimeError: Unknown device
```
## Expected behavior
GPT2 model to start training leveraging the Google Colab TPU
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+d6149a7 (False)
- Tensorflow version (GPU?): 2.2.0-rc3 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
- Pytorch XLA Version: torch-xla-1.6+e788e5b
Any help is greatly appreciative and thanks for the amazing library! | 04-23-2020 05:39:19 | 04-23-2020 05:39:19 | Fixed when building from source |
transformers | 3,913 | closed | Summarization | 04-23-2020 05:36:49 | 04-23-2020 05:36:49 | ||
transformers | 3,912 | closed | ImportError: cannot import name 'DataCollatorForLanguageModeling'_File "run_language_modeling.py" | # β Questions & Help
## Details
_https://huggingface.co/blog/how-to-train_
I followed everything in this colab without changing anything.
And this is the problem I encountered.
**Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForLanguageModeling'
CPU times: user 21.5 ms, sys: 18 ms, total: 39.5 ms
Wall time: 4.52 s**
How can I fix this problem?
Thank you for your kindness and support
| 04-23-2020 05:35:33 | 04-23-2020 05:35:33 | Unfortunatelly I have the same error
Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForLanguageModeling' from 'transformers' (E:\PycharmProjects\Ancient_BERT\venv\lib\site-packages\transformers\__init__.py)<|||||>> Unfortunatelly I have the same error
>
> Traceback (most recent call last):
> File "run_language_modeling.py", line 29, in
> from transformers import (
> ImportError: cannot import name 'DataCollatorForLanguageModeling' from 'transformers' (E:\PycharmProjects\Ancient_BERT\venv\lib\site-packages\transformers__init__.py)
I tried to do this (instead of !pip install transformers)
!git clone https://github.com/huggingface/transformers
cd transformers
pip install .
And I get the following error:
**Traceback (most recent call last):
File "run_language_modeling.py", line 280, in
main()
File "run_language_modeling.py", line 225, in main
if training_args.do_train
File "run_language_modeling.py", line 122, in get_dataset
tokenizer=tokenizer, file_path=file_path, block_size=args.block_size, local_rank=local_rank
File "/usr/local/lib/python3.6/dist-packages/transformers/data/datasets/language_modeling.py", line 84, in init
assert os.path.isfile(file_path)
AssertionError**
I think it should be about **tokenizers-0.7.0.**
But I now still don't know how to fix it.<|||||>Both PyCharm running and example in Colab script has the same problem:
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb
Traceback (most recent call last):
File "run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForLanguageModeling'
CPU times: user 36.2 ms, sys: 22.9 ms, total: 59.1 ms
Wall time: 15.2 s<|||||>I change in Colab installation command to:
!pip install git+https://github.com/huggingface/transformers
and its running :), now only to solve problem with out of memory<|||||>> !pip install git+https://github.com/huggingface/transformers
You have to change batch size.
cmd = """
python run_language_modeling.py
--train_data_file ./oscar.eo.txt
--output_dir ./EsperBERTo-small-v1
--model_type roberta
--mlm
--config_name ./EsperBERTo
--tokenizer_name ./EsperBERTo
--do_train
--line_by_line
--learning_rate 1e-4
--num_train_epochs 1
--save_total_limit 2
--save_steps 2000
**--per_gpu_train_batch_size 4**
--seed 42
""".replace("\n", " ")
And Thank you for your help.<|||||>Maybe somone had error (error occurs after reaching save step value=2000):
Iteration: 16% 1997/12500 [10:38<49:42, 3.52it/s]
Iteration: 16% 1998/12500 [10:39<47:03, 3.72it/s]
{"learning_rate": 8.4e-05, "loss": 7.684231301307678, "step": 2000}
Epoch: 0% 0/1 [10:39<?, ?it/s]
Iteration: 16% 1999/12500 [10:39<49:56, 3.50it/s]04/23/2020 11:55:42 - INFO - transformers.trainer - Saving model checkpoint to ./EsperBERTo-small-v1/checkpoint-2000
04/23/2020 11:55:42 - INFO - transformers.configuration_utils - Configuration saved in ./EsperBERTo-small-v1/checkpoint-2000/config.json
04/23/2020 11:55:42 - INFO - transformers.modeling_utils - Model weights saved in ./EsperBERTo-small-v1/checkpoint-2000/pytorch_model.bin
Traceback (most recent call last):
File "run_language_modeling.py", line 280, in <module>
main()
File "run_language_modeling.py", line 254, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 363, in train
self._rotate_checkpoints()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 458, in _rotate_checkpoints
checkpoints_sorted = self._sorted_checkpoints(use_mtime=use_mtime)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 443, in _sorted_checkpoints
regex_match = re.match(".*{}-([0-9]+)".format(checkpoint_prefix), path)
File "/usr/lib/python3.6/re.py", line 172, in match
return _compile(pattern, flags).match(string)
TypeError: expected string or bytes-like object
Epoch: 0% 0/1 [10:40<?, ?it/s]
Iteration: 16% 1999/12500 [10:40<56:04, 3.12it/s]
CPU times: user 2.78 s, sys: 928 ms, total: 3.71 s
Wall time: 11min 33s
My configuration is:
config = {
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-05,
"max_position_embeddings": 514,
"model_type": "roberta",
"num_attention_heads": 12,
"num_hidden_layers": 6,
"type_vocab_size": 1,
"vocab_size": 52000
}
cmd = """
python run_language_modeling.py
--train_data_file ./oscar.eo.txt
--output_dir ./EsperBERTo-small-v1
--model_type roberta
--mlm
--config_name ./EsperBERTo
--tokenizer_name ./EsperBERTo
--do_train
--line_by_line
--learning_rate 1e-4
--num_train_epochs 1
--save_total_limit 2
--save_steps 2000
--per_gpu_train_batch_size 4
--seed 42
""".replace("\n", " ")
<|||||>Closing in favor of #3920 <|||||>How to solve this? Why is this closed? |
transformers | 3,911 | closed | Type Hints for modeling_utils.py | add [type hints](https://docs.python.org/3/library/typing.html) to the methods for better readability and autocomplete.
If possible, check your work with an automated tool and tell us how you did it! | 04-23-2020 04:27:03 | 04-23-2020 04:27:03 | |
transformers | 3,910 | closed | GPT2 generations with specific words | # β Questions & Help
## Details
@patrickvonplaten
Just curious to generate a text with the specified words.
Generate function has a parameter bad_words, instead of removing these, I want to include these in generations. I try to edit in src/transformers/modeling_utils.py line 1194; to
next_token_logits[batch_idx, banned_tokens[batch_idx]] = -float("inf")
changed to :
next_token_logits[batch_idx, banned_tokens[batch_idx]] = 1
But this didn't work. Is there is any way we can do this?
| 04-23-2020 01:08:01 | 04-23-2020 01:08:01 | These `next_token_logits[batch_idx, banned_tokens[batch_idx]]` are logits in the `range(-inf, inf)` not probabilities. You can try to set all other logits except your banned_tokens to `-float("inf")` |
transformers | 3,909 | closed | Shuffle train subset for summarization example | Fix #3892
@sshleifer | 04-22-2020 23:23:07 | 04-22-2020 23:23:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=h1) Report
> Merging [#3909](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3909 +/- ##
==========================================
+ Coverage 78.45% 78.47% +0.01%
==========================================
Files 111 111
Lines 18521 18521
==========================================
+ Hits 14531 14534 +3
+ Misses 3990 3987 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.06% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.92% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/3909/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `68.86% <0.00%> (+0.36%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=footer). Last update [cb3c221...3361612](https://codecov.io/gh/huggingface/transformers/pull/3909?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,908 | closed | MarianMTModel.from_pretrained('Helsinki-NLP/opus-marian-en-de') | Adds support for `opus/marian-en-de` translation models:
- There are 900 models with this `MarianSentencePieceTokenizer`, `MarianMTModel` setup.
- This PR only adds en-de to avoid massive S3 maintenance if names/other things change.
- There is no formal connection to the bart authors, but the bart code is well-tested and fast and I didn't want to rewrite it. You can still read everything in one file: modeling_bart.py.
- unittest failures are from isort.
The only differences from BART are:
- static (sinusoid) positional embeddings (`config.static_position_embeddings`)
- a new `final_logits_bias` (`config.add_bias_logits`)
- no `layernorm_embedding` (`config.normalize_embedding`)
I could have also used T5 but that has more prefix token logic and relative position bias.
Solutions that split the changes across two files were much harder to read.
Why MarianSentencePieceTokenizer instead of MarianTokenizer?
- about 10% of the models (another 100) use BPE instead of SentencePiece. We will use that tokenizer (or integrate it with this one and rename) in a future PR.
### Future PRs
- load in many models
- support BPE tokenizer in some way.
- make lm_labels on the fly for training loop, like T5.
- `TranslationPipeline`
`TODO[Code]`:
- [x] investigate discrepancies in generation parameters: `length_penalty`, `decoder_start_token_id`, etc. These may explain small differences in generations (detailed in comment below).
- [x] isort
`TODO[Docs]`:
- [x] Naming: `MarianMTModel`, `MarianSentencePieceTokenizer`
- [x] put en-de model in S3, update integration tests.
- [ ] docstring examples
- [ ] notebook to show?
- [ ] marian.rst
- [ ] README.md
- [ ] `AutoModelWithLMHead`
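Rough intended usage, as I picture it once the weights are on S3 (a sketch only; the checkpoint name comes from this PR's title and the batch-preparation helper is an assumption, not a settled API):

```python
from transformers import MarianMTModel, MarianSentencePieceTokenizer

name = "Helsinki-NLP/opus-marian-en-de"  # assumed name, may change before release
tokenizer = MarianSentencePieceTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)

batch = tokenizer.prepare_translation_batch(["I am a small frog."])  # assumed helper
generated = model.generate(**batch)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```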
| 04-22-2020 22:40:48 | 04-22-2020 22:40:48 | I think that's awesome! Such little code changes to include that many models is amazing :-)
Does the model with the pre-trained weights produce 1-to-1 the same results as the official Marian models in C++?<|||||>@patrickvonplaten the generations are very sensible, but not identical to marian, due to slightly different generation parameters:
Equivalent (4/7):
C+: Ich bin ein kleiner Frosch.
HF: Ich bin ein kleiner Frosch.
C+: Tom bat seinen Lehrer um Rat.
HF: Tom bat seinen Lehrer um Rat.
C+: Tom bewunderte Marias Mut wirklich.
HF: Tom bewunderte Marias Mut wirklich.
C+: Umdrehen und die Augen schlieΓen.
HF: Umdrehen und die Augen schlieΓen.
Not Equivalent (3/7):
C+: Jetzt kann ich die 100 WΓΆrter vergessen, die ich kenne.
HF: Jetzt kann ich die 100 WΓΆrter **des Deutschen** vergessen, die ich kenne.
C+: **O** (Input="O")
HF:
C+: So wΓΌrde ich das **machen**.
HF: So wΓΌrde ich das **tun**.
I'm investigating.<|||||>> @patrickvonplaten the generations are very sensible, but not identical to marian, due to slightly different generation parameters:
>
> Equivalent (4/7):
>
> C+: Ich bin ein kleiner Frosch.
> HF: Ich bin ein kleiner Frosch.
>
> C+: Tom bat seinen Lehrer um Rat.
> HF: Tom bat seinen Lehrer um Rat.
>
> C+: Tom bewunderte Marias Mut wirklich.
> HF: Tom bewunderte Marias Mut wirklich.
>
> C+: Umdrehen und die Augen schlieΓen.
> HF: Umdrehen und die Augen schlieΓen.
>
> Not Equivalent (3/7):
> C+: Jetzt kann ich die 100 WΓΆrter vergessen, die ich kenne.
> HF: Jetzt kann ich die 100 WΓΆrter **des Deutschen** vergessen, die ich kenne.
>
> C+: **O** (Input="O")
> HF:
>
> C+: So wΓΌrde ich das **machen**.
> HF: So wΓΌrde ich das **tun**.
>
> I'm investigating.
Sounds all very coherent :-) <|||||>Merging. Will add docs in next PR, when I add tons of models.<|||||>@sshleifer, can you elaborate on "investigate discrepancies in generation parameters: length_penalty, decoder_start_token_id, etc. These may explain small differences in generations (detailed in comment below)"? I see https://github.com/huggingface/transformers/pull/3908#issuecomment-618473222 but was this resolved?<|||||>_TLDR Was not resolved, the investigation continues._
Our lib has many parameters to control generations, `length_penalty`, `repetition_penalty`, `bad_word_ids`, `decoder_start_token_id`.
They are documented [here](https://huggingface.co/transformers/model_doc/bart.html?#transformers.BartForConditionalGeneration.generate).
The marian lib has `normalize` and `word-penalty`, which are not identical, but similar. Part of the discrepancy may involve configuring our parameters to be closer to theirs.
<|||||>I see, thanks!<|||||>Could anyone help with this issue: #5040 ? |
transformers | 3,907 | closed | Fix TF optimization classes and apply the changes in the NER TF script | This PR fix the AdamW and GradientAccumulator optimization classes for Tensorflow. Furthermore, we apply these new changes in the NER TF script. | 04-22-2020 22:06:58 | 04-22-2020 22:06:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=h1) Report
> Merging [#3907](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cb3c2212c79d7ff0a4a4e84c3db48371ecc1c15d&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #3907 +/- ##
=======================================
Coverage 78.45% 78.45%
=======================================
Files 111 111
Lines 18521 18521
=======================================
Hits 14531 14531
Misses 3990 3990
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=footer). Last update [cb3c221...f75b213](https://codecov.io/gh/huggingface/transformers/pull/3907?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 3,906 | closed | [Cleanup] Fix typos in modeling_utils.py | 04-22-2020 21:52:01 | 04-22-2020 21:52:01 | ||
transformers | 3,905 | closed | BERT TF-Lite conversion not working in TensorFlow 2.2.0 | # π Bug
Model conversion succeeds but inputs and outputs are not recognized.
## Information
Model I am using (Bert, XLNet ...): BERT (`bert-base-uncased`)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
- TensorFlow 2.1.0 + Transformers 2.8.0 - has no problem converting `bert-base-uncased` model to tflite version.
- TensorFlow 2.2.0rc3 + Transformers 2.8.0 - has issues with interoperability.
The tasks I am working on is:
* Convert BERT models to TF-Lite format to use it in mobile apps.
* Trying to use the latest TF-Lite package version for Android in the place of TF-Lite package provided in the repo `huggingface/tflite-android-transformers`.
## To reproduce
Please execute the following code with TensorFlow versions 2.1.0 and 2.2.0-rc3
```
import transformers
from transformers import TFBertModel, BertConfig
import tensorflow as tf
print('TensorFlow version =', tf.__version__)
print('Transformers version =', transformers.__version__)
MODEL_DIR = 'bert-base-uncased'
MAX_SEQ_LEN = 50
# Read the model
config = BertConfig.from_pretrained(MODEL_DIR)
model = TFBertModel(config)
# Set input Spec
input_spec = [
tf.TensorSpec([1, MAX_SEQ_LEN], tf.int32),
tf.TensorSpec([1, MAX_SEQ_LEN], tf.int32),
tf.TensorSpec([1, MAX_SEQ_LEN], tf.int32)
]
model._set_inputs(input_spec, training=False)
print(model.inputs)
print(model.outputs)
```
- For TensorFlow 2.2.0-rc3: Model outputs and inputs are **None**
```
TensorFlow version = 2.2.0-rc3
Transformers version = 2.8.0
None
None
```
- For TensorFlow 2.1.0:
```
TensorFlow version = 2.1.0
Transformers version = 2.8.0
...
[<tf.Tensor 'input_1:0' shape=(None, 50) dtype=int32>, <tf.Tensor 'input_2:0' shape=(None, 50) dtype=int32>, <tf.Tensor 'input_3:0' shape=(None, 50) dtype=int32>]
[<tf.Tensor 'tf_bert_model/Identity:0' shape=(None, 50, 768) dtype=float32>, <tf.Tensor 'tf_bert_model/Identity_1:0' shape=(None, 768) dtype=float32>]
```
## Expected behavior
- I expect the BERT model conversion to work properly to TensorFlow 2.2.0-rc{1/2/3}
- Preferably BERT should use the default TF-Lite supported layers just like MobileBERT model provided by Google.
- Image - MobileBERT from Google's bert-qa android example (left) vs BERT converted using the above script using TensorFlow v2.1.0 (right)

## Environment info
- `transformers` version: 2.8.0
- Platform: Windows
- Python version: 3.7.7
- PyTorch version (GPU?): 1.4.0 (No GPU)
- Tensorflow version (GPU?): 2.1.0 (working), 2.2.0-rc3 (not working) (no GPU for both versions)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
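For reference, the conversion step I run right after `_set_inputs` in the snippet above (my own code, using the `model` built there; the `SELECT_TF_OPS` fallback is what I believe is needed for the non-builtin ops):

```python
import tensorflow as tf

# Works for me on TF 2.1.0; on 2.2.0-rc3 the converter no longer sees the
# model's inputs/outputs because they are None (see above).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # prefer built-in TF-Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow ops where needed
]
tflite_model = converter.convert()
open("bert-base-uncased.tflite", "wb").write(tflite_model)
```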
| 04-22-2020 21:03:13 | 04-22-2020 21:03:13 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,904 | closed | How do I pass from an AutoModelWithLMHead to a GPT2DoubleHeadsModel? | I used your run_language_modeling.py script to train a GPT-2 model from scratch. Here is how I save and upload it:
from transformers import AutoModelWithLMHead, AutoTokenizer
import os
directory = "./checkpoint"
model = AutoModelWithLMHead.from_pretrained(directory)
tokenizer = AutoTokenizer.from_pretrained(directory)
out = "gpt_2"
os.makedirs(out, exist_ok=True)
model.save_pretrained(out)
tokenizer.save_pretrained(out)
!transformers-cli upload ./gpt_2/
Afterwards, if I want to use it for fine-tuning on a specific task and I want to use a GPT2DoubleHeadsModel,
can I just load it like this?
from transformers import GPT2DoubleHeadsModel,GPT2Tokenizer
model = GPT2DoubleHeadsModel.from_pretrained("ex/gpt_2")
tokenizer = GPT2Tokenizer.from_pretrained("ex/gpt_2")
Or it will not load the weights? | 04-22-2020 20:13:49 | 04-22-2020 20:13:49 | It will load the weights, but will not load the multiple choice head as it's not in the checkpoint. That's why fine-tuning it is a good idea, so that this head's weights get trained! |
transformers | 3,903 | closed | T5 shows weird behaviour in a sanity check of its pre-training task. | # π Bug
## Information
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* no script, just a few simple commands. watch below.
## To reproduce
Steps to reproduce the behavior:
1.
```
from transformers import T5Tokenizer, T5ForConditionalGeneration
import torch
tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')
input_ids = tokenizer.encode('The <extra_id_1> walks in <extra_id_2> park', return_tensors='pt')
lm_labels = tokenizer.encode('<extra_id_1> cute dog <extra_id_2> the <extra_id_3> </s>', return_tensors='pt')
outputs = model(input_ids=input_ids, lm_labels=lm_labels)
probs, preds = torch.topk(outputs[1], 10)
preds = preds[0]
for ind_lst in preds:
tokenizer.convert_ids_to_tokens(ind_lst)
```
This yields the following:
```
['βThe', 'βWalking', 'βPark', 'βWalk', 'β', 'βwalking', 'βthe', 'βWe', 'βIn', 'βLa']
['βpark', 'βPark', 'βparks', 'βWalking', 'βNational', 'βwalk', 'β', 'βForest', 'βwalking', 'park']
['βpark', '<extra_id_2>', 'βPark', 'βwalks', 'βwalking', 'βparks', 'βnature', 'βwalk', 'βWalking', 'βand']
['βpark', 'βwalks', '<extra_id_2>', 'βparks', 'βPark', 'βwalk', 'βwalking', 'walk', 'park', 'βalso']
['βthe', 'βThe', 'β', 'βpark', 'βto', 'βPark', 'βand', 'βat', 'βwalking', 'βin']
['βpark', '<extra_id_3>', 'βPark', 'βThe', 'βparks', 'βparking', 'β', 'βcar', 'βbeautiful', 'βthe']
['βpark', 'βPark', 'βWalking', 'βparks', 'βwalk', 'βwalking', 'βWalk', 'park', 'βNational', 'βwalks']
```
## Expected behavior
I would expect '<extra_id_1>' to show up as a possible top-10 prediction.
During pre-training, T5 gets supervision to return the first token as <extra_id_1>. So if it doesn't return it, that is weird behavior. I hope I didn't get it wrong. Thank you. @patrickvonplaten
## Environment info
- `transformers` version: 2.8.0
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.7.0
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no | 04-22-2020 19:12:15 | 04-22-2020 19:12:15 | Hi @yuvalkirstain,
Thanks for your message!
When you put in `input_ids` and `lm_labels` at the same time, the following happens:
the `decoder_input_ids` are created automatically depending on the `lm_labels` (shift `lm_labels` to the right and prepend `<PAD>`) and look as follows:
`<PAD> <extra_id_1> cute dog <extra_id_2> the <extra_id_3>`.
see https://github.com/huggingface/transformers/blob/4e817ff41885063e08bb3bcd63e5adfd835b9911/src/transformers/modeling_t5.py#L1060
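In code, the shift is essentially this (a minimal sketch with dummy label ids, using the pad token id as the decoder start token):

```python
import torch

lm_labels = torch.tensor([[10, 11, 12, 13]])
pad_token_id = 0
decoder_input_ids = torch.cat(
    [torch.full((lm_labels.size(0), 1), pad_token_id, dtype=lm_labels.dtype), lm_labels[:, :-1]],
    dim=-1,
)
print(decoder_input_ids)  # tensor([[ 0, 10, 11, 12]])
```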
Now, in order for `<extra_id_1>` to show up, it would have to be guessed from `<PAD>`, but I'm not sure this makes much sense. After `<extra_id_1>` has been processed from the input, it's much more likely that `<extra_id_2>` comes next in the labels, so it makes more sense to me that `<extra_id_2>` is guessed.
Let me know if it does not make sense to you :-)
<|||||>ahh ok. Thank you so much and sorry that I bothered you with it. Really enjoying working with it so far :) |
transformers | 3,902 | closed | RuntimeError: index out of range at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:237 | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bert-base-uncased
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
I am trying to fine-tune BERT for sentiment analysis on the IMDB Dataset. Most of the code is based on this blog: http://mccormickml.com/2019/07/22/BERT-fine-tuning/
I create my model as follows:
```
from transformers import BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained(
"bert-base-uncased", # Use the 12-layer BERT model, with an uncased vocab.
num_labels = 2, # The number of output labels--2 for binary classification.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
```
In my training loop, I do the following:
```
for step, batch in enumerate(train_dataloader):
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
model.zero_grad()
loss, logits = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
```
I get the following error:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-77ffde63bb0d> in <module>()
109 token_type_ids=None,
110 attention_mask=b_input_mask,
--> 111 labels=b_labels)
112
113 # Accumulate the training loss over all of the batches so that we can
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels)
1030 position_ids=position_ids,
1031 head_mask=head_mask,
-> 1032 inputs_embeds=inputs_embeds)
1033
1034 pooled_output = outputs[1]
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask)
733 head_mask = [None] * self.config.num_hidden_layers
734
--> 735 embedding_output = self.embeddings(input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds)
736 encoder_outputs = self.encoder(embedding_output,
737 attention_mask=extended_attention_mask,
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
185 if inputs_embeds is None:
186 inputs_embeds = self.word_embeddings(input_ids)
--> 187 position_embeddings = self.position_embeddings(position_ids)
188 token_type_embeddings = self.token_type_embeddings(token_type_ids)
189
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
545 result = self._slow_forward(*input, **kwargs)
546 else:
--> 547 result = self.forward(*input, **kwargs)
548 for hook in self._forward_hooks.values():
549 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
112 return F.embedding(
113 input, self.weight, self.padding_idx, self.max_norm,
--> 114 self.norm_type, self.scale_grad_by_freq, self.sparse)
115
116 def extra_repr(self):
~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1465 # remove once script supports set_grad_enabled
1466 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1467 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1468
1469
RuntimeError: index out of range: Tried to access index 512 out of table with 511 rows. at ../aten/src/TH/generic/THTensorEvenMoreMath.cpp:237
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.3.0
- Platform: Mac OS
- Python version: 3.6
- PyTorch version (GPU?): 1.2.0, no GPU
- Tensorflow version (GPU?): 1.2.0, no GPU
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 04-22-2020 19:09:45 | 04-22-2020 19:09:45 | How big are your batches? BERT can only accept tensors of maximum length of 512. Is it possible one of your inputs is longer than 512?<|||||>I had the same issue while my `per_gpu_eval_batch_size=8` and `per_gpu_train_batch_size=8 `. <|||||>Yes, it's unrelated to batch size. Is your sequence length longer than 512?<|||||>Also having this issue, getting a different error when I try to use GPU but I understand that the error is caused by the same thing and the CPU stack trace is just more informative. My sequence lengths are definitely <512<|||||>> I had the same issue while my `per_gpu_eval_batch_size=8` and `per_gpu_train_batch_size=8 `.
I got the same error when I use batch_size = 8. After I change it to 64, then no errors. <|||||>> Also having this issue, getting a different error when I try to use GPU but I understand that the error is caused by the same thing and the CPU stack trace is just more informative. My sequence lengths are definitely <512
did you find the solution?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: index out of range: Tried to access index 0 out of table with 202 rows. at /pytorch/aten/src/TH/generic/THTensorEvenMoreMath.cpp:237
is there anyone knows what's the problem ? , I checked the embedding matrix of shape (203, 16), it shows index 0 out of range, how it is possible??? |
transformers | 3,901 | closed | How to only use part of pretrained model | Hi
I am using BERT for classification. I have a pretrained model, and my question is: when I load the model like this,
model = model_class.from_pretrained(
args.model_name_or_path,
from_tf=bool(".ckpt" in args.model_name_or_path),
config=config,
cache_dir=args.cache_dir if args.cache_dir else None,
)
How can I avoid loading the classifier and only load the BERT encoder?
Thanks
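What I have been considering, though I am not sure it is the intended way (a sketch; the path is a placeholder for my checkpoint):

```python
from transformers import BertModel

# Load only the encoder from a checkpoint saved with a classification head;
# the classifier weights in the checkpoint should just be skipped (with a warning).
encoder = BertModel.from_pretrained("path/to/my-finetuned-checkpoint")
```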
| 04-22-2020 18:36:46 | 04-22-2020 18:36:46 | How did you obtain your pre-trained model? Does your pre-train model have a classification head? What is `model_class`? Is it `BertForSequenceClassification`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,900 | closed | quick fix wording readme for community models | 04-22-2020 17:54:29 | 04-22-2020 17:54:29 | ||
transformers | 3,899 | closed | NLQ application | # β Questions & Help
I have a PDF extractor and from it I got a dataframe with 2 columns (sections, paragraphs).
Is there any easy way to ask a question and get an answer like this (example)?
Question: "where is the book?"
Answer: "It's on the bookshelf."
Section: "1.2.3 The Book"
Paragraph: "(Full section paragraph)"
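Roughly what I have in mind is the sketch below (the two-row dataframe stands in for my real extractor output, and I have not verified the span bookkeeping):

```python
import pandas as pd
from transformers import pipeline

df = pd.DataFrame({
    "sections": ["1.2.3 The Book", "1.2.4 The Shelf"],
    "paragraphs": ["The book is on the bookshelf.", "The shelf stands in the living room."],
})

qa = pipeline("question-answering")
context = "\n".join(df["paragraphs"])
result = qa(question="Where is the book?", context=context)

# Map the answer span back to the paragraph (and section) it came from.
start = 0
for section, paragraph in zip(df["sections"], df["paragraphs"]):
    end = start + len(paragraph) + 1  # +1 for the joining newline
    if start <= result["start"] < end:
        print(result["answer"], "|", section, "|", paragraph)
        break
    start = end
```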
Sorry for bad english. | 04-22-2020 15:05:51 | 04-22-2020 15:05:51 | I assume you concatenate all paragraphs in a single text string, encode it with your question, and give the string as an input to your QA model (or pipeline). What you get back, is an answer span: indexes of the first and last character of the answer in the input text. To find a paragraph it belongs to, compute indexes of the paragraph spans in your concatenated text strings. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 3,898 | closed | Bump tokenizers version to final 0.7.0 | 04-22-2020 15:00:04 | 04-22-2020 15:00:04 | ||
transformers | 3,897 | closed | How to fine-tune DialoGPT with your own data? | # π How to fine-tune DialoGPT with your own data?
## Motivation
I really like the new DialoGPT that makes it possible to build chatbots, but how do I fine-tune it on my own dataset?
 | 04-22-2020 13:50:23 | 04-22-2020 13:50:23 | DialoGPT is fine-tuned in the same way GPT-2 is fine-tuned. Please take a look at the paper:
https://arxiv.org/pdf/1911.00536.pdf .
I will also add some training tips to the doc string. |
transformers | 3,896 | closed | ImportError: cannot import name 'DataCollatorForLanguageModeling' in run_language_modeling.py | # π Bug ImportError: cannot import name 'DataCollatorForLanguageModeling' in run_language_modeling.py
## Information
> Traceback (most recent call last):
> File "transformers/examples/run_language_modeling.py", line 29, in <module>
> from transformers import (
> ImportError: cannot import name 'DataCollatorForLanguageModeling'
Model I am using (Bert, XLNet ...): RoBERTa; I am trying to build the model from scratch using the tutorial.
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
transformers/examples/run_language_modeling.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
My own Data set
## To reproduce
Steps to reproduce the behavior:
Follow the tutorial [here](https://huggingface.co/blog/how-to-train)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: transformers-2.8.0 sacremoses-0.0.41 sentencepiece-0.1.85 tokenizers-0.5.2
- Platform:
- Python version: 3.x
- PyTorch version (GPU?):1.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: None
| 04-22-2020 11:30:01 | 04-22-2020 11:30:01 | Duplicate of #3893
Please install from source until we release a new version of the library on PyPI |
transformers | 3,895 | closed | run_bertology: AttributeError: 'NoneType' object has no attribute 'abs' | # π Bug
## Information
One more question: does run_bertology also support the ALBERT model?
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...):English
The problem arises when using:
* [ ] the official example scripts: (give details below)
run_bertology.py
* [ ] my own modified scripts: (give details below)
python3.7/site-packages/transformers/data/processors/glue.py
# Because the SciEntsBank-3way dataset is labeled with "correct", "incorrect", "contradictory".
-223 return ["contradiction", "entailment", "neutral"]
+223 return ["correct", "incorrect", "contradictory"]
# Because of the SciEntsBank-3way dataset structure: the label is in the first position, text_a in the second, and text_b in the third.
-232 text_a = line[8]
-233 text_b = line[9]
-234 label = line[-1]
+232 text_a = line[1]
+233 text_b = line[2]
+234 label = line[0]
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
mnli
* [ ] my own task or dataset: (give details below)
dataset: SciEntsBank-3way(https://www.cs.york.ac.uk/semeval-2013/task7.html)
## To reproduce
Steps to reproduce the behavior:
python ./run_bertology.py --data_dir SciEntsBank-3way \
--model_name bert-base-uncased \
--task_name mnli \
--max_seq_length 128 \
--output_dir ./tmp/$TASK_NAME/ \
--try_masking
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Iteration: 100%|ββββββββββ| 4561/4561 [02:15<00:00, 33.57it/s]
INFO:__main__:Attention entropies
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 3.03255 2.82196 1.77876 1.64802 3.27255 2.91101 3.34266 3.03600 2.73255 3.09043 1.35738 2.52412
INFO:__main__:layer 2: 2.73629 1.11241 2.86221 2.44852 0.95509 2.39331 0.45580 2.82749 2.93869 2.88269 2.19532 2.48865
INFO:__main__:layer 3: 0.05847 1.66529 1.91624 2.79214 2.31408 2.67645 2.18180 2.62745 2.48442 0.05168 2.52636 2.49648
INFO:__main__:layer 4: 1.54150 2.90387 2.40694 2.06858 2.77907 0.80181 2.69664 2.88957 2.70095 1.19583 2.33666 1.83265
INFO:__main__:layer 5: 2.34246 2.64519 2.03515 1.37404 2.88754 1.67422 2.14421 1.41457 2.03571 2.69347 1.98139 1.44582
INFO:__main__:layer 6: 1.71052 1.10676 2.28401 1.87228 2.55920 1.75916 1.22450 1.35704 1.92916 1.02535 1.67920 1.60766
INFO:__main__:layer 7: 1.63887 1.93625 1.83002 1.20811 1.58296 1.65662 1.55572 2.38742 2.09030 1.69326 1.42275 1.08153
INFO:__main__:layer 8: 1.95536 1.73146 1.59791 1.17307 1.12128 1.95980 1.11606 1.11680 1.97816 1.64787 1.53183 1.28007
INFO:__main__:layer 9: 1.54698 1.96436 1.45466 2.03807 1.60202 1.44075 1.36014 2.32559 2.59592 2.09076 1.75704 1.85274
INFO:__main__:layer 10: 2.00444 1.91784 2.12478 1.99289 1.58305 2.48627 2.08822 1.69971 2.70500 1.71860 2.03850 2.38604
INFO:__main__:layer 11: 2.76158 1.53031 1.99278 2.26007 1.97855 1.66471 1.90139 2.13217 2.45516 1.83803 1.99372 2.15438
INFO:__main__:layer 12: 1.73656 2.10304 2.72498 1.85723 2.04607 2.20456 2.16210 1.82173 2.18728 2.71702 1.84256 1.83663
INFO:__main__:Head importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.30328 0.05899 0.06971 0.03727 0.13938 1.00000 0.04436 0.03679 0.22807 0.07911 0.19918 0.05241
INFO:__main__:layer 2: 0.04867 0.02256 0.21194 0.04069 0.23058 0.15942 0.65188 0.38251 0.47535 0.40172 0.10869 0.34316
INFO:__main__:layer 3: 0.06349 0.08003 0.56604 0.41141 0.38410 0.16264 0.29070 0.37301 0.28161 0.18325 0.45048 0.02401
INFO:__main__:layer 4: 0.74869 0.15986 0.29754 0.02072 0.20961 0.06570 0.35717 0.44580 0.01144 0.11113 0.26962 0.28707
INFO:__main__:layer 5: 0.05413 0.58029 0.29859 0.64154 0.25539 0.11611 0.36774 0.05591 0.19390 0.34493 0.04906 0.02742
INFO:__main__:layer 6: 0.24067 0.06599 0.45376 0.22384 0.40461 0.53808 0.06806 0.21937 0.04209 0.13334 0.19226 0.57838
INFO:__main__:layer 7: 0.33972 0.12576 0.31489 0.10031 0.29630 0.19341 0.28052 0.29937 0.78337 0.09395 0.23640 0.05812
INFO:__main__:layer 8: 0.23342 0.27415 0.27682 0.22111 0.23234 0.79778 0.03235 0.09092 0.40418 0.01651 0.21795 0.22528
INFO:__main__:layer 9: 0.01306 0.88878 0.08858 0.45180 0.04019 0.08035 0.13417 0.15899 0.39753 0.01761 0.10785 0.01428
INFO:__main__:layer 10: 0.01597 0.01365 0.08691 0.04718 0.01268 0.32052 0.00453 0.05614 0.81534 0.00000 0.02659 0.66734
INFO:__main__:layer 11: 0.86446 0.00818 0.05306 0.12751 0.13587 0.00293 0.06480 0.22173 0.21643 0.04838 0.48050 0.32190
INFO:__main__:layer 12: 0.08048 0.32489 0.56753 0.28201 0.37204 0.09334 0.26549 0.07130 0.00372 0.53481 0.24909 0.36108
INFO:__main__:Head ranked by importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 41 108 102 123 81 0 119 124 62 100 72 114
INFO:__main__:layer 2: 116 129 70 121 61 79 8 28 17 25 89 35
INFO:__main__:layer 3: 107 99 13 22 27 77 46 29 49 76 20 128
INFO:__main__:layer 4: 6 78 44 130 71 105 33 21 138 88 53 47
INFO:__main__:layer 5: 112 10 43 9 55 87 31 111 73 34 115 126
INFO:__main__:layer 6: 57 104 18 64 23 14 103 67 120 84 75 11
INFO:__main__:layer 7: 36 86 40 91 45 74 50 42 5 92 58 109
INFO:__main__:layer 8: 59 52 51 66 60 4 125 94 24 132 68 63
INFO:__main__:layer 9: 136 1 95 19 122 98 83 80 26 131 90 134
INFO:__main__:layer 10: 133 135 96 118 137 39 140 110 3 143 127 7
INFO:__main__:layer 11: 2 139 113 85 82 142 106 65 69 117 16 38
INFO:__main__:layer 12: 97 37 12 48 30 93 54 101 141 15 56 32
Iteration: 100%|ββββββββββ| 4561/4561 [01:54<00:00, 39.80it/s]
INFO:__main__:Attention entropies
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 2: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 3: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 4: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 5: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 6: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 7: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 8: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 9: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 11: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 12: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:Head importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.30328 0.05899 0.06971 0.03727 0.13938 1.00000 0.04436 0.03679 0.22807 0.07911 0.19918 0.05241
INFO:__main__:layer 2: 0.04867 0.02256 0.21194 0.04069 0.23058 0.15942 0.65188 0.38251 0.47535 0.40172 0.10869 0.34316
INFO:__main__:layer 3: 0.06349 0.08003 0.56604 0.41141 0.38410 0.16264 0.29070 0.37301 0.28161 0.18325 0.45048 0.02401
INFO:__main__:layer 4: 0.74869 0.15986 0.29754 0.02072 0.20961 0.06570 0.35717 0.44580 0.01144 0.11113 0.26962 0.28707
INFO:__main__:layer 5: 0.05413 0.58029 0.29859 0.64154 0.25539 0.11611 0.36774 0.05591 0.19390 0.34493 0.04906 0.02742
INFO:__main__:layer 6: 0.24067 0.06599 0.45376 0.22384 0.40461 0.53808 0.06806 0.21937 0.04209 0.13334 0.19226 0.57838
INFO:__main__:layer 7: 0.33972 0.12576 0.31489 0.10031 0.29630 0.19341 0.28052 0.29937 0.78337 0.09395 0.23640 0.05812
INFO:__main__:layer 8: 0.23342 0.27415 0.27682 0.22111 0.23234 0.79778 0.03235 0.09092 0.40418 0.01651 0.21795 0.22528
INFO:__main__:layer 9: 0.01306 0.88878 0.08858 0.45180 0.04019 0.08035 0.13417 0.15899 0.39753 0.01761 0.10785 0.01428
INFO:__main__:layer 10: 0.01597 0.01365 0.08691 0.04718 0.01268 0.32052 0.00453 0.05614 0.81534 0.00000 0.02659 0.66734
INFO:__main__:layer 11: 0.86446 0.00818 0.05306 0.12751 0.13587 0.00293 0.06480 0.22173 0.21643 0.04838 0.48050 0.32190
INFO:__main__:layer 12: 0.08048 0.32489 0.56753 0.28201 0.37204 0.09334 0.26549 0.07130 0.00372 0.53481 0.24909 0.36108
INFO:__main__:Head ranked by importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 41 108 102 123 81 0 119 124 62 100 72 114
INFO:__main__:layer 2: 116 129 70 121 61 79 8 28 17 25 89 35
INFO:__main__:layer 3: 107 99 13 22 27 77 46 29 49 76 20 128
INFO:__main__:layer 4: 6 78 44 130 71 105 33 21 138 88 53 47
INFO:__main__:layer 5: 112 10 43 9 55 87 31 111 73 34 115 126
INFO:__main__:layer 6: 57 104 18 64 23 14 103 67 120 84 75 11
INFO:__main__:layer 7: 36 86 40 91 45 74 50 42 5 92 58 109
INFO:__main__:layer 8: 59 52 51 66 60 4 125 94 24 132 68 63
INFO:__main__:layer 9: 136 1 95 19 122 98 83 80 26 131 90 134
INFO:__main__:layer 10: 133 135 96 118 137 39 140 110 3 143 127 7
INFO:__main__:layer 11: 2 139 113 85 82 142 106 65 69 117 16 38
INFO:__main__:layer 12: 97 37 12 48 30 93 54 101 141 15 56 32
INFO:__main__:Pruning: original score: 0.091866, threshold: 0.082679
INFO:__main__:Heads to mask: [117, 125, 140, 114, 121, 44, 112, 96, 109, 107, 108, 93, 105, 39]
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 2: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 3: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 4: 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 5: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 6: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 7: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 8: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 9: 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 1.00000 1.00000 0.00000 1.00000 0.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 11: 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 12: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
Iteration: 100%|ββββββββββ| 4561/4561 [01:54<00:00, 39.68it/s]
INFO:__main__:Attention entropies
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 2: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 3: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 4: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 5: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 6: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 7: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 8: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 9: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 11: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 12: 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000 0.00000
INFO:__main__:Head importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 0.39574 0.02468 0.08140 0.12466 0.12766 1.00000 0.09364 0.03733 0.21125 0.05515 0.27669 0.01294
INFO:__main__:layer 2: 0.04871 0.01492 0.02147 0.00766 0.25008 0.16705 0.73248 0.48019 0.40388 0.39327 0.06609 0.38033
INFO:__main__:layer 3: 0.14803 0.02220 0.64758 0.29125 0.45867 0.02242 0.34411 0.33109 0.30959 0.29897 0.41782 0.01806
INFO:__main__:layer 4: 0.82395 0.20768 0.33463 0.05595 0.30457 0.01353 0.38665 0.33563 0.02096 0.12274 0.15990 0.30594
INFO:__main__:layer 5: 0.24467 0.77639 0.26634 0.43066 0.28580 0.15136 0.29888 0.09479 0.20161 0.38652 0.07106 0.09292
INFO:__main__:layer 6: 0.24192 0.04294 0.19242 0.18251 0.64465 0.64657 0.03439 0.17273 0.05866 0.20935 0.14715 0.51240
INFO:__main__:layer 7: 0.24221 0.11722 0.54783 0.09908 0.30887 0.33625 0.18271 0.09798 0.76243 0.19917 0.26639 0.02415
INFO:__main__:layer 8: 0.34639 0.10483 0.42852 0.23310 0.20756 0.85146 0.05960 0.06187 0.25805 0.12922 0.14193 0.28091
INFO:__main__:layer 9: 0.02696 0.97099 0.08023 0.36748 0.05116 0.06451 0.07015 0.23535 0.39404 0.14999 0.01570 0.01164
INFO:__main__:layer 10: 0.02307 0.01918 0.09727 0.05241 0.03105 0.32034 0.02875 0.08710 0.92011 0.00000 0.05011 0.59763
INFO:__main__:layer 11: 0.81794 0.00753 0.05141 0.14622 0.09715 0.00008 0.03071 0.09413 0.17420 0.05189 0.69652 0.31542
INFO:__main__:layer 12: 0.09312 0.36467 0.56134 0.25867 0.37935 0.06446 0.34758 0.09796 0.03789 0.56186 0.25654 0.37430
INFO:__main__:Head ranked by importance scores
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 24 126 100 85 84 0 96 120 64 111 52 138
INFO:__main__:layer 2: 117 136 131 140 58 75 8 18 23 26 104 29
INFO:__main__:layer 3: 79 130 10 49 19 129 36 40 43 47 22 134
INFO:__main__:layer 4: 4 66 39 110 46 137 27 38 132 86 76 45
INFO:__main__:layer 5: 59 6 54 20 50 77 48 94 68 28 102 98
INFO:__main__:layer 6: 61 118 70 72 12 11 121 74 109 65 80 17
INFO:__main__:layer 7: 60 87 16 89 44 37 71 90 7 69 53 127
INFO:__main__:layer 8: 35 88 21 63 67 3 108 107 56 83 82 51
INFO:__main__:layer 9: 125 1 101 32 115 105 103 62 25 78 135 139
INFO:__main__:layer 10: 128 133 92 112 122 41 124 99 2 143 116 13
INFO:__main__:layer 11: 5 141 114 81 93 142 123 95 73 113 9 42
INFO:__main__:layer 12: 97 33 15 55 30 106 34 91 119 14 57 31
INFO:__main__:Masking: current score: 0.092085, remaning heads 130 (90.3 percents)
INFO:__main__:Heads to mask: [15, 11, 41, 13, 106, 35, 14, 25, 29, 83, 1, 126, 66, 7]
INFO:__main__:lv, h > 1 2 3 4 5 6 7 8 9 10 11 12
INFO:__main__:layer 1: 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000
INFO:__main__:layer 2: 1.00000 0.00000 0.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 3: 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000
INFO:__main__:layer 4: 1.00000 1.00000 1.00000 0.00000 1.00000 0.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 5: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 6: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 7: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000
INFO:__main__:layer 8: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 9: 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 0.00000 0.00000
INFO:__main__:layer 10: 0.00000 0.00000 1.00000 1.00000 0.00000 1.00000 0.00000 1.00000 1.00000 0.00000 1.00000 1.00000
INFO:__main__:layer 11: 1.00000 0.00000 1.00000 1.00000 1.00000 0.00000 0.00000 1.00000 1.00000 1.00000 1.00000 1.00000
INFO:__main__:layer 12: 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 1.00000 0.00000 1.00000 1.00000 1.00000
Iteration: 0%| | 0/4561 [00:00<?, ?it/s]
Traceback (most recent call last):
File "run_bertology.py", line 426, in <module>
main()
File "run_bertology.py", line 421, in main
head_mask = mask_heads(args, model, eval_dataloader)
File "run_bertology.py", line 179, in mask_heads
args, model, eval_dataloader, compute_entropy=False, head_mask=new_head_mask
File "run_bertology.py", line 104, in compute_heads_importance
head_importance += head_mask.grad.abs().detach()
AttributeError: 'NoneType' object has no attribute 'abs'
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:2.8.0
- Platform:
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| 04-22-2020 10:54:18 | 04-22-2020 10:54:18 | Sorry, I think it is the same issue as #4103. <|||||>Please don't open duplicate issues. If you have more info about this issue, post it here. Also, please use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks).<|||||>> Please don't open duplicate issues. If you have more info about this issue, post it here. Also, please use [code blocks](https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks).
Sorry for this.<|||||>Facing the same issue -
Environment -
- transformers version:2.2.2
- Platform:
- Python version:3.7.3
- PyTorch version (GPU?): 1.3.1 (no GPU)
- Tensorflow version (GPU?): 1.14.0 (no GPU)
- Using GPU in script?:No
- Using distributed or parallel set-up in script?: No
While computing the initial importance score (when the head_mask is None), I get the following error:
File "/Users/user/Desktop/org//prune_attention_heads.py", line 226, in compute_heads_importance
head_importance += head_mask.grad.abs().detach()
AttributeError: 'NoneType' object has no attribute 'abs'
After putting the line above in a try/except block and printing the head_mask, I get the following:
tensor([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], requires_grad=True)
Here, the head_mask has requires_grad=True and I am passing the head_mask into the model and calling loss.backward() as in the bertology script.
<|||||>I met the same problem; the head_mask has no grad the second time.<|||||>It seems that during the second pruning the head_mask tensor becomes a non-leaf node. When the grad is None, I printed the `head_mask.is_leaf` attribute and got the warning below (PyTorch 1.5.0):
> UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
warnings.warn("The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad "
head_mask is leaf: False<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
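If the mask really does become a non-leaf tensor on the second pass, one possible workaround (a sketch of the idea, not a tested patch to run_bertology.py) is to rebuild it as a fresh leaf with requires_grad before each new evaluation, or to call `retain_grad()` on it:
```
import torch

def as_leaf_mask(head_mask):
    # detach from the previous graph and return a fresh leaf tensor whose
    # .grad will be populated by the next backward pass
    return head_mask.detach().clone().requires_grad_(True)

mask = torch.ones(12, 12, requires_grad=True)
non_leaf = mask * 1.0                   # any op like this yields a non-leaf tensor
print(non_leaf.is_leaf)                 # False
print(as_leaf_mask(non_leaf).is_leaf)   # True
```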
|
transformers | 3,894 | closed | FileNotFoundError: [Errno 2] No such file or directory: 'mnli/dev_matched.tsv' | # π Bug
## Information
A strange error:
INFO:transformers.data.datasets.glue:Creating features from dataset file at mnli
Traceback (most recent call last):
File "run_bertology.py", line 426, in <module>
main()
File "run_bertology.py", line 407, in main
eval_dataset = GlueDataset(args, tokenizer=tokenizer, evaluate=True, local_rank=args.local_rank)
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/datasets/glue.py", line 100, in __init__
if evaluate
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/glue.py", line 219, in get_dev_examples
return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")), "dev_matched")
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/utils.py", line 115, in _read_tsv
with open(input_file, "r", encoding="utf-8-sig") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'mnli/dev_matched.tsv'
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
run_bertology.py
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
mnli
## To reproduce
Steps to reproduce the behavior:
export TASK_NAME=mnli
python ./run_bertology.py --data_dir $GLUE_DIR/$TASK_NAME \
--model_name bert-base-uncased \
--task_name $TASK_NAME \
--max_seq_length 128 \
--output_dir ./tmp/$TASK_NAME/ \
--try_masking
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
INFO:transformers.data.datasets.glue:Creating features from dataset file at mnli
Traceback (most recent call last):
File "run_bertology.py", line 426, in <module>
main()
File "run_bertology.py", line 407, in main
eval_dataset = GlueDataset(args, tokenizer=tokenizer, evaluate=True, local_rank=args.local_rank)
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/datasets/glue.py", line 100, in __init__
if evaluate
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/glue.py", line 219, in get_dev_examples
return self._create_examples(self._read_tsv(os.path.join(data_dir, "dev_matched.tsv")), "dev_matched")
File "/Users/thomas/opt/anaconda3/lib/python3.7/site-packages/transformers/data/processors/utils.py", line 115, in _read_tsv
with open(input_file, "r", encoding="utf-8-sig") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'mnli/dev_matched.tsv'
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform:
- Python version:3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:
| 04-22-2020 10:34:38 | 04-22-2020 10:34:38 | Well, it looks like you don't have the "mnli/dev_matched.tsv" file downloaded? You can download the GLUE datasets using [utils/download_glue_data.py](https://github.com/huggingface/transformers/blob/master/utils/download_glue_data.py) |
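A small sanity check before launching the script makes the missing-file case obvious; the path below is a placeholder for whatever `--data_dir` points to, and the MNLI files themselves come from the linked download script:
```
import os

data_dir = "glue_data/MNLI"  # placeholder: whatever --data_dir points to
expected = os.path.join(data_dir, "dev_matched.tsv")

if not os.path.isfile(expected):
    raise FileNotFoundError(
        f"{expected} is missing; fetch the GLUE data first (e.g. with "
        "utils/download_glue_data.py) and point --data_dir at the MNLI folder"
    )
```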
transformers | 3,893 | closed | Can not import DataCollatorForLanguageModeling | # π Bug
## Information
Model I am using (ALBERT):
Language I am using the model on (Sanskrit, Hindi):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. In Google Colab
2. ` !python /content/transformers/examples/run_language_modeling.py \
--train_data_file /content/corpus/train/full.txt \
--eval_data_file /content/corpus/valid/full_val.txt \
--model_type albert-base-v2 \ `
3. This worked yesterday, but the newly added `DataCollatorForLanguageModeling` can't be imported.
The error I am getting:
`2020-04-22 05:12:25.640328: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "/content/transformers/examples/run_language_modeling.py", line 29, in <module>
from transformers import (
ImportError: cannot import name 'DataCollatorForLanguageModeling'`
So, I checked if it can be imported directly.
`from transformers import DataCollatorForLanguageModeling`
ERROR
`from transformers import DataCollatorForLanguageModeling
ImportError: cannot import name 'DataCollatorForLanguageModeling' `
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.8.0
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0
- Tensorflow version (GPU?): 2.2.0-rc3
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 04-22-2020 05:30:29 | 04-22-2020 05:30:29 | I was installing from pip and it isn't updated yet. Building from source solved this.<|||||>I am trying to run this from source and still I am getting the same error!
Please have a look [here](https://github.com/huggingface/transformers/issues/3896#issue-604681806).<|||||>It's because the pip package hasn't been updated. The training script has changed fundamentally, so you can try building from source using
`git clone https://github.com/huggingface/transformers
cd transformers
pip install .`
or
you can use the old `run_language_modeling.py` script from a previous commit. |
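After installing from source, a quick check like the following should confirm that the import now resolves and that Python is picking up the source checkout rather than the old pip copy:
```
import transformers
from transformers import DataCollatorForLanguageModeling

# the path should point at the source checkout, not the old site-packages copy,
# and the version should be newer than the 2.8.0 release from pip
print(transformers.__version__)
print(transformers.__file__)
print(DataCollatorForLanguageModeling)
```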