repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 12,025 | closed | Extend pipelines for automodel tuples | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR allows for multiple `AutoModel...` classes to be attached to a single pipeline. It should unblock this PR: https://github.com/huggingface/transformers/pull/11525
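For context, the effect can be illustrated with a simplified, hypothetical helper (this is not the actual pipeline code; the function name and error handling are assumptions):
```python
from transformers import AutoModelForCausalLM, AutoModelForSeq2SeqLM

def load_first_compatible(model_classes, model_name_or_path, **kwargs):
    """Try each AutoModel class in the tuple and return the first one that accepts the checkpoint."""
    last_error = None
    for model_class in model_classes:
        try:
            return model_class.from_pretrained(model_name_or_path, **kwargs)
        except ValueError as err:  # raised when the config class is not supported by this auto class
            last_error = err
    raise last_error

# A causal LM checkpoint is rejected by AutoModelForSeq2SeqLM and picked up by AutoModelForCausalLM.
model = load_first_compatible((AutoModelForSeq2SeqLM, AutoModelForCausalLM), "gpt2")
```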
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik for another set of eyes if possible.
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-04-2021 07:35:16 | 06-04-2021 07:35:16 | FYI, there are 536 model_ids (out of all public ones) that have no `config.architectures`.
They can be accessed here: https://github.com/patrickvonplaten/files_to_link_to/blob/master/model_ids_no_config.txt<|||||>@sgugger I think it should be good to go, but the failing test is not from this PR, can you confirm?<|||||>Yes, the failing test is unrelated, a fix is on its way. This is safe to merge. |
transformers | 12,024 | closed | Add CANINE | # What does this PR do?
It adds Google's new [CANINE](https://arxiv.org/abs/2103.06874) model. It's a tokenizer-free model, meaning you can throw away your `vocab.txt` file. The model trains at a character level, namely by turning each character into its unicode code point. In Python, this can be done using the built-in `ord()` function. This means that `input_ids` can be created simply as `[ord(char) for char in text]`. This is different from ByT5.
However, there's still a good use for a `CanineTokenizer`, namely for padding/truncating unicode code points up to the max length of 2048. It's also handy as it lets you easily convert text (a string) to ids (unicode code points) and vice versa.
Due to the bigger sequence length (2048), the model downsamples the characters to what is called "molecules", of length 512. Then, a regular BERT encoder is applied. The `pooled_output` (which can be used for sequence classification) is then simply equal to the last hidden state of the first [CLS] token followed by a linear layer. In order to get a `sequence_output` (useful for token classification tasks) which is again of length 2048, an upsampling technique is used (details can be found in the paper).
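To make the character-level encoding concrete, here is a minimal sketch (the `google/canine-s` checkpoint name is the one used later in this thread):
```python
from transformers import CanineTokenizer

text = "hello world"

# Tokenizer-free view: ids are simply unicode code points.
manual_ids = [ord(char) for char in text]

# The tokenizer adds the special code points and handles padding/truncation up to 2048.
tokenizer = CanineTokenizer.from_pretrained("google/canine-s")
encoding = tokenizer(text, padding="max_length", max_length=2048, return_tensors="pt")
print(encoding.input_ids.shape)  # (1, 2048)
```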
To do:
- [x] remove `is_decoder` logic. Is it possible to add support for `is_decoder` in a future PR? Will this be backwards compatible? Answer: yes
- [x] fix tests (currently 35 pass, 1 fail). Once the official checkpoints are on the hub under the "Google" namespace, all tests pass.
- [ ] Update namespace on the hub from nielsr to google (update the tests for this too), and add model cards.
A question here: the CANINE model uses 3 Transformer encoders (2 shallow ones, consisting of only a single layer, of which the first one uses local attention, and a deep one similar to BERT). Should it be possible to return the `hidden_states` and `attentions` of all of these 3 encoders in the output? Or only of the deep one? Otherwise I need to define a custom `CanineModelOutput`.
Fixes #11016
cc @patil-suraj @patrickvonplaten
Also tagging one of the original authors: @dhgarrette | 06-04-2021 07:34:13 | 06-04-2021 07:34:13 | Hi @NielsRogge , thanks for that PR! I'm currently trying to run the token classification example and here's some initial feedback:
- AutoTokenizer is currently not working/finding the `CanineTokenizer`, so I had to manually specify it
- It seems that the token classification example only works with Fast Tokenizers: I disabled that assertion check, but then the following message is thrown:
``` File "run_ner.py", line 514, in <module>
main()
File "run_ner.py", line 361, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1407, in map
update_data = does_function_return_dict(test_inputs, test_indices)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1378, in does_function_return_dict
function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "run_ner.py", line 330, in tokenize_and_align_labels
word_ids = tokenized_inputs.word_ids(batch_index=i)
File "/mnt/transformers-canince/src/transformers/tokenization_utils_base.py", line 347, in word_ids
raise ValueError("word_ids() is not available when using Python-based tokenizers")
ValueError: word_ids() is not available when using Python-based tokenizers
```
is there any chance to get this example running in the final version of PR :thinking: would be highly interesting to see the results :hugs: <|||||>@patrickvonplaten @sgugger @LysandreJik my PR is ready for review.
Just a question: I might define a custom `CanineModelOutput`, as CANINE consists of 3 Transformer encoders (2 shallow ones, each consisting of only a single layer, and one "deep" BERT-like one). If a user specifies `output_hidden_states=True` for example, then it could return the hidden states of all of these 3 encoders (however, the hidden states won't have the same shape then). Currently, I'm just using a `BaseModelOutputWithPooling`, and it only returns the hidden states of the deep encoder (which all have the same shape). Would appreciate your feedback here.
Also @LysandreJik: would be great if you could review the tokenizer. Perhaps it would be better if a space is added after the CLS token and before the SEP token when decoding, e.g. "hello world" is currently decoded as "[CLS]hello world[SEP]". Do I need to update the `lstrip` and `rstrip` parameters of the `AddedToken` instances for that?
<|||||>@NielsRogge just one question (I tried to use `CanineModel` with the latest commit):
```python
In [36]: output[-1].shape
Out[36]: torch.Size([1, 3, 768])
In [37]: encoding = tokenizer(["hello world and hugging face"], padding="longest", return_tensors="pt")
In [38]: hidden_states = model(**encoding).hidden_states
In [39]: hidden_states[-1].shape
Out[39]: torch.Size([1, 7, 768])
In [40]: encoding = tokenizer(["huggingface"], padding="longest", return_tensors="pt")
In [41]: hidden_states = model(**encoding).hidden_states
In [42]: hidden_states[-1].shape
Out[42]: torch.Size([1, 3, 768])
```
I don't really understand the shape of the hidden states from the model. I would expect one tensor per character, but why is it sometimes 3 or 7? <|||||>Hi @stefan-it,
The reason is that CANINE downsamples the character sequence length before applying the deep Transformer encoder. The downsampling rate is by default set to 4, and the max sequence length (in terms of characters) is set to 2048. So `2048 // 4 = 512`, which is the regular length of models like BERT and RoBERTa.
In your case, you're not padding anything, so if you just provide `"HuggingFace"`, then the character sequence length is 13 (special tokens included), and `13 // 4 = 3`, hence the hidden states of the deep encoder have length 3. If you just want the final hidden states for each character (which are upsampled by another shallow Transformer encoder), you can use `outputs.last_hidden_state`.
But, and that's what my question is about above, I could, instead of only returning the `hidden states` of the deep encoder, also return the hidden states of the initial and final Transformer encoders. In that case, you will have `hidden_states` of the initial encoder at the character level, then `hidden_states` of the deep encoder at the downsampled level (the authors call this "molecule level"), and the `hidden_states` of the final encoder at the character level. <|||||>Update: I'm working on a separate branch called `updating_outputs_canine` which replaces the `BaseModelOutputWithPooling` by a custom `CanineModelOutputWithPooling`, and it returns the attentions and hidden states of all 3 Transformer encoders. All tests passing :) I'll merge it with this main branch once I've got approval<|||||>Thanks a lot for your work @NielsRogge, a fantastic addition to the library once again!<|||||>Thanks! I haven't uploaded the model checkpoints yet, will do soon<|||||>Hi @NielsRogge , thanks for adding this! I'm current playing around with the model and found this corner case:
```python
from transformers import AutoConfig, CanineModel, CanineTokenizer
model_name = "google/canine-s"
config = AutoConfig.from_pretrained(model_name, output_hidden_states=True)
model = CanineModel.from_pretrained(pretrained_model_name_or_path=model_name, config=config)
tokenizer = CanineTokenizer.from_pretrained(model_name)
encoding = tokenizer(["."], padding="longest", return_tensors="pt")
hidden_states = model(**encoding).last_hidden_state
```
This happens, when input has a length of 1:
```bash
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-1-3719c2ae7d25> in <module>
7
8 encoding = tokenizer(["."], padding="longest", return_tensors="pt")
----> 9 hidden_states = model(**encoding).last_hidden_state
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1013 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1014 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1015 return forward_call(*input, **kwargs)
1016 # Do not call functions when jit is used
1017 full_backward_hooks, non_full_backward_hooks = [], []
/mnt/europeana-bert/transformers/src/transformers/models/canine/modeling_canine.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, outpu
t_attentions, output_hidden_states, return_dict)
1185 # this, it seems that molecules and characters require a very different
1186 # feature space; intuitively, this makes sense.
-> 1187 init_molecule_encoding = self.chars_to_molecules(input_char_encoding)
1188
1189 # Deep BERT encoder
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1013 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1014 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1015 return forward_call(*input, **kwargs)
1016 # Do not call functions when jit is used
1017 full_backward_hooks, non_full_backward_hooks = [], []
/mnt/europeana-bert/transformers/src/transformers/models/canine/modeling_canine.py in forward(self, char_encoding)
324 # We transpose it to be [batch, hidden_size, char_seq]
325 char_encoding = torch.transpose(char_encoding, 1, 2)
--> 326 downsampled = self.conv(char_encoding)
327 downsampled = torch.transpose(downsampled, 1, 2)
328 downsampled = self.activation(downsampled)
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1013 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1014 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1015 return forward_call(*input, **kwargs)
1016 # Do not call functions when jit is used
1017 full_backward_hooks, non_full_backward_hooks = [], []
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py in forward(self, input)
261
262 def forward(self, input: Tensor) -> Tensor:
--> 263 return self._conv_forward(input, self.weight, self.bias)
264
265
/opt/conda/lib/python3.8/site-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
257 weight, bias, self.stride,
258 _single(0), self.dilation, self.groups)
--> 259 return F.conv1d(input, weight, bias, self.stride,
260 self.padding, self.dilation, self.groups)
261
RuntimeError: Calculated padded input size per channel: (3). Kernel size: (4). Kernel size can't be greater than actual input size
```
(I have one NER dataset, and one sentence consists of one token, and that token is the `.` 😅)
Thanks for your patience :hugs: <|||||>Yeah, so the input has length 3 (3 unicode code points, namely for "[CLS]", "." and "[SEP]"). However, CANINE uses a convolutional operation to downsample the sequence length, and the kernel size is 4. So of course you can't apply a kernel of size 4 to an input of size 3. So it's advised to pad the input up to a size that's at least 4. |
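A minimal way to guard against such very short inputs is to pad up to at least the downsampling rate (4 by default); a sketch:
```python
from transformers import CanineModel, CanineTokenizer

model_name = "google/canine-s"
model = CanineModel.from_pretrained(model_name)
tokenizer = CanineTokenizer.from_pretrained(model_name)

# Pad the one-character example to a handful of code points so that the
# width-4 downsampling convolution always has enough input to slide over.
encoding = tokenizer(["."], padding="max_length", max_length=8, return_tensors="pt")
last_hidden_state = model(**encoding).last_hidden_state  # character-level, shape (1, 8, 768)
```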
transformers | 12,023 | closed | Flax CLM script | This PR adds a causal language modeling training script for Flax.
| 06-04-2021 07:01:57 | 06-04-2021 07:01:57 | |
transformers | 12,022 | closed | Getting IndexError: index out of range in self while finetuning GPTNeo on Text Classification | ## Environment info
- `transformers` version: current master branch
- Platform: Colab
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101
- Tensorflow version (GPU?): Not using
- Using GPU in script?: With GPU also has issue
- Using distributed or parallel set-up in script?: yes I am using trainer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below) mention in below colab
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: emotion
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run this [colab](https://github.com/bhadreshpsavani/ExploringSentimentalAnalysis/blob/main/SentimentalAnalysisWithGPTNeo.ipynb)
## Error:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-13-5845bc74dd04> in <module>()
5 train_dataset=emotions_encoded["train"],
6 eval_dataset=emotions_encoded["validation"])
----> 7 trainer.train();
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1261 tr_loss += self.training_step(model, inputs)
1262 else:
-> 1263 tr_loss += self.training_step(model, inputs)
1264 self.current_flos += float(self.floating_point_ops(inputs))
1265
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs)
1739 loss = self.compute_loss(model, inputs)
1740 else:
-> 1741 loss = self.compute_loss(model, inputs)
1742
1743 if self.args.n_gpu > 1:
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
1771 else:
1772 labels = None
-> 1773 outputs = model(**inputs)
1774 # Save past state if it exists
1775 # TODO: this needs to be fixed and made cleaner later.
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict)
1098 output_attentions=output_attentions,
1099 output_hidden_states=output_hidden_states,
-> 1100 return_dict=return_dict,
1101 )
1102 hidden_states = transformer_outputs[0]
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/gpt_neo/modeling_gpt_neo.py in forward(self, input_ids, past_key_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict)
811
812 if inputs_embeds is None:
--> 813 inputs_embeds = self.wte(input_ids)
814 position_embeds = self.wpe(position_ids)
815 hidden_states = inputs_embeds + position_embeds
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
156 return F.embedding(
157 input, self.weight, self.padding_idx, self.max_norm,
--> 158 self.norm_type, self.scale_grad_by_freq, self.sparse)
159
160 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
IndexError: index out of range in self
```
Note: this error occurs when running on CPU. The issue also occurs on GPU, but with a different error.
### Who can help
@patil-suraj @sgugger | 06-04-2021 05:45:01 | 06-04-2021 05:45:01 | This is because you are using a pretrained model but add new tokens without changing the weights. There is thus a problem when you pass that new token as input ID, it doesn't have a matching embedding. You can resize the embedding matrix but you will then lose all the weights of that embedding matrix, so you won't be properly applying transfer learning.<|||||>Thanks, @sgugger,
If I train the model with a batch size of 1 and no new padding token, it works, so that is the exact reason, as you said.
So, to add any new token to the tokenizer, do I need to train the model again for language modeling?<|||||>Not necessarily for language modeling, but you will need more training since all the embeddings will be randomly initialized. The only way I see around it is to manually create the embedding weights so that they use the pretrained weights for all the tokens except the padding token, and then put a row of 0s for the padding token. That should work. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
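The zero-row approach suggested above could look roughly like this (a sketch; the checkpoint name and `num_labels` are assumptions for the emotion-classification setup):
```python
import torch
from transformers import AutoTokenizer, GPTNeoForSequenceClassification

model_name = "EleutherAI/gpt-neo-125M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = GPTNeoForSequenceClassification.from_pretrained(model_name, num_labels=6)

# Add a dedicated padding token and grow the embedding matrix by one row.
tokenizer.add_special_tokens({"pad_token": "[PAD]"})
model.resize_token_embeddings(len(tokenizer))
model.config.pad_token_id = tokenizer.pad_token_id

# Keep every pretrained embedding row and zero out only the new padding row.
with torch.no_grad():
    model.get_input_embeddings().weight[tokenizer.pad_token_id].fill_(0.0)
```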
transformers | 12,021 | closed | [Deepspeed] Assert on mismatches between ds and hf args | This is another iteration to make things less prone to errors when dealing with the complex space of partially overlapping DS and HF Trainer configs.
- validate params and assert on mismatch - revamps how the config massage is done
- add a test
- add docs
- this new code uncovered a config mismatch in the deepspeed tests thanks to the new validation, so fixed that too.
As an example of a really bad mismatch (as in the new test), the user gets an assert with:
```
Please correct the following DeepSpeed config values that mismatch TrainingArguments values:
- ds train_micro_batch_size_per_gpu=4 vs hf per_device_train_batch_size=2
- ds gradient_accumulation_steps=4 vs hf gradient_accumulation_steps=2
- ds train_batch_size=1000 vs hf train_batch_size (calculated)=4
- ds gradient_clipping=1.1 vs hf max_grad_norm=1.0
- ds optimizer.params.betas=[0.8, 0.89] vs hf adam_beta1+adam_beta2=[0.9, 0.99]
- ds fp16.enabled=False vs hf fp16+fp16_backend(amp)=True
The easiest method is to set these DeepSpeed config values to 'auto'.
```
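For reference, the contested values can simply be deferred to the Trainer; an illustrative config fragment, expressed here as a Python dict that can be passed as the `deepspeed` argument (a sketch, not the full config):
```python
# Values set to "auto" are filled in from TrainingArguments, so they can never mismatch.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
    "train_batch_size": "auto",
    "gradient_clipping": "auto",
    "optimizer": {
        "type": "AdamW",
        "params": {"lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto"},
    },
    "fp16": {"enabled": "auto"},
}
```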
Fixes: https://github.com/microsoft/DeepSpeed/issues/1107
@sgugger | 06-04-2021 02:07:18 | 06-04-2021 02:07:18 | @stas00 - this looks great. Thanks for the added bonus of showing the conflicting names, values, and sources!<|||||>I'm glad to hear you found it useful, @rfernand2.
This is all new and experimental so if you find other suggestions for improvements please don't hesitate to file an issue. Thank you! |
transformers | 12,020 | closed | Remove "Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation." | When using GPT2 for generation, there is always info:
> Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation
I find this message hides in https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py
It seems redundant. Can it be removed for simplicity?
@patrickvonplaten @patil-suraj | 06-04-2021 01:37:30 | 06-04-2021 01:37:30 | +1, it seems like this method gets triggered on each call to open-ended generation, which pollutes logs quite a bit :/ Is there a way to have it run just once on `pipeline` creation?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>You can manually set the pad_token_id to prevent this error message before running the pipeline, e.g.
```python
from transformers import pipeline  # import needed for this snippet to run standalone

gen_pipe = pipeline("text-generation")
gen_pipe.model.config.pad_token_id = gen_pipe.model.config.eos_token_id
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
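Alternatively, the id can be passed explicitly at generation time, which also prevents the message (a small sketch):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("Hello, my dog is", return_tensors="pt").input_ids
# Passing pad_token_id explicitly avoids the "Setting `pad_token_id` to
# `eos_token_id`" info message on every call.
output_ids = model.generate(input_ids, max_length=20, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```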
transformers | 12,019 | closed | Allow registerable Components | # 🚀 Feature request
Currently, the Auto* classes use hardcoded mappings from model_name to class. This means they cannot be used to extend transformers with custom implementations, which in turn makes it difficult to perform experiments, for example with alternate tokenisation, particularly when integrating with other frameworks such as AllenNLP.
I have a custom tokeniser that is based on the tokenizers library. Since it uses a custom python pre-tokenisation step it cannot be serialised to the tokenizer.json file. To get around this I created a custom class that builds the pipeline on the fly. i.e.
```
class CustomUnigramTokenizer(PreTrainedTokenizerFast,FromParams):
def __init__(
self,
tokenizer_file:str=None,
bos_token:str="<s>",
eos_token:str="</s>",
sep_token:str="</s>",
cls_token:str="<s>",
unk_token:str="<unk>",
pad_token:str="<pad>",
mask_token:str="<mask>",
**kwargs
):
super().__init__(
tokenizer_file=tokenizer_file,
bos_token=bos_token,
eos_token=eos_token,
unk_token=unk_token,
sep_token=sep_token,
cls_token=cls_token,
pad_token=pad_token,
mask_token=mask_token,
**kwargs,
)
self.backend_tokenizer.pre_tokenizer = self._pre_tokenizer()
```
The issue is I have to use `CustomUnigramTokenizer.from_pretrained()` rather than `AutoTokenizer.from_pretrained()`
Most frameworks, i.e. the trainers in the transformers library, or the allennlp library use `AutoTokenizer.from_pretrained()`
So it would be nice if I could register my implementation, i.e.
AutoTokenizer.register("my_custom_model", CustomUnigramTokenizer)
Allowing AutoTokenizer.from_pretrained("/path_to_my_custom_tokeniser")
I'm assuming I have a config.json in /path_to_my_custom_tokeniser with model_name="my_custom_model"
Even better would be an allennlp registration system, i.e.
```
@PreTrainedTokenizer.register("my_custom_model")
class CustomUnigramTokenizer(PreTrainedTokenizerFast, FromParams):
    def __init__(self, **kwargs):
        ...
```
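For illustration, the kind of registry being asked for can be sketched in plain Python (this is a mock-up of the requested behaviour, not an existing transformers API at the time of this issue; later versions of transformers added registration methods such as `AutoTokenizer.register` along these lines):
```python
_TOKENIZER_REGISTRY = {}

def register_tokenizer(model_type):
    """Decorator that maps a model_type string to a tokenizer class."""
    def wrapper(cls):
        _TOKENIZER_REGISTRY[model_type] = cls
        return cls
    return wrapper

@register_tokenizer("my_custom_model")
class MyTokenizer:
    pass

def tokenizer_from_pretrained(path, model_type):
    # A real implementation would read model_type from the config.json found in `path`.
    return _TOKENIZER_REGISTRY[model_type]()
```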
## Motivation
It's quite difficult to extend transformers and experiment with different configurations. | 06-04-2021 01:08:39 | 06-04-2021 01:08:39 | Similar question to https://github.com/huggingface/transformers/issues/10256, and it is indeed a nice proposal!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,018 | closed | [TrainerArguments] format and sort __repr__, add __str__ | This PR:
- makes the `TrainerArguments` log dump more readable - by sorting and formatting the output
- `__repr__` wasn't actually used in examples, but `__str__` was needed - so fixing that buglet
New dump looks like:
```
06/03/2021 17:05:12 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-06,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=False,
eval_accumulation_steps=None,
eval_steps=2500,
evaluation_strategy=IntervalStrategy.STEPS,
fp16=False,
fp16_backend=auto,
fp16_full_eval=True,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
greater_is_better=None,
group_by_length=False,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.1,
learning_rate=3e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_on_each_node=True,
logging_dir=runs/Jun03_17-05-12_hope,
logging_first_step=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=1.0,
output_dir=output_dir,
overwrite_output_dir=True,
past_index=-1,
per_device_eval_batch_size=16,
per_device_train_batch_size=8,
predict_with_generate=True,
prediction_loss_only=False,
push_to_hub=False,
remove_unused_columns=True,
report_to=['tensorboard'],
resume_from_checkpoint=None,
run_name=output_dir,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
sortish_sampler=True,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=50,
weight_decay=0.0,
)
```
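A formatting helper along these lines can produce such a dump (illustrative sketch only, not the exact code in this PR):
```python
import dataclasses

def format_args(args) -> str:
    """Render a dataclass as 'ClassName(\n  key=value,\n  ...)' with keys sorted alphabetically."""
    self_as_dict = dataclasses.asdict(args)
    lines = [f"{k}={v}," for k, v in sorted(self_as_dict.items())]
    return f"{args.__class__.__name__}(\n" + "\n".join(lines) + "\n)"
```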
@sgugger | 06-04-2021 00:08:12 | 06-04-2021 00:08:12 | |
transformers | 12,017 | closed | Fix deberta 2 Tokenizer Integration Test | fixes #12016 | 06-03-2021 19:18:33 | 06-03-2021 19:18:33 | |
transformers | 12,016 | closed | FAILED tests/test_tokenization_deberta_v2.py::DebertaV2TokenizationTest::test_tokenizer_integration | This integration (`@slow`) test fails:
see https://github.com/huggingface/transformers/runs/2723622794?check_suite_focus=true#step:7:2663
I will provide a PR | 06-03-2021 19:05:10 | 06-03-2021 19:05:10 | > I will provide a PR
see #12017 |
transformers | 12,015 | closed | Fix problem_type to match with the applied loss function for distillbert sequence classification | # What does this PR do?
The problem_type in config is not correct with the loss function applied. Can be seen [here](https://github.com/huggingface/transformers/blob/242ec31aa59b358e631d981b545fd08330584ea8/src/transformers/models/distilbert/modeling_distilbert.py#L649).
This PR fixes this so that the applied loss is consistent with the problem type: `BCEWithLogitsLoss` is applied for the `problem_type` of `single_label_classification`, and `CrossEntropyLoss` is applied for the problem type `multi_label_classification`
Fixes #12014
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik | 06-03-2021 16:59:35 | 06-03-2021 16:59:35 | cc @abhi1thakur @sgugger <|||||>No, this is incorrect. What `single_label_classification` means is that each sample can only have one label (but there could be multiple classes), so the loss to use is cross entropy. `multi_label_classification` means each sample can have zero or several labels, so in this case we use BCE (because there can't be a softmax).
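To summarise the intended mapping described in the comment above, a schematic sketch of the loss selection (simplified; not the literal modeling code):
```python
from torch import nn

def classification_loss(logits, labels, problem_type, num_labels):
    if problem_type == "regression":
        return nn.MSELoss()(logits.squeeze(), labels.squeeze())
    if problem_type == "single_label_classification":
        # one label per sample, possibly many classes -> cross entropy
        return nn.CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
    if problem_type == "multi_label_classification":
        # zero or several labels per sample -> BCE with logits
        return nn.BCEWithLogitsLoss()(logits, labels.float())
    raise ValueError(f"Unknown problem_type: {problem_type}")
```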
transformers | 12,014 | closed | Mismatch between problem_type and loss functions in DistilBert for sequence classification | ## Environment info
- Platform: Windows
- Python version: 3.8
- PyTorch version (GPU?): 1.8
### Who can help
@LysandreJik
Models:
Distillbert for sequence classification
## Information
The problem_type in config is not correct with the loss function applied. Can be seen [here](https://github.com/huggingface/transformers/blob/242ec31aa59b358e631d981b545fd08330584ea8/src/transformers/models/distilbert/modeling_distilbert.py#L649)
E.g. for the problem type "single_label_classification", the CrossEntropyLoss is used, and the logic to detect "single_label_classification" indicates there should be multiple classes. So it's just that the labels of problem_type are switched. Hence, if someone actually uses this problem_type config to choose their task (binary or multiclass), the applied loss would be incorrect.
## Expected behavior
For problem_type, the "single_label_classification", the BCE loss should be applied
| 06-03-2021 16:52:21 | 06-03-2021 16:52:21 | cc @abhi1thakur |
transformers | 12,013 | closed | [Flax] Refactor MLM | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Simplify MLM pretraining script a bit
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-03-2021 15:19:55 | 06-03-2021 15:19:55 | |
transformers | 12,012 | closed | RAG end to end with RAY throws pickling error | **Environment info:**
Transformers:4.5.1
Platform: Ubuntu
Python:3.7
Torch 1.6.0
Gpus = yes
Distributed: Ray (1.3.0)
**Information**
I am using the RAG end2end-retriever from the examples code:
examples/research_projects/rag-end2end-retriever
For the time being I am just trying it on the dummy data, using the script given there:
sh ./test_run/test_finetune.sh
I used the config there, except that I changed the number of GPUs to 4 and changed the GPU order.
**Who can help**
@shamanez
**Error**
Throws pickling error
Loading passages from test_run/dummy-kb/my_knowledge_dataset
Traceback (most recent call last):
File "finetune_rag.py", line 790, in <module>
main(args)
File "finetune_rag.py", line 727, in main
model: GenerativeQAModule = GenerativeQAModule(args)
File "finetune_rag.py", line 124, in __init__
hparams.model_name_or_path, hparams.actor_handles, config=config
File "/home/shraeyb/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 165, in from_pretrained
index=index,
File "/home/shraeyb/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in __init__
for worker in self.retrieval_workers
File "/home/shraeyb/transformers/examples/research_projects/rag-end2end-retriever/distributed_ray_retriever.py", line 93, in <listcomp>
for worker in self.retrieval_workers
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 112, in remote
return self._remote(args, kwargs)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 153, in _remote
return invocation(args, kwargs)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 147, in invocation
num_returns=num_returns)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/actor.py", line 865, in _actor_method_call
list_args, name, num_returns, self._ray_actor_method_cpus)
File "python/ray/_raylet.pyx", line 1359, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 1364, in ray._raylet.CoreWorker.submit_actor_task
File "python/ray/_raylet.pyx", line 304, in ray._raylet.prepare_args
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 324, in serialize
return self._serialize_to_msgpack(value)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 304, in _serialize_to_msgpack
self._serialize_to_pickle5(metadata, python_objects)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 264, in _serialize_to_pickle5
raise e
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/serialization.py", line 261, in _serialize_to_pickle5
value, protocol=5, buffer_callback=writer.buffer_callback)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 73, in dumps
cp.dump(obj)
File "/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py", line 580, in dump
return Pickler.dump(self, obj)
File "pyarrow/io.pxi", line 1021, in pyarrow.lib.Buffer.__reduce_ex__
AttributeError: module 'pickle' has no attribute 'PickleBuffer'
INFO:wandb.internal.internal:Internal process exited
/home/shraeyb/anaconda3/envs/new_st/lib/python3.7/site-packages/ray/autoscaler/_private/cli_logger.py:61: FutureWarning: Not all Ray CLI dependencies were found. In Ray 1.4+, the Ray CLI, autoscaler, and dashboard will only be usable via `pip install 'ray[default]'`. Please update your install command.
"update your install command.", FutureWarning)
Stopped all 13 Ray processes. | 06-03-2021 13:39:04 | 06-03-2021 13:39:04 | Check this[ StackOverflow question](https://stackoverflow.com/questions/67798070/raytune-is-throwing-error-module-pickle-has-no-attribute-picklebuffer-whe). btw I use python 3.8 and it worked perfectly for me.<|||||>Yup i did but that is also just the question without any answer unfortunately.<|||||>Hi there, I executed the code again by installing RAY just to double-check. It is working perfectly for me. I also use an anaconda and RAY with 1.3.0.
The only difference is python 3.8 instead of 3.7.
I would suggest you update anaconda and try running the script.<|||||>Alright, thank you, I used Python 3.8 now (though my CUDA is 10.1). That error looks resolved but something new pops up. Still trying on dummy data:
File "finetune_rag.py", line 790, in <module>
main(args)
File "finetune_rag.py", line 727, in main
model: GenerativeQAModule = GenerativeQAModule(args)
File "finetune_rag.py", line 131, in __init__
retriever.set_ctx_encoder_tokenizer(ctx_encoder_tokenizer)
AttributeError: 'RagRayDistributedRetriever' object has no attribute 'set_ctx_encoder_tokenizer'
Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>
Traceback (most recent call last):
File "/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py", line 809, in __del__
AttributeError: 'NoneType' object has no attribute 'global_worker'
Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>
Traceback (most recent call last):
File "/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py", line 809, in __del__
AttributeError: 'NoneType' object has no attribute 'global_worker'
Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>
Traceback (most recent call last):
File "/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py", line 809, in __del__
AttributeError: 'NoneType' object has no attribute 'global_worker'
Exception ignored in: <function ActorHandle.__del__ at 0x7fc4c143aee0>
Traceback (most recent call last):
File "/home/shraeyb/anaconda3/envs/py38/lib/python3.8/site-packages/ray/actor.py", line 809, in __del__
AttributeError: 'NoneType' object has no attribute 'global_worker'
Stopped all 13 Ray processes.<|||||>Ah, I think you have to install the library from source. Still, this is not in the pip version.<|||||>You mean ray 1.3 from source?<|||||>The Transformers library.
<|||||>Thank you. This solves it. Indeed, it needs to be Python 3.8.
Though I have a few 8GB GPUs, so it cannot really fit on my box and goes out of memory. What GPU config did you use?<|||||>Oh, I am working with 32GB GPUs. This has nearly 800M parameters: a BART-large model and two BERT-base models.
<|||||>No worries, that's fine. I will have to submit jobs to a cluster (it is just tedious to do that). Thank you |
transformers | 12,011 | closed | Add mlm pretraining xla torch readme | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds stats about MLM pretraining
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-03-2021 12:48:46 | 06-03-2021 12:48:46 | |
transformers | 12,010 | closed | Translation example generates the same input | I trained the [translation](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) example, using a custom JSON file with the format specified in the README.
It has a bug, I don't know where, but strangely it just generated the same input. For example, if you set `--do_predict`, the `generated_predictions.txt` is some form of (or in the language or format of) the inputs. | 06-03-2021 12:43:01 | 06-03-2021 12:43:01 | Hey @puraminy,
Could you provide a reproducible code snippet? Otherwise, it's very difficult to reproduce the error you got.<|||||>@patrickvonplaten
Here is my code on github
https://github.com/puraminy/mt5-comet
you should be able to run `run_trans` in the root folder. You can see my settings there. I also changed `run_translation` a bit to adapt it to my local data.
The data were also provided there. For test I decreased the number of train and test data in input parameters. I didn't check it recently, but I guess it's the last one with the error I specified above.
There is another seq2seq task, which follows run_translation, named run_comet. That task is English to English. It also generates the input rather than the target.
However, for now you can first try `run_trans` and for example check the generated texts by `do_predict`
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,009 | closed | some issue in FlaxBertForMultipleChoice | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
## To reproduce
Steps to reproduce the behavior:
```python3
from transformers import BertConfig, FlaxBertForMultipleChoice
import numpy as np
model = FlaxBertForMultipleChoice(BertConfig())
model(np.ones((1, 4)))
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Output
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/vasudevgupta/Local/transformers/src/transformers/models/bert/modeling_flax_bert.py", line 600, in __call__
return self.module.apply(
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/linen/module.py", line 936, in apply
return apply(
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/core/scope.py", line 687, in wrapper
y = fn(root, *args, **kwargs)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/linen/module.py", line 1178, in scope_fn
return fn(module.clone(parent=scope), *args, **kwargs)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/Users/vasudevgupta/Local/transformers/src/transformers/models/bert/modeling_flax_bert.py", line 1022, in __call__
reshaped_logits = logits.reshape(-1, num_choices)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 1322, in _reshape
newshape = _compute_newshape(a, args[0] if len(args) == 1 else args)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 1316, in _compute_newshape
return tuple(- core.divide_shape_sizes(np.shape(a), newshape)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 1316, in <genexpr>
return tuple(- core.divide_shape_sizes(np.shape(a), newshape)
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/core.py", line 1360, in divide_shape_sizes
return handler.divide_shape_sizes(ds[:len(s1)], ds[len(s1):])
File "/Users/vasudevgupta/miniconda3/lib/python3.8/site-packages/jax/core.py", line 1280, in divide_shape_sizes
raise InconclusiveDimensionOperation(f"Cannot divide evenly the sizes of shapes {tuple(s1)} and {tuple(s2)}")
jax.core.InconclusiveDimensionOperation: Cannot divide evenly the sizes of shapes (1, 1) and (-1, 4)
```
I am probably missing something here. Please help me to figure out the issue here.
<!-- A clear and concise description of what you would expect to happen. -->
@LysandreJik @patrickvonplaten | 06-03-2021 11:24:43 | 06-03-2021 11:24:43 | Ah I see!
So multiple choice is actually the only class where the `input_ids` have to be of shape `(batch_size, num_choices, seq_length)` instead of `(batch_size, seq_length)` (see [docs](https://huggingface.co/transformers/model_doc/bert.html?highlight=flaxbert#flaxbertformultiplechoice) here).
This means that the code example should be changed to:
```python
from transformers import BertConfig, FlaxBertForMultipleChoice
import numpy as np
model = FlaxBertForMultipleChoice(BertConfig())
model(np.ones((1, 1, 4)))
```
in order to work :-)<|||||>Okay. I missed that😅. Sorry for this simple issue. I will fix bigbird also accordingly. <|||||>Absolutely no worries ;-) |
transformers | 12,008 | closed | Issue while using DPR with tensorflow and py torch | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform:Windows
- Python version:3.6
- PyTorch version (GPU?No):1.7.0
- Tensorflow version (GPU? No):2.3.1
- Using GPU in script?:No
- Using distributed or parallel set-up in script?:No
### Who can help
Hi @LysandreJik , @Rocketknight1 ,@patrickvonplaten, @lhoestq ,@sgugger
I'm trying the basic example mentioned in the DPR documentation but facing the below issue,
can you please help me,and can you please let me know using what functionality I can get the answer for a given question to the provided text, as `relevance_logits` is just giving a number which I'm not understanding the significance.
Thanks you all!!
```
#########################################################################
##Using **pt** as return tensors
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='pt'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-86e12c26a869> in <module>
9 )
10 outputs = model(**encoded_inputs)
---> 11 start_logits = outputs.stat_logits
12 end_logits = outputs.end_logits
13 relevance_logits = outputs.relevance_logits
AttributeError: 'DPRReaderOutput' object has no attribute 'stat_logits'
#########################################################################
##Using tf as return_tensors
from transformers import DPRReader, DPRReaderTokenizer
tokenizer = DPRReaderTokenizer.from_pretrained('facebook/dpr-reader-single-nq-base')
model = DPRReader.from_pretrained('facebook/dpr-reader-single-nq-base')
encoded_inputs = tokenizer(
questions=["What is love ?"],
titles=["Haddaway"],
texts=["'What Is Love' is a song recorded by the artist Haddaway"],
return_tensors='tf'
)
outputs = model(**encoded_inputs)
start_logits = outputs.stat_logits
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-6-6f88caac8403> in <module>
8 return_tensors='tf'
9 )
---> 10 outputs = model(**encoded_inputs)
11 start_logits = outputs.stat_logits
12 end_logits = outputs.end_logits
~\Anaconda3\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\Anaconda3\lib\site-packages\transformers\models\dpr\modeling_dpr.py in forward(self, input_ids, attention_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
641 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
642 elif input_ids is not None:
--> 643 input_shape = input_ids.size()
644 elif inputs_embeds is not None:
645 input_shape = inputs_embeds.size()[:-1]
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'size'
```
| 06-03-2021 09:51:24 | 06-03-2021 09:51:24 | Hey @MaheshChandrra,
It seems like there is a typo: it should be called `outputs.start_logits` instead of `outputs.stat_logits`. Would you like to open a PR to fix it? :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
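For reference, here is a corrected version of the first snippet above (only the attribute name changes). For the second error, note that a PyTorch `DPRReader` cannot consume TensorFlow tensors, so either keep `return_tensors="pt"` or switch to the TensorFlow classes (e.g. `TFDPRReader`) if they are available in the installed version:
```python
from transformers import DPRReader, DPRReaderTokenizer

tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base")
model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base")
encoded_inputs = tokenizer(
    questions=["What is love ?"],
    titles=["Haddaway"],
    texts=["'What Is Love' is a song recorded by the artist Haddaway"],
    return_tensors="pt",
)
outputs = model(**encoded_inputs)
start_logits = outputs.start_logits  # corrected attribute name
end_logits = outputs.end_logits
relevance_logits = outputs.relevance_logits

# start_logits/end_logits score each token as the start/end of the answer span in the passage;
# relevance_logits ranks passages against each other when several are passed in.
```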
transformers | 12,007 | closed | Fix megatron_gpt2 attention block's causal mask | # What does this PR do?
Fixes the conversion between Megatron-LM's GPT2 parameters and transformer's GPT2.
In Megatron-LM the attention mask is implemented differently and is not part of the provided checkpoint.
This PR initializes the mask as in [GPT2](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_gpt2.py#L131).
Fixes [#11916](https://github.com/huggingface/transformers/issues/11916) and [#12004](https://github.com/huggingface/transformers/issues/12004).
Regarding [#11916](https://github.com/huggingface/transformers/issues/11916), this PR lowers the perplexity to 20 (cf. 30 for gpt2) without fine-tuning.
Regarding [#12004](https://github.com/huggingface/transformers/issues/12004), the prompt in the issue now is continued coherently.
> How are you doing these days?
> I'm just trying to get through the day. I've been working on a lot of things, but it's hard when your kids come home and they're not here with me anymore because we don't have that connection like before." She said she has had some good times since her son was born in January 2012: "It feels great being able for him [to] be around his dad again," he added as if remembering how much fun their relationship used too! The couple also share two daughters together – Ella Rose (born April 2013) who is now 10 years old; Aviana Grace Marie-Gracee DeSantis Jr., 5th grade daughter from an earlier marriage whom Lohan recently
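For context, the GPT-2 line linked in the description registers the causal mask as a lower-triangular buffer, so the conversion presumably adds something along these lines for each layer (a sketch of the same pattern, with the buffer name taken from `GPT2Attention`, not the literal diff of this PR):
```python
import torch

n_positions = 1024  # max sequence length of the 345M Megatron GPT-2 checkpoint
causal_mask = torch.tril(torch.ones((n_positions, n_positions), dtype=torch.uint8)).view(
    1, 1, n_positions, n_positions
)
# e.g. stored as output_state_dict[f"transformer.h.{layer}.attn.bias"] in the converted checkpoint
```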
## Who can review?
The PR for the megatron models was reviewed by @LysandreJik
@jdemouth | 06-03-2021 09:37:32 | 06-03-2021 09:37:32 | Hi, I am struggling with the same issue as I detailed in #12004
Did you happen to confirm that once you make this change, the logits produced by Megatron GPT2 and transformers GPT2 are identical?
In my experiments, making the attention mask a lower-triangular matrix did not produce the same logits, nor did it generate sensible sentences :(<|||||>Plus, although I did not check thoroughly, [this line](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/gpt2/modeling_gpt2.py#L132) seems to cancel out the wrong conversion, you might want to take a look!<|||||>Could you share the generation snippet to reproduce the following result?
```
How are you doing these days?
I'm just trying to get through the day. I've been working on a lot of things, but it's hard when your kids come home and they're not here with me anymore because we don't have that connection like before." She said she has had some good times since her son was born in January 2012: "It feels great being able for him [to] be around his dad again," he added as if remembering how much fun their relationship used too! The couple also share two daughters together – Ella Rose (born April 2013) who is now 10 years old; Aviana Grace Marie-Gracee DeSantis Jr., 5th grade daughter from an earlier marriage whom Lohan recently
```<|||||>Hi,
I ran the scripts that you provided in the issue; the logits from Megatron-LM and transformers are close, up to fp16 precision. There is of course the caveat that the embedding table in transformers' GPT2 has size 50257, while in Megatron-LM it is 50304 (therefore we must apply [this](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L73) step).
(see attached code).
> Plus, although I did not check thorougly, this line seems to cancel out the wrong conversion, you might wanna take a look!
That is initialization, which we overwrite with the checkpoint
> Could you share the generation snippet to reproduce the following result?
I used the code you provided in the issue!
```
import sys
sys.path.append('/project/Megatron-LM')
import torch
from megatron import get_args, get_tokenizer, initialize_megatron, mpu
from megatron.model import GPTModel
from megatron.training import get_model
from megatron.checkpointing import load_checkpoint
from megatron.utils import get_ltor_masks_and_position_ids
from transformers import GPT2LMHeadModel
from tokenizers import ByteLevelBPETokenizer
def initialize():
model_path = "/project/megatron-gpt2-345m"
sys.argv.extend(
[
"--num-layers", "24",
"--hidden-size", "1024",
"--num-attention-heads", "16",
"--seq-length", "1024",
"--max-position-embeddings", "1024",
"--tokenizer-type", "GPT2BPETokenizer",
"--fp16",
"--load", str(model_path),
"--vocab-file", str(model_path + "/vocab.json"),
"--merge-file", str(model_path + "/merges.txt"),
"--micro-batch-size", "1",
"--checkpoint-activations",
"--no-scaled-masked-softmax-fusion",
"--no-load-rng",
"--no-load-optim"
]
)
initialize_megatron(ignore_unknown_args=True)
if __name__ == "__main__":
if mpu.is_unitialized():
initialize()
args = get_args()
tokenizer = get_tokenizer().tokenizer #.tokenizer
def model_provider(pre_process=True, post_process=True):
return GPTModel(num_tokentypes=0, parallel_output=False,
pre_process=True, post_process=True)
model = get_model(model_provider)
load_checkpoint(model=model, optimizer=None, lr_scheduler=None)
model = model[0]
model.eval()
inputs = "Hi, how are you doing?"
input_ids = torch.tensor(tokenizer.encode(inputs)).unsqueeze(0)
attention_mask, _, position_ids = get_ltor_masks_and_position_ids(
input_ids,
0,
reset_position_ids=False,
reset_attention_mask=False,
eod_mask_loss=False,
)
input_ids = input_ids.cuda()
position_ids = position_ids.cuda()
attention_mask = attention_mask.cuda()
logits = model(input_ids, position_ids, attention_mask)
model2 = GPT2LMHeadModel.from_pretrained("/project/megatron-gpt2-345m/")
tok = ByteLevelBPETokenizer(
"/project/megatron-gpt2-345m/vocab.json",
"/project/megatron-gpt2-345m/merges.txt", unicode_normalizer="nfkc")
input_ids = torch.tensor(tok.encode("Hi, how are you doing?").ids).unsqueeze(0).cuda()
model = model2.cuda()
model2.eval()
out = model2(input_ids)
for j in range(out.logits.shape[1]):
for i in range(out.logits.shape[2]):
a, b= out.logits[0,j,i].item(), logits[0,j,i].item()
assert(abs(a-b) / max(max(abs(a),abs(b)), 0.5) < 0.1)
```
<|||||>@novatig Thanks for solving my issue!
As for @hwijeen's issue, where the self-attention split (QKV and heads) in Megatron and Hugging Face happens in a different order yet you still get close answers:
I think one possible reason is that there are three checkpoint versions in Megatron which need to be converted between, as shown in https://github.com/NVIDIA/Megatron-LM/blob/42c1cf4279acea5a554500dcb552211f44cbec45/megatron/checkpointing.py#L209.
So the checkpoint version should also be reported when talking about wrong results.
Or, better, the conversion code could be modified to support different checkpoint versions.<|||||>Hi,
could you perhaps add a `test_modeling_megatron_gpt2.py` file to the tests folder? As this model is not a new model (only a conversion script is required), it could look very similar to [the one of BORT](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_bort.py) (which is also just a conversion script to convert the weights to a BertModel).
The test should only include an integration test, which checks whether the HuggingFace model outputs the same output tensors (e.g. logits) on the same input data as the original implementation.
Thanks!<|||||>Hi,
As @codecaution pointed out, the problem in my case was that I was working with Megatron checkpoint version 3, whereas the proposed conversion code supports version 0 -- so please ignore the above comments of mine.
<|||||>Hi,
Many thanks @hwijeen and @codecaution for clarifying the issue!
I updated the PR; now the conversion script also works for checkpoints generated by recent versions of Megatron-LM.
@NielsRogge @LysandreJik I added a brief integration test.
For simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.<|||||>Thanks @novatig!
> @NielsRogge @LysandreJik I added a brief integration test.
For simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.
For reference, how close are these to the official Megatron GPT-2 output in terms of magnitude?<|||||>> Thanks @novatig!
>
> > @NielsRogge @LysandreJik I added a brief integration test.
> > For simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.
>
> For reference, how close are these to the official Megatron GPT-2 output in terms of magnitude?
For reference, the test values (taken from a "diagonal" of the returned logits tensor) are:
```
megatron = [4.9492188, -0.2866211, -1.2041016, -4.0351562, -0.5180664, -5.2148438, -1.2412109, -1.8310547, -1.7675781, -4.71875, -0.23901367, -1.0761719, -2.1699219, 0.41235352, -3.8007812, -4.0585938, -2.5292969, -3.3808594, 4.3789062]
hf-gpt2 = [4.9414, -0.2920, -1.2148, -4.0273, -0.5161, -5.2109, -1.2412, -1.8301, -1.7734, -4.7148, -0.2317, -1.0811, -2.1777, 0.4141, -3.7969, -4.0586, -2.5332, -3.3809, 4.3867]
```
The absolute error is approximately <= 1e-2.
I updated the PR with style changes.
Sorry for all the commits! I did not realize someone had to manually confirm approval for running the tests!<|||||>> Hi,
>
> Many thanks @hwijeen and @codecaution for clarifying the issue!
> I updated the the PR, now the conversion script works also for checkpoints generated by recent versions of Megatron-LM.
>
> @NielsRogge @LysandreJik I added a brief integration test.
> For simplicity, the reference outputs are not those from Megatron-LM but are from the HuggingFace model, and are close to the ones from Megatron-LM.
Hi! @novatig
Thanks a lot for your effort! To make this PR better, I would like to give a suggestion of my own.
As Megatron-LM is a popular repo for training large transformer-based models, it would be better to take this into consideration in the conversion code, including different model sizes and model parallelism.<|||||>> Hi! @novatig
> Thanks a lot for your effort! To make this PR better, I would like to give a suggestion of my own.
> As Megatron-LM is a popular repo for training large transformer-based models, it would be better to take this into consideration in the conversion code, including different model sizes and model parallelism.
Yes, that's a good idea!
@codecaution please review my last commit to check if it looks like what you were thinking about.
<|||||>> > Hi! @novatig
> > Thanks a lot for your effort! To make this PR better, I would like to give a suggestion of my own.
> > As Megatron-LM is a popular repo for training large transformer-based models, it would be better to take this into consideration in the conversion code, including different model sizes and model parallelism.
>
> Yes, that's a good idea!
> @codecaution please review my last commit to check if it looks like what you were thinking about.
Thanks! Good job! |
transformers | 12,006 | closed | Fluctuating embedding given by different random seed during inference | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (GPU)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- bert: @LysandreJik
-->
## Information
Model I am using Bert:
The problem arises when using my own modified scripts:
Get pooler output from bert-base-uncased with different random seed
## Expected behavior
The pooler output should be deterministic at inference time; instead, the embedding fluctuates when the random seed changes.
<!-- A clear and concise description of what you would expect to happen. -->
| 06-03-2021 09:32:16 | 06-03-2021 09:32:16 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
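The issue went stale without an answer, but the two most common causes of seed-dependent embeddings at inference time are (a) the model being left in training mode, so dropout stays active, and (b) randomly initialized weights (e.g. a freshly added head) that differ per seed. A minimal deterministic setup, assuming the standard `bert-base-uncased` checkpoint, rules out the first cause:
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()  # disables dropout, so the output no longer depends on the random seed

inputs = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    pooler_output = model(**inputs).pooler_output
print(pooler_output[0, :5])  # identical across runs and seeds (on CPU)
```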
transformers | 12,005 | closed | where is the code for DetrFeatureExtractor, DetrForObjectDetection | Hello my dear friend.
I am looking for the model at https://huggingface.co/facebook/detr-resnet-50,
but I cannot find its code in transformers==4.7.0.dev0 or 4.6.1. Please help me; much appreciated.
| 06-03-2021 09:28:27 | 06-03-2021 09:28:27 | Haha it's not merged yet, thanks for your interest :) will be available soon<|||||>thanks dude<|||||>Resolved by #11653 . Code can be found [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/detr/modeling_detr.py). |
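Now that the DETR PR referenced above is merged, usage presumably looks roughly like the following (the model identifier comes from the hub page linked in the issue; the exact output post-processing API may differ between versions):
```python
import requests
import torch
from PIL import Image
from transformers import DetrFeatureExtractor, DetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = DetrFeatureExtractor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = feature_extractor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# class logits and normalized (cx, cy, w, h) boxes for each object query
print(outputs.logits.shape, outputs.pred_boxes.shape)
```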
transformers | 12,004 | closed | Megatron GPT2 not compatible with transformers | **TLDR:**
Current implementation detail of attention in GPT2 is different from Megatron-LM GPT2, raising compatibility issue. Since the required change is quite minimal, how about changing transformers's implementation to follow Megatron-LM's?
**Quite detailed exploration:**
I used the [script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) to convert Megatron-LM GPT2 into transformers GPT2.
The script itself works well, but the generation results with the converted checkpoint show that there is something wrong.
```python
from transformers import GPT2LMHeadModel
from transformers import GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained(".")
tok = GPT2Tokenizer.from_pretrained("gpt2")
input_ids = tok.encode("How are you doing these days?" , return_tensors="pt")
gen_tokens = model.generate(
input_ids,
max_length=150,
min_length=30,
do_samples=True,
num_beams=1,
temperature=0.9,
top_p=0.8,
top_k=0,
repetition_penalty=5.0,
)
print(tok.batch_decode(gen_tokens)[0])
# How are you doing these days?` `'' ''",'',",..., and '´','',", – ---''truvesabendabyaruobirockeysludboatmanlongsoceyeand how nobody anybody anything no way or what why not with all of theofoldways is by showing that if it's going to skip out onansonsarry hold skator who wonder when they're shown their a one two those well saying so there but behind them only ones don doubt wonders just like he his soon Johnny Louous ever after asking holding good say hey even though its still yet we know as long for now then may be in show off telling said terking something which will start next thing already says
```
I looked into the problem and found out that the logits produced by Megatron-LM GPT-2 and transformers GPT-2 are different.
Careful debugging revealed that it stems from the different way of implementing the attention mechanism.
```python
# Megatron-LM
# splits head first
225 # [sq, b, (np * 3 * hn)] --> [sq, b, np, 3 * hn]
226 new_tensor_shape = mixed_x_layer.size()[:-1] + \
227 (self.num_attention_heads_per_partition,
228 3 * self.hidden_size_per_attention_head)
229 mixed_x_layer = mixed_x_layer.view(*new_tensor_shape)
# and then splits key,query,value
230
231 # [sq, b, np, 3 * hn] --> 3 [sq, b, np, hn]
232 (query_layer,
233 key_layer,
234 value_layer) = mpu.split_tensor_along_last_dim(mixed_x_layer, 3)
# result
query_layer.norm() # tensor(110.7500, device='cuda:0', dtype=torch.float16, grad_fn=<CopyBackwards>)
key_layer.norm() # tensor(125.3125, device='cuda:0', dtype=torch.float16, grad_fn=<CopyBackwards>)
value_layer.norm() # tensor(42.9688, device='cuda:0', dtype=torch.float16, grad_fn=<CopyBackwards>)
```
```python
# transformers
# splits key, query, value first
242 query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
# and then splits head
244 query = self._split_heads(query, self.num_heads, self.head_dim)
245 key = self._split_heads(key, self.num_heads, self.head_dim)
246 value = self._split_heads(value, self.num_heads, self.head_dim)
# result
query.norm() # tensor(103.4170, device='cuda:0', grad_fn=<CopyBackwards>)
key.norm() # tensor(102.9513, device='cuda:0', grad_fn=<CopyBackwards>)
value.norm() # tensor(92.2508, device='cuda:0', grad_fn=<CopyBackwards>)
```
When I modified transformers GPT2 implementation to match Megatron-LM, the generation looks correct.
```python
# before
How are you doing these days?` `'' ''",'',",..., and '´','',", – ---''truvesabendabyaruobirockeysludboatmanlongsoceyeand ....
# after
How are you doing these days?, even further all," not a as to also 3 both every if [ always and ever ....
```
The generation results do not look super convincing, probably because it is not a very capable model (only ~300M parameters). When I experimented with my own 1.3B LM checkpoint, the generation was only sensible when I made the modification.
Plus, I did a sanity check and confirmed that the logits from the Megatron-LM GPT-2 and transformers GPT-2 are the same only when I made the modification [here](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/gpt2/modeling_gpt2.py#L242).
```python
# modify to follow Megatron-LM's implementation
else:
# query, key, value = self.c_attn(hidden_states).split(self.split_size, dim=2)
attn_out = self.c_attn(hidden_states)
attn_out = self._split_heads(attn_out, self.num_heads, self.head_dim * 3)
# query = self._split_heads(query, self.num_heads, self.head_dim)
# key = self._split_heads(key, self.num_heads, self.head_dim)
# value = self._split_heads(value, self.num_heads, self.head_dim)
query, key, value = attn_out.split(self.head_dim, 3)
```
How about merging this change into master branch? I would be happy to make a PR.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @jdemouth
## Information
Model I am using (Bert, XLNet ...): GPT2 (converted from Megatron-LM)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load [gpt-2 checkpoint](https://ngc.nvidia.com/catalog/models/nvidia:megatron_lm_345m) with Megatron-LM and do a forward pass.
2. Convert the checkpoint into transformers-compatible format using the [script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py), load the model with transformers and do the same forward pass.
3. The logits are not the same.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
1. Generation results of converted GPT2 are sensible.
2. Logits from Megatron-LM GPT2 and transformers GPT2 are the same. | 06-03-2021 08:51:12 | 06-03-2021 08:51:12 | This is the code to check if the two gpt2s produce the same logits. Hope this helps and I would be happy to provide further information if requested.
* megatron-LM
```python
import torch
from megatron import get_args, get_tokenizer, initialize_megatron, mpu
from megatron.model import GPTModel
from megatron.training import get_model
from megatron.checkpointing import load_checkpoint
from megatron.utils import get_ltor_masks_and_position_ids
def initialize():
model_path = "/workspace/my_gpt3_1.3B_mp1"
sys.argv.extend(
[
"--distributed",
"--distributed-backend",
"nccl",
"--fp16",
"--load", str(model_path),
"--vocab-file", str(model_path / "vocab.json"),
"--merge-file", str(model_path / "merges.txt"),
"--config-path", str(model_path / "deploy.json"),
"--micro-batch-size", "1",
"--no-scaled-masked-softmax-fusion",
"--no-load-rng",
"--no-load-optim"
]
)
initialize_megatron(ignore_unknown_args=True)
if __name__ == "__main__":
if mpu.is_unitialized():
initialize()
args = get_args()
tokenizer = get_tokenizer().tokenizer.tokenizer
model = get_model(lambda: GPTModel(num_tokentypes=0, parallel_output=False))
load_checkpoint(model=model, optimizer=None, lr_scheduler=None)
model.eval()
inputs = "Hi, how are you doing?"
input_ids = torch.tensor(tokenizer.encode(inputs).ids).unsqueeze(0)
attention_mask, _, position_ids = get_ltor_masks_and_position_ids(
input_ids,
0,
reset_position_ids=False,
reset_attention_mask=False,
eod_mask_loss=False,
)
input_ids = input_ids.cuda()
position_ids = position_ids.cuda()
attention_mask = attention_mask.cuda()
logits = model(input_ids, position_ids, attention_mask)
print(logits.norm())
```
* transformers
```python
from transformers_ import GPT2LMHeadModel
from tokenizers import ByteLevelBPETokenizer
import torch
model = GPT2LMHeadModel.from_pretrained(".")
tok = ByteLevelBPETokenizer("vocab.json", "merges.txt", unicode_normalizer="nfkc")
input_ids = torch.tensor(tok.encode("Hi, how are you doing?").ids).unsqueeze(0).cuda()
model = model.cuda()
model.eval()
out = model(input_ids)
print(out.logits.norm())
```<|||||>Hi, I also meet the problem about converting Megatron-LM to HuggingFace last week and I want to give my solution, which may help you.
+ The conversion code needs to be changed as @novatig did, so that the causal mask is a lower-triangular matrix ([change](https://github.com/huggingface/transformers/pull/12007/files))
+ Megatron-LM has four kinds of checkpoint versions, shown [here](https://github.com/NVIDIA/Megatron-LM/blob/42c1cf4279acea5a554500dcb552211f44cbec45/megatron/checkpointing.py#L209). You also mentioned the different split orders in your issue. So I suggest you check the version of your own checkpoint and transform it into version 0 (which is what the conversion script assumes). That code shows how to transform versions 0/1 to version 3; I think you can modify it as you need.<|||||>Thank you!
The checkpoint I was using was 3.0 and when I modified the attention matrix, it worked!<|||||>@codecaution Thanks a lot for your help! It was very useful! |
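As a small aid for the checkpoint-version discussion above, one way to see which format a Megatron-LM checkpoint uses is to inspect the saved state dict directly. The file path below is a made-up example, and the `checkpoint_version` key is an assumption based on the Megatron-LM checkpointing code linked earlier; very old checkpoints may not carry the key at all, which usually means version 0:
```python
import torch

# hypothetical path to a Megatron-LM checkpoint shard
state_dict = torch.load("release/mp_rank_00/model_optim_rng.pt", map_location="cpu")
print(state_dict.get("checkpoint_version", 0))
```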
transformers | 12,003 | closed | Fast tokenization fails for the pretrained model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform: Linux-5.4.89+-x86_64-with-debian-bullseye-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Library:
- pipelines: @LysandreJik
## Information
The model I am using is pipeline summarization:
## To reproduce
Steps to reproduce the behavior:
1. from transformers import pipeline
2. summarizer = pipeline("summarization", model="Salesforce/bart-large-xsum-samsum", device=0)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Traceback (most recent call last):
File "run_pipeline.py", line 52, in <module>
summarizer = pipeline("summarization", model=args.model, device=0)
File "/opt/conda/lib/python3.6/site-packages/transformers/pipelines/__init__.py", line 388, in pipeline
tokenizer, revision=revision, use_fast=use_fast, _from_pipeline=task, **model_kwargs
File "/opt/conda/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py", line 423, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1710, in from_pretrained
resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1781, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/opt/conda/lib/python3.6/site-packages/transformers/models/roberta/tokenization_roberta_fast.py", line 173, in __init__
**kwargs,
File "/opt/conda/lib/python3.6/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 145, in __init__
**kwargs,
File "/opt/conda/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py", line 96, in __init__
fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: No such file or directory (os error 2)
```
## Expected behavior
I would expect the model to be able to use fast tokenization like other models do.
| 06-03-2021 06:51:27 | 06-03-2021 06:51:27 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
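No fix was posted before the issue went stale, but since the crash happens while loading the fast (Rust) tokenizer files, one workaround worth trying (an assumption, not a confirmed fix for this particular model repo) is to fall back to the slow tokenizer:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Salesforce/bart-large-xsum-samsum",
    device=0,
    use_fast=False,  # skip the fast tokenizer whose file fails to load
)
```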
transformers | 12,002 | closed | ImportError: cannot import name 'MarianMTModel' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.5.1
- Platform:jupyter notebook
- Python version: 3.6.7
- PyTorch version (GPU?):1.0.1.post2
- Tensorflow version (GPU?):
- Using GPU in script?:No
- Using distributed or parallel set-up in script?:No
| 06-03-2021 06:37:18 | 06-03-2021 06:37:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
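No root cause was recorded before the issue was closed, but `MarianMTModel` is only exported when `transformers` detects a working PyTorch install, and torch 1.0.1 is far older than what transformers 4.x is tested against, so upgrading PyTorch would be the first thing to try (that is an assumption, not something confirmed in the thread). A quick diagnostic in the same environment:
```python
import torch
import transformers

print(transformers.__version__, torch.__version__)
print(transformers.is_torch_available())  # must be True for MarianMTModel to be importable
from transformers import MarianMTModel, MarianTokenizer  # noqa: F401
```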
transformers | 12,001 | closed | Update run_ner.py with id2label config | # What does this PR do?
Enhancement for `run_ner.py` to produce more meaningful `id2label`.
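The description is terse, so for context, the kind of change being described presumably amounts to populating the config with the dataset's real tag names instead of the default `LABEL_0`-style entries, roughly like this (the tag list below is only an example):
```python
from transformers import AutoConfig

label_list = ["O", "B-PER", "I-PER", "B-ORG", "I-ORG"]  # example NER tags from the dataset
config = AutoConfig.from_pretrained("bert-base-cased", num_labels=len(label_list))
config.id2label = {i: label for i, label in enumerate(label_list)}
config.label2id = {label: i for i, label in enumerate(label_list)}
print(config.id2label)  # {0: 'O', 1: 'B-PER', ...} ends up in the saved config.json
```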
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
| 06-03-2021 05:21:59 | 06-03-2021 05:21:59 | Sorry but I've rarely used `run_ner_no_trainer.py`. And, well, since `run_ner_no_trainer.py` does not change `config.json`, thus I want to keep this PR as is.<|||||>Will take care of it then, thanks! |
transformers | 12,000 | closed | Exporting the operator repeat_interleave to ONNX opset version (<=12) is not supported! | When I export the RoFormer model to ONNX Runtime, this error occurs. Are there any ops that can replace 'torch.repeat_interleave'?
It's in https://github.com/huggingface/transformers/blob/master/src/transformers/models/roformer/modeling_roformer.py, at lines 330 and 332 | 06-03-2021 05:07:15 | 06-03-2021 05:07:15 | I have a solution, but maybe it's not the best way.
```python
sin_pos = torch.zeros_like(sin).repeat(1, 1, 1, 2)
sin_pos[..., ::2] = sin
sin_pos[..., 1::2] = sin
cos_pos = torch.zeros_like(cos).repeat(1, 1, 1, 2)
cos_pos[..., ::2] = cos
cos_pos[..., 1::2] = cos
```
<|||||>It seems that you haven't changed RoFormer's positional embedding code yet. Demo code:
```python
import torch
sinusoidal_pos = torch.randn(1,12,16,32)
sin, cos = sinusoidal_pos.chunk(2, dim=-1)
sin_pos = torch.repeat_interleave(sin, 2, dim=-1)
cos_pos = torch.repeat_interleave(cos, 2, dim=-1)
sin_pos_newway = torch.stack([sin,sin],axis=-1).reshape_as(sinusoidal_pos)
cos_pos_newway = torch.stack([cos,cos],axis=-1).reshape_as(sinusoidal_pos)
assert sin_pos.equal(sin_pos_newway)
assert cos_pos.equal(cos_pos_newway)
``` |
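An untested sketch of how one might sanity-check that the stack/`reshape_as` variant above actually exports to ONNX (the opset choice and module wrapper are arbitrary; whether it traces cleanly should be verified locally):
```python
import torch

class RotaryHalves(torch.nn.Module):
    # reproduces the stack + reshape_as trick from the demo above
    def forward(self, sinusoidal_pos):
        sin, cos = sinusoidal_pos.chunk(2, dim=-1)
        sin_pos = torch.stack([sin, sin], dim=-1).reshape_as(sinusoidal_pos)
        cos_pos = torch.stack([cos, cos], dim=-1).reshape_as(sinusoidal_pos)
        return sin_pos, cos_pos

torch.onnx.export(RotaryHalves(), torch.randn(1, 12, 16, 32), "rotary_halves.onnx", opset_version=11)
```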
transformers | 11,999 | closed | Unable to find examples on using DPR for transfer learning, request to provide examples | Hi Hugging Face team,
Can you please provide a Q&A example for retrieving an answer from a given text using DPR? I've read the documentation but couldn't find one. It would be of great help.
Thanks
Mahesh Mareedu
@lhoestq | 06-03-2021 04:06:27 | 06-03-2021 04:06:27 | Hi ! There are a few examples in the `datasets` documentation. It shows how to add a FAISS index to a dataset to perform dense retrieval using DPR:
https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index<|||||>Thanks for the link! I tried using Elasticsearch from my notebook and am getting the error below; any help would be useful. Thanks.
```
from datasets import load_dataset
squad = load_dataset('squad', split='validation')
squad.add_elasticsearch_index("context", host="localhost", port="9200")
---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\urllib3\connection.py in _new_conn(self)
158 try:
--> 159 conn = connection.create_connection(
160 (self._dns_host, self.port), self.timeout, **extra_kw
~\Anaconda3\lib\site-packages\urllib3\util\connection.py in create_connection(address, timeout, source_address, socket_options)
83 if err is not None:
---> 84 raise err
85
~\Anaconda3\lib\site-packages\urllib3\util\connection.py in create_connection(address, timeout, source_address, socket_options)
73 sock.bind(source_address)
---> 74 sock.connect(sa)
75 return sock
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
NewConnectionError Traceback (most recent call last)
~\Anaconda3\lib\site-packages\elasticsearch\connection\http_urllib3.py in perform_request(self, method, url, params, body, timeout, ignore, headers)
244
--> 245 response = self.pool.urlopen(
246 method, url, body, retries=Retry(False), headers=request_headers, **kw
~\Anaconda3\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
723
--> 724 retries = retries.increment(
725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
~\Anaconda3\lib\site-packages\urllib3\util\retry.py in increment(self, method, url, response, error, _pool, _stacktrace)
378 # Disabled, indicate to re-raise the error.
--> 379 raise six.reraise(type(error), error, _stacktrace)
380
~\Anaconda3\lib\site-packages\urllib3\packages\six.py in reraise(tp, value, tb)
734 raise value.with_traceback(tb)
--> 735 raise value
736 finally:
~\Anaconda3\lib\site-packages\urllib3\connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw)
669 # Make the request on the httplib connection object.
--> 670 httplib_response = self._make_request(
671 conn,
~\Anaconda3\lib\site-packages\urllib3\connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw)
391 else:
--> 392 conn.request(method, url, **httplib_request_kw)
393
~\Anaconda3\lib\http\client.py in request(self, method, url, body, headers, encode_chunked)
1239 """Send a complete request to the server."""
-> 1240 self._send_request(method, url, body, headers, encode_chunked)
1241
~\Anaconda3\lib\http\client.py in _send_request(self, method, url, body, headers, encode_chunked)
1285 body = _encode(body, 'body')
-> 1286 self.endheaders(body, encode_chunked=encode_chunked)
1287
~\Anaconda3\lib\http\client.py in endheaders(self, message_body, encode_chunked)
1234 raise CannotSendHeader()
-> 1235 self._send_output(message_body, encode_chunked=encode_chunked)
1236
~\Anaconda3\lib\http\client.py in _send_output(self, message_body, encode_chunked)
1005 del self._buffer[:]
-> 1006 self.send(msg)
1007
~\Anaconda3\lib\http\client.py in send(self, data)
945 if self.auto_open:
--> 946 self.connect()
947 else:
~\Anaconda3\lib\site-packages\urllib3\connection.py in connect(self)
186 def connect(self):
--> 187 conn = self._new_conn()
188 self._prepare_conn(conn)
~\Anaconda3\lib\site-packages\urllib3\connection.py in _new_conn(self)
170 except SocketError as e:
--> 171 raise NewConnectionError(
172 self, "Failed to establish a new connection: %s" % e
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x000002AF86A1A670>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
<ipython-input-2-91302caec2e8> in <module>
1 from datasets import load_dataset
2 squad = load_dataset('squad', split='validation')
----> 3 squad.add_elasticsearch_index("context", host="10.41.128.179", port="8082")
~\Anaconda3\lib\site-packages\datasets\arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
3040 """
3041 with self.formatted_as(type=None, columns=[column]):
-> 3042 super().add_elasticsearch_index(
3043 column=column,
3044 index_name=index_name,
~\Anaconda3\lib\site-packages\datasets\search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config)
539 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config
540 )
--> 541 es_index.add_documents(self, column=column)
542 self._indexes[index_name] = es_index
543
~\Anaconda3\lib\site-packages\datasets\search.py in add_documents(self, documents, column)
140 index_name = self.es_index_name
141 index_config = self.es_index_config
--> 142 self.es_client.indices.create(index=index_name, body=index_config)
143 number_of_docs = len(documents)
144 not_verbose = bool(logger.getEffectiveLevel() > WARNING)
~\Anaconda3\lib\site-packages\elasticsearch\client\utils.py in _wrapped(*args, **kwargs)
150 if p in kwargs:
151 params[p] = kwargs.pop(p)
--> 152 return func(*args, params=params, headers=headers, **kwargs)
153
154 return _wrapped
~\Anaconda3\lib\site-packages\elasticsearch\client\indices.py in create(self, index, body, params, headers)
121 raise ValueError("Empty value passed for a required argument 'index'.")
122
--> 123 return self.transport.perform_request(
124 "PUT", _make_path(index), params=params, headers=headers, body=body
125 )
~\Anaconda3\lib\site-packages\elasticsearch\transport.py in perform_request(self, method, url, headers, params, body)
388 # raise exception on last retry
389 if attempt == self.max_retries:
--> 390 raise e
391 else:
392 raise e
~\Anaconda3\lib\site-packages\elasticsearch\transport.py in perform_request(self, method, url, headers, params, body)
356
357 try:
--> 358 status, headers_response, data = connection.perform_request(
359 method,
360 url,
~\Anaconda3\lib\site-packages\elasticsearch\connection\http_urllib3.py in perform_request(self, method, url, params, body, timeout, ignore, headers)
256 if isinstance(e, ReadTimeoutError):
257 raise ConnectionTimeout("TIMEOUT", str(e), e)
--> 258 raise ConnectionError("N/A", str(e), e)
259
260 # raise warnings if any from the 'Warnings' header.
ConnectionError: ConnectionError(<urllib3.connection.HTTPConnection object at 0x000002AF86A1A670>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it) caused by: NewConnectionError(<urllib3.connection.HTTPConnection object at 0x000002AF86A1A670>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it)
```
<|||||>Hi ! Did you start elasticsearch on your machine ? Could you also check that you used the right port ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
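On the traceback above: a `ConnectionRefusedError` simply means no Elasticsearch server was reachable on the host/port passed to `add_elasticsearch_index`, so the server has to be started first. Independently of that, since the original question was about DPR retrieval, here is a minimal dense-retrieval sketch using a FAISS index instead of Elasticsearch (requires the `faiss` library; the two toy passages are made up for illustration):
```python
import torch
from datasets import Dataset
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

ctx_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")

passages = Dataset.from_dict(
    {"text": ["'What Is Love' is a song recorded by Haddaway", "Paris is the capital of France"]}
)

def embed(batch):
    inputs = ctx_tok(batch["text"], truncation=True, padding=True, return_tensors="pt")
    with torch.no_grad():
        return {"embeddings": ctx_enc(**inputs).pooler_output.numpy()}

passages = passages.map(embed, batched=True)
passages.add_faiss_index(column="embeddings")

q_inputs = q_tok("What is love?", return_tensors="pt")
with torch.no_grad():
    q_emb = q_enc(**q_inputs).pooler_output[0].numpy()
scores, retrieved = passages.get_nearest_examples("embeddings", q_emb, k=1)
print(retrieved["text"][0])
```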
transformers | 11,998 | open | Add SENet Blocks in Encoding Layers | # 🚀 Feature Request
I read the article "[SesameBERT: Attention for Anywhere](https://arxiv.org/pdf/1910.03176.pdf)" and would like to add SENet blocks to the Hugging Face implementation. The article's authors made an implementation in [TensorFlow](https://github.com/ICLR2020Sesame/SesameBert/blob/master/modeling.py), but I would like to use the library in PyTorch.
## Motivation
The use of SENet blocks ([Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507)) has obtained state-of-the-art results, and they seem promising for NLP.
## Your contribution
I know that it is possible to modify the [BertLayer()](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/bert/modeling_bert.py#L430) and [BertEncoder()](https://github.com/huggingface/transformers/blob/61c506349134db0a0a2fd6fb2eff8e29a2f84e79/src/transformers/models/bert/modeling_bert.py#L513) classes.
Any suggestions on how to modify the code to apply the idea used in the article? (A rough PyTorch sketch is included at the end of this thread.)

| 06-03-2021 02:44:47 | 06-03-2021 02:44:47 | Hey, I'd like to work on implementing this feature if it hasn't been done yet. |
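Since no concrete suggestion was posted, here is one possible PyTorch sketch of an SE-style gate over the hidden dimension of a `(batch, seq_len, hidden)` tensor. It is a generic squeeze-and-excitation block adapted to sequences, not the paper's exact SesameBERT formulation, and where to insert it in `BertLayer` (e.g. on the attention or feed-forward output before the residual addition) is left as an experiment:
```python
import torch
from torch import nn

class SqueezeExciteBlock(nn.Module):
    """Channel-wise SE gate over the hidden dimension of a (batch, seq_len, hidden) tensor."""

    def __init__(self, hidden_size: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(hidden_size, hidden_size // reduction)
        self.fc2 = nn.Linear(hidden_size // reduction, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        squeezed = hidden_states.mean(dim=1)                             # squeeze: average over tokens
        gate = torch.sigmoid(self.fc2(torch.relu(self.fc1(squeezed))))   # excitation: per-channel gate
        return hidden_states * gate.unsqueeze(1)                         # rescale each hidden channel

se = SqueezeExciteBlock(hidden_size=768)
out = se(torch.randn(2, 10, 768))
print(out.shape)  # torch.Size([2, 10, 768])
```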
transformers | 11,997 | closed | [deepspeed] add nvme test skip rule | As discussed at https://github.com/microsoft/DeepSpeed/issues/1126 make it possible to skip the nvme test if user's system isn't compatible with libaio requirements.
@sgugger
| 06-02-2021 18:12:43 | 06-02-2021 18:12:43 | |
transformers | 11,996 | closed | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. | While generating an exe using PyInstaller, I get the following error; in the Python IDLE it works fine.
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
My TF and Pytorch versions are 2.5.0, 1.8.1+cpu respectively.
I tried uninstalling them and then reinstalling them along with transformers
Any help will be appreciated | 06-02-2021 15:31:24 | 06-02-2021 15:31:24 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,995 | closed | tensorflow has no attribute swish | ## Environment info
- transformers version: 4.6.1
- Platform: Linux-5.4.0-1047-azure-x86_64-with-glibc2.10
- Python version: 3.8.1
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
```
azureuser@ai4hr-k80:~/cloudfiles/code/Users/francois.mentec$ pip show tensorflow
Name: tensorflow
Version: 2.5.0
Summary: TensorFlow is an open source machine learning framework for everyone.
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: [email protected]
License: Apache 2.0
Location: /anaconda/envs/azureml_py38/lib/python3.8/site-packages
Requires: typing-extensions, h5py, grpcio, tensorflow-estimator, wrapt, gast, tensorboard, six, keras-nightly, astunparse, flatbuffers, wheel, absl-py, protobuf, numpy, opt-einsum, keras-preprocessing, termcolor, google-pasta
Required-by:
azureuser@ai4hr-k80:~/cloudfiles/code/Users/francois.mentec$ pip show transformers
Name: transformers
Version: 4.6.1
Summary: State-of-the-art Natural Language Processing for TensorFlow 2.0 and PyTorch
Home-page: https://github.com/huggingface/transformers
Author: Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Sam Shleifer, Patrick von Platen, Sylvain Gugger, Suraj Patil, Stas Bekman, Google AI Language Team Authors, Open AI team Authors, Facebook AI Authors, Carnegie Mellon University Authors
Author-email: [email protected]
License: Apache
Location: /anaconda/envs/azureml_py38/lib/python3.8/site-packages
Requires: numpy, filelock, requests, tqdm, sacremoses, packaging, tokenizers, huggingface-hub, regex
Required-by:
```
### Who can help
- tensorflow: @Rocketknight1
## To reproduce
Steps to reproduce the behavior:
import transformers:
`from transformers import BertTokenizer, BertModel, AdamW`
## Expected behavior
Following error:
`AttributeError: module 'tensorflow_core.python.keras.api._v2.keras.activations' has no attribute 'swish'` | 06-02-2021 15:30:37 | 06-02-2021 15:30:37 | This is very strange - I can't reproduce it here, and Tensorflow >= 2.3 should have Swish as an activation. I'll try to figure that one out, but if you discover anything else about the problem, let me know!<|||||>Tensorflow version is wrong:
```
import tensorflow as ts
print(ts.__version__)
```
give:
`2.1.0`
I have no idea why pip and transformers-cli show a different version. I must state that I hate Microsoft Azure. The error isn't on your side.
EDIT:
For anyone who gets the same error on Azure: you need to select Python 3.8 as the kernel version in your notebook; Python 3 is selected by default.

<|||||>Don't worry about it! This is something I see with conda or venvs sometimes - usually it's because I'm accidentally using two Python environments at once - for example, if I don't have pip installed in a conda environment the pip command still works, but it's actually the system pip, and if I'm not paying attention I just end up installing packages systemwide, while still being unable to access them in my active environment. |
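For anyone hitting the same mismatch, a quick way to confirm which interpreter and TensorFlow version the notebook kernel is actually using (as opposed to what `pip` reports in a terminal):
```python
import sys

import tensorflow as tf
import transformers

print(sys.executable)          # which Python the kernel is really running
print(tf.__version__)          # per the comment above, >= 2.3 is needed for keras.activations.swish
print(transformers.__version__)
```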
transformers | 11,994 | closed | CLIPFeatureExtractor should resize images with kept aspect ratio | # What does this PR do?
Fixes #11992
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj
@LysandreJik
@patrickvonplaten
@sgugger
## Description
With this PR, the preprocessing should match the original preprocessing exactly. With the example from #11992 we now get the exact same results:
```python
>>> import torch
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt", padding=True)
>>> inputs
{'pixel_values': tensor([[[[ 0.5873, 0.5873, 0.6165, ..., 0.0617, 0.0471, -0.0259],
[ 0.5727, 0.5727, 0.6603, ..., 0.1201, 0.0763, 0.0909],
[ 0.5873, 0.5435, 0.6165, ..., 0.0325, 0.1201, 0.0617],
...,
[ 1.8719, 1.8573, 1.8719, ..., 1.3902, 1.4340, 1.4194],
[ 1.8281, 1.8719, 1.8427, ..., 1.4486, 1.4340, 1.5070],
[ 1.8573, 1.9011, 1.8281, ..., 1.3756, 1.3610, 1.4486]],
[[-1.3169, -1.3019, -1.3169, ..., -1.4970, -1.4369, -1.4820],
[-1.2418, -1.2718, -1.2268, ..., -1.4369, -1.4669, -1.4519],
[-1.2568, -1.3169, -1.2268, ..., -1.4669, -1.4069, -1.4519],
...,
[ 0.1239, 0.1089, 0.1239, ..., -0.7016, -0.6865, -0.6865],
[ 0.0789, 0.0939, 0.0488, ..., -0.6565, -0.6865, -0.6115],
[ 0.0939, 0.1089, 0.0038, ..., -0.7766, -0.7316, -0.6115]],
[[-0.4848, -0.4137, -0.3853, ..., -0.9541, -0.8545, -0.8545],
[-0.4137, -0.4706, -0.3711, ..., -0.8119, -0.8545, -0.7834],
[-0.3284, -0.4422, -0.3853, ..., -0.8688, -0.8119, -0.8830],
...,
[ 1.5771, 1.6482, 1.6340, ..., 0.9088, 0.9514, 0.8945],
[ 1.6198, 1.6055, 1.6055, ..., 0.8661, 0.8092, 0.7950],
[ 1.6624, 1.6766, 1.5487, ..., 0.7950, 0.8661, 0.8519]]]])}
``` | 06-02-2021 15:02:22 | 06-02-2021 15:02:22 | Hi @TobiasNorlund , thanks a lot for spotting this.
However, this change does not actually work as expected. I processed a few images using the origin CLIP transforms and `CLIPFeatureExtractor` and compared the output. Here's the script
```python3
from PIL import Image
import os
import skimage
import torch
from clip import load
from transformers import CLIPConfig, CLIPModel, CLIPTokenizer, CLIPFeatureExtractor, CLIPProcessor
_, transforms = load("./model.pt", jit=False)
proc = CLIPProcessor.from_pretrained("./clip-vit-base-patch32/")
files = [filename for filename in os.listdir(skimage.data_dir) if filename.endswith(".png") or filename.endswith(".jpg")]
images = []
for filename in files:
image = transforms(Image.open(os.path.join(skimage.data_dir, filename)).convert("RGB"))
images.append(image)
hf_images = []
for filename in files:
image = Image.open(os.path.join(skimage.data_dir, filename)).convert("RGB")
enc = proc(images=image, return_tensors="pt")
hf_images.append(enc.pixel_values.squeeze(0))
match = [torch.allclose(hf_image, pt_image, atol=4e-2) for hf_image, pt_image in zip(hf_images, images)]
all(match)
```
I looked into torchvision's `resize` and `center_crop` implementations and it turns out they are a bit different from the way we have implemented them. Following their implementation I tried overriding the `resize` and `center_crop` methods in `CLIPFeatureExtractor`, which seems to be working. Here's the code:
```python
def center_crop(self, image, size):
"""
Crops :obj:`image` to the given size using a center crop. Note that if the image is too small to be cropped to
the size is given, it will be padded (so the returned result has the size asked).
Args:
image (:obj:`PIL.Image.Image` or :obj:`np.ndarray` or :obj:`torch.Tensor`):
The image to resize.
size (:obj:`int` or :obj:`Tuple[int, int]`):
The size to which crop the image.
"""
self._ensure_format_supported(image)
if not isinstance(size, tuple):
size = (size, size)
if not isinstance(image, Image.Image):
image = self.to_pil_image(image)
image_width, image_height = image.size
crop_height, crop_width = size
crop_top = int((image_height - crop_height + 1) * 0.5)
crop_left = int((image_width - crop_width + 1) * 0.5)
return image.crop((crop_left, crop_top, crop_left + crop_width, crop_top + crop_height))
def resize(self, image, size, resample=Image.BICUBIC):
width, height = image.size
short, long = (width, height) if width <= height else (height, width)
if short == size:
return image
new_short, new_long = size, int(size * long / short)
new_w, new_h = (new_short, new_long) if width <= height else (new_long, new_short)
return image.resize((new_w, new_h), resample)
```
With this change, there is no need to change the `__call__` method.
However, this change requires `center_crop` to be always applied as `resize` won't always resize to an exact given size.
Could you verify this on your end and update the PR? Thanks!<|||||>Thanks @patil-suraj !
Please have a look at the updated PR, in which I modified your `resize` method to also support non-PIL images to make tests pass. |
transformers | 11,993 | closed | XLNET on SQuAD2 evaluation error | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0.dev0
- Platform: NAME="Red Hat Enterprise Linux Server" VERSION="7.9 (Maipo)"
- Python version: Python 3.6.8 :: Anaconda, Inc.
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: parallel in 4 GPUs on one node
### Who can help
@patrickvonplaten @sgugger
- Models: XLNET, benchmarks on SQuAD2
## Information
Model I am using (Bert, XLNet ...): XLNet cased-base; it produces an error like the following in the evaluation step:
```
100%|█████████▉| 1528/1529 [09:59<00:00, 2.55it/s]Traceback (most recent call last):
File "/scratch365/yding4/Amazon_2021_summer_intern/AVEQA_PyTorch/XLNET_on_SQuAD2.0/run_qa_beam_search.py", line 661, in <module>
main()
File "/scratch365/yding4/Amazon_2021_summer_intern/AVEQA_PyTorch/XLNET_on_SQuAD2.0/run_qa_beam_search.py", line 620, in main
metrics = trainer.evaluate()
File "/scratch365/yding4/Amazon_2021_summer_intern/AVEQA_PyTorch/XLNET_on_SQuAD2.0/trainer_qa.py", line 50, in evaluate
ignore_keys=ignore_keys,
File "/scratch365/yding4/Amazon_2021_summer_intern/transformers/src/transformers/trainer.py", line 2169, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/scratch365/yding4/Amazon_2021_summer_intern/transformers/src/transformers/trainer.py", line 2383, in prediction_step
outputs = model(**inputs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
return self.gather(outputs, self.output_device)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 180, in gather
return gather(outputs, output_device, dim=self.dim)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 76, in gather
res = gather_map(outputs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 70, in gather_map
for k in out))
File "<string>", line 11, in __init__
File "/scratch365/yding4/Amazon_2021_summer_intern/transformers/src/transformers/file_utils.py", line 1739, in __post_init__
for element in iterator:
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 70, in <genexpr>
for k in out))
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 71, in gather_map
return type(out)(map(gather_map, zip(*outputs)))
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/_functions.py", line 72, in forward
return comm.gather(inputs, ctx.dim, ctx.target_device)
File "/afs/crc.nd.edu/user/y/yding4/.conda/envs/AVEQA_PyTorch/lib/python3.6/site-packages/torch/nn/parallel/comm.py", line 235, in gather
return torch._C._gather(tensors, dim, destination)
RuntimeError: Input tensor at index 3 has invalid shape [384, 1, 1024], but expected [384, 2, 1024]
```
The problem arises when using:
- [ * ] the official example scripts: (give details below)
```
python run_qa_beam_search.py \
--model_name_or_path xlnet-large-cased \
--dataset_name squad_v2 \
--do_train \
--do_eval \
--version_2_with_negative \
--learning_rate 3e-5 \
--num_train_epochs 1 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./wwm_cased_finetuned_squad/ \
--per_device_eval_batch_size=2 \
--per_device_train_batch_size=2 \
--save_steps 5000
```
The evaluation works when **--per_device_eval_batch_size** is set to 1 instead of 2.
The tasks I am working on is:
- [x] an official GLUE/SQUaD task: SQUaD2.0 task
the script is used from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa_beam_search.py
## To reproduce
Steps to reproduce the behavior:
1. just run the script
## Expected behavior
train successfully and get evaluation results | 06-02-2021 14:54:41 | 06-02-2021 14:54:41 | It looks like you are using DataParallel for the evaluation, which does not always work if the number of samples is not a round multiple of the number of GPUs. You should use DistributedDataParallel (as recommended by the PyTorch team), just change your command to:
```
python -m torch.distributed.launch --nproc_per_node 2 run_qa_beam_search.py \
```
(and replace 2 by your actual number of GPUs) and it should work (it did on my setup).<|||||>> It looks like you are using DataParallel for the evaluation, which does not always work if the number of samples is not a round multiple of the number of GPUs. You should use DistributedDataParallel (as recommended by the PyTorch team), just change your command to:
>
> ```
> python -m torch.distributed.launch --nproc_per_node 2 run_qa_beam_search.py \
> ```
>
> (and replace 2 by your actual number of GPUs) and it should work (it did on my setup).
Thank you so much, close the issue now. |
transformers | 11,992 | closed | CLIPFeatureExtractor should resize images with kept aspect ratio | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.11.0-7614-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
@LysandreJik
@patrickvonplaten
@sgugger
## Information
Model I am using (Bert, XLNet ...): CLIP
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
The CLIPFeatureExtractor does not replicate the behavior of the [CLIP reference implementation](https://github.com/openai/CLIP). The below code is taken from the official [huggingface transformers CLIP documentation](https://huggingface.co/transformers/model_doc/clip.html):
```
$ docker run --rm -it huggingface/transformers-cpu:4.6.1
root@02cd404c4a60:/workspace# pip install Pillow==7.2.0
root@02cd404c4a60:/workspace# python3
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPProcessor, CLIPModel
>>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(images=image, return_tensors="pt", padding=True)
>>> inputs
{'pixel_values': tensor([[[[ 0.2807, 0.3829, 0.4267, ..., -0.2886, -0.2740, -0.2886],
[ 0.3245, 0.3829, 0.4121, ..., -0.2886, -0.2886, -0.3178],
[ 0.2807, 0.3537, 0.3683, ..., -0.3762, -0.3470, -0.3178],
...,
[ 1.6384, 1.5362, 1.4194, ..., 1.3902, 1.2880, 1.2442],
[ 1.6092, 1.5508, 1.5070, ..., 1.2150, 0.9814, 0.8501],
[ 1.6092, 1.4778, 1.4924, ..., 0.1201, -0.1280, -0.3908]],
[[-1.3919, -1.3919, -1.3919, ..., -1.5420, -1.5420, -1.5570],
[-1.3469, -1.3469, -1.3469, ..., -1.5270, -1.5120, -1.5270],
[-1.4069, -1.3769, -1.3469, ..., -1.5570, -1.5420, -1.5420],
...,
[-0.3414, -0.4614, -0.5515, ..., -0.6415, -0.7016, -0.7466],
[-0.3414, -0.3864, -0.4914, ..., -0.7316, -0.8666, -0.9267],
[-0.3714, -0.4914, -0.5065, ..., -1.2869, -1.3769, -1.4820]],
[[-0.6555, -0.4990, -0.5417, ..., -1.0110, -0.9256, -0.9541],
[-0.6981, -0.5986, -0.5701, ..., -1.0110, -0.9541, -1.0110],
[-0.6128, -0.5275, -0.4990, ..., -1.0252, -1.0394, -1.0536],
...,
[ 1.3638, 1.3496, 1.1221, ..., 1.1647, 1.0652, 0.9514],
[ 1.3354, 1.1789, 1.2643, ..., 0.9372, 0.7523, 0.6244],
[ 1.3780, 1.3780, 1.2643, ..., -0.1293, -0.5559, -0.7408]]]])}
```
## Expected behavior
Feeding the same image through the official CLIP preprocessing function gives different results:
```
$ docker run --rm -it pytorch/pytorch:1.8.1-cuda11.1-cudnn8-runtime
root@fe597c8c5e9f:/workspace# apt update && apt install git
root@fe597c8c5e9f:/workspace# pip install ftfy regex tqdm
root@fe597c8c5e9f:/workspace# pip install git+https://github.com/openai/CLIP.git
root@fe597c8c5e9f:/workspace# python
Python 3.8.8 (default, Feb 24 2021, 21:46:12)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> import clip
>>> import requests
>>> from PIL import Image
>>> device = "cpu"
>>> model, preprocess = clip.load("ViT-B/32", device=device)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> inputs = preprocess(Image.open(requests.get(url, stream=True).raw)).unsqueeze(0).to(device)
>>> inputs
tensor([[[[ 0.5873, 0.5873, 0.6165, ..., 0.0617, 0.0471, -0.0259],
[ 0.5727, 0.5727, 0.6603, ..., 0.1201, 0.0763, 0.0909],
[ 0.5873, 0.5435, 0.6165, ..., 0.0325, 0.1201, 0.0617],
...,
[ 1.8719, 1.8573, 1.8719, ..., 1.3902, 1.4340, 1.4194],
[ 1.8281, 1.8719, 1.8427, ..., 1.4486, 1.4340, 1.5070],
[ 1.8573, 1.9011, 1.8281, ..., 1.3756, 1.3610, 1.4486]],
[[-1.3169, -1.3019, -1.3169, ..., -1.4970, -1.4369, -1.4820],
[-1.2418, -1.2718, -1.2268, ..., -1.4369, -1.4669, -1.4519],
[-1.2568, -1.3169, -1.2268, ..., -1.4669, -1.4069, -1.4519],
...,
[ 0.1239, 0.1089, 0.1239, ..., -0.7016, -0.6865, -0.6865],
[ 0.0789, 0.0939, 0.0488, ..., -0.6565, -0.6865, -0.6115],
[ 0.0939, 0.1089, 0.0038, ..., -0.7766, -0.7316, -0.6115]],
[[-0.4848, -0.4137, -0.3853, ..., -0.9541, -0.8545, -0.8545],
[-0.4137, -0.4706, -0.3711, ..., -0.8119, -0.8545, -0.7834],
[-0.3284, -0.4422, -0.3853, ..., -0.8688, -0.8119, -0.8830],
...,
[ 1.5771, 1.6482, 1.6340, ..., 0.9088, 0.9514, 0.8945],
[ 1.6198, 1.6055, 1.6055, ..., 0.8661, 0.8092, 0.7950],
[ 1.6624, 1.6766, 1.5487, ..., 0.7950, 0.8661, 0.8519]]]])
``` | 06-02-2021 14:36:54 | 06-02-2021 14:36:54 | Specifically, I believe this is due to differing logic in the image resizing preprocessing step. The original CLIP implementation uses `torchvision.transforms.Resize` ([ref](https://github.com/openai/CLIP/blob/main/clip/clip.py#L60)) to resize the image given a single integer of the desired size. According to the [documentation of `Resize`](https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.Resize):
> If size is an int, smaller edge of the image will be matched to this number. i.e, if height > width, then image will be rescaled to (size * height / width, size)
However, in transformers, the image is eventually [resized in `CLIPFeatureExtractor`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/clip/feature_extraction_clip.py#L146), which in turn [resizes to a square size](https://github.com/huggingface/transformers/blob/123b597f5da6dd1e54545f9cce1450dc4b401784/src/transformers/image_utils.py#L158). |
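To make the divergence concrete, here is a small sketch contrasting the two resize semantics described above (the 640x480 input and size of 224 are illustrative):
```python
from PIL import Image

image = Image.new("RGB", (640, 480))
size = 224

# torchvision-style: the shorter edge becomes `size`, the longer edge scales with it.
w, h = image.size
short, long = min(w, h), max(w, h)
new_short, new_long = size, int(size * long / short)
new_w, new_h = (new_short, new_long) if w <= h else (new_long, new_short)
aspect_preserving = image.resize((new_w, new_h), Image.BICUBIC)
print(aspect_preserving.size)  # (298, 224)

# Square resize (the behavior reported above): distorts non-square images.
squashed = image.resize((size, size), Image.BICUBIC)
print(squashed.size)  # (224, 224)
```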
transformers | 11,991 | closed | Trainer API | I am using Trainer API to pretrain BERT model
Start training...
Traceback (most recent call last):
File "/home/kruthika/PycharmProjects/huggingfaceBert/pretrain_transformers_pytorch.py", line 451, in <module>
trainer.train(model_path=model_path)
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1208, in train
self.control = self.callback_handler.on_train_begin(args, self.state, self.control)
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 340, in on_train_begin
return self.call_event("on_train_begin", args, state, control)
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/trainer_callback.py", line 388, in call_event
**kwargs,
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/integrations.py", line 717, in on_train_begin
self.setup(args, state, model, **kwargs)
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/transformers/integrations.py", line 694, in setup
**init_args,
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_init.py", line 747, in init
wi.setup(kwargs)
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_init.py", line 154, in setup
wandb_login._login(anonymous=anonymous, force=force, _disable_warning=True)
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_login.py", line 238, in _login
wlogin.prompt_api_key()
File "/home/kruthika/PycharmProjects/huggingfaceBert/venv/lib/python3.6/site-packages/wandb/sdk/wandb_login.py", line 174, in prompt_api_key
raise UsageError("api_key not configured (no-tty). call " + directive)
wandb.errors.UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])
Process finished with exit code 1
| 06-02-2021 14:17:13 | 06-02-2021 14:17:13 | Maybe @borisdayma @sgugger have an idea<|||||>What is your environment?
Can you try to call `wandb login` in the console before running your script? You typically only have to do it once.
Otherwise you can always use your key and log in within your script or use environment variables.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
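A minimal sketch of the in-script options mentioned above (the key value is a placeholder, not a real credential):
```python
import os

# Option A: expose the key through an environment variable (placeholder value).
os.environ["WANDB_API_KEY"] = "YOUR_WANDB_API_KEY"

# Option B: log in explicitly before creating the Trainer.
# import wandb
# wandb.login(key="YOUR_WANDB_API_KEY")

# Option C: disable W&B reporting entirely for this run.
# os.environ["WANDB_DISABLED"] = "true"
```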
transformers | 11,990 | closed | Fix examples in VisualBERT docs | This PR fixes the examples in [VisualBERT docs](https://huggingface.co/transformers/master/model_doc/visual_bert.html).
The issue is described in this [comment](https://github.com/huggingface/transformers/pull/10534#issuecomment-853015418) by @NielsRogge.
Requesting review from @sgugger | 06-02-2021 13:57:34 | 06-02-2021 13:57:34 | |
transformers | 11,989 | closed | EOFError("No valid references for a sentence!") for run_translation example | I tried to apply `run_translation` to a dataset I created. I tried to make sure there is no blank target.
example of test file:
`
{"data": [{"translation": {"en": "You're quite impatient with the rest of humanity, in fact.", "fa": "ﺩﺭ ﻭﺎﻘﻋ ﺩﺭ ﻢﻗﺎﺒﻟ ﺱﺎﯾﺭ ﺎﻨﺳﺎﻧ<200c>ﻫﺍ ﮎﺎﻣﻻ ﺐﯾ<200c>ﺣﻮﺼﻠﻫ ﻪﺴﺘﯾﺩ."}}, {"translation": {"en": "Now despite this history of distrust, I still believe that indigenous people can benefit from genetic research.", "fa": "ﺡﺍﻻ ﺏﺍ ﻮﺟﻭﺩ ﺎﯿﻧ ﺐﯾ ﺎﻌﺘﻣﺍﺪﯾ ﺕﺍﺮﯿﺨﯾ ﻢﻧ ﻪﻧﻭﺯ ﻢﻌﺘﻗﺪﻣ ﻡﺭﺪﻣﺎﻧ ﺏﻮﻤﯾ ﻢﯾ ﺕﻭﺎﻨﻧﺩ ﺍﺯ ﺖﺤﻘﯿﻗﺎﺗ ﮋﻨﺘﯿﮐ ﺱﻭﺩ ﺐﺑﺮﻧﺩ."}},
`
However I get this error:
```
File "run_translation.py", line 496, in compute_metrics
result = metric.compute(predictions=decoded_preds, references=decoded_labels)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/datasets/metric.py", line 40$
, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/pouramini/.cache/huggingface/modules/datasets_modules/metrics/sacrebleu/4dba4$
29caa3766d885f0b9cde070fedb22ac3190c264a6454b8ea6703ddd466/sacrebleu.py", line 128, in _com$
ute
use_effective_order=use_effective_order,
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/sacrebleu/compat.py", line 3$
, in corpus_bleu
sys_stream, ref_streams, use_effective_order=use_effective_order)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/sacrebleu/metrics/bleu.py", $
ine 286, in corpus_score
raise EOFError("No valid references for a sentence!")
EOFError: No valid references for a sentence!
```
@patil-suraj @patrickvonplaten @lhoestq | 06-02-2021 10:26:16 | 06-02-2021 10:26:16 | Hi @puraminy
Could you debug the `compute_metrics` function? Maybe print/log a few `decoded_labels` that are used as references. Bit hard to guess without running the script.<|||||>@patil-suraj
I actually did, and the problem is that the predictions are in English rather than Persian (the target language)! I consider it a serious bug and I posted more details in:
https://github.com/huggingface/transformers/issues/12010<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,988 | closed | Add loss reduction parameter in forward() method | # Add loss reduction parameter in forward() method
Add the possibility to choose the reduction method for the loss functions used in the `forward` method of most models (like all BERT-based ones, `BertForMaskedLM`, `BigBirdForMaskedLM`, etc.). Currently, models use losses without the possibility to pass additional parameters.
From PyTorch docs [nn.CrossEntropyLoss ](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html):
> reduction (string, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied, 'mean': the weighted mean of the output is taken, 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'
## Motivation
Especially in models like `BertForMaskedLM` passing a reduction like `none` instead of the default `mean` can be very handy to check individual losses for tokens.
In our project, we handle this by creating our own class inheriting from the huggingface model and overwriting the `forward()` method.
## Your contribution
A quick demonstration of the idea on the example of base `Bert` models:
https://github.com/marekrydlewski/transformers/commit/83994b12085b3187f0b80fbb9a6d4a7b5e4bc8de
There are no updated docs, it's just a demonstration.
Then it can be used like:
```
model = BertForMaskedLM(...)
output = model(..., loss_reduction='none')
```
| 06-02-2021 08:47:40 | 06-02-2021 08:47:40 | This has been asked before, but I don't think this is on the roadmap. If you want to use a different loss reduction, you can easily overwrite the model and plug in your custom loss function. See also https://github.com/huggingface/transformers/issues/9625#issuecomment-762167788 and https://github.com/huggingface/transformers/issues/7024#issue-696684075.
Otherwise, the library would become a bit cluttered with all kinds of custom parameters, so simplicity is favored.
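For readers landing here, a minimal sketch of the suggested workaround — computing the unreduced loss outside the model (the checkpoint name is just an example):
```python
import torch
from transformers import BertForMaskedLM, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
labels = inputs["input_ids"].clone()  # in real training, set unmasked positions to -100

logits = model(**inputs).logits  # (batch, seq_len, vocab_size)

# reduction="none" yields one loss value per token instead of the mean.
loss_fct = torch.nn.CrossEntropyLoss(reduction="none")
per_token_loss = loss_fct(
    logits.view(-1, model.config.vocab_size), labels.view(-1)
).view(labels.shape)
```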
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,987 | closed | Movement Pruning does not achieve expected results | Transformer version: master or 4.6.1
tensorflow-gpu == 2.3.1
pytorch == 1.7.1
flax == 0.3.4
I tried to reproduce the results based on the script given in examples/research_projects/movement-pruning.
The experimental results I got are not as good as the paper shows:
06/01/2021 22:55:40 - INFO - __main__ - ***** Running evaluation *****
06/01/2021 22:55:40 - INFO - __main__ - Num examples = 10833
06/01/2021 22:55:40 - INFO - __main__ - Batch size = 32
Evaluating: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 339/339 [01:42<00:00, 3.32it/s]
06/01/2021 22:57:22 - INFO - __main__ - Evaluation done in total 102.215492 secs (0.009436 sec per example)
06/01/2021 22:58:27 - INFO - __main__ - Results: {'exact': 0.33112582781456956, 'f1': 7.122522334334856, 'total': 10570, 'HasAns_exact': 0.33112582781456956, 'HasAns_f1': 7.122522334334856, 'HasAns_total': 10570, 'best_exact': 0.33112582781456956, 'best_exact_thresh': 0.0, 'best_f1': 7.122522334334856, 'best_f1_thresh': 0.0}
I followed almost every step as the 'README.md' says. The only change I made is 'BertLayerNorm = torch.nn.LayerNorm' according to [this](https://github.com/huggingface/transformers/issues/10892).
I also noticed the requirements in the [setup ](https://github.com/huggingface/transformers/tree/master/examples/research_projects/movement-pruning#setup) step.
I wonder what should I do to reproduce the same results from the master branch.
Thank you. @VictorSanh | 06-02-2021 08:06:30 | 06-02-2021 08:06:30 | that's very low of a score... i imagine it''s on squad.
could you give me more details about the command you are running?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I have the same problem. The accuracy is quite low when performing movement pruning. Have you solved it? |
transformers | 11,986 | open | [WIP] Add ViLBERT | # What does this PR do?
This PR adds ViLBERT.
Papers: [Multitask ViLBERT](https://openaccess.thecvf.com/content_CVPR_2020/html/Lu_12-in-1_Multi-Task_Vision_and_Language_Representation_Learning_CVPR_2020_paper.html) , [VilBERT](https://arxiv.org/abs/1908.02265).
GitHub: https://github.com/facebookresearch/vilbert-multi-task
| 06-02-2021 06:57:58 | 06-02-2021 06:57:58 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale |
transformers | 11,985 | closed | Changed the hidden_size to d_model for XLNET docs | # What does this PR do?
Changes the use of hidden_size to d_model, as the XLNet model (unlike BERT and others) uses `d_model` instead of `hidden_size`.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11938
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sgugger @NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-02-2021 04:46:37 | 06-02-2021 04:46:37 | Many models use different names for this and `hidden_size` is a constant property that works for all of them, so we will keep the docstrings updated with this. It also simplifies copy-pasting the docstrings between models.<|||||>> Many models use different names for this and `hidden_size` is a constant property that works for all of them, so we will keep the docstrings updated with this. It also simplifies copy-pasting the docstrings between models.
Okay, in that case we can at least mention somewhere in the XLNet documentation that hidden_size is the same as d_model, because someone who starts working with XLNet and has not worked with other models won't understand what hidden_size is, as hidden_size is not described in the XLNet docs yet is mentioned as one of the dimensions in the output. <|||||>How about adding an entry in the glossary page, along with input IDs, attention mask etc?<|||||>> How about adding an entry in the glossary page, along with input IDs, attention mask etc?
along with.. means inside those sections [input IDs section, attention mask section] or creating another section for it?
If we think of adding it to the glossary, we can write a line in the docs of XLNet, BERT, etc. (other models) saying that explanations of some terms can be found in the glossary. Not all documentation (pandas, PyTorch, etc.) has a glossary section, so it can't be expected that a reader will look for one if they encounter a term that isn't explained on a documentation page.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,984 | closed | [deepspeed] Move code and doc into standalone files | Deepspeed integration is no longer bound to HF Trainer and its docs have grown too big to be a subsection of the Trainer docs. This PR:
- creates `transformers.deepspeed` and migrates all the code and references to it
- moves docs to `deepspeed.rst`
Note to @sgugger - I tried to make this easy to review by not making any changes to any content of code or text. Other than the updated imports in the code, the only change is the preamble section of `deepspeed.rst` - there is no need to re-review the rest unless you'd like to. I flagged that new text below.
And I also added poor-man's-style redirect links from most of the previous sections in `trainer.rst` so that the old links still work. Well, I had to add anchors to sections in the new doc for this to work.
@sgugger | 06-02-2021 03:59:42 | 06-02-2021 03:59:42 | |
transformers | 11,983 | closed | Bump urllib3 from 1.25.8 to 1.26.5 in /examples/research_projects/lxmert | Bumps [urllib3](https://github.com/urllib3/urllib3) from 1.25.8 to 1.26.5.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>1.26.5</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Fixed deprecation warnings emitted in Python 3.10.</li>
<li>Updated vendored <code>six</code> library to 1.16.0.</li>
<li>Improved performance of URL parser when splitting the authority component.</li>
</ul>
<p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a></strong></p>
<h2>1.26.4</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Changed behavior of the default <code>SSLContext</code> when connecting to HTTPS proxy during HTTPS requests. The default <code>SSLContext</code> now sets <code>check_hostname=True</code>.</li>
</ul>
<p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a></strong></p>
<h2>1.26.3</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>
<p>Fixed bytes and string comparison issue with headers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2141">#2141</a>)</p>
</li>
<li>
<p>Changed <code>ProxySchemeUnknown</code> error message to be more actionable if the user supplies a proxy URL without a scheme (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2107">#2107</a>)</p>
</li>
</ul>
<p><strong>If you or your organization rely on urllib3 consider supporting us via <a href="https://github.com/sponsors/urllib3">GitHub Sponsors</a></strong></p>
<h2>1.26.2</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Fixed an issue where <code>wrap_socket</code> and <code>CERT_REQUIRED</code> wouldn't be imported properly on Python 2.7.8 and earlier (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2052">#2052</a>)</li>
</ul>
<h2>1.26.1</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>Fixed an issue where two <code>User-Agent</code> headers would be sent if a <code>User-Agent</code> header key is passed as <code>bytes</code> (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2047">#2047</a>)</li>
</ul>
<h2>1.26.0</h2>
<p>:warning: <strong>IMPORTANT: urllib3 v2.0 will drop support for Python 2</strong>: <a href="https://urllib3.readthedocs.io/en/latest/v2-roadmap.html">Read more in the v2.0 Roadmap</a></p>
<ul>
<li>
<p>Added support for HTTPS proxies contacting HTTPS servers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1923">#1923</a>, Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1806">#1806</a>)</p>
</li>
<li>
<p>Deprecated negotiating TLSv1 and TLSv1.1 by default. Users that
still wish to use TLS earlier than 1.2 without a deprecation warning
should opt-in explicitly by setting <code>ssl_version=ssl.PROTOCOL_TLSv1_1</code> (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2002">#2002</a>)
<strong>Starting in urllib3 v2.0: Connections that receive a <code>DeprecationWarning</code> will fail</strong></p>
</li>
<li>
<p>Deprecated <code>Retry</code> options <code>Retry.DEFAULT_METHOD_WHITELIST</code>, <code>Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST</code>
and <code>Retry(method_whitelist=...)</code> in favor of <code>Retry.DEFAULT_ALLOWED_METHODS</code>,
<code>Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT</code>, and <code>Retry(allowed_methods=...)</code>
(Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2000">#2000</a>) <strong>Starting in urllib3 v2.0: Deprecated options will be removed</strong></p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's changelog</a>.</em></p>
<blockquote>
<h2>1.26.5 (2021-05-26)</h2>
<ul>
<li>Fixed deprecation warnings emitted in Python 3.10.</li>
<li>Updated vendored <code>six</code> library to 1.16.0.</li>
<li>Improved performance of URL parser when splitting
the authority component.</li>
</ul>
<h2>1.26.4 (2021-03-15)</h2>
<ul>
<li>Changed behavior of the default <code>SSLContext</code> when connecting to HTTPS proxy
during HTTPS requests. The default <code>SSLContext</code> now sets <code>check_hostname=True</code>.</li>
</ul>
<h2>1.26.3 (2021-01-26)</h2>
<ul>
<li>
<p>Fixed bytes and string comparison issue with headers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2141">#2141</a>)</p>
</li>
<li>
<p>Changed <code>ProxySchemeUnknown</code> error message to be
more actionable if the user supplies a proxy URL without
a scheme. (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2107">#2107</a>)</p>
</li>
</ul>
<h2>1.26.2 (2020-11-12)</h2>
<ul>
<li>Fixed an issue where <code>wrap_socket</code> and <code>CERT_REQUIRED</code> wouldn't
be imported properly on Python 2.7.8 and earlier (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2052">#2052</a>)</li>
</ul>
<h2>1.26.1 (2020-11-11)</h2>
<ul>
<li>Fixed an issue where two <code>User-Agent</code> headers would be sent if a
<code>User-Agent</code> header key is passed as <code>bytes</code> (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/2047">#2047</a>)</li>
</ul>
<h2>1.26.0 (2020-11-10)</h2>
<ul>
<li>
<p><strong>NOTE: urllib3 v2.0 will drop support for Python 2</strong>.
<code>Read more in the v2.0 Roadmap <https://urllib3.readthedocs.io/en/latest/v2-roadmap.html></code>_.</p>
</li>
<li>
<p>Added support for HTTPS proxies contacting HTTPS servers (Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1923">#1923</a>, Pull <a href="https://github-redirect.dependabot.com/urllib3/urllib3/issues/1806">#1806</a>)</p>
</li>
<li>
<p>Deprecated negotiating TLSv1 and TLSv1.1 by default. Users that
still wish to use TLS earlier than 1.2 without a deprecation warning</p>
</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/urllib3/urllib3/commit/d1616473df94b94f0f5ad19d2a6608cfe93b7cdf"><code>d161647</code></a> Release 1.26.5</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2d4a3fee6de2fa45eb82169361918f759269b4ec"><code>2d4a3fe</code></a> Improve performance of sub-authority splitting in URL</li>
<li><a href="https://github.com/urllib3/urllib3/commit/2698537d52f8ff1f0bbb1d45cf018b118e91f637"><code>2698537</code></a> Update vendored six to 1.16.0</li>
<li><a href="https://github.com/urllib3/urllib3/commit/07bed791e9c391d8bf12950f76537dc3c6f90550"><code>07bed79</code></a> Fix deprecation warnings for Python 3.10 ssl module</li>
<li><a href="https://github.com/urllib3/urllib3/commit/d725a9b56bb8baf87c9e6eee0e9edf010034b63b"><code>d725a9b</code></a> Add Python 3.10 to GitHub Actions</li>
<li><a href="https://github.com/urllib3/urllib3/commit/339ad34c677c98fd9ad008de1d8bbeb9dbf34381"><code>339ad34</code></a> Use pytest==6.2.4 on Python 3.10+</li>
<li><a href="https://github.com/urllib3/urllib3/commit/f271c9c3149e20d7feffb6429b135bbb6c09ddf4"><code>f271c9c</code></a> Apply latest Black formatting</li>
<li><a href="https://github.com/urllib3/urllib3/commit/1884878aac87ef0494b282e940c32c24ee917d52"><code>1884878</code></a> [1.26] Properly proxy EOF on the SSLTransport test suite</li>
<li><a href="https://github.com/urllib3/urllib3/commit/a8913042b676c510e94fc2b097f6b514ae11a537"><code>a891304</code></a> Release 1.26.4</li>
<li><a href="https://github.com/urllib3/urllib3/commit/8d65ea1ecf6e2cdc27d42124e587c1b83a3118b0"><code>8d65ea1</code></a> Merge pull request from GHSA-5phf-pp7p-vc2r</li>
<li>Additional commits viewable in <a href="https://github.com/urllib3/urllib3/compare/1.25.8...1.26.5">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 06-02-2021 03:46:58 | 06-02-2021 03:46:58 | |
transformers | 11,982 | closed | AttributeError: 'GPT2LMHeadModel' object has no attribute 'get_encoder' | Hi,
I am trying to generate text from a GPT2 model I have trained from scratch using custom English language data.
OS: Windows 10
transformers 3.5.0
Pytorch 1.4.0 (upgrading torch did not help)
Tensorflow 2.2.0
GPT2LMHeadModel was trained using run_language_modeling.py with the following config
{
"_num_labels": 2,
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"do_sample": "false",
"early_stopping": "false",
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"gradient_checkpointing": false,
"initializer_range": 0.02,
"is_decoder": "false",
"is_encoder_decoder": "false",
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"output_attentions": "false",
"output_hidden_states": "false",
"output_past": "true",
"resid_pdrop": 0.1,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": "true",
"summary_type": "cls_index",
"summary_use_proj": "true",
"torchscript": "false",
"use_bfloat16": "false",
"vocab_size": 49051
}
Training ended without errors. I tried to generate text from the model using the following script(https://dejanbatanjac.github.io/gpt2-example/):
import random
import numpy as np
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer
gpt2_model = GPT2LMHeadModel.from_pretrained("LOCAL PATH TO MODEL")
gpt2_tokenizer = GPT2Tokenizer.from_pretrained("LOCAL PATH TO MODEL TOKENIZER")
seed = random.randint(0, 13)
np.random.seed(seed)
torch.random.manual_seed(seed)
torch.cuda.manual_seed(seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
text = """Sample prompt text """
input_ids = torch.tensor(gpt2_tokenizer.encode(text, add_special_tokens=True)).unsqueeze(0) # bs=1
gpt2_model.to(device)
gpt2_model.eval()
outputs = gpt2_model.generate(
input_ids.to(device),
max_length=500,
do_sample=True,
top_k=15,
temperature=0.65
)
print(gpt2_tokenizer.decode(outputs[0], skip_special_tokens=True))
outputs.shape,outputs[0].shape # (torch.Size([1, 500]), torch.Size([500]))
Running the above generation script, I get the following error trace
2021-06-01 22:38:44.602457: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Traceback (most recent call last):
File "GPT2_Text_gen_4.py", line 26, in <module>
temperature=0.65
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\transformers\generation_utils.py", line 462, in generate
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\transformers\generation_utils.py", line 80, in _prepare_encoder_decoder_kwargs_for_generation
encoder = self.get_encoder()
File "C:\Users\gojeb\anaconda3\envs\TF_Pytorch_Transformers\lib\site-packages\torch\nn\modules\module.py", line 948, in __getattr__
type(self).__name__, name))
AttributeError: 'GPT2LMHeadModel' object has no attribute 'get_encoder'
Please point me in the correct direction to solve this problem.
Thank you | 06-02-2021 02:42:55 | 06-02-2021 02:42:55 | Hi there,
this is because in the config the value `is_encoder_decoder` is actually a string `"False"` which evaluates to `True` in python, and hence `generate` treats this model as an encoder-decoder model. `is_encoder_decoder` should be set to boolean false.<|||||>Ahh! Thank you. |
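For anyone hitting the same error, a minimal sketch of the fix described above ("LOCAL PATH TO MODEL" is the same placeholder used in the issue):
```python
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config.from_pretrained("LOCAL PATH TO MODEL")
# Non-empty strings such as "false" are truthy in Python, so use real booleans.
config.is_encoder_decoder = False
model = GPT2LMHeadModel.from_pretrained("LOCAL PATH TO MODEL", config=config)
```
Editing the `is_encoder_decoder` entry in `config.json` to a JSON boolean has the same effect.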
transformers | 11,981 | closed | Rewrite ProphetNet to adapt converting ONNX friendly | # What does this PR do?
We want to convert ProphetNet (a PyTorch model) to ONNX, but it needs some source code changes to adapt it. The current code cannot be converted to ONNX because `torch.Tensor.new` generates a constant dimension for the tensor in the IR graph, which is not suitable if we want dynamic input axes for the converter. So we use `torch.full` instead.
This PR does not (should not) change any model behavior.
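As an illustration of the kind of replacement involved (not the exact ProphetNet lines, just the pattern described above):
```python
import torch

hidden_states = torch.randn(2, 5, 16)

# Before (the shape ends up as a constant in the exported graph):
# bias = hidden_states.new(hidden_states.shape[:2]).fill_(0.0)

# After: same values, but friendlier to dynamic-axis ONNX export.
bias = torch.full(
    hidden_states.shape[:2], 0.0,
    dtype=hidden_states.dtype, device=hidden_states.device,
)
```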
Fixes # (issue)
After this PR, the model can be converted to ONNX.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@qiweizhen @patrickvonplaten @Zhylkaaa
| 06-02-2021 00:02:41 | 06-02-2021 00:02:41 | This PR is originally from https://github.com/huggingface/transformers/pull/8675
Since ProphetNet gets refactored, now this PR is enough.
@patrickvonplaten, you are previously ok with this change, could you please sign off? or perhaps @mfuntowicz ?
Thanks!<|||||>Could someone take a look? Thank you! @qiweizhen @patrickvonplaten @Zhylkaaa @mfuntowicz<|||||>> Sorry, I am not really familiar with onnx, but for me this code is even more elegant than using `new` (which I guess is now discouraged in favor of `new_*`).
> But wouldn't it be more flexible to use `dtype=hidden_states.dtype` instead of `np.float32` to be sure that results are identical?
Thanks, done.
<|||||>@patrickvonplaten the CI failure "check_code_quality" says:
would reformat src/transformers/models/prophetnet/modeling_prophetnet.py
Oh no! 💥 💔 💥
1 file would be reformatted, 909 files would be left unchanged.
How to reformat that? Thanks.<|||||>You can simply run `make style`<|||||>> You can simply run `make style`
@patrickvonplaten I got the following errors when I run `make style`. I am using Ubuntu 20.04. Thanks
~/dev/transformers$ make style
black examples tests src utils
make: black: Command not found
make: *** [Makefile:54: style] Error 127
<|||||>do you have black installed? (if not I guess you should use `python -m pip install black`)<|||||>> do you have black installed? (if not I guess you should use `python -m pip install black`)
Thanks, it works!<|||||>@patrickvonplaten could you please approve this PR and merge? The CI failure seems unrelated. Thank you!<|||||>Hey @jiafatom,
Could you run `make style` to get the `check_code_quality` test passing? <|||||>@patrickvonplaten I actually ran `make style` and `make quality` (I guess that's what this test is using), but there are no warnings or errors, so I don't know what this is about either.<|||||>> @patrickvonplaten actually ran `make style` and `make quality` (I guess that's what this test is using), but there are no warnings or errors, so I don't know what this is about either.
Thanks, yes, actually I ran `make style` several times, it is fine in my local dev box, but still see this error in CI. |
transformers | 11,980 | closed | [Trainer] add train loss and flops metrics reports | Training wasn't reporting loss metrics (and flops, it seems); this PR fixes that. Now we get:
```
***** train metrics *****
epoch = 1.0
total_flos = 405GF
train_loss = 2.9435
train_runtime = 0:00:01.75
train_samples = 20
train_samples_per_second = 11.401
train_steps_per_second = 1.14
```
Also moves metrics logging to when all metrics have been updated.
@sgugger | 06-01-2021 19:46:48 | 06-01-2021 19:46:48 | Hmm, it broke several trainer tests:
```
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_can_resume_training
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_frozen_params
FAILED tests/test_trainer.py::TrainerIntegrationTest::test_resume_training_with_gradient_accumulation
FAILED tests/test_trainer_callback.py::TrainerCallbackTest::test_event_flow
```
The metrics log for `train_loss` is now different:
```
self.assertEqual(log, log1)
E AssertionError: {'tot[14 chars].0, 'train_loss': 6.380087534586589, 'epoch': 3.0, 'step': 24} != {'tot[14 chars].0, 'train_loss': 3.9063542683919272, 'epoch': 3.0, 'step': 24}
E {'epoch': 3.0,
E 'step': 24,
E 'total_flos': 4608.0,
E - 'train_loss': 6.380087534586589}
E + 'train_loss': 3.9063542683919272}
```
So either there was already a `train_loss` in the metrics log but it was saving a different value, and my "fixing" it to report the value `TrainOutput` returns broke the test; or it wasn't there, the loss wasn't being compared, and now it is and the values don't match. Need to check.
**edit:** proved to be the latter. |
transformers | 11,979 | closed | Typo in usage example, changed to device instead of torch_device | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Not applicable
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 06-01-2021 18:42:13 | 06-01-2021 18:42:13 | |
transformers | 11,978 | closed | No package metadata found for tqdm while generating exe | Hi,
Whenever I try to make an executable file I always get an error: No package metadata found for tqdm.
I've tried hidden-import tqdm but it didn't work. In Python IDLE it works fine, but when running the exe it doesn't work.
I am using Win 10 (64-bit)
Code:
import eel
import tqdm  # I thought of importing it to see whether this solves anything.. but it didn't help
from transformers import pipeline
print("Loaded")
eel.init('Web')
# try...catch to open index.html [UI]

Any help will be really appreciated.
Thank You | 06-01-2021 18:03:43 | 06-01-2021 18:03:43 | Hello,
Can you please tell me how you managed to solve this issue ? |
transformers | 11,977 | closed | T5-Training Arguments | Hi,
I am currently training a T5 model. I have noticed that to compute the loss for the T5 model during training, I should not assign the decoder_input_ids argument, but only use the labels argument. Otherwise, the trained model will generate gibberish. So for example, this does not work:
> labels = decoder_input_ids
> outputs = model(input_ids=input_ids,
> attention_mask=attention_mask,
> decoder_input_ids = decoder_input_ids
> decoder_attention_mask = decoder_attention_mask
> labels=labels)
But this will work:
> labels = decoder_input_ids
> labels[labels[:, :] == tokenizer.pad_token_id] = -100 # do label mask
> outputs = model(input_ids=input_ids,
> attention_mask=attention_mask,
> labels=labels)
Why is this the case? What is the difference between the two code when computing loss? | 06-01-2021 18:02:51 | 06-01-2021 18:02:51 | The `decoder_input_ids` should not be equal to the `labels`, but instead equal to the labels shifted one position to the right.
This is because the decoder of T5 processes the text autoregressively (a fancy word to say, from left to right). So suppose you want the decoder of T5 to generate the sentence "Belgium is gonna win the European Football Championship". Then first, we provide the token `"<s>"` to the decoder, to mark the beginning of a sentence. The corresponding label will be `"Belgium"`. Next, we provide `["<s>", "Belgium"]` to the decoder and the label will be `"is"`. Next, we provide `["<s>", "Belgium", "is"]` to the decoder, and the label will be `"gonna"`, and so on. So as you can see, we have the following `decoder input ids` and `labels`:
decoder_input_ids = [`"<s>"`, `"Belgium"`, `"is"`, `"gonna"`, `"win"`, ...]
labels = [`"Belgium"`, `"is",` `"gonna"`, `"win"`, ...]
=> so as you can see, the decoder_input_ids are equal to the labels, but shifted one position to the right. That's why we have
`decoder_input_ids = self._shift_right(labels)` in the code of `modeling_t5.py`, as can be seen [here](https://github.com/huggingface/transformers/blob/47a98fc4cb6a561576309a57b315b042977d194c/src/transformers/models/t5/modeling_t5.py#L1583). If you don't specify the decoder_input_ids yourself, the model will create them for you (based on the labels).<|||||>Okay, that makes sense. Thanks.<|||||>Hi,
I am currently running some baseline models and I have a follow-up question regarding the _decoder_input_ids_ and the _labels_ argument.
I have read from another tutorial, that for the EncoderDecoder model (another Seq2Seq model), the decoder_input_ids and the labels should be copies of each other. Specifically, (retrieved from https://github.com/utkd/encdecmodel-hf/blob/master/train_model.py, Line 117-124):
> en_input = en_input.to(device)
> de_output = de_output.to(device)
> en_masks = en_masks.to(device)
> de_masks = de_masks.to(device)
>
> lm_labels = de_output.clone()
> out = model(input_ids=en_input, attention_mask=en_masks,
> decoder_input_ids=de_output, decoder_attention_mask=de_masks,lm_labels=lm_labels)
This is different from how T5 handles _decoder_input_ids_ and the _labels_, which requires the _lm_labels_ (replaced by _labels_ in later huggingface transformers versions) to be shifted right.
Assuming that the EncoderDecoder tutorial is correct, does this mean that for different Seq2Seq models, how _decoder_input_ids_ and the _labels_ should be prepared for model training are also different?
<|||||>Looking at the code example in the [documentation of the EncoderDecoder model](https://huggingface.co/transformers/model_doc/encoderdecoder.html), it looks like there, the `decoder_input_ids` are indeed set equal to the `labels`. This might help you: https://github.com/huggingface/transformers/issues/6487#issuecomment-674172930.
I think they all train in the same way, but the `EncoderDecoder` model itself takes care of adding the BOS token (= beginning of sequence) to the `decoder_input_ids`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
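To make the shift-right relationship discussed in this thread concrete, a minimal sketch for T5 (the checkpoint name is illustrative, and `_shift_right` is a private helper):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# Passing only `labels` lets the model build decoder_input_ids internally
# by shifting the labels one position to the right.
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
print(outputs.loss)

# The same shift, done explicitly:
decoder_input_ids = model._shift_right(labels)
```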
transformers | 11,976 | closed | Update return introduction of `forward` method | # What does this PR do?
Make it clear that the `forward` method now returns a dict instead of tuple.
PR #8530 switch the default value of `return_dict` in configurations to `True`. This caused some older code that relied on `return_dict` being set to `False` to break since assigned a dictionary to tuples can have unexpected outcomes such as receiving strings instead.
The phrasing of the introduction under the return section of the `forward` method currently state that a dictionary will be returned only if `return_dict=True` is passed or that `config.return_dict` is set to `True`. This is no longer valid ever since the default configuration changed, thus it will be beneficial for the readers to update this portion to indicate that those values need to be `False` for a tuple to be returned.
This will likely save readers some time when adapting old code :) | 06-01-2021 18:00:40 | 06-01-2021 18:00:40 | Requesting review from @sgugger<|||||>Looks like you just need to run `make style` on your branch and we should be good to merge!<|||||>I have fixed the styling issues!<|||||>Thanks! |
transformers | 11,975 | closed | It seems not able to add the args "repetition_penalty" when running the code run_summarization.py for prediction. | I am using the summarization code provided in example/pytorch/summarization/run_summarization.py
However, I could not add the argument "repetition_penalty" when generating.
After tracing the source code of Seq2SeqTrainer(), I found that the function `predict()` does not take the argument as input.
It might be helpful if the arguments the function takes could be much more flexible. For now, it only takes `max_len` and `num_beams` for the function `generate()` | 06-01-2021 17:50:30 | 06-01-2021 17:50:30 | Yes, that's right. The `run_summarization` script does not accept all `generate` arguments. Instead, those arguments should be set in the `config`. For your use-case, it should be easy to modify the script to accept these args and then pass those to `config` so `generate` can directly access those.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
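A minimal sketch of the workaround suggested above: set the generation parameter on the model config so that `generate()` (and therefore `Seq2SeqTrainer.predict()`) picks it up. Model names here are placeholders:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
tokenizer = AutoTokenizer.from_pretrained("t5-small")

# generate() falls back to values stored on the config for arguments
# that are not passed explicitly, e.g. repetition_penalty:
model.config.repetition_penalty = 1.2

# ...then build the Seq2SeqTrainer as in run_summarization.py and call
# trainer.predict(test_dataset, max_length=128, num_beams=4) as usual.
```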
transformers | 11,974 | closed | Fix loss reporting with deepspeed | # What does this PR do?
In the deepspeed integration inside `Trainer`, the loss currently reported is the scaled loss, as in scaled by the loss scaling factor used during mixed precision training. This has nothing to do with the actual loss (the scaling factor is basically the highest possible value that does not make the gradients overflow); this PR fixes that.
Fixes #11919 | 06-01-2021 14:04:15 | 06-01-2021 14:04:15 | This is exactly what deepspeed already does. Please see:
https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/engine.py#L1142-L1143
The scaling function is:
```
def _scale_loss_by_gas(self, prescaled_loss):
if isinstance(prescaled_loss, torch.Tensor):
scaled_loss = prescaled_loss / self.gradient_accumulation_steps()
elif isinstance(prescaled_loss, tuple) or isinstance(prescaled_loss, list):
scaled_loss = []
for l in prescaled_loss:
if isinstance(l, torch.Tensor):
scaled_loss.append(l / self.gradient_accumulation_steps())
else:
scaled_loss.append(l)
else:
scaled_loss = prescaled_loss
if self.warn_unscaled_loss:
logger.warning(
f'DeepSpeed unable to scale loss because of type: {type(prescaled_loss)}'
)
self.warn_unscaled_loss = False
return scaled_loss
```
it's scaled by gradient acc steps and not scaling factor.<|||||>I spent some more time running tests, including fp16, and I can't find any problem with the current code.
As posted in a comment above reviewing the code shows that it only scales by grad acc steps. |
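A purely conceptual illustration of the two different "scalings" debated in this thread; this is not DeepSpeed's or the Trainer's actual code, just the distinction in plain Python:

```python
grad_accum_steps = 8
dynamic_loss_scale = 2 ** 16  # fp16 loss scale, only there to keep gradients representable

raw_loss = 2.3

# 1) Normalization by gradient accumulation steps: what _scale_loss_by_gas does,
#    and a value that is meaningful to report in logs.
reported_loss = raw_loss / grad_accum_steps

# 2) fp16 loss scaling: the loss is multiplied before backward and the gradients are
#    divided again afterwards, so it should never leak into the reported value.
backward_loss = raw_loss * dynamic_loss_scale
```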
transformers | 11,973 | closed | typo correction | I modified wrong word in 772 line of src/transformers/generation_utils.py (trhe -> the) | 06-01-2021 12:59:17 | 06-01-2021 12:59:17 | I modified wrong words in src/transforers/generation_utils.py (trhe -> the) |
transformers | 11,972 | closed | RuntimeError: Overflow when unpacking long during training the model | Hi I am training the model for custom dataset for QnA task. I have transformers version 4.0.0 and pytorch version 1.7.1. with the following code, I am getting the issue.
```
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
# evaluation dataset
)
trainer.train()
```
Error is below:
```
RuntimeError Traceback (most recent call last)
<ipython-input-16-3435b262f1ae> in <module>
----> 1 trainer.train()
~/.local/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path, trial)
727 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control)
728
--> 729 for step, inputs in enumerate(epoch_iterator):
730
731 # Skip past any already trained steps if resuming training
~/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
433 if self._sampler_iter is None:
434 self._reset()
--> 435 data = self._next_data()
436 self._num_yielded += 1
437 if self._dataset_kind == _DatasetKind.Iterable and \
~/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
473 def _next_data(self):
474 index = self._next_index() # may raise StopIteration
--> 475 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
476 if self._pin_memory:
477 data = _utils.pin_memory.pin_memory(data)
~/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
~/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
42 def fetch(self, possibly_batched_index):
43 if self.auto_collation:
---> 44 data = [self.dataset[idx] for idx in possibly_batched_index]
45 else:
46 data = self.dataset[possibly_batched_index]
<ipython-input-7-80744e22dabe> in __getitem__(self, idx)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
<ipython-input-7-80744e22dabe> in <dictcomp>(.0)
6
7 def __getitem__(self, idx):
----> 8 return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
9
10 def __len__(self):
RuntimeError: Overflow when unpacking long
``` | 06-01-2021 12:35:36 | 06-01-2021 12:35:36 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
I am using transformers version 4.0.0 and pytorch version 1.6.0. I am getting the same error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I am running transformer 4.14.0 and still have the exact same error randomly during training.<|||||>In order to get help faster, please also include all that is asked in the issue template, with the model, dataset used, all software versions as prompted by the template. Thanks! |
transformers | 11,971 | closed | ByT5 model | ## ByT5
- Code: https://github.com/google-research/byt5
- Paper: https://arxiv.org/abs/2105.13626
- Twitter: https://twitter.com/colinraffel/status/1399525871678103552
- Ported checkpoints: https://huggingface.co/models?filter=arxiv:1907.06292
This model only requires a new tokenizer (and a tiny change to TFT5). New tokenizer does not require a vocab file as model just uses raw bytes. | 06-01-2021 10:31:45 | 06-01-2021 10:31:45 | Thank you so much for adding this model.
Is anyone else experiencing extremely slow training with it? I get 5 times longer training times on ByT5-large compared to mT5-large.
In the paper it's about 20% slower only.

<|||||>Hey @ViktorThink,
Could you maybe make two google colab using `mt5-small` and `byt5-small` that shows the different in training speed? :-)<|||||>Yes, I did a quick test just forward propagating. Seems like byt5-small takes about 4.5x times longer forward (backwards pass was fast during my tests). I think the simple explanation is that when tokenized based on utf8 instead of tokens, it generates about 4.5x more input tokens. I think I just misunderstood the paper, since it probably measured the speed per token, and not speed for a whole sentence.
https://colab.research.google.com/drive/1Hv8XnggFscgb8M9UIEkkXYllwjFUyOB_?usp=sharing
Update:
I compared batch_size=1 with batch_size=2, and the models were almost equally fast for batch_size=1, but mT5 was even faster with batch_size=2 than with batch_size=1, while ByT5 was considerably slower. <|||||>I also see slow cpu inference - byT5-small has similar speed compared to mt5-xl.
And frankly, I do not understand how it can not be the case. The number of tokens is 5X larger in my test, so up to 25X more compute in self-attention. Hidden dimension is 3X bigger in byt5 compare to mt5 (thus FFNs take 9X time more compute, and 45X more compute taking into account the number of tokens ). And everything is also multiplied by the 1.5X numbers of layers in byt5.
Sure the decoding is faster (and maybe for training it is not that obvious), but classification part of Table 10 leaves me puzzled. How can this be only 1.1X slower in terms of examples given at least 1.5X more layers and 3X bigger hidden dim? Is it some TPU magic?<|||||>I'm also curious now! Gently pinging the author @lintingxue maybe she knows more about it :-)<|||||>@lintingxue @patrickvonplaten
Also here is the code I've been using for colab (maybe I did something wrong? - please let me know):
``` python
!pip install git+https://github.com/huggingface/transformers
!pip install pip install sentencepiece
import transformers
import torch
torch.backends.cudnn.benchmark = True
import time
article = """История «Твиттера» началась в марте 2006 года как научно-исследовательский проект компании Odeo (Сан-Франциско), первоначально для внутреннего использования. Джек Дорси ввёл понятие индивидуального пользования SMS-сервиса для общения с небольшой группой. Первоначально проект задумывался, как возможность ответить на единственный вопрос: «Что ты сейчас делаешь?»."""
torch.set_num_threads(1)
for model_name in ['google/mt5-small','google/mt5-base','google/mt5-large','google/byt5-small','google/byt5-base']:
model = transformers.MT5EncoderModel.from_pretrained(model_name)#.cuda()
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
input_ids = tokenizer(article, return_tensors="pt").input_ids#.cuda()
print(input_ids.shape)
for idx in range(5):
with torch.no_grad():
t0=time.time()
outputs = model(input_ids)
#torch.cuda.synchronize()
print(f"{model_name} {idx} {time.time() - t0} seconds")
hidden_state = outputs.last_hidden_state
```
It gets:
```
...
torch.Size([1, 98])
google/mt5-small 0 0.07804465293884277 seconds
google/mt5-small 1 0.07803463935852051 seconds
google/mt5-small 2 0.08781981468200684 seconds
google/mt5-small 3 0.08014941215515137 seconds
google/mt5-small 4 0.07851195335388184 seconds
...
torch.Size([1, 98])
google/mt5-base 0 0.31621241569519043 seconds
google/mt5-base 1 0.3263530731201172 seconds
google/mt5-base 2 0.32642054557800293 seconds
google/mt5-base 3 0.3143343925476074 seconds
google/mt5-base 4 0.32720518112182617 seconds
...
torch.Size([1, 98])
google/mt5-large 0 1.1833469867706299 seconds
google/mt5-large 1 1.160696268081665 seconds
google/mt5-large 2 1.1483042240142822 seconds
google/mt5-large 3 1.1961536407470703 seconds
...
torch.Size([1, 663])
google/byt5-small 0 4.315548419952393 seconds
google/byt5-small 1 4.416741371154785 seconds
google/byt5-small 2 4.385504722595215 seconds
google/byt5-small 3 4.426936149597168 seconds
...
torch.Size([1, 663])
google/byt5-base 0 8.502674579620361 seconds
google/byt5-base 1 8.467743635177612 seconds
google/byt5-base 2 8.519198656082153 seconds
google/byt5-base 3 8.492923974990845 seconds
google/byt5-base 4 8.318963766098022 seconds
<|||||>Btw, the ByT5-xxl model in huggingface doesn't have a pytorch_model.bin so it's not possible to load, is it because it's new and will be added shortly?<|||||>byt5-xl is also not working (`pytorch_model.bin` is not correctly uploaded) btw. :)<|||||>thanks for letting me know guys - willl correct this!<|||||>Weights should be correctly uploaded now :-) <|||||>@patrickvonplaten the tokenizer also seems to be super slow.
Anybody else with such an experience? @stefan-it @ViktorThink <|||||>@PhilipMay - yes this doesn't surprise me tbh. I implemented the tokenizer in a way that fits the libraries' tokenizer design, which is by no means optimal in terms of speed. I think we should aim for a much faster Rust-backed tokenizer here - could we maybe re-use the fast Reformer tokenizer which also works on chars? cc @Narsil |
transformers | 11,970 | closed | Saving and loading a model does not work | 
After pulling a model, tuning it and saving it, it shows this error while loading it back.
I pulled other files except `pytorch_model.bin` from the pretrained model in hugging face

| 06-01-2021 06:43:01 | 06-01-2021 06:43:01 | Your file is called `pytoch_model.bin` instead of `pytorch_model.bin`.
You should use `save_pretrained` :)<|||||>Sometime I wonder how more stupid I can be.
Thanks for the help |
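For reference, the save/load round trip suggested in the reply above, in minimal form (directory name is arbitrary):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# save_pretrained writes config.json plus a correctly named pytorch_model.bin
model.save_pretrained("./my-finetuned-model")

# ...which from_pretrained can then load back without any manual file handling
reloaded = AutoModelForSequenceClassification.from_pretrained("./my-finetuned-model")
```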
transformers | 11,969 | closed | run_qa.py for question answering doesn't work for SQuAD2 | Hi, I am using a custom dataset to fine-tune QnA. When I try to train run_qa.py on my dataset, the preprocessing does not work. My dataset looks like this.
```
!python /home/jupyter/Project/transformers/examples/pytorch/question-answering/run_qa.py \
--model_name_or_path deepset/bert-large-uncased-whole-word-masking-squad2 \
--train_file /home/jupyter/Project/QnAwatersoftner/squad/train-v2.0.json \
--do_train \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir ./Exp_model/ \
--version_2_with_negative
```


Can you please help me with this. Thanks
@sgugger | 06-01-2021 06:27:10 | 06-01-2021 06:27:10 | Yes, the script is an example for squad, not an app that works on any data. You will need to adapt the preprocessing steps to your dataset or change your dataset to be formatted exactly like squad.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
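For reference, a sketch of the per-example layout the SQuAD-style preprocessing in the example script works with (mirroring the `squad_v2` dataset schema, shown as a Python literal; values are made up, and the exact loading path for a custom `--train_file` should still be checked against the script):

```python
example = {
    "id": "q1",
    "title": "Example document",
    "context": "The water softener was installed in 2019.",
    "question": "When was the water softener installed?",
    # For unanswerable questions (SQuAD v2), both lists are simply left empty.
    "answers": {"text": ["2019"], "answer_start": [36]},
}
```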
transformers | 11,968 | closed | [Pipelines] Extend pipelines to handle multiple possible AutoModel classes | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR extends `pipeline` to better handle multiple auto model classes per pipeline.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-31-2021 22:21:14 | 05-31-2021 22:21:14 | Re:
> Also, IIRC, config.architectures was not added until some point in transformers. Do we have any way to check that we're not breaking legacy models ? (Scanning the hub is my best guess)
Agree that we should be careful here. I'm scanning the hub now to check, but I'm pretty sure that 99% of models have `config.architectures`. Even the very old models like `gpt2` have `config.architectures = [...]` saved. Also, instead of raising an error if `config.architectures` doesn't exist, we could just pick the first element of the tuple -> this way we ensure that we cannot break anything that worked previously. What do you think?
Re:
> However, I have a feeling we're adding another extra layer of complexity.
>
> Couldn't we use this PR, to simplify the overall logic here. Maybe None could become an empty tuple.
> Single class could become 1-tuple.
>
> Overall the rest of the flow should be more streamlined, don't you think ?
Agree that we are adding more complexity, but I don't really see how to allow multiple auto classes without adding more complexity. I don't really see how forcing everything to be in the `tuple` format will help here. But keen to hear your proposition on how this could reduce overall complexity! |
transformers | 11,967 | closed | Flax Big Bird | # What does this PR do?
--------------------------------------------------------------------
**🚨 Bug detection 🚨**
`BigBirdForMultipleChoice` was incorrect and is corrected in this PR. This is a breaking change for all BigBird models that have been trained on multiple choice (0 on the hub currently)
--------------------------------------------------------------------
This PR will add `FlaxBigBirdModel`.
Evaluation Notebook: https://colab.research.google.com/drive/1rx_G9awurQekrK1mSzd3A9F_UciTTuTY?usp=sharing#scrollTo=ecjNtnAuKYo8
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten | 05-31-2021 20:07:54 | 05-31-2021 20:07:54 | Wow awesome work! I think the important next step would be to make the test:
```
tests/test_modeling_flax_bigbird.py::FlaxBigBirdModelTest::test_jit_compilation
```
If this test works, we can enable super fast training on TPU :-) <|||||>This test is passing for all model classes except for `FlaxBigBirdForMultipleChoice` and models are already `jit` compatible. It's failing since there is some bug in `FlaxBertForMultipleChoice` (BERT) which doesn't allow it to work for `seqlen > 1`.<|||||>> This test is passing for all model classes except for `FlaxBigBirdForMultipleChoice` and models are already `jit` compatible. It's failing since there is some bug in `FlaxBertForMultipleChoice` (BERT) which doesn't allow it to work for `seqlen > 1`.
I'm sorry - I don't follow here 100%. Running `RUN_PT_FLAX_CROSS_TESTS=1 pytest tests/test_modeling_flax_bert.py` passes, so `FlaxBertForMultipleChoice` seems to work correctly. Can you maybe open an issue showing the problem with `FlaxBertForMultipleChoice`? <|||||>Hey Vasu,
could you remove the `.ipynb` debugger file? :-)<|||||>done.<|||||>@LysandreJik @sgugger @stas00
The jitted-Flax tests are getting too expensive to be run at every commit:
```
423.98s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_jit_compilation
64.09s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_jit_compilation
51.94s call tests/test_modeling_flax_electra.py::FlaxElectraModelTest::test_jit_compilation
43.62s call tests/test_modeling_flax_roberta.py::FlaxRobertaModelTest::test_jit_compilation
37.21s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_hidden_states_output
29.00s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_greedy_generate
28.08s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_model_outputs_equivalence
27.02s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_sample_generate_logits_warper
25.48s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_attention_outputs
25.26s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_sample_generate
25.10s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_greedy_generate_attn_mask
24.77s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_sample_generate_attn_mask
23.91s call tests/test_modeling_flax_electra.py::FlaxElectraModelTest::test_attention_outputs
23.50s call tests/test_modeling_flax_clip.py::FlaxCLIPModelTest::test_jit_compilation
20.20s call tests/test_modeling_flax_roberta.py::FlaxRobertaModelTest::test_attention_outputs
16.70s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_model_outputs_equivalence
15.46s call tests/test_modeling_flax_gpt2.py::FlaxGPT2ModelTest::test_jit_compilation
13.64s call tests/test_modeling_flax_clip.py::FlaxCLIPVisionModelTest::test_jit_compilation
12.68s call tests/test_modeling_flax_bert.py::FlaxBertModelTest::test_hidden_states_output
12.43s call tests/test_modeling_flax_electra.py::FlaxElectraModelTest::test_model_outputs_equivalence
12.05s call tests/test_tokenization_mbart50.py::MBartTokenizationTest::test_save_pretrained
11.96s call tests/test_modeling_flax_big_bird.py::FlaxBigBirdModelTest::test_forward_signature
11.88s call tests/test_modeling_flax_roberta.py::FlaxRobertaModelTest::test_model_outputs_equivalence
11.63s call tests/test_modeling_flax_clip.py::FlaxCLIPVisionModelTest::test_attention_outputs
11.11s call tests/test_modeling_flax_clip.py::FlaxCLIPModelTest::test_get_image_features
```
However they are super important to ensure that the model works on TPU. Can we somehow run them only on approval or it's probably easier to just set them to "slow" for now?<|||||>@vasudevgupta7 could you also run `make style` one last time? :-)<|||||>> The jitted-Flax tests are getting too expensive to be run at every commit:
[...]
>
> However they are super important to ensure that the model works on TPU. Can we somehow run them only on approval or it's probably easier to just set them to "slow" for now?
What you're saying is that the TPU tests won't be then run at all, because we don't have a TPU runner for slow tests, correct?
I think Circle CI has a mechanism where you can trigger certain runs by adding a special keyword to the commit message. But that might be too complicated to remember to do.
How about this idea. Leave an open PR with a circle-ci jitted-flax tests job that only exists in this PR, and which gets rebased on a nightly basis through a cron-job and pushed, that would give a poor man's scheduled CI run on TPU. Perhaps there are some easier ways.<|||||>We can mark them as slow if you want but there's no Flax GPU CI right now, so the slow tests won't be run :) @stas00's proposal sounds good!<|||||>tests failing on CircleCI are unrelated to this PR.<|||||>Flax tests are disabled now - opening a PR that will run them as proposed by @stas00 <|||||>Merging - great job @vasudevgupta7 |
transformers | 11,966 | closed | [DeepSpeed] decouple `DeepSpeedConfigHF` from `Trainer` | As requested in https://github.com/huggingface/transformers/issues/11954 this PR
* uncouples `DeepSpeedConfigHF` from `Trainer` so one can activate `zero.Init()` in `modeling_utils.py` and several other places w/o needing to rely on the HF `Trainer`.
* adds a new `LoggingLevel` ctx manager to `testing_utils.py`
* adds a new test testing `DeepSpeedConfigHF` decoupled from the `Trainer`.
* starts a new doc
* well, through the PR things got renamed too, see the final diff for all the changes.
The plan is to merge this, then move all of the Deepspeed integration code into its own `src/transformers/deepspeed.py` and the docs into `docs/source/main_classes/deepspeed.rst`, since the integration has now outgrown the Trainer alone.
Fixes: https://github.com/huggingface/transformers/issues/11954
@sgugger | 05-31-2021 18:51:45 | 05-31-2021 18:51:45 | Thank you for the great feedback and suggestions, Sylvain.
On to the next PR to move the code and docs.<|||||>@stas00 I have a question about this PR. My use case is HF trainer + DeepSpeed + hyperparameter search.
In **training_arg.py**: hf_deepspeed_config is first loaded from config file (e.g. zero3.config) as HfTrainerDeepSpeedConfig, and then is adjusted with TrainingArguments values. In particular, all "auto" in zero3.config would be resolved to actual integral or floating values.
self.hf_deepspeed_config = HfTrainerDeepSpeedConfig(self.deepspeed)
self.hf_deepspeed_config.trainer_config_process(self)
In **trainer.py -> _hp_search_setup()**: hf_deepspeed_config is reset by zero3.config to HfDeepSpeedConfig, different from HfTrainerDeepSpeedConfig, and thus "auto" values are remained unchanged as string type, not integral or floating type.
from transformers.deepspeed import HfDeepSpeedConfig
self.args.hf_deepspeed_config = HfDeepSpeedConfig(self.args.deepspeed)
This didn't work for hyperparameter search because DS cannot be initialized with "auto" values. The error messages are attached below. I feel _hp_search_setup() should do exact the same thing as in training_args.py to reset hf_deepspeed_config as HfTrainerDeepSpeedConfig and resolve all "auto" values, but I am not sure what was the reason you chose the other way in this PR.
```
File "/home/meiyang/src/transformers_fork/src/transformers/integrations.py", line 164, in run_hp_search_optuna
study.optimize(_objective, n_trials=n_trials, timeout=timeout, n_jobs=n_jobs)
File "/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/study.py", line 400, in optimize
_optimize(
File "/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py", line 66, in _optimize
_optimize_sequential(
File "/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py", line 163, in _optimize_sequential
trial = _run_trial(study, func, catch)
File "/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py", line 264, in _run_trial
raise func_err
File "/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/optuna/study/_optimize.py", line 213, in _run_trial
value_or_values = func(trial)
File "/home/meiyang/src/transformers_fork/src/transformers/integrations.py", line 154, in _objective
trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/home/meiyang/src/transformers_fork/src/transformers/trainer.py", line 1155, in train
self.model = self.call_model_init(trial)
File "/home/meiyang/src/transformers_fork/src/transformers/trainer.py", line 1019, in call_model_init
model = self.model_init()
File "run_clm_local.py", line 289, in model_init
pretrained_model = AutoModelForCausalLM.from_pretrained(
File "/home/meiyang/src/transformers_fork/src/transformers/models/auto/auto_factory.py", line 447, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/meiyang/src/transformers_fork/src/transformers/modeling_utils.py", line 1488, in from_pretrained
with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):
File "/home/meiyang/src/deepspeed/deepspeed/runtime/zero/partition_parameters.py", line 461, in __init__
_ds_config = DeepSpeedConfig(config_dict_or_path,
File "/home/meiyang/src/deepspeed/deepspeed/runtime/config.py", line 873, in __init__
self._configure_train_batch_size()
File "/home/meiyang/src/deepspeed/deepspeed/runtime/config.py", line 1050, in _configure_train_batch_size
self._batch_assertion()
**File "/home/meiyang/src/deepspeed/deepspeed/runtime/config.py", line 986, in _batch_assertion
train_batch > 0
TypeError: '>' not supported between instances of 'str' and 'int'**
```<|||||>Honestly, I have never used `_hp_search_setup()` and have no idea what it does, so it's very possible the DS integration doesn't support it at the moment as indicated by your report.
Do you want to try and make it work and make a PR if you succeed?
I'm not exactly sure what you mean by:
> but I am not sure what was the reason you chose the other way in this PR.
but perhaps it'd be much simpler for you to code what you think it should be and then we can look together at what you meant.
How does that sound?<|||||>Sure. I am working on it. After replacing to HFTrainerDeepSpeedConfig, I was able to pass DS initialization. But there are other issues for model init. Not sure if they’re DS related or not. Will continue looking tomorrow. <|||||>Thank you for working on this, @dunalduck0 - I trust that you will figure it out.
While you work on this please log all the steps so that we can reproduce the process and we will need to create tests to verify the workings of this path based on your logs as we currently don't have any tests exercising this path with deepspeed. <|||||>Hi stas00, I wanted to update this thread a bit. To my question on Mar 3rd, I think the change below is good enough to make sure DeepSpeed configuration is loaded properly. .
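The attached screenshot is not reproduced in this dump. A hypothetical sketch of the kind of change being described, based only on the snippets quoted earlier in this thread (the import path and call signature are assumptions, not the actual patch):

```python
# Inside Trainer._hp_search_setup(), after the trial's hyperparameters have been applied:
def _hp_search_setup(self, trial):
    ...  # existing hyperparameter assignment logic

    if self.args.deepspeed:
        # Use the Trainer-aware config class instead of the plain HfDeepSpeedConfig,
        # and re-run the "auto" value resolution against the updated TrainingArguments.
        from transformers.deepspeed import HfTrainerDeepSpeedConfig

        self.args.hf_deepspeed_config = HfTrainerDeepSpeedConfig(self.args.deepspeed)
        self.args.hf_deepspeed_config.trainer_config_process(self.args)
```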

But the ultimate goal, to use hyperparameter search feature in Transformers with DeepSpeed, is still blocking (see [discussion thread](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/10) and [blog](https://huggingface.co/blog/ray-tune) to learn this feature). The difficulty is how to use an "used" DeepSpeed engine, as in the problems you addressed last month: https://github.com/microsoft/DeepSpeed/issues/1748
The difference in my case is I don't need to save/load_checkpoint. For hyperparameters search, both trainer and DeepSpeed engine need to be reused in multiple trial trainings, to discover the optimal settings. I was managed to run trial 1 (the 1st time the DeepSpeed engine is used in training) and then trial 2 failed inside of DeepSpeed with "index out of range" error. It looks like some partition went wrong, but I am not sure. I wonder if you have reached any good solution with DeepSpeed team on this type of issues?<|||||>Some improvements have been made recently in Deepspeed, but last I tried to re-use the engine it still didn't work in all situations. You're of course welcome to open and issue at Deepspeed and ask for a better engine reuse. If you don't ask other priorities will take over.
But I'm sure we can find a workaround with what we have.
Bottom line, I need to study how this feature works, write a simple example that deploys this feature, then add DS support, then turn it into a test.
Thank you for the links with the usage examples.
I'm currently very busy with the BigScience 176B model training launch, but hopefully next week I should have some time to tinker with this. Unless of course, you or someone else beats me to it ;)<|||||>@dunalduck0, my apologies for taking forever to attend to this feature request.
Please try this PR https://github.com/huggingface/transformers/pull/16740 and let me know if it addressed your need.
I only added a basic test, so if you're doing something specific that happens not to work please let me know and I will extend the test to include it. |
transformers | 11,965 | closed | Reproducibility Questions | When initializing task-specific models from pre-trained language models, we see something like:
`Some weights of XLMRobertaForQuestionAnswering were not initialized from the model checkpoint at xlm-roberta-large and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias']`
It seems that the initialization is model-dependent; however, I wonder whether this behaviour is controlled by the random seed that we set in the `training_args.py` `seed` hyper-parameter?
If not, I assume we should do the following? But how? Are there any examples?
```
To ensure reproducibility across runs, use the :func:`~transformers.Trainer.model_init` function to instantiate the model if it has some randomly initialized parameters.
```
@sgugger
| 05-31-2021 18:32:25 | 05-31-2021 18:32:25 | If you are using any of the example scripts, they set the seed before instantiating the model for full reproducibility. If not, you should do that in your script by using the `set_seed` function you can import from the library, or by using the `model_init` function to initialize your model (the `Trainer` will set the seed before calling it).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
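A minimal sketch of the two options mentioned in the answer above (model name and seed are placeholders):

```python
from transformers import AutoModelForQuestionAnswering, Trainer, TrainingArguments, set_seed

# Option 1: fix the RNGs yourself before the randomly initialized head is created.
set_seed(42)
model = AutoModelForQuestionAnswering.from_pretrained("xlm-roberta-large")

# Option 2: let the Trainer control it; it sets the seed right before calling model_init.
def model_init():
    return AutoModelForQuestionAnswering.from_pretrained("xlm-roberta-large")

trainer = Trainer(model_init=model_init, args=TrainingArguments(output_dir="out"))
```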
transformers | 11,964 | closed | Fix weight decay masking in `run_flax_glue.py` | # What does this PR do?
Fixes #11936
In addition to the changes discussed in the issue, I combined `traverse` and `decay_path` into one function `decay_mask_fn` to simplify the implementation.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten | 05-31-2021 17:03:11 | 05-31-2021 17:03:11 | That's great - thanks a lot for the fix @n2cholas :-) I'll re-run the FlaxGlue suit today with your fix to check if results improve <|||||>Thanks a lot for the fix @n2cholas - I reran the eval train+eval script & updated the results accordingly |
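A condensed sketch of the kind of weight-decay mask function this fix introduces, following the usual flax/optax pattern (exact parameter key names depend on the model, so treat the string checks as an approximation):

```python
from flax import traverse_util
import optax


def decay_mask_fn(params):
    """Return a pytree of booleans: True where weight decay should be applied."""
    flat_params = traverse_util.flatten_dict(params)
    flat_mask = {
        path: path[-1] not in ("bias", "scale")  # skip biases and LayerNorm scales
        for path in flat_params
    }
    return traverse_util.unflatten_dict(flat_mask)


optimizer = optax.adamw(learning_rate=2e-5, weight_decay=0.01, mask=decay_mask_fn)
```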
transformers | 11,963 | closed | How to achive character lvl tokenization? (cant convert from huggingface/tokenizers) | Initially, I thought that huggingface/tokenizers is the same thing as tokenization in this repo.
I made it like this:
```
from tokenizers import Tokenizer, models, pre_tokenizers
from tokenizers.processors import TemplateProcessing
tokenizer = Tokenizer(models.WordLevel(unk_token='[UNK]'))
tokenizer.pre_tokenizer = pre_tokenizers.Split("", "isolated")
trainer = tokenizer.model.get_trainer()
trainer.vocab_size = 100
trainer.special_tokens = ["[UNK]", "[PAD]", "[SOS]", "[SEP]", "[EOS]"]
tokenizer.train(files=["alchemist.txt"], trainer=trainer)
tokenizer.post_processor = TemplateProcessing(
single="[SOS] $A [EOS]",
pair="[SOS] $A [SEP] $B:1 [EOS]:1",
special_tokens=[
("[SOS]", 2),
("[SEP]", 3),
("[EOS]", 4),
],
)
tokenizer.enable_padding(pad_id=1, pad_token="[PAD]")
```
Still, huggingface/tokenizers lacks some features I want, like returning tensors.
So I tried to convert it to transformers tokenizer as suggested here
https://github.com/huggingface/tokenizers/issues/669#issuecomment-828864108
But got error:
```
/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1631 FutureWarning,
1632 )
-> 1633 file_id = list(cls.vocab_files_names.keys())[0]
1634 vocab_files[file_id] = pretrained_model_name_or_path
1635 else:
IndexError: list index out of range
```
So, is building a tokenizer with huggingface/tokenizers and then converting it to a transformers tokenizer the best way to achieve what I want?
If so, how should I convert it? | 05-31-2021 16:52:26 | 05-31-2021 16:52:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I just can not figure out why two api can not be the same. Extra Burden for users |
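One way this conversion is commonly done is to wrap or reload the `tokenizers.Tokenizer` (the `tokenizer` variable built above) with `PreTrainedTokenizerFast`, which then supports padding and `return_tensors`. The constructor arguments shown here are an assumption and should be checked against the installed `transformers` version:

```python
from transformers import PreTrainedTokenizerFast

# Option 1: hand the in-memory tokenizers.Tokenizer object over directly
fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=tokenizer,
    unk_token="[UNK]",
    pad_token="[PAD]",
    bos_token="[SOS]",
    eos_token="[EOS]",
    sep_token="[SEP]",
)

# Option 2: serialize to disk and load from the json file
tokenizer.save("char_tokenizer.json")
fast_tokenizer = PreTrainedTokenizerFast(tokenizer_file="char_tokenizer.json")

batch = fast_tokenizer(["some text"], padding=True, return_tensors="pt")
```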
transformers | 11,962 | closed | [RAG] Fix rag from pretrained question encoder generator behavior | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11303
The new code allows to pass model specific parameters to `from_pretrained_...` which will correctly change the config.
Also, `*model_kwargs` is deleted from `from_pretrained_question_encoder_generator` as it cannot be used (impossible to check whether args correspond to the question encoder or the generator).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-31-2021 14:47:03 | 05-31-2021 14:47:03 | |
transformers | 11,961 | closed | Add MT5ForConditionalGeneration as supported arch. to summarization README | see #11960 | 05-31-2021 12:08:15 | 05-31-2021 12:08:15 | Thanks a lot for fixing this!
Could you also update [translation readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/README.md) ?<|||||>> Could you also update [translation readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/README.md) ?
Done. |
transformers | 11,960 | closed | Summarization also supports MT5ForConditionalGeneration | The [README.md](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/README.md) of the summarization examples says it supports `T5ForConditionalGeneration`. IMO `MT5ForConditionalGeneration` should be added as well - right?
Tagging @sgugger and @sshleifer ... | 05-31-2021 11:51:23 | 05-31-2021 11:51:23 | And @patil-suraj<|||||>Yes, you are right!
We could also add it in the[ translation readme](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/README.md) as well.
Feel free to open a PR :)<|||||>> Feel free to open a PR :)
see #11961
<|||||>Fixed by #11961 |
transformers | 11,959 | closed | Add new token to pretrained GPT2 tokenizer | Hi! Thank you for the awesome project!
I want to add several tokens to pre-trained GPT2 tokenizer.
Can I? | 05-31-2021 09:01:56 | 05-31-2021 09:01:56 | I will use "EleutherAI/gpt-neo-1.3B" tokenizer. Is there a <|||||>This should help you out: https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_tokens#transformers.tokenization_utils_base.SpecialTokensMixin.add_tokens<|||||>Thank you! @LysandreJik |
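A minimal sketch of the flow described in the linked docs: add the tokens, then resize the embeddings so the new ids have vectors to look up (token strings are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

num_added = tokenizer.add_tokens(["<new_token_1>", "<new_token_2>"])

# The embedding matrix must grow to cover the newly added token ids.
model.resize_token_embeddings(len(tokenizer))
```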
transformers | 11,958 | closed | Issue: IndexError: "Index out of range in self" when generating translations with MarianMTModel | ## Environment info
- transformers version: 4.5.1
- Python version: Python 3.7
- Using GPU in script? Yes
### Who can help
- marian: @patrickvonplaten, @patil-suraj
- text generation: @patrickvonplaten
## Information
I am currently trying to use MarianMTModel to translate text from English to German. When generating the translation, an error occurs (code and error message below).
## To reproduce
I am using Google Colab.
```ruby
%%capture
!pip install datasets==1.6.2
!pip install transformers==4.5.1
!pip install SentencePiece
import datasets
import tensorflow_datasets as tfds
import pandas as pd
from transformers import MarianMTModel, MarianTokenizer
train_data, train_info = tfds.load("cnn_dailymail", split="train[:85%]", with_info=True)
val_data, val_info = tfds.load("cnn_dailymail", split="validation[:10%]", with_info=True)
test_data, test_info = tfds.load("cnn_dailymail", split="test[:5%]", with_info=True)
df_train = tfds.as_dataframe(train_data, train_info)
df_val = tfds.as_dataframe(val_data, val_info)
df_test = tfds.as_dataframe(test_data, test_info)
df_train = tfds.as_dataframe(train_data.take(100), train_info)
df_val = tfds.as_dataframe(val_data.take(100), val_info)
df_test = tfds.as_dataframe(test_data.take(100), test_info)
name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name)
model.resize_token_embeddings(len(tokenizer))
def translate_dataframe(df):
corpus_text = []
corpus_summary = []
for index, row in df.iterrows():
translated = model.generate(**tokenizer(row["article"], return_tensors="pt", padding=True))
decoded = [tokenizer.decode(token, skip_special_tokens=True) for token in translated]
corpus_text.append(decoded)
translated = model.generate(**tokenizer(row["highlights"], return_tensors="pt", padding=True))
decoded = [tokenizer.decode(token, skip_special_tokens=True) for token in translated]
corpus_summary.append(decoded)
df = pd.DataFrame({"article": corpus_text, "highlights": corpus_summary})
return df
df_train = translate_dataframe(df_train)
df_val = translate_dataframe(df_val)
df_test = translate_dataframe(df_test)
```
Error message:
```ruby
IndexError Traceback (most recent call last)
<ipython-input-8-879e52d92ecf> in <module>()
24
25
---> 26 df_train = translate_dataframe(df_train)
27 df_val = translate_dataframe(df_val)
28 df_test = translate_dataframe(df_test)
10 frames
<ipython-input-8-879e52d92ecf> in translate_dataframe(df)
11
12 for index, row in df.iterrows():
---> 13 translated = model.generate(**tokenizer(row["article"], return_tensors="pt", padding=True))
14 decoded = [tokenizer.decode(token, skip_special_tokens=True) for token in translated]
15 corpus_text.append(decoded)
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.__class__():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in generate(self, input_ids, max_length, min_length, do_sample, early_stopping, num_beams, temperature, top_k, top_p, repetition_penalty, bad_words_ids, bos_token_id, pad_token_id, eos_token_id, length_penalty, no_repeat_ngram_size, encoder_no_repeat_ngram_size, num_return_sequences, max_time, decoder_start_token_id, use_cache, num_beam_groups, diversity_penalty, prefix_allowed_tokens_fn, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, forced_bos_token_id, forced_eos_token_id, remove_invalid_values, **model_kwargs)
925 if self.config.is_encoder_decoder:
926 # add encoder_outputs to model_kwargs
--> 927 model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)
928
929 # set input_ids as decoder_input_ids
/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py in _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs)
410 argument: value for argument, value in model_kwargs.items() if not argument.startswith("decoder_")
411 }
--> 412 model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs)
413 return model_kwargs
414
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/transformers/models/marian/modeling_marian.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict)
722 inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
723
--> 724 embed_pos = self.embed_positions(input_shape)
725
726 hidden_states = inputs_embeds + embed_pos
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
25 def decorate_context(*args, **kwargs):
26 with self.__class__():
---> 27 return func(*args, **kwargs)
28 return cast(F, decorate_context)
29
/usr/local/lib/python3.7/dist-packages/transformers/models/marian/modeling_marian.py in forward(self, input_ids_shape, past_key_values_length)
137 past_key_values_length, past_key_values_length + seq_len, dtype=torch.long, device=self.weight.device
138 )
--> 139 return super().forward(positions)
140
141
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
156 return F.embedding(
157 input, self.weight, self.padding_idx, self.max_norm,
--> 158 self.norm_type, self.scale_grad_by_freq, self.sparse)
159
160 def extra_repr(self) -> str:
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
IndexError: index out of range in self
```
## Expected behavior
I expect this model to generate translations without running into this error. Could you give me some tips on how to fix this error or what is wrong in general? Thanks! :) | 05-31-2021 08:49:26 | 05-31-2021 08:49:26 | Hi, this could be because some example might have seq length greater than `max_length` supported by the model. For marin max_length is 1024. You could pass `truncation=True` to `tokenizer` so it'll truncate the text if it's greater than model's max length.<|||||>Thanks for your quick reply. That has solved my problem :) |
transformers | 11,957 | closed | Byt5 | # 🌟 New model addition
## Model description
Tokenizer free Version of mt5
<!-- Important information -->
## Open source status
* [X] the model implementation is available: (give details)
* https://github.com/google-research/byt5
* [X] the model weights are available: (give details)
* https://github.com/google-research/byt5
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 05-31-2021 08:38:37 | 05-31-2021 08:38:37 | hopefully getting closed by https://github.com/huggingface/transformers/pull/11971 |
transformers | 11,956 | closed | Authorize args when instantiating an AutoModel | The current `_BaseAutoModelClass` class initialization does not accept any argument, and therefore fails with an arcane error when instantiating it incorrectly, as shown in https://github.com/huggingface/transformers/issues/11953 by @g-karthik:
```py
from transformers import AutoConfig, AutoModelForCausalLM
config = AutoConfig.from_pretrained("gpt2", return_dict=True, gradient_checkpointing=False)
model = AutoModelForCausalLM(config)
```
```out
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: __init__() takes 1 positional argument but 2 were given
```
This PR adds possible arguments and keyword arguments so that the error is always correctly raised:
```out
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/xxx/transformers/src/transformers/models/auto/auto_factory.py", line 361, in __init__
raise EnvironmentError(
OSError: AutoModel is designed to be instantiated using the `AutoModel.from_pretrained(pretrained_model_name_or_path)` or `AutoModel.from_config(config)` methods.
```
Taking this opportunity to re-open the question asked by @g-karthik of whether the `AutoModel`s should have the ability to be instantiated using configuration objects via the `__init__`, similarly to other `PreTrainedModel`s.
@patrickvonplaten @sgugger | 05-31-2021 07:49:26 | 05-31-2021 07:49:26 | @LysandreJik thanks for this!
Yeah IMO it'd be great if the instantiation of `AutoModel`s from a configuration follows a similar signature as that of instantiation of regular model classes from a configuration, just from a consistency standpoint because `AutoModel`s are already effectively treated as models via `modeling_auto.py` just like any other model. |
transformers | 11,955 | closed | Killed Message | Describe the bug
When I run this command
generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
or even download the model in a folder like this
generator = pipeline('text-generation', model=neo-models/')
It is not loading and produce the result as "Killed" text.
which usually means "out of memory"
even though I have nothing loaded except pycharm GUI
I have tested this on Ubuntu and Centos Server. Same result
below is the whole code:
import gc
import os
from transformers import pipeline
import torch
gc.collect()
print("================1")
**generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B') ====> here comes the KILLED**
#generator = pipeline('text-generation', model='neo-models/'')
print("================2")
prompt = "what is the meaning of life"
res = generator(prompt, max_length=50, do_sample=True, Temperature=0.9)
print("================")
print(res) | 05-31-2021 05:43:24 | 05-31-2021 05:43:24 | The `EleutherAI/gpt-neo-2.7B` is a large model. Loading it in memory takes more than 10GB.
Do you have the same results when trying to use the 1.3B or the 125M variants?<|||||>> The `EleutherAI/gpt-neo-2.7B` is a large model. Loading it in memory takes more than 10GB.
> Do you have the same results when trying to use the 1.3B or the 125M variants?
yes I did with 2.7 and then 1.3 both have the same result....
Then how we use it, if a person has 8GB or 16GB memory installed in a PC...
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,954 | closed | Uncoupling ZeRO-3 weak ref bridge b/w Trainer and modeling_utils | https://github.com/huggingface/transformers/blob/fd6204b2a70d100800cb259a7fbddfc812631ed3/src/transformers/modeling_utils.py#L1168
So I briefly looked through the ZeRO-3 integration into the `Trainer` and this approach of creating a "bridge" between the `Trainer` and `modeling_utils` via a weak ref is neat.
However, what if I wanted to use ZeRO-3 with HF models outside the scope of the `Trainer` and `TrainingArguments`? Seems like I cannot at the moment.
There seems to be 4 places in `modeling_utils` where `is_deepspeed_zero3_enabled` is called, of which the only one that seems to be heavily tied to the custom `DeepSpeedConfigHF` class is the one referenced above.
Is it not possible to allow `Init()` args to come from the parent `from_pretrained()` method's `kwargs`?
@stas00 @sgugger | 05-31-2021 04:08:17 | 05-31-2021 04:08:17 | Good call, @g-karthik!
Please see the decoupled PR here: https://github.com/huggingface/transformers/pull/11966
Now you can just do:
```
from transformers.integrations import HfDeepSpeedConfig
dsc = HfDeepSpeedConfig(ds_config)
model = AutoModel.from_pretrained(name)
```
and it'll just work, w/o needing the Trainer.
Please let me know if it works for you.
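For completeness, a minimal sketch of what `ds_config` could look like for ZeRO-3 here (the keys are assumptions based on a standard DeepSpeed config, not a recommended setup):
```python
from transformers import AutoModel
from transformers.integrations import HfDeepSpeedConfig

# Hypothetical minimal ZeRO-3 config; a real one would also set batch sizes,
# optimizer, scheduler and fp16 sections.
ds_config = {
    "zero_optimization": {"stage": 3},
    "train_micro_batch_size_per_gpu": 1,
}

dschf = HfDeepSpeedConfig(ds_config)  # keep this object alive before/while loading the model
model = AutoModel.from_pretrained("gpt2")  # "gpt2" is just a placeholder checkpoint
```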
> Is it not possible to allow Init() args to come from the parent from_pretrained() method's kwargs?
I can't see how this would work, since there are several other core functions which rely on `is_deepspeed_zero3_enabled` and there is no way to pass this argument to those. That's why the "environmental" approach, rather than passing args.<|||||>PR merged - I updated the comment above to reflect the final new name.
The only other way I can see this solved is by tapping into the `model.config` object, but then it'll require a ton of code changes - e.g. all examples.<|||||>PR merged - I updated the comment above to reflect the final new name.
I will now work on updating the docs so it's all clear. |
transformers | 11,953 | closed | AutoModel abstraction fails for pre-training initialization | ## Environment info
- `transformers` version: 4.5.1
- Python version: 3.6
- PyTorch version: 1.4+
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @LysandreJik
## Information
Model I am using: GPT-2
The problem arises when using:
* [Y] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoConfig, AutoModelForCausalLM, GPT2LMHeadModel

config = AutoConfig.from_pretrained("gpt2", return_dict=True, gradient_checkpointing=False)

model_class = GPT2LMHeadModel
model = model_class(config)  # WORKS FINE

model_class = AutoModelForCausalLM
model = model_class(config)  # FAILS, stack trace below
```
```out
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: __init__() takes 1 positional argument but 2 were given
```
## Expected behavior
Both cases should work fine. The latter case should pull the former class internally. | 05-31-2021 00:20:15 | 05-31-2021 00:20:15 | Hello! We recommend you read the [docs regarding the `AutoModel`](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoModel.from_config). I have linked you the `from_config` method which should be used in this use case.<|||||>However, it is indeed unexpected for you to receive this error message. The message should be more explicit, investigating now.<|||||>Opened #11956 for a more explicit error, and opening your use case for discussion. |
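A minimal sketch of the `from_config` pattern recommended in the comments above, reusing the same config as the reproduction:
```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("gpt2", return_dict=True, gradient_checkpointing=False)
model = AutoModelForCausalLM.from_config(config)  # instead of AutoModelForCausalLM(config)
```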
transformers | 11,952 | closed | TypeError: __init__() got an unexpected keyword argument 'force_bos_token_to_be_generated' | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
The model I am using is BART
The problem arises when using:
* [x] the official example scripts: (https://huggingface.co/transformers/model_doc/bart.html)
## To reproduce
Steps to reproduce the behavior:
1. Install transformers library
2. run the following code-snipped as presented in the official example:
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
```
3. Receive error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-21-216ff3421f95> in <module>
1 from transformers import BartForConditionalGeneration, BartTokenizer
----> 2 model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
3 tok = BartTokenizer.from_pretrained("facebook/bart-large")
4 example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
5 batch = tok(example_english_phrase, return_tensors='pt')
~/.conda/envs/groundwork/lib/python3.8/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1171 else:
1172 with no_init_weights(_enable=_fast_init):
-> 1173 model = cls(config, *model_args, **model_kwargs)
1174
1175 if from_tf:
TypeError: __init__() got an unexpected keyword argument 'force_bos_token_to_be_generated'
```
## Expected behavior
I expect the code to not raise an exception and that the final assertion is true.
If there are more information needed, please let me know.
| 05-30-2021 17:03:50 | 05-30-2021 17:03:50 | Hi there,
`force_bos_token_to_be_generated` is now deprecated; instead you could use the `forced_bos_token_id` argument, which should be set to the token id that needs to be forced as the first token.<|||||>Thanks a lot for your fast reply!
As suggested, I used the BART tokenizer to find out the bos_token ID and now it works perfectly fine (:<|||||>> Hi there,
>
> `force_bos_token_to_be_generated` is now deprecated; instead you could use the `forced_bos_token_id` argument, which should be set to the token id that needs to be forced as the first token.
Hello @patil-suraj.
I ran the following code in Colab and it worked but could you confirm that corresponds to what you wrote? Thanks.
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'], forced_bos_token_id = batch['input_ids'][0][0])
tok.batch_decode(generated_ids, skip_special_tokens=True)[0]
# assert tok.batch_decode(generated_ids, skip_special_tokens=True) == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']
```<|||||>Hi @piegu
It's not necessary to use `forced_bos_token_id ` with `facebook/bart-large`, it's only needed for bart-cnn models <|||||>> It's not necessary to use `forced_bos_token_id ` with `facebook/bart-large`, it's only needed for bart-cnn models
Hello @patil-suraj,
I'm not sure to understand your answer.
1. If you run [my code](https://github.com/huggingface/transformers/issues/11952#issuecomment-923264808) about `facebook/bart-large` with `forced_bos_token_id`, you get a clear output:
`UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria`
2. If you run it without (`generated_ids = model.generate(batch['input_ids']`), you get this: `UNALSO SEE`
3. There is clearly a difference that shows that `forced_bos_token_id` has an impact with `facebook/bart-large`, no?
4. bart-cnn models are models fine-tuned for summarization, no? (like `https://huggingface.co/ainize/bart-base-cnn`). How do you use `forced_bos_token_id` with them? (see the sketch right after this list)
5. I think the HF doc is not updated about this: https://huggingface.co/transformers/model_doc/bart.html#mask-filling
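A sketch of what that could look like with a CNN-finetuned checkpoint (the checkpoint name and the choice of BOS as the forced token are assumptions, following the deprecation note above):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

batch = tok("UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria", return_tensors="pt")
# Force the decoder to start its output with the BOS token.
generated_ids = model.generate(batch["input_ids"], forced_bos_token_id=tok.bos_token_id)
print(tok.batch_decode(generated_ids, skip_special_tokens=True)[0])
```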
**Note**: just to give an overview of this discussion, I'm researching the right code to get the BART, mBART, and MBART-50 language models making multiple token masks (ie writing zero or more tokens in the output sentence when there is a `<mask>` token in the input one) with the objective to get the full output sentence. |
transformers | 11,951 | closed | [Flax] Adding Visual-Transformer | # What does this PR do?
This PR adds the `ViT` model in JAX/Flax
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #11948
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
| 05-30-2021 15:03:19 | 05-30-2021 15:03:19 | Awesome!
The equivalence tests are still failing; from the CI logs:
```
FAILED tests/test_modeling_flax_vit.py::FlaxViTModelTest::test_equivalence_flax_to_pt
FAILED tests/test_modeling_flax_vit.py::FlaxViTModelTest::test_equivalence_pt_to_flax
```
Could you fix these, otherwise lmk and I will take care of it :) <|||||>Sorry, having a busy week; I will get back to this on the weekend :weary: |
transformers | 11,950 | closed | ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds | Why is there no training example for T5 or MT5?
Could you please give me a link to an example? I had a hard time writing the code myself and ran into various errors.
This is my code:
```
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
#from transformers import MT5Model, T5Tokenizer
from transformers import MT5ForConditionalGeneration, T5Tokenizer
#tokenizer = AutoTokenizer.from_pretrained("google/mt5-base")
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
raw_datasets = load_dataset("atomic")
def tokenize_function(examples):
return tokenizer(examples["event"],max_length=128, padding="max_length", truncation=True)
def tokenize_labels(examples):
with tokenizer.as_target_tokenizer():
return tokenizer(examples["oReact"], return_tensors="pt")
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
#labels = raw_datasets.map(tokenize_labels, batched=True)
small_train_dataset = tokenized_datasets["train"].shuffle(seed=42).select(range(1000))
small_eval_dataset = tokenized_datasets["test"].shuffle(seed=42).select(range(1000))
full_train_dataset = tokenized_datasets["train"]
full_eval_dataset = tokenized_datasets["test"]
from transformers import TrainingArguments
training_args = TrainingArguments("test_trainer")
#model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-base")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small", torchscript = True)
#traced_model = torch.jit.trace(model, (input_ids, attention_mask, decoder_input_ids))
from transformers import Trainer
trainer = Trainer(
model=model, args=training_args, train_dataset=small_train_dataset, eval_dataset=small_eval_dataset
)
trainer.train()
import numpy as np
from datasets import load_metric
metric = load_metric("accuracy")
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
return metric.compute(predictions=predictions, references=labels)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=small_train_dataset,
eval_dataset=small_eval_dataset,
compute_metrics=compute_metrics,
)
trainer.evaluate()
```
I don't know how to feed the labels to this model...
And this is error:
```
...
tr_loss += self.training_step(model, inputs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1250, in training_step
loss = self.compute_loss(model, inputs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/trainer.py", line 1277, in compute_loss
outputs = model(**inputs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1510, in forward
return_dict=return_dict,
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/pouramini/miniconda3/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 871, in forward
raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds")
```
@sgugger @patrickvonplaten @patil-suraj | 05-30-2021 14:49:57 | 05-30-2021 14:49:57 | Hi there,
the [summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) and [translation](https://github.com/huggingface/transformers/tree/master/examples/pytorch/translation) examples support fine-tuning T5 and mT5 (and other seq2seq models in the lib). Please take a look at the readme and the script.
The scripts are easily modifiable to support training on any seq2seq task.
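For reference, a rough sketch of how labels are usually fed to T5/mT5 outside of those scripts (the column names mirror the snippet above; this is not the exact code from the examples):
```python
def preprocess(examples):
    model_inputs = tokenizer(
        examples["event"], max_length=128, padding="max_length", truncation=True
    )
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(
            examples["oReact"], max_length=32, padding="max_length", truncation=True
        )
    # Pad token ids in the labels are usually replaced by -100 so the loss ignores
    # them; a seq2seq data collator can also take care of that.
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs
```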
Also, there are multiple notebooks on T5 training in the [community notebooks](https://huggingface.co/transformers/community.html#community-notebooks) section. Hope that helps.<|||||>Thank you very much! I hadn't found those first examples, which are up to date.
I found the community notebooks later; some of them are old. Maybe adding a recent one to the main notebooks would be a good idea.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,949 | closed | report_to flag does not work with TFTrainer | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.27
- Python version: 3.8.10
- PyTorch version (GPU?): 1.7.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger @LysandreJik @Rocketknight1
## Information
The problem arises when using my own modified scripts: [CoLA from BLUE on TF version BERT](https://gist.github.com/tomy0000000/af06394aa00a8b0fffb992e5bf444adf)
My modified script only runs the CoLA task, which is minimized from the official [tf_glue.py](https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_tf_glue.py)
I've set up comet.ml and wandb for other projects, but I don't want to use them in this one.
However, the `report_to` flag doesn't seem to work in `TFTrainingArguments`.
More specifically, in [trainer_tf.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L113-L127)
## To reproduce
Just run the notebook in Jupyter; the error pops up in the 8th cell when `TFTrainer` is initialized.
Stacktrace:
```python
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-9-308135373fed> in <module>
----> 1 trainer = TFTrainer(
2 model=model,
3 args=TFTrainingArguments(output_dir=".", report_to="tensorboard"),
4 train_dataset=train_dataset,
5 eval_dataset=eval_dataset,
~/.local/lib/python3.8/site-packages/transformers/trainer_tf.py in __init__(self, model, args, train_dataset, eval_dataset, compute_metrics, tb_writer, optimizers)
120
121 if is_comet_available():
--> 122 self.setup_comet()
123 elif os.environ.get("COMET_MODE") != "DISABLED":
124 logger.info(
~/.local/lib/python3.8/site-packages/transformers/trainer_tf.py in setup_comet(self)
274 experiment = None
275 if comet_mode == "ONLINE":
--> 276 experiment = comet_ml.Experiment(**args)
277 logger.info("Automatic Comet.ml online logging enabled")
278 elif comet_mode == "OFFLINE":
~/.local/lib/python3.8/site-packages/comet_ml/__init__.py in __init__(self, api_key, project_name, workspace, log_code, log_graph, auto_param_logging, auto_metric_logging, parse_args, auto_output_logging, log_env_details, log_git_metadata, log_git_patch, disabled, log_env_gpu, log_env_host, display_summary, log_env_cpu, display_summary_level, optimizer_data, auto_weight_logging, auto_log_co2, auto_metric_step_rate, auto_histogram_tensorboard_logging, auto_histogram_epoch_rate, auto_histogram_weight_logging, auto_histogram_gradient_logging, auto_histogram_activation_logging)
239 )
240
--> 241 super(Experiment, self).__init__(
242 project_name=project_name,
243 workspace=workspace,
~/.local/lib/python3.8/site-packages/comet_ml/experiment.py in __init__(self, project_name, workspace, log_code, log_graph, auto_param_logging, auto_metric_logging, parse_args, auto_output_logging, log_env_details, log_git_metadata, log_git_patch, disabled, log_env_gpu, log_env_host, display_summary, log_env_cpu, display_summary_level, optimizer_data, auto_weight_logging, auto_log_co2, auto_metric_step_rate, auto_histogram_tensorboard_logging, auto_histogram_epoch_rate, auto_histogram_weight_logging, auto_histogram_gradient_logging, auto_histogram_activation_logging)
458 ALREADY_IMPORTED_MODULES
459 )
--> 460 raise ImportError(msg)
461
462 # Generate a unique identifier for this experiment.
ImportError: You must import Comet before these modules: tensorflow, torch, tensorboard
```
## Expected behavior
`TFTrainer` should respect the `report_to` argument provided.
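In the meantime, a possible workaround (an assumption based on the environment-variable checks visible in the stack trace above, not a documented guarantee) is to disable the integrations via environment variables before anything else is imported:
```python
import os

# Checked by the Comet/W&B integrations before an experiment/run is created.
os.environ["COMET_MODE"] = "DISABLED"
os.environ["WANDB_DISABLED"] = "true"
```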
| 05-30-2021 13:42:40 | 05-30-2021 13:42:40 | No this is not implemented for the `TFTrainer`. More generally, we are moving away from `TFTrainer` to go to pure Keras for the training loop.<|||||>Well, does that mean `trainer_tf.py` is getting a huge rewrite? If not, I'm willing to work on a PR solving this particular issue. Let me know if anyone has thoughts about how this should be implemented.<|||||>No, it's just going to disappear and we will use the Keras methods (fit etc.) instead. |
transformers | 11,948 | closed | Flax port vision Transformer to flax | Port the existing vision-transformer to flax. | 05-30-2021 12:41:25 | 05-30-2021 12:41:25 | claiming this issue has started work, @patrickvonplaten @patil-suraj |
transformers | 11,947 | closed | Encoding/decoding NLP model in tensorflow lite (fine-tuned GPT2) | We are in the process of building a small virtual assistant and would like it to be able to run a fine-tuned version of GPT-2 on a raspberry-pi with a coral accelerator.
So far, we have managed to convert our model to a tflite model and to get first results. We know how to convert from words to indices with the GPT-2 tokenizer, but then we need a properly shaped tensor as input to the interpreter. We are missing the conversion from indices to tensors. Is there a way to do this simply?
You can find our pseudo-code here; we are stuck at steps 2 and 6:
```
import numpy as np
import tensorflow as tf
from transformers import GPT2Tokenizer
#Prelude
TF_MODEL_PATH_LITE = "/path/model.tflite"
interpreter = tf.lite.Interpreter(model_path=TF_MODEL_PATH_LITE)
interpreter.allocate_tensors()
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_shape = input_details[0]['shape']
#1-Encode input, giving you indices
context_idx = tokenizer.encode("Hello world.", return_tensors = "tf")
#2-How to convert the context_idx to appropriate np.array ?
input_data = np.array(np.random.random_sample(input_shape), dtype=np.int32) #dummy input for now
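# One possible way to do step 2 (an assumption -- the required dtype/shape depends on
# how the model was converted): pad the real token ids up to the fixed input length.
ids = context_idx.numpy()[0][: input_shape[1]]
input_data = np.zeros(input_shape, dtype=np.int32)
input_data[0, : len(ids)] = ids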
#3- feed input
interpreter.set_tensor(input_details[0]['index'], input_data)
#4- Run model
interpreter.invoke()
#5- Get output as tensor
output_data = interpreter.get_tensor(output_details[0]['index'])
#6- How decode this np array to idx
output_idx=np.random.randint(100) #dummy for now ...
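# One possible way to do step 6 (assumes output_data holds logits shaped
# [1, seq_len, vocab_size]): argmax over the vocabulary at the last real token position.
output_idx = int(np.argmax(output_data[0, len(ids) - 1, :]))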
#7- Decode Output from idx to word
string_tf = tokenizer.decode(output_idx, skip_special_tokens=True)
``` | 05-30-2021 09:44:46 | 05-30-2021 09:44:46 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>Sure,
Thanks for your answer :)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,946 | closed | Loading mbart-large-50-one-to-many-mmt is very slow | Whenever I try to run:
`model = MBartForConditionalGeneration.from_pretrained("[local path]/mbart-large-50-one-to-many-mmt")`
my computer either freezes or it takes 15-20 minutes to load the model.
I am using it for translation
Code: https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt
Any solution for this?
-Thanks | 05-30-2021 08:08:29 | 05-30-2021 08:08:29 | Hi there, `mbart-50` is actually a big model and takes a while to load. But 15-20 min seems a lot; it could be an issue with your system. You could try to load it in a Colab and see how much time it takes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,945 | closed | reinitialize wandb config for each hyperparameter search run | # What does this PR do?
Fixes #11944
This is the quick/easy fix I'm using to work around the issue locally by just rerunning the WandbCallback integration `setup()` method for each run. This works fine for me, but if for some reason it's not safe/desirable to rerun the `WandbCallback.setup()` please feel free to just close this PR.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Unsure; probably whoever did the wandb integration is best. Otherwise maybe @sgugger because it's Trainer related?
| 05-30-2021 08:01:58 | 05-30-2021 08:01:58 | I usually do the search directly with wandb sweeps so didn't notice this issue.
Looks good on my side, thanks! |
transformers | 11,944 | closed | wandb integration gags during hyperparameter search | ## Environment info
- transformers version: 4.6.1
- Platform: Linux-4.19.0-16-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
wandb version is 0.10.26, but I don't think it matters.
### Who can help
Maybe @sgugger since this is Trainer-related; I don't know who did the wandb integration specifically.
## Information
Model I am using: custom Pytorch model.
The problem arises when using:
* [ ] the official example scripts: (probably, haven't tried)
* [x] my own modified scripts: custom training script using the Trainer
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [x] my own task or dataset: custom MLM training
## To reproduce
Steps to reproduce the behavior:
1. Train a model using the Trainer with the wandb logging integration and run a hyperparameter search using Optuna (also maybe Ray, but I haven't tried with Ray)
2. After the first run, you'll get an exception like below when wandb tries to log. The issue is that the previous run has finished but a new one hasn't been started.
```
..... (first trial runs fine; logs to wandb and finishes)
wandb: Synced /home/josh/runs/hps_test: https://wandb.ai/mindful/projectname/runs/2vojg06h
5%|▌ | 1/19 [00:03<01:02, 3.47s/it][W 2021-05-30 07:41:43,979] Trial 1 failed because of the following error: Error('You must call wandb.init() before wandb.log()')
Traceback (most recent call last):
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/optuna/_optimize.py", line 217, in _run_trial
value_or_values = func(trial)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/integrations.py", line 138, in _objective
trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer.py", line 1332, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer.py", line 1405, in _maybe_log_save_evaluate
self.log(logs)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer.py", line 1692, in log
self.control = self.callback_handler.on_log(self.args, self.state, self.control, logs)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer_callback.py", line 371, in on_log
return self.call_event("on_log", args, state, control, logs=logs)
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/trainer_callback.py", line 378, in call_event
result = getattr(callback, event)(
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/transformers/integrations.py", line 754, in on_log
self._wandb.log({**logs, "train/global_step": state.global_step})
File "/home/josh/anaconda3/envs/project/lib/python3.8/site-packages/wandb/sdk/lib/preinit.py", line 38, in preinit_wrapper
raise wandb.Error("You must call wandb.init() before {}()".format(name))
wandb.errors.Error: You must call wandb.init() before wandb.log()
wandb: ERROR You must call wandb.init() before wandb.log()
```
## Expected behavior
wandb should just reinitialize per training run so that each run is logged separately.
Note that as far as I can tell this is a one-line fix (set `_initialized` to `False` in `WandbCallback.on_train_begin` when running an hyperparameter search) so I'll open a PR with that. I just figured there should be an issue as well for clarity.
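For illustration, the kind of change I mean (a sketch only; the actual method signatures in `integrations.py` may differ slightly):
```python
# Inside WandbCallback -- sketch of the proposed one-liner and its surroundings.
def on_train_begin(self, args, state, control, model=None, **kwargs):
    if self._wandb is None:
        return
    if state.is_hyper_param_search:
        self._initialized = False  # force setup() to run again for each trial
    if not self._initialized:
        self.setup(args, state, model, **kwargs)
```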
| 05-30-2021 07:55:31 | 05-30-2021 07:55:31 | Maybe of interest to @borisdayma <|||||>Just ran into the same problem.
Thanks for opening this issue. |
transformers | 11,943 | closed | RuntimeError: CUDA error: device-side assert triggered | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?): 1.8.1+cu101
- Using GPU in script?: Yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Add the `TL;DR:` tag at the end of the sequence
2. Preferably use a sequence longer than 1024.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## My Script
The sequences are long (>1024) and I expect `truncation=True` to take care of that.
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
text = """Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857), often referred to as the Dred Scott decision, was a landmark decision of the US Supreme Court in which the Court held that the US Constitution was not meant to include American citizenship for black people, regardless of whether they were enslaved or free, and so the rights and privileges that the Constitution confers upon American citizens could not apply to them.The decision was made in the case of Dred Scott, an enslaved black man whose owners had taken him from Missouri, which was a slave-holding state, into Illinois and the Wisconsin Territory, which were free areas where slavery was illegal. When his owners later brought him back to Missouri, Scott sued in court for his freedom and claimed that because he had been taken into "free" U.S. territory, he had automatically been freed and was legally no longer a slave. Scott sued first in Missouri state court, which ruled that he was still a slave under its law. He then sued in US federal court, which ruled against him by deciding that it had to apply Missouri law to the case. He then appealed to the US Supreme Court.
In March 1857, the Supreme Court issued a 7–2 decision against Dred Scott. In an opinion written by Chief Justice Roger Taney, the Court ruled that black people "are not included, and were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States." Taney supported his ruling with an extended survey of American state and local laws from the time of the Constitution's drafting in 1787 that purported to show that a "perpetual and impassable barrier was intended to be erected between the white race and the one which they had reduced to slavery." Because the Court ruled that Scott was not an American citizen, he was also not a citizen of any state and, accordingly, could never establish the "diversity of citizenship" that Article III of the US Constitution requires for a US federal court to be able to exercise jurisdiction over a case. After ruling on those issues surrounding Scott, Taney continued further and struck down the entire Missouri Compromise as a limitation on slavery that exceeded the US Congress's constitutional powers.
Although Taney and several of the other justices hoped that the decision would permanently settle the slavery controversy, which was increasingly dividing the American public, the decision's effect was the complete opposite. Taney's majority opinion suited the slaveholding states, but was intensely decried in all the other states. The decision inflamed the national debate over slavery and deepened the divide that led ultimately to the Civil War. In 1865, after the Union won the Civil War, the Dred Scott ruling was voided by the Thirteenth Amendment to the US Constitution, which abolished slavery except as punishment for a crime, and the Fourteenth Amendment, which guaranteed citizenship for "all persons born or naturalized in the United States, and subject to the jurisdiction thereof."
The Supreme Court's decision has been widely denounced ever since, both for how overtly racist the decision was and its crucial role in the near destruction of the United States four years later. Bernard Schwartz said that it "stands first in any list of the worst Supreme Court decisions—Chief Justice Hughes called it the Court's greatest self-inflicted wound." Junius P. Rodriguez said that it is "universally condemned as the U.S. Supreme Court's worst decision". Historian David Thomas Konig said that it was "unquestionably, our court's worst decision ever."
Abraham Lincoln (; February 12, 1809 – April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, cultural, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.
Lincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the Kansas–Nebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.
As the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called "Copperheads") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, and democracy. Lincoln scrutinized the strategy and tactics in the war effort, including the selection of generals and the naval blockade of the South's trade. He suspended habeas corpus, and he averted British intervention by defusing the Trent Affair. He engineered the end to slavery with his Emancipation Proclamation and his order that the Army protect and recruit former slaves. He also encouraged border states to outlaw slavery, and promoted the Thirteenth Amendment to the United States Constitution, which outlawed slavery across the country.
Lincoln managed his own successful re-election campaign. He sought to heal the war-torn nation through reconciliation. On April 14, 1865, just days after the war's end at Appomattox, Lincoln was attending a play at Ford's Theatre with his wife Mary when he was assassinated by Confederate sympathizer John Wilkes Booth. His marriage had produced four sons, two of whom preceded him in death, with severe emotional impact upon them and Mary. Lincoln is remembered as the martyr hero of the United States and he is consistently ranked as one of the greatest presidents in American history."""
class GPT2():
def __init__(self,device,model,tokenizer):
self.name = "GPT2"
self.device = device
self.model = model.to(device)
self.tokenizer = tokenizer
self.tokenizer.add_special_tokens({'pad_token': '[PAD]'})
self.model.resize_token_embeddings(len(self.tokenizer))
def summarise(self,text):
if text == np.nan or len(str(text)) < 10:
return np.nan
text = str(text)
text = text + " TL;DR:"
generated = self.tokenizer(text,return_tensors = 'pt', truncation=True, padding = True)
context = generated['input_ids'].to(device)
past = None
generated = []
for i in range(50):
output = self.model(context, past_key_values=past)
past = output["past_key_values"]
logits = output["logits"]
token = torch.argmax(logits[..., -1, :])
generated += [token.tolist()]
context = token.unsqueeze(0)
summary = self.tokenizer.decode(generated)
return summary
def __str__(self):
return self.name
gpt2 = GPT2("cuda",model,tokenizer)
```
## Error Produced
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-9-1460361bbf99> in <module>()
----> 1 gpt2.summarise(data.loc[5,"clubbed_def"])
7 frames
/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1914 # remove once script supports set_grad_enabled
1915 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1916 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1917
1918
RuntimeError: CUDA error: device-side assert triggered
```
Earlier I tried to truncate manually by checking the length of the input text, but that gave `IndexError: index out of range in self`.
I have tried #1805 but it didn't work.
@patrickvonplaten, @LysandreJik
| 05-30-2021 07:49:41 | 05-30-2021 07:49:41 | @patil-suraj <|||||>Hey @gandharvsuri,
Running your (slightly adapted) code example does not give me any errors.
Also, please don't ping more people if you don't receive an answer within a day (no need to also ping @patil-suraj). We try to answer all issues and to do so efficiently, it is not helpful to be pinged unnecessarily. Thanks!<|||||>Hey @patrickvonplaten, my apologies, I just saw him replying on similar issues so I thought of pinging him as well; I agree I shouldn't have done that.
Can you try with this custom input?
```
text = """Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857), often referred to as the Dred Scott decision, was a landmark decision of the US Supreme Court in which the Court held that the US Constitution was not meant to include American citizenship for black people, regardless of whether they were enslaved or free, and so the rights and privileges that the Constitution confers upon American citizens could not apply to them.The decision was made in the case of Dred Scott, an enslaved black man whose owners had taken him from Missouri, which was a slave-holding state, into Illinois and the Wisconsin Territory, which were free areas where slavery was illegal. When his owners later brought him back to Missouri, Scott sued in court for his freedom and claimed that because he had been taken into "free" U.S. territory, he had automatically been freed and was legally no longer a slave. Scott sued first in Missouri state court, which ruled that he was still a slave under its law. He then sued in US federal court, which ruled against him by deciding that it had to apply Missouri law to the case. He then appealed to the US Supreme Court.
In March 1857, the Supreme Court issued a 7–2 decision against Dred Scott. In an opinion written by Chief Justice Roger Taney, the Court ruled that black people "are not included, and were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States." Taney supported his ruling with an extended survey of American state and local laws from the time of the Constitution's drafting in 1787 that purported to show that a "perpetual and impassable barrier was intended to be erected between the white race and the one which they had reduced to slavery." Because the Court ruled that Scott was not an American citizen, he was also not a citizen of any state and, accordingly, could never establish the "diversity of citizenship" that Article III of the US Constitution requires for a US federal court to be able to exercise jurisdiction over a case. After ruling on those issues surrounding Scott, Taney continued further and struck down the entire Missouri Compromise as a limitation on slavery that exceeded the US Congress's constitutional powers.
Although Taney and several of the other justices hoped that the decision would permanently settle the slavery controversy, which was increasingly dividing the American public, the decision's effect was the complete opposite. Taney's majority opinion suited the slaveholding states, but was intensely decried in all the other states. The decision inflamed the national debate over slavery and deepened the divide that led ultimately to the Civil War. In 1865, after the Union won the Civil War, the Dred Scott ruling was voided by the Thirteenth Amendment to the US Constitution, which abolished slavery except as punishment for a crime, and the Fourteenth Amendment, which guaranteed citizenship for "all persons born or naturalized in the United States, and subject to the jurisdiction thereof."
The Supreme Court's decision has been widely denounced ever since, both for how overtly racist the decision was and its crucial role in the near destruction of the United States four years later. Bernard Schwartz said that it "stands first in any list of the worst Supreme Court decisions—Chief Justice Hughes called it the Court's greatest self-inflicted wound." Junius P. Rodriguez said that it is "universally condemned as the U.S. Supreme Court's worst decision". Historian David Thomas Konig said that it was "unquestionably, our court's worst decision ever."
Abraham Lincoln (; February 12, 1809 – April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, cultural, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.
Lincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the Kansas–Nebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.
As the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called "Copperheads") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, and democracy. Lincoln scrutinized the strategy and tactics in the war effort, including the selection of generals and the naval blockade of the South's trade. He suspended habeas corpus, and he averted British intervention by defusing the Trent Affair. He engineered the end to slavery with his Emancipation Proclamation and his order that the Army protect and recruit former slaves. He also encouraged border states to outlaw slavery, and promoted the Thirteenth Amendment to the United States Constitution, which outlawed slavery across the country.
Lincoln managed his own successful re-election campaign. He sought to heal the war-torn nation through reconciliation. On April 14, 1865, just days after the war's end at Appomattox, Lincoln was attending a play at Ford's Theatre with his wife Mary when he was assassinated by Confederate sympathizer John Wilkes Booth. His marriage had produced four sons, two of whom preceded him in death, with severe emotional impact upon them and Mary. Lincoln is remembered as the martyr hero of the United States and he is consistently ranked as one of the greatest presidents in American history."""
```
It gives me the above mentioned error and later, all inputs even for the ones for which the model earlier was working, start giving the same error.<|||||>Hey @gandharvsuri,
I sadly still cannot reproduce the error, I adapted the code snippet with the `text` you provided. Running the adapted code example does not throw an error for me -> could you maybe create a colab showing the error instead? <|||||>Ah, one probably has to call `summarise` to reproduce the error. When I call `gpt2.summarise(...)` I'm getting some import errors (numpy is not imported). Could you please provide a complete code snippet that includes all imports to reproduce the error? :-)<|||||>Sure, You can use the following notebook, I've added you as an editor.
https://colab.research.google.com/drive/17-STvWmqNROY8tlD8mfdm1grcRCFD4hy?usp=sharing<|||||>I misunderstood complete code snippet meaning, here it is.
```
!pip install torch
!pip install transformers
import numpy as np
import os
# tried this to resolve the error as well :)
os.environ['CUDA_LAUNCH_BLOCKING'] = "1"
import torch
import numpy as np
device = 'cuda' if torch.cuda.is_available() else 'cpu'
from transformers import GPT2LMHeadModel, GPT2Tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
text = """Dred Scott v. Sandford, 60 U.S. (19 How.) 393 (1857), often referred to as the Dred Scott decision, was a landmark decision of the US Supreme Court in which the Court held that the US Constitution was not meant to include American citizenship for black people, regardless of whether they were enslaved or free, and so the rights and privileges that the Constitution confers upon American citizens could not apply to them.The decision was made in the case of Dred Scott, an enslaved black man whose owners had taken him from Missouri, which was a slave-holding state, into Illinois and the Wisconsin Territory, which were free areas where slavery was illegal. When his owners later brought him back to Missouri, Scott sued in court for his freedom and claimed that because he had been taken into "free" U.S. territory, he had automatically been freed and was legally no longer a slave. Scott sued first in Missouri state court, which ruled that he was still a slave under its law. He then sued in US federal court, which ruled against him by deciding that it had to apply Missouri law to the case. He then appealed to the US Supreme Court.
In March 1857, the Supreme Court issued a 7–2 decision against Dred Scott. In an opinion written by Chief Justice Roger Taney, the Court ruled that black people "are not included, and were not intended to be included, under the word 'citizens' in the Constitution, and can therefore claim none of the rights and privileges which that instrument provides for and secures to citizens of the United States." Taney supported his ruling with an extended survey of American state and local laws from the time of the Constitution's drafting in 1787 that purported to show that a "perpetual and impassable barrier was intended to be erected between the white race and the one which they had reduced to slavery." Because the Court ruled that Scott was not an American citizen, he was also not a citizen of any state and, accordingly, could never establish the "diversity of citizenship" that Article III of the US Constitution requires for a US federal court to be able to exercise jurisdiction over a case. After ruling on those issues surrounding Scott, Taney continued further and struck down the entire Missouri Compromise as a limitation on slavery that exceeded the US Congress's constitutional powers.
Although Taney and several of the other justices hoped that the decision would permanently settle the slavery controversy, which was increasingly dividing the American public, the decision's effect was the complete opposite. Taney's majority opinion suited the slaveholding states, but was intensely decried in all the other states. The decision inflamed the national debate over slavery and deepened the divide that led ultimately to the Civil War. In 1865, after the Union won the Civil War, the Dred Scott ruling was voided by the Thirteenth Amendment to the US Constitution, which abolished slavery except as punishment for a crime, and the Fourteenth Amendment, which guaranteed citizenship for "all persons born or naturalized in the United States, and subject to the jurisdiction thereof."
The Supreme Court's decision has been widely denounced ever since, both for how overtly racist the decision was and its crucial role in the near destruction of the United States four years later. Bernard Schwartz said that it "stands first in any list of the worst Supreme Court decisions—Chief Justice Hughes called it the Court's greatest self-inflicted wound." Junius P. Rodriguez said that it is "universally condemned as the U.S. Supreme Court's worst decision". Historian David Thomas Konig said that it was "unquestionably, our court's worst decision ever."
Abraham Lincoln (; February 12, 1809 – April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, cultural, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy.
Lincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the Kansas–Nebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union.
As the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called "Copperheads") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, and democracy. Lincoln scrutinized the strategy and tactics in the war effort, including the selection of generals and the naval blockade of the South's trade. He suspended habeas corpus, and he averted British intervention by defusing the Trent Affair. He engineered the end to slavery with his Emancipation Proclamation and his order that the Army protect and recruit former slaves. He also encouraged border states to outlaw slavery, and promoted the Thirteenth Amendment to the United States Constitution, which outlawed slavery across the country.
Lincoln managed his own successful re-election campaign. He sought to heal the war-torn nation through reconciliation. On April 14, 1865, just days after the war's end at Appomattox, Lincoln was attending a play at Ford's Theatre with his wife Mary when he was assassinated by Confederate sympathizer John Wilkes Booth. His marriage had produced four sons, two of whom preceded him in death, with severe emotional impact upon them and Mary. Lincoln is remembered as the martyr hero of the United States and he is consistently ranked as one of the greatest presidents in American history."""
# Another text with lesser number of tokens
text_2 = """At least nine people were killed and 25 others injured after a powerful blast outside Pakistan’s famous Sufi shrine Data Darbar in Lahore. According to initial police reports, the explosion took place close to two police vehicles near Gate 2 of Data Darbar. The nature and exact target of the explosion is yet to be ascertained. Rescue operations are underway. The blast comes as the country marks the fasting month of Ramzan.
Data Darbar, located in Pakistan’s Lahore city, is one of the oldest Sufi shrines in South Asia. Considered to be one of the most sacred places in Lahore, the shrine houses the remains of Sufi saint Abul Hassan Ali Hujwiri, commonly known as Data Ganj Baksh. He is said to have lived on the site in the 11th century and was reputed to have miraculous powers.
Data Darbar attracts a lot of visitors to its annual Urs festival. The Urs marks the death anniversary of the Sufi saint.
According to the BBC, the shrine was originally established as a simple grave next to the mosque which Hujwiri had built on the outskirts of Lahore in the 11th century. It was later expanded in the 13th century to commemorate the burial site of Hujwiri after his spiritual powers became popular.
For centuries, the shrine has seen visitors from all religions. Pakistan’s former prime minister Nawaz Sharif is also a frequent visitor to the shrine.
In 2010, two suicide bombers detonated their explosive vests outside the shrine, killing close to 50 people. More than 200 people were injured in the blasts"""
class GPT2():
    def __init__(self, device, model, tokenizer):
        self.name = "GPT2"
        self.device = device
        self.model = model.to(device)
        self.tokenizer = tokenizer
        # self.tokenizer.add_special_tokens({'pad_token': '[PAD]'})
        self.model.resize_token_embeddings(len(self.tokenizer))

    def summarise(self, text):
        if text == np.nan or len(str(text)) < 10:
            return np.nan
        text = str(text)
        text = text + " TL;DR:"
        generated = self.tokenizer(text, return_tensors='pt', truncation=True, max_length=1024)
        context = generated['input_ids'].to(self.device)
        past = None
        generated = []
        for i in range(50):
            output = self.model(context, past_key_values=past)
            past = output["past_key_values"]
            logits = output["logits"]
            token = torch.argmax(logits[..., -1, :])
            generated += [token.tolist()]
            context = token.unsqueeze(0)
        summary = self.tokenizer.decode(generated)
        return summary

    def __str__(self):
        return self.name
gpt2 = GPT2("cuda",model,tokenizer)
# This cell works fine
print(gpt2.summarise(text_2))
# Throws error
print(gpt2.summarise(text))
# The same example which was working earlier also stopped working now.
print(gpt2.summarise(text_2))
```<|||||>@patrickvonplaten I guess I found the error. I am running a for loop to predict the next 50 tokens. After each iteration, a new token would be added to the sequence thus increasing its size by 1. Now, this new sequence would be fed to the model (the one with the new predicted token) to predict the next token. So to predict the 50 tokens, I need to set ```max_length = (1024-50)``` so that in the last iteration, it does not exceed the sequence length limit.
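In code, the only change on my side is this one line (the 50 matches the `range(50)` loop above — treat it as an assumption if you generate a different number of tokens):
```python
# Reserve room for the 50 tokens generated in the loop so the total sequence
# never exceeds GPT-2's 1024-position limit.
generated = self.tokenizer(text, return_tensors='pt', truncation=True, max_length=1024 - 50)
```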
<|||||>Hey @gandharvsuri,
Thanks for the very nice reproducible code snippet & sorry to answer so late. The problem is indeed an out-of-index error of the position ids matrix.
Those errors are often hard to catch on GPU because the error message is quite obscure<|||||>Hi @gandharvsuri ,
Thanks for raising this issue. I get the same error but only for specific passages. But when I get this error for one passage, the BART model throws the same error for all the following passages.
I wanted to ask if you could please share the code change that you had done for - "So to predict the 50 tokens, I need to set max_length = (1024-50) so that in the last iteration, it does not exceed the sequence length limit."
And there are several posts suggesting - model.resize_token_embeddings(len(tokenizer))
I added it after loading the model and also after each generation call. It still throws the above error. I wanted to ask if you found the above useful anywhere in fixing this error?
Thanks, and looking forward to your response :D
<|||||>I get this issue when finetuning Llama on Abirate/English_Quotes |
transformers | 11,942 | closed | Typo in Pegasus model usage example | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...): Pegasus
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below): just trying the Pegasus model
## To reproduce
Steps to reproduce the behavior:
1. Create a new Colab notebook and install the required libraries using:
```python
!pip install transformers
!pip install sentencepiece
```
2. Copy / paste the "Usage example" from the pegasus [documentation](https://huggingface.co/transformers/model_doc/pegasus.html) page in a cell:
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer
import torch
src_text = [
""" PG&E stated it scheduled the blackouts in response to forecasts for high winds amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."""
]
model_name = 'google/pegasus-xsum'
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
assert tgt_text[0] == "California's largest electricity provider has turned off power to hundreds of thousands of customers."
```
3. Execute the cell
## Expected behavior
The final `assert` statement should be True; however, there is a typo in the usage example and we get an error:
```python
12 model = PegasusForConditionalGeneration.from_pretrained(model_name).to(device)
13
---> 14 batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(torch_device)
15 translated = model.generate(**batch)
16 tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
NameError: name 'torch_device' is not defined
```
The reason for the above error is that one of the lines of the code refers to `torch_device` when it should refer to `device`. Change the line
- `batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(torch_device)`
to the following:
- `batch = tokenizer(src_text, truncation=True, padding='longest', return_tensors="pt").to(device)`
| 05-30-2021 07:32:04 | 05-30-2021 07:32:04 | Thanks for flagging this! Would you mind sending a PR with the fix since you found it?<|||||>Hi @sgugger, I have just created a PR.
Kind regards<|||||>Thanks!
Closed by #11979 |
transformers | 11,941 | closed | position_ids version changed during training | ## Environment info
- `transformers` version: 4.5.1
- Platform: linux
- Python version: 3.6.9
- PyTorch version (GPU?): yes
- Tensorflow version (GPU?): no
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [* ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [* ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Deployed BertModel from modeling_bert.py to train a text classification model.
2. Submitted the task using `torch.distributed.launch` with 2 P40 GPUs.
3. Left `position_ids` empty, and initialized it in `BertEmbeddings` during the `forward` computation.
## Expected behavior
Expected training to be successfully finished, but encountered issue below:
```
RuntimeError: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.LongTensor [1, 128]] is at version 4; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
Maybe the initialization of position_ids should be suffixed with clone() in line 195 under `transformers.models.bert.modeling_bert`? I resolved the issue by adding clone to `self.position_ids`. | 05-30-2021 02:32:45 | 05-30-2021 02:32:45 | Hello! Do you have a reproducible code example? Did you use the text-classification examples given in this repo?<|||||>> Hello! Do you have a reproducible code example? Did you use the text-classification examples given in this repo?
Hi, thanks for your reply. Since it's a private project, I cannot provide you the entire code. But I can share with you the code that triggers the issue:
```
def get_loss(args, model, train_batch, unsup_batch, global_step, total_steps):
    # batch
    input_ids, attention_mask, token_type_ids, input_len, labels = train_batch
    if unsup_batch:
        ori_input_ids, ori_attention_mask, ori_token_type_ids, ori_input_len, ori_labels, \
            aug_input_ids, aug_attention_mask, aug_token_type_ids, aug_input_len = unsup_batch
        input_ids = torch.cat((input_ids, aug_input_ids), dim=0)
        attention_mask = torch.cat((attention_mask, aug_attention_mask), dim=0)
        token_type_ids = torch.cat((token_type_ids, aug_token_type_ids), dim=0)
    # torch_device_one used by loss computation
    torch_device_one = torch.tensor(1., device=args.device)
    # logits
    outputs = model(input_ids, attention_mask, token_type_ids)
    logits = outputs[0]
    # loss fct
    sup_loss_fct = CrossEntropyLoss(reduction='none')
    unsup_loss_fct = KLDivLoss(reduction='none')
    # sup loss
    sup_size = labels.shape[0]
    sup_loss = sup_loss_fct(logits[:sup_size], labels)  # shape : train_batch_size
    if unsup_batch and args.do_tsa:
        tsa_thresh = get_tsa_thresh(args.tsa_type,
                                    global_step,
                                    total_steps,
                                    start=args.tsa_start,
                                    end=1,
                                    scale=args.tsa_scale)
        larger_than_threshold = torch.exp(-sup_loss) > tsa_thresh
        loss_mask = torch.ones_like(labels, dtype=torch.float32) * (1 - larger_than_threshold.type(torch.float32))
        sup_loss = torch.sum(sup_loss * loss_mask, dim=-1) / torch.max(torch.sum(loss_mask, dim=-1), torch_device_one)
    else:
        sup_loss = torch.mean(sup_loss)
    # unsup loss
    if unsup_batch:
        uda_softmax_temp = args.uda_softmax_temp if args.uda_softmax_temp > 0 else 1.
        with torch.no_grad():
            ori_outputs = model(ori_input_ids, ori_attention_mask, ori_token_type_ids)
            ori_logits = ori_outputs[0]
            ori_prob = F.softmax(ori_logits, dim=-1)  # KLdiv target
            # confidence-based masking
            if args.uda_confidence_thresh != -1:
                unsup_loss_mask = torch.max(ori_prob, dim=-1)[0] > args.uda_confidence_thresh
                unsup_loss_mask = unsup_loss_mask.type(torch.float32)
            else:
                unsup_loss_mask = torch.ones(len(logits) - sup_size, dtype=torch.float32, device=args.device)
        # softmax temperature controlling
        ori_logits = ori_logits / uda_softmax_temp
        ori_prob = F.softmax(ori_logits, dim=-1)
        aug_log_prob = F.log_softmax(logits[sup_size:], dim=-1)
        unsup_loss = torch.sum(unsup_loss_fct(aug_log_prob, ori_prob), dim=-1)
        unsup_loss = torch.sum(unsup_loss * unsup_loss_mask, dim=-1) / torch.max(torch.sum(unsup_loss_mask, dim=-1), torch_device_one)
        final_loss = sup_loss + args.uda_unsup_coeff * unsup_loss
        return final_loss, sup_loss, unsup_loss
    return sup_loss, None, None
```
Finally, we noticed that the issue is that we call the forward computation twice and modify the model's inner parameters within a no_grad operation. Moving the no_grad block to a position before the first forward computation also resolves the issue.
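For reference, the line I mean is roughly this (a simplified sketch of `BertEmbeddings.forward`, not the exact library source), with the `clone()` I added as a workaround:
```python
# simplified sketch of BertEmbeddings.forward in transformers 4.5.x (not verbatim)
if position_ids is None:
    position_ids = self.position_ids[:, :seq_length]
    # workaround: clone the slice so the registered buffer's version counter is not
    # shared with the autograd graph built by an earlier forward pass
    position_ids = position_ids.clone()
```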
But back to my question: would adding a clone() to the position_ids initialization (as sketched above) avoid this issue under other scenarios? Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,940 | closed | [deepspeed] docs | This PR:
* documents `train_batch_size` and `train_micro_batch_size_per_gpu` DS config entries
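For context, these are the entries in question; a minimal illustrative fragment (values are placeholders, and DeepSpeed configs are normally JSON files — this is just the dict form):
```python
# illustrative only — the batch-size entries of a DeepSpeed config expressed as a Python dict
ds_config = {
    "train_micro_batch_size_per_gpu": 8,  # batch size for one forward/backward pass on a single GPU
    "train_batch_size": 64,               # micro_batch * gradient_accumulation_steps * number_of_gpus
}
```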
@sgugger | 05-30-2021 02:24:38 | 05-30-2021 02:24:38 | |
transformers | 11,939 | closed | XLM tokenizer lang2id attribute is None | ## Environment info
- `transformers` version: 4.5.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- albert, bert, xlm: @LysandreJik
Library:
- tokenizers: @LysandreJik
## Information
Model I am using XLM with Causal language modelling:
The problem arises when using:
* [x] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behaviour:
1. Run example code from https://huggingface.co/transformers/multilingual.html
``` python
import torch
from transformers import XLMTokenizer, XLMWithLMHeadModel
tokenizer = XLMTokenizer.from_pretrained("xlm-clm-enfr-1024")
model = XLMWithLMHeadModel.from_pretrained("xlm-clm-enfr-1024")
language_id = tokenizer.lang2id['en']
```
The attribute lang2id is None, so I get a `'NoneType' object is not subscriptable` error. Following the example, I am expecting to get 0 for language_id.
As a side note, it says that these checkpoints require language embeddings which I'm assuming is from the argument langs. What is the default behavior when this is not provided? I tried looking at https://huggingface.co/transformers/glossary.html but could not find any reference to it.
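For concreteness, this is the kind of usage I am asking about; since `lang2id` is None I am hard-coding the id, which may or may not be correct (treat the value as an assumption):
```python
input_ids = tokenizer("Wikipedia was used to", return_tensors="pt")["input_ids"]
language_id = 0  # assumed id for 'en' in xlm-clm-enfr-1024, since lang2id is None
langs = torch.full_like(input_ids, language_id)  # one language id per input token
outputs = model(input_ids, langs=langs)
```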
| 05-29-2021 18:07:55 | 05-29-2021 18:07:55 | FYI, I tried downgrading and I found that the most recent version that doesn't have this bug is `transformers==4.3.3`. So you could try downgrading to that version for now, until someone fixes it.
```
pip install transformers==4.3.3
```<|||||>Thanks for the advice! So far I have just been manually specifying the language id for the two languages, hopefully, that is sufficient as well.<|||||>Hello! Sorry for taking so long to get back to this issue - the issue should normally be fixed now, for all versions. We updated the configurations of the XLM models on the hub.
Thanks for flagging!<|||||>Hi @LysandreJik is the update going to solve this XLM issue as well? https://github.com/huggingface/transformers/issues/12174<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,938 | closed | [docs] XLNETModel forward returns last_hidden_state 3rd dim should be d_model instead of hidden_size | **Link to doc** - https://huggingface.co/transformers/model_doc/xlnet.html#xlnetmodel
**Problem description** - the forward method of XLNetModel returns `last_hidden_state` with dimension `(batch_size, num_predict, hidden_size)`. However, `hidden_size` is not an XLNet config attribute (unlike for BERT); instead, that dimension should be documented as `d_model`, which is a config attribute of XLNetModel.
**current shape** - (batch_size, num_predict, hidden_size)
**expected shape** - (batch_size, num_predict, d_model)
## Environment info
Not required
### Who can help
@sgugger
| 05-29-2021 16:19:55 | 05-29-2021 16:19:55 | Hi,
Actually, `hidden_size` is an alias for `d_model`, as can be seen [here](https://github.com/huggingface/transformers/blob/7e73601f3240b99d952c34b63bf4f8b78ca1462d/src/transformers/models/xlnet/configuration_xlnet.py#L233).
They mean the same thing. But I get your confusion, as some models use `hidden_size`, others `d_model`, or other names.
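A quick way to convince yourself (minimal check with the default config):
```python
from transformers import XLNetConfig

config = XLNetConfig()
# `hidden_size` is just an alias that reads the same value as `d_model`
assert config.hidden_size == config.d_model
```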
Feel free to open a PR to fix this in the docs of XLNet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,937 | closed | Neptune.ai integration | # What does this PR do?
The PR integrates the trainer with [neptune.ai](https://neptune.ai/)
To start with neptune.ai logging:
1) Set env variables:
NEPTUNE_PROJECT
NEPTUNE_API_TOKEN
2) Add an option that turns on Neptune logging
```
--report_to 'neptune'
```
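For example, with this change a rough end-to-end setup could look like the following (project name, token and output dir are placeholders; `report_to` can also be passed as the CLI flag above):
```python
import os

os.environ["NEPTUNE_PROJECT"] = "my-workspace/my-project"  # placeholder
os.environ["NEPTUNE_API_TOKEN"] = "..."                    # placeholder

from transformers import TrainingArguments

args = TrainingArguments(output_dir="out", report_to=["neptune"])
```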
# Who can review?
@sgugger | 05-29-2021 09:13:19 | 05-29-2021 09:13:19 | |
transformers | 11,936 | closed | Flax text-classification multi-optimizer incorrect | The below code is incorrect:
https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/examples/flax/text-classification/run_flax_glue.py#L173-L188
- The `dict` from `traverse_util.flatten_dict` has keys which are tuples of strings, not one long string with the path separated by periods.
- `optax.masked` applies the transformation wherever the mask is True, so the masks are flipped.
- Flax's LayerNorm calls the scale parameter `scale` not `weight`
I believe following code would be correct:
```py
# We use Optax's "masking" functionality to create a multi-optimizer, one
# with weight decay and the other without. Each sub-optimizer will be applied
# wherever the mask is True
decay_path = lambda p: p[-1] != "bias" and p[-2:] != ("LayerNorm", "scale") # noqa: E731
tx = optax.chain(
optax.masked(adamw(0.0), mask=traverse(lambda path, _: not decay_path(path))),
optax.masked(adamw(weight_decay), mask=traverse(lambda path, _: decay_path(path))),
)
```
But, for the networks in that example, the parameters that shouldn't be decayed all have 1 dimension (biases and layernorm scales). L173-L188 can therefore be simplified to:
```py
# We use Optax's "masking" functionality to create a multi-optimizer, one
# with weight decay and the other without. Each sub-optimizer will be applied
# wherever the mask is True. The bias parameters and LayerNorm scale
# parameters should not be decayed, and these are the only parameters
# with 1 dimension.
tx = optax.chain(
optax.masked(adamw(0.0), mask=partial(jax.tree_map, lambda p: p.ndim == 1)),
optax.masked(adamw(weight_decay), mask=partial(jax.tree_map, lambda p: p.ndim != 1)),
)
```
Though the latter solution is simpler, perhaps the first one is better since it illustrates a more general way for users to construct a multi-optimizer. I'd be happy to open a PR with either.
cc: @marcvanzee @patrickvonplaten | 05-29-2021 09:07:04 | 05-29-2021 09:07:04 | Actually `optax.adamw` can take a `mask` directly to mask weight decay (since that's pretty common):
```py
tx = optax.adamw(learning_rate=learning_rate_fn, b1=0.9, b2=0.999, eps=1e-6, weight_decay=weight_decay,
mask=partial(jax.tree_map, lambda p: p.ndim != 1))
```<|||||>Hey @n2cholas,
Thanks for diving into this!
I think a nice solution would be:
```python
tx = optax.adamw(learning_rate=learning_rate_fn, b1=0.9, b2=0.999, eps=1e-6, weight_decay=weight_decay,
mask=partial(jax.tree_map, lambda p: p[-1] != "bias" and p[-2:] != ("LayerNorm", "scale")))
```
This way, it's consistent, but we also show the user clearly that `bias` and `LayerNorm` are not included in the weight decay. It would be great if you could open a PR for this :-) I can re-run the experiments afterward
<|||||>Great, will do! |
transformers | 11,935 | closed | Use `self.assertEqual` instead of `assert` in deberta v2 test. | PR to fix #11929 | 05-29-2021 07:33:05 | 05-29-2021 07:33:05 | |
transformers | 11,934 | closed | Predict masked word at the beginning of the sentence | Case 1: I have a sentence "After the campaign finance laws changed, Albert ran for mayor of his city." which I have modified as
\<s\> \<mask\> the campaign finance laws changed, he ran for mayor of his city. \</s\>.
This input is passed through pre-trained RobertaBase and RobertaLarge (RobertaTokenizer, RobertaForMaskedLM).
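For reference, a simplified sketch of what I am running (not my exact script; model name and decoding shown for `roberta-base` only):
```python
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
model = RobertaForMaskedLM.from_pretrained("roberta-base")

text = "<mask> the campaign finance laws changed, he ran for mayor of his city."
inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()

with torch.no_grad():
    logits = model(**inputs).logits
top10 = torch.topk(logits[0, mask_pos], 10).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top10))
```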
The predicted top-10 words are:
['When', 'After', 'Before', 'Once', 'As', 'Until', 'While', 'With', 'Since', 'Then']
Case 2: But when we have masked word in the middle of the sentence:
\<s\> He ran for mayor of his city <mask> the campaign finance laws changed. \</s\>
The predicted top-10 words are:
['Ġbefore', 'Ġafter', 'Ġwhen', 'Ġuntil', 'Ġas', 'Ġand', 'Ġbut', 'Ġonce', 'Ġbecause', ',']
Why the special character is added in the 2nd case and not case 1?
| 05-29-2021 06:57:43 | 05-29-2021 06:57:43 | The special characters mean that these words are preceded by a space. This is not the case when the words are the start of sentences!<|||||>So, if I insert a space at the beginning of the sentence before [MASK], I should words beginning with special character or the tokenizer removes those extra spaces at the beginning?<|||||>Yes, the encoding of the first word will change if you add a space. The tokeniser does not normalise whitespace. Try adding 2 spaces, a tab or a newline character. Please read about `add_prefix_space=True` and `is_split_into_words=True` on https://huggingface.co/transformers/model_doc/roberta.html#robertatokenizer<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,933 | closed | Porting Layoutlmv2 to Huggingface | # What does this PR do?
Fixes # (issue)
https://github.com/huggingface/transformers/issues/11932
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? Yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/microsoft/unilm/issues/325
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). No
- [ ] Did you write any new necessary tests? No
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@inproceedings{Xu2020LayoutLMv2MP,
title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding},
author = {Yang Xu and Yiheng Xu and Tengchao Lv and Lei Cui and Furu Wei and Guoxin Wang and Yijuan Lu and Dinei Florencio and Cha Zhang and Wanxiang Che and Min Zhang and Lidong Zhou},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021},
year = {2021},
month = {August},
}
| 05-29-2021 05:43:18 | 05-29-2021 05:43:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,932 | closed | LayoutLMv2 Model | # 🌟 New model addition
## Model description
LayoutLMv2 pre-trains text, layout and image in a multi-modal framework, leveraging new model architectures and pre-training tasks. Specifically, LayoutLMv2 not only uses the existing masked visual-language modeling task but also the new text-image alignment and text-image matching tasks in the pre-training stage, so that cross-modality interaction is better learned.
## Open source status
* [ ] the model implementation is available: (give details) https://github.com/microsoft/unilm/tree/master/layoutlmv2
* [ ] the model weights are available: (give details) https://huggingface.co/microsoft/layoutlmv2-base-uncased/
* [ ] who are the authors: (mention them, if possible by @gh-username)
@inproceedings{Xu2020LayoutLMv2MP,
title = {LayoutLMv2: Multi-modal Pre-training for Visually-Rich Document Understanding},
author = {Yang Xu and Yiheng Xu and Tengchao Lv and Lei Cui and Furu Wei and Guoxin Wang and Yijuan Lu and Dinei Florencio and Cha Zhang and Wanxiang Che and Min Zhang and Lidong Zhou},
booktitle = {Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL) 2021},
year = {2021},
month = {August},
}
| 05-29-2021 05:30:10 | 05-29-2021 05:30:10 | What's the current status of this?
I see Microsoft has already pushed the code as mentioned above for this but i can't import it from huggingface.<|||||>I have completed the work but there is a problem with installing detectron2 using pip . Anyone can help on this ?<|||||>Can you tell me the issue with installing detectron2? I can prolly help<|||||>Its saying pip install detectron2 is giving issue . <|||||>You have to install detectron from `git+https://github.com/facebookresearch/detectron2.git`, not pypi
See https://github.com/facebookresearch/detectron2/blob/master/INSTALL.md for instructions.
Use
`python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'` |
transformers | 11,931 | closed | How to load the best performance checkpoint after training? | When I was training MLM with `https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py`
, I had already set `--load_best_model_at_end True`. But when the training finished, I checked the trainer_state.json file and found this message:
```
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 100.0,
"global_step": 559300,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
```
As shown above, "best_model_checkpoint" is null.
Here is my question: how do I load the best-performing checkpoint? If I have ONLY one checkpoint, is it the best-performing checkpoint? Thanks in advance!
@sgugger | 05-29-2021 02:13:58 | 05-29-2021 02:13:58 | Could you give us the whole command you executed? We can't reproduce the problem without it.<|||||>@sgugger Here is my whole command:
`python run_mlm.py --model_name_or_path model/roberta-base --do_train --do_eval --output_dir mlm_new --max_seq_length 128 --load_best_model_at_end True --save_steps 2000 --overwrite_output_dir True --per_device_train_batch_size 32 --train_file data/wikitext-train.txt --validation_file data/wikitext-validation.txt`
Where the wikitext-train.txt and wikitext-validation.txt are txt files ,something like :
` Three other applications make up nearly all the rest of the consumption . One of these uses is as a stabilizer and a catalyst for the production of polyethyleneterephthalate . Another application is to serve as a fining agent to remove microscopic bubbles in glass , mostly for TV screens ; this is achieved by the interaction of antimony ions with oxygen , interfering the latter from forming bubbles . The third major application is the use as pigment .
Antimony is being increasingly used in the semiconductor industry as a dopant for heavily doped n @-@ type silicon wafers in the production of diodes , infrared detectors , and Hall @-@ effect devices . In the 1950s , tiny beads of a lead @-@ antimony alloy were used to dope the emitters and collectors of n @-@ p @-@ n alloy junction transistors with antimony . Indium antimonide is used as a material for mid @-@ infrared detectors . `
In my case, I just changed these two files to my domain corpus txt files.<|||||>Another question: what is the difference between `load_best_model_at_end` and early stopping?
<|||||>Is there at least one evaluation during the training? I don't know the size of your dataset. Passing `--evaluation_strategy epoch` would ensure there is one per epoch at least.
As for your second question, `load_best_model_at_end` loads the best model seen during the training. Early stopping stops when the loss/metrics does not improve.<|||||>@sgugger
1.YES,there is one evaluation at the end,But `best_model_checkpoint` is still null:
```
[INFO|trainer.py:2107] 2021-06-03 02:40:32,693 >> ***** Running Evaluation *****
[INFO|trainer.py:2109] 2021-06-03 02:40:32,693 >> Num examples = 17979
[INFO|trainer.py:2112] 2021-06-03 02:40:32,693 >> Batch size = 128
100%|█████████████████████████████████████████████████████████████████████████████████████| 141/141 [00:48<00:00, 2.92it/s]
[INFO|trainer_pt_utils.py:907] 2021-06-03 02:41:21,274 >> ***** eval metrics *****
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> epoch = 3.0
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_loss = 1.4715
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_cpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_cpu_peaked_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_gpu_alloc_delta = 0MB
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_mem_gpu_peaked_delta = 3928MB
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_runtime = 0:00:48.49
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_samples = 17979
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> eval_samples_per_second = 370.704
[INFO|trainer_pt_utils.py:912] 2021-06-03 02:41:21,274 >> perplexity = 4.3558
```
After passing `--evaluation_strategy epoch`, `best_model_checkpoint` is populated and no longer null:
```
"best_metric": 1.4732710123062134,
"best_model_checkpoint": "mlm_new1/checkpoint-426",
"epoch": 3.0,
```
2. Does `loads the best model seen during the training` mean the code will load the best model in memory or save it to disk? In my original case (without passing `--evaluation_strategy epoch`), I have only one checkpoint. Is it the best checkpoint or the last checkpoint?
<|||||>No the evaluation at the end does not count, it's not part of the `train` method. If there is no evaluation during the training phase, there can't be a best model to load, it's as simple as that.
The `load_best_model_at_end` just keeps track of the best model as you evaluate it and will reload at the end the checkpoint that had the best evaluation score.<|||||>@sgugger I think if this process can automatically save the best performance and the last checkpoints ,that will be great.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 11,930 | closed | Ray pickling issue when running hp search | ## Environment info
- `transformers` version: 4.5.1
- Platform: Amazon Linux 2
- Python version: 3.7.7
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): 2.0.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer / Ray
@sgugger, @richardliaw, @amogkam
## Information
I'm using BERT with my own modified training scripts for hp searches on pre-training; however the issue is reproduced on a simple example as shown below.
## To reproduce
The following code snippet is a slight modification from the blog post [here](https://huggingface.co/blog/ray-tune)
```
from datasets import load_dataset, load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
dataset = load_dataset('glue', 'mrpc')
metric = load_metric('glue', 'mrpc')

def encode(examples):
    outputs = tokenizer(
        examples['sentence1'], examples['sentence2'], truncation=True)
    return outputs

encoded_dataset = dataset.map(encode, batched=True)

def model_init():
    return AutoModelForSequenceClassification.from_pretrained(
        'distilbert-base-uncased', return_dict=True)

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = predictions.argmax(axis=-1)
    return metric.compute(predictions=predictions, references=labels)

# Evaluate during training and a bit more often
# than the default to be able to prune bad trials early.
# Disabling tqdm is a matter of preference.
training_args = TrainingArguments(
    "test", eval_steps=500, disable_tqdm=True)

trainer = Trainer(
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=encoded_dataset["train"],
    eval_dataset=encoded_dataset["validation"],
    model_init=model_init,
    compute_metrics=compute_metrics,
)

# Defaut objective is the sum of all metrics
# when metrics are provided, so we have to maximize it.
trainer.hyperparameter_search(
    direction="maximize",
    backend="ray",
    n_trials=10,  # number of trials
)
```
The call to `trainer.hyperparameter_search` creates the following error:
```
TypeError: can't pickle _thread.RLock objects
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-3-af3e9d5e18dd> in <module>
42 direction="maximize",
43 backend="ray",
---> 44 n_trials=10, # number of trials
45 )
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/transformers/trainer.py in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)
1457
1458 run_hp_search = run_hp_search_optuna if backend == HPSearchBackend.OPTUNA else run_hp_search_ray
-> 1459 best_run = run_hp_search(self, n_trials, direction, **kwargs)
1460
1461 self.hp_search_backend = None
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/transformers/integrations.py in run_hp_search_ray(trainer, n_trials, direction, **kwargs)
233 config=trainer.hp_space(None),
234 num_samples=n_trials,
--> 235 **kwargs,
236 )
237 best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3])
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/tune/tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, loggers, ray_auto_init, run_errored_only, global_checkpoint_period, with_server, upload_dir, sync_to_cloud, sync_to_driver, sync_on_checkpoint)
296
297 trial_executor = trial_executor or RayTrialExecutor(
--> 298 reuse_actors=reuse_actors, queue_trials=queue_trials)
299 if isinstance(run_or_experiment, list):
300 experiments = run_or_experiment
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/tune/ray_trial_executor.py in __init__(self, queue_trials, reuse_actors, ray_auto_init, refresh_period)
198 "For cluster usage or custom Ray initialization, "
199 "call `ray.init(...)` before `tune.run`.")
--> 200 ray.init()
201
202 if ray.is_initialized():
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
45 if client_mode_enabled and _client_hook_enabled:
46 return getattr(ray, func.__name__)(*args, **kwargs)
---> 47 return func(*args, **kwargs)
48
49 return wrapper
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/worker.py in init(address, num_cpus, num_gpus, resources, object_store_memory, local_mode, ignore_reinit_error, include_dashboard, dashboard_host, dashboard_port, job_config, configure_logging, logging_level, logging_format, log_to_driver, _enable_object_reconstruction, _redis_max_memory, _plasma_directory, _node_ip_address, _driver_object_store_memory, _memory, _redis_password, _java_worker_options, _temp_dir, _lru_evict, _metrics_export_port, _system_config)
771
772 for hook in _post_init_hooks:
--> 773 hook()
774
775 node_id = global_worker.core_worker.get_current_node_id()
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/tune/registry.py in flush(self)
169 def flush(self):
170 for k, v in self.to_flush.items():
--> 171 self.references[k] = ray.put(v)
172 self.to_flush.clear()
173
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
45 if client_mode_enabled and _client_hook_enabled:
46 return getattr(ray, func.__name__)(*args, **kwargs)
---> 47 return func(*args, **kwargs)
48
49 return wrapper
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/worker.py in put(value)
1487 with profiling.profile("ray.put"):
1488 try:
-> 1489 object_ref = worker.put_object(value)
1490 except ObjectStoreFullError:
1491 logger.info(
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/worker.py in put_object(self, value, object_ref)
267 "inserting with an ObjectRef")
268
--> 269 serialized_value = self.get_serialization_context().serialize(value)
270 # This *must* be the first place that we construct this python
271 # ObjectRef because an entry with 0 local references is created when
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in serialize(self, value)
317 return RawSerializedObject(value)
318 else:
--> 319 return self._serialize_to_msgpack(value)
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in _serialize_to_msgpack(self, value)
297 metadata = ray_constants.OBJECT_METADATA_TYPE_PYTHON
298 pickle5_serialized_object = \
--> 299 self._serialize_to_pickle5(metadata, python_objects)
300 else:
301 pickle5_serialized_object = None
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
257 except Exception as e:
258 self.get_and_clear_contained_object_refs()
--> 259 raise e
260 finally:
261 self.set_out_of_band_serialization()
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/serialization.py in _serialize_to_pickle5(self, metadata, value)
254 self.set_in_band_serialization()
255 inband = pickle.dumps(
--> 256 value, protocol=5, buffer_callback=writer.buffer_callback)
257 except Exception as e:
258 self.get_and_clear_contained_object_refs()
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py in dumps(obj, protocol, buffer_callback)
71 file, protocol=protocol, buffer_callback=buffer_callback
72 )
---> 73 cp.dump(obj)
74 return file.getvalue()
75
~/miniconda3/envs/nupic.research/lib/python3.7/site-packages/ray/cloudpickle/cloudpickle_fast.py in dump(self, obj)
572 def dump(self, obj):
573 try:
--> 574 return Pickler.dump(self, obj)
575 except RuntimeError as e:
576 if "recursion" in e.args[0]:
TypeError: can't pickle _thread.RLock objects
```
## Expected behavior
Ray seems to be pickling the Trainer although it can't. On an earlier version `transformers==4.4.2` this was not an issue. I believe some update in 4.5 is causing this and the above script should be able to complete without any pickling error.
| 05-28-2021 18:41:28 | 05-28-2021 18:41:28 | Same error as this: https://github.com/huggingface/transformers/issues/11249. We are currently looking into this. In the meantime, can you use an earlier version as you suggested? Thanks.<|||||>Will do. Didn't see that other issue there. Thanks for the help. |
transformers | 11,929 | closed | Use `self.assertEqual` instead of `assert` in tests. | IMO the test should use `self.assertEqual` instead of `assert` here:
https://github.com/huggingface/transformers/blob/8d171628fe84bdf92ee40b5375d7265278180f14/tests/test_tokenization_deberta_v2.py#L105-L106
I can provide a PR... | 05-28-2021 16:20:59 | 05-28-2021 16:20:59 | > I can provide a PR...
see #11935 |
transformers | 11,928 | closed | [Flax][WIP] Speed up pretraining | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 05-28-2021 14:22:22 | 05-28-2021 14:22:22 | The whole random masking can be done before each training loop to speed things up -> would be an interesting experiment<|||||>Ok this gives no speed-up actually -> closing |
transformers | 11,927 | closed | add relevant description to tqdm in examples | # What does this PR do?
Fixes #11797. Added description to `dataset.map` to improve tqdm bars. This would tell the user what's being processed. As of now I've only targeted `text-classification`, please let me know if it looks good or if any changes are required so that I can update the rest of them as well.
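Concretely, the change is just passing a `desc` to `map`, along these lines (illustrative sketch — the variable names and exact wording differ per script):
```python
tokenized_datasets = raw_datasets.map(
    preprocess_function,
    batched=True,
    desc="Running tokenizer on dataset",  # shown in the tqdm bar; needs datasets >= 1.7.0
)
```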
I was also thinking we should have `check_min_version` for `datasets` as well as this won't work in dataset versions prior to 1.7.0.
## Who can review?
@stas00 | 05-28-2021 14:07:36 | 05-28-2021 14:07:36 | > I was also thinking we should have `check_min_version` for `datasets` as well as this won't work in dataset versions prior to 1.7.0.
That and also we need to make sure that the CI runs 1.7.0. So probably just bumping the required min version project-wide will resolve this at once.<|||||>@stas00, Datasets 1.8.0 was released yesterday and now all tests are passing in this PR. Please let me know if it looks good or if any changes are required so that I can update the rest of them as well.
@lhoestq can I work on `check_min_version` feature in `datasets`?<|||||>Thank you for the heads up, @bhavitvyamalik!
The code now works:
```
$ python run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir output_dir
[...]
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Running tokenizer on dataset: 100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 22.54ba/s]
Running tokenizer on dataset: 100%|████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 47.85ba/s]
Running tokenizer on dataset: 100%|████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 14.15ba/s]
06/09
```
So what is missing is the version checking. normally we have all the versions setup in `setup.py` and its autogenerated table, and then we just need to do `dep_version_check("datasets")` but here this higher version is only needed for these few scripts. so one way is to code it explicitly:
```
from packaging import version
if version.parse(datasets.__version__) < version.parse("1.8.0"):
    raise ValueError("....")
```
But at the same time is there any reason why we can't just bump transformers's library-wide dependency to `datasets>=1.8.0`?
<|||||>> @lhoestq can I work on check_min_version feature in datasets?
The problem with this one is that it will require a certain version of `datasets` to even work,
<|||||>OK, so 2 things to do:
1. update: `examples/pytorch/text-classification/requirements.txt` to bump up `datasets`:
2. and then in the scripts:
```
from transformers.utils.versions import require_version
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/text-classification/requirements.txt")
```
<|||||>@sgugger, I was yet to replicate this for other examples. I'll raise another PR for that! <|||||>Oops! Sorry, missed that part.<|||||>I like the approach of making the first PR with just 1 or a few examples with a commitment to extend it to other examples in a separate PR (by creating an Issue that will track the replication).
The problem with tackling more than one example is that it puts a huge onus on the developer and reviewers to do much more work if things aren't right right away. But with one example, ideas are bounced around, tried, applied, verified, merged - then if all is good it's now easy to replicate to many more examples.
This is IMHO, of course. |
transformers | 11,926 | closed | Modifying the distill bert architecture | Currently getting the following error while running the distillbert model :
**TypeError: forward() got an unexpected keyword argument 'token_type_ids'**
I constructed the model as follows:
```
class Distill_BERTBaseUncased(nn.Module):
    def __init__(self):
        super(Distill_BERTBaseUncased, self).__init__()
        self.bert = transformers.DistilBertModel.from_pretrained(DISTILL_BERT_PATH, return_dict=False)
        self.bert_drop = nn.Dropout(0.5)
        self.out = nn.Linear(768 * 2, 1)

    def forward(self, ids, mask, token_type_ids):
        o1, _ = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
        mean_pooling = torch.mean(o1, 1)
        max_pooling, _ = torch.max(o1, 1)
        cat = torch.cat((mean_pooling, max_pooling), 1)
        bo = self.bert_drop(cat)
        output = self.out(bo)
        return output

model = Distill_BERTBaseUncased()
model = model.to(device)
```
Please help me in resolving this as I took 3 parameters as input for the model which is present in the forward function in the model class ! | 05-28-2021 13:19:04 | 05-28-2021 13:19:04 | If you look at the [documentation of `DistilBertModel`](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertmodel), you can see it doesn't use token_type_ids.<|||||>> If you look at the [documentation of `DistilBertModel`](https://huggingface.co/transformers/model_doc/distilbert.html#distilbertmodel), you can see it doesn't use token_type_ids.
I removed the token_type_ids but still getting this error !!!!
```
class DISTILLBERTBaseUncased(nn.Module):
    def __init__(self):
        super(DISTILLBERTBaseUncased, self).__init__()
        self.bert = transformers.DistilBertModel.from_pretrained(DISTILL_BERT_PATH, return_dict=False)
        self.bert_drop = nn.Dropout(0.5)
        self.out = nn.Linear(768, 1)

    def forward(self, ids, mask):
        _, x = self.bert(ids, attention_mask=mask)
        bo = self.bert_drop(x)
        output = self.out(bo)
        return output

model = DISTILLBERTBaseUncased()
model = model.to(device)
```

<|||||>Yes it gives you an error since the model does only return a single thing, namely a tuple containing a single element (as you set `return_dict=False`). This is also explained in the documentation by the way. You should replace
`_, x = self.bert(input_ids=ids, attention_mask=mask) `
by
`outputs = self.bert(input_ids=ids, attention_mask=mask)`
You can then access the last hidden states using `outputs[0]`. <|||||>Still facing the error here


<|||||>Yes, `outputs` is a tuple. You should use `outputs[0]`, which is a PyTorch tensor.
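A minimal sketch of what the corrected `forward` could look like (this assumes simple first-token pooling, which is one common choice for DistilBERT — adapt the pooling to your own setup):
```python
def forward(self, ids, mask):
    outputs = self.bert(input_ids=ids, attention_mask=mask)  # a tuple, since return_dict=False
    hidden_states = outputs[0]        # (batch, seq_len, 768)
    pooled = hidden_states[:, 0]      # vector at the first token position
    bo = self.bert_drop(pooled)
    return self.out(bo)               # (batch, 1)
```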
Sorry for interrupting here, but Github issues are not the place to ask such questions. They are related to understanding PyTorch, working with Transformers. For such questions, you can use the [forum](https://discuss.huggingface.co/).
We would like to keep Github issues for bugs/feature requests.
Thanks!<|||||>Thanks, I really appreciate you and the time you put in here in the thread to help me out. Finally, everything seems to be working.
Coolbeas |