repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 16,047 | closed | Spanish translation of the file training.mdx | I made the translation of the **training.mdx** file as a contribution to the transformers/doc/ documentation. There I created a new folder called source_es, where I hosted my training.mdx document translated to Spanish, contributing to the Spanish-speaking community.
[forum](https://github.com/huggingface/transformers/issues/15947)
Approved by @omarespejel
Documentation: @sgugger
| 03-10-2022 14:54:13 | 03-10-2022 14:54:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger Of course, I will make the changes you indicate, leaving the syntax as per the original documentation. I have a question, I see that most of the conflicts are because of the links, do I leave all the links or remove them? I think these are important.<|||||>As soon as the syntax is reviewed by @yharyarias, I will review the Spanish.<|||||>The links are automatically generated by our doc building tool. You should leave them as they are in the `training.mdx` file English version (for instance [`Trainer`]).<|||||>Perfect! I'll follow your suggestions, thank you @sgugger 🙌🤗<|||||>I just uploaded the changes you suggested @sgugger I'll be watching for your review.<|||||>Thanks! Will review the translation.<|||||>I made the last changes suggested by @omarespejel , I think everything is ok, this is my first contribution and there will be more to come, I have loved collaborating with open source. Thank you very much 🤗 @sgugger @LysandreJik and @omarespejel <|||||>Thanks, @yharyarias! There are still a couple of comments unresolved. <|||||>IMO it is good to go :+1: Thank you @yharyarias! <|||||>Thanks again for your PR. Merging this but we will wait a little bit more to serve a doc in Spanish mainly:
- to have a few more pages
- to automatically redirect to English doc when the page is not available in Spanish.
I will put a comment here when the page is live (hopefully in a week or two!)<|||||>@sgugger Thank you, of course I will continue to contribute, I'm currently working on the translation of another document to add to source_es. |
transformers | 16,046 | closed | Docker image nightly torch job | Converts the nightly job to use the updated notification service. | 03-10-2022 13:28:35 | 03-10-2022 13:28:35 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16046). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>This will be superseded by a PR on which @ydshieh is working.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Close as this is already done. |
transformers | 16,045 | closed | Fix duplicate arguments passed to dummy inputs in ONNX export | # What does this PR do?
This PR fixes:
- a bug that was introduced in #15658 where a preprocessor and tokenizer were being passed together to the `generate_dummy_inputs()` function during the ONNX export.
- an oversight in the refactoring of the ONNX config for M2M-100
It also removes problematic TensorFlow integration tests, where the model implementation doesn't have parity with the PyTorch one (e.g. `camembert-base` is missing the causal LM head in TensorFlow). I'll address those issues in separate PRs as it involves touching the TensorFlow modeling files.
With these fixes, all slow ONNX tests now pass in all environments (only `torch`, only `tensorflow`, `torch` and `tensorflow`):
```bash
RUN_SLOW=1 python -m pytest tests/onnx/test_onnx_v2.py
```
cc @michaelbenayoun | 03-10-2022 13:02:48 | 03-10-2022 13:02:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,044 | closed | Fix tf pytorch test in auto | # What does this PR do?
Remove some tests in `tests/auto/test_modeling_tf_pytorch.py`:
- when we load a model using `from_pretrained` from another framework, it returns a single model object instead of a tuple of (model, loading_info), even if we specify `output_loading_info=True`. Some tests expect to have `loading_info`, which causes errors like `TypeError: cannot unpack non-iterable TFGPT2LMHeadModel object`.
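A minimal sketch of the failure mode described above (the checkpoint name is chosen for illustration only):
```python
from transformers import TFGPT2LMHeadModel

# When loading across frameworks, `output_loading_info=True` is effectively ignored
# and a bare model is returned instead of a (model, loading_info) tuple.
result = TFGPT2LMHeadModel.from_pretrained("gpt2", from_pt=True, output_loading_info=True)

# This unpacking therefore fails with
# "TypeError: cannot unpack non-iterable TFGPT2LMHeadModel object":
model, loading_info = result
```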
## Remark
It might be better to document this behavior clearly for the users :-) | 03-10-2022 11:40:09 | 03-10-2022 11:40:09 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16044). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks for looking into this. I would be in favor of trying o return loading_info and test that it is not None maybe
Do you mean we should return (model, None) in `from_pretrained` if `output_loading_info=True` + `load from a different framework`?<|||||>I think @patrickvonplaten is suggesting having the `load_pytorch_checkpoint_in_tf2_model` that is called in the `from_pt` block return the `loading_info` the same way the regular `from_pretrained` does.<|||||>> I think @patrickvonplaten is suggesting having the `load_pytorch_checkpoint_in_tf2_model` that is called in the `from_pt` block return the `loading_info` the same way the regular `from_pretrained` does.
Yeah, this makes more sense! @patrickvonplaten could you confirm?<|||||>Kindly cc @patrickvonplaten and @sgugger (low priority PR)
I tried to return `loading_info` when the loading is across frameworks. You can see the attempt in [this commit](https://github.com/huggingface/transformers/pull/16044/commits/09284cbdca9c099c52e7c6f973c06074dd6f9631).
Before going further, I would like to have some feedbacks:
- For the methods in `src/transformers/modeling_tf_pytorch_utils.py`, in particular [load_tf2_weights_in_pytorch_model](https://github.com/huggingface/transformers/blob/09284cbdca9c099c52e7c6f973c06074dd6f9631/src/transformers/modeling_tf_pytorch_utils.py#L466), should I:
- `return model, loading_info` with a condition `if output_loading_info`
(this has fewer impacts)
- or return a tuple like `return pt_model, missing_keys, unexpected_keys` and let the caller to decide what to use
(more places need to be changed in this case)
This is related to the change in
https://github.com/huggingface/transformers/blob/09284cbdca9c099c52e7c6f973c06074dd6f9631/src/transformers/modeling_utils.py#L1512-L1514
A few line below, the PyTorch version is
`model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_state_dict_into_model(...)`
- We will also have to change the PT/Flax loading methods to take `from_flax` into account.
- [This line](https://github.com/huggingface/transformers/blob/09284cbdca9c099c52e7c6f973c06074dd6f9631/src/transformers/modeling_utils.py#L1506) will be exceptional though. I don't have a clear idea how to deal with it if we insist to have `loading_info` in the final outputs.<|||||>> Kindly cc @patrickvonplaten and @sgugger (low priority PR)
>
> I tried to return loading_info when the loading is across frameworks. You can see the attempt in [this commit](https://github.com/huggingface/transformers/pull/16044/commits/09284cbdca9c099c52e7c6f973c06074dd6f9631).
>
> Before going further, I would like to have some feedbacks:
>
> For the methods in src/transformers/modeling_tf_pytorch_utils.py, in particular [load_tf2_weights_in_pytorch_model](https://github.com/huggingface/transformers/blob/09284cbdca9c099c52e7c6f973c06074dd6f9631/src/transformers/modeling_tf_pytorch_utils.py#L466), should I:
>
> return model, loading_info with a condition if output_loading_info
> (this has fewer impacts)
Yes this sounds good to me! Default it to False for backward compatibility and I think that's a good approach :-)
>
> or return a tuple like return pt_model, missing_keys, unexpected_keys and let the caller to decide what to use
> (more places need to be changed in this case)
>
> This is related to the change in
> [transformers/src/transformers/modeling_utils.py](https://github.com/huggingface/transformers/blob/09284cbdca9c099c52e7c6f973c06074dd6f9631/src/transformers/modeling_utils.py#L1512-L1514)
>
> Lines 1512 to 1514 in [09284cb](https://github.com/huggingface/transformers/commit/09284cbdca9c099c52e7c6f973c06074dd6f9631)
>
> model, loading_info = load_tf2_checkpoint_in_pytorch_model(
> model, resolved_archive_file, allow_missing_keys=True, output_loading_info=True
> )
>
> A few line below, the PyTorch version is
> model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_state_dict_into_model(...)
> We will also have to change the PT/Flax loading methods to take from_flax into account.
>
> [This line](https://github.com/huggingface/transformers/blob/09284cbdca9c099c52e7c6f973c06074dd6f9631/src/transformers/modeling_utils.py#L1506) will be exceptional though. I don't have a clear idea how to deal with it if we insist to have loading_info in the final outputs.
Think this last line is really an edge case and I would be fine with always leaving the loading_info `None` in this case |
transformers | 16,043 | closed | DeBERTa/DeBERTa-v2/SEW Support for torch 1.11 | The internal `torch` method `_softmax_backward_data` changed API between 1.10 and 1.11, from requiring a tensor as its last argument to requiring a size.
This PR updates the concerned models so that they are correctly supported.
Torch 1.11: https://github.com/pytorch/pytorch/blame/e47a5a64bbf4d388b70397e3237f9d5710ee4c9c/tools/autograd/derivatives.yaml#L1861
Before: https://github.com/pytorch/pytorch/blame/768cfaa8f86bf7c7b0af441d1536f060274c27a0/tools/autograd/derivatives.yaml#L1704 | 03-10-2022 11:28:32 | 03-10-2022 11:28:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Addressed your comment @sgugger, could you do a second review?
As seen with Sylvain offline, I've moved the `packaging.version.parse` operation out of the methods, as otherwise it would be called inside the methods themselves, which are called multiple times in forward passes. @patrickvonplaten could you check if that's fine with you?<|||||>@sgugger @LysandreJik Thanks for your awesome work on building this immensely valuable ecosystem (and community!).
I'm waiting to release a package that requires this post-4.17 commit and it would be great to avoid pointing to a specific commit for packaging purposes. Is a 4.17.1 patch release planned? I asked on Discord but this was indicated to be a better forum for the question. Thanks again for helping lead this great community!<|||||>I don't think we have a patch planned. We will have 4.18 released probably next week instead :-) <|||||>> I don't think we have a patch planned. We will have 4.18 released probably next week instead :-)
AWESOME! Looking forward to it!<|||||>Note that this evaluates True for pre-releases such as '1.11.0a0+b6df043'. So the error is still present.
`is_torch_less_than_1_11 = version.parse(version.parse(torch.__version__).base_version) < version.parse("1.11")` may help.
Am I missing something?
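For reference, a short sketch of the check being discussed (assuming only `torch` and `packaging` are installed):
```python
import torch
from packaging import version

# `base_version` strips pre-release/local suffixes such as "a0+b6df043", so a
# nightly build like "1.11.0a0+b6df043" is no longer treated as being below 1.11.
torch_base_version = version.parse(version.parse(torch.__version__).base_version)
is_torch_less_than_1_11 = torch_base_version < version.parse("1.11")
```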
|
transformers | 16,042 | closed | [README] fix url for Preprocessing tutorial | # What does this PR do?
Fix url for Preprocessing tutorial | 03-10-2022 10:58:23 | 03-10-2022 10:58:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,041 | closed | Fix Bug in Flax-Speech-Encoder-Decoder Test | This PR fixes a bug in a Flax-Speech-Encoder-Decoder test that was failing after push (https://github.com/huggingface/transformers/runs/5493110105?check_suite_focus=true). Specifically, it amends the `test_freeze_feature_encoder` test to **omit** the random `decoder_attention_mask` from the input arguments of the speech-encoder-decoder model. This random `decoder_attention_mask` was resulting in `nan` values on the output logits of the `FlaxWav2Vec2BartModelTest`. Removing it as an input results in real valued output logits, and the test of concern passes following this change. The behaviour of the FlaxBartForCausalLM model to output `nan` values given a random `decoder_attention_mask` has been noted. | 03-10-2022 10:33:55 | 03-10-2022 10:33:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,040 | closed | Use decoder_inputs in generate function | # 🚀 Feature request
Use the generate function with some initial decoder inputs.
## Motivation
Imagine I have a trained encoder-decoder transformer and want to generate some output for a given input, but I also have part of the output completed. How do I pass decoder_inputs to the generate method for the model to start the beam search decoding with some initial decoder inputs?
I don't think that it is possible nowadays, even with the `model_kwargs` argument.
## Your contribution
I'm available to help change the `generate` function to make it work. Just need someone to point me to the needed changes to make it easier.
| 03-10-2022 10:31:17 | 03-10-2022 10:31:17 | it's possible to pass `decoder_input_ids` to `generate`, it takes it as a keyword argument.
```python
model.generate(..., decoder_input_ids=decoder_input_ids)
```<|||||>> it's possible to pass `decoder_input_ids` to `generate`, it takes it as a keyword argument.
>
> ```python
> model.generate(..., decoder_input_ids=decoder_input_ids)
> ```
As I said, `'I don't think that it is possible nowadays, even with the model_kwargs argument.'` Even if it accepts the argument, it does not perform the generate method as intended<|||||>> Even if it accepts the argument, it does not perform the generate method as intended
What do you mean by it does not perform as intended ? Could you post the code-snippet to re-produce what the exact issue is ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Just tried it and it's working, it was either solved or I was mistaken. Thank you! |
transformers | 16,039 | closed | How to add uppercase letter in chinese-roberta-wwm-ext-large vocab? | I found chinese-roberta-wwm-ext-large vocab without uppercase letter
?How to add uppercase letter in vocab?
chinese-roberta-wwm-ext-large: https://huggingface.co/hfl/chinese-roberta-wwm-ext-large/tree/main
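For what it's worth, a hedged sketch of the usual way to extend a pretrained vocabulary (illustrative only; note that if the tokenizer lowercases its input, that setting would also need to change for uppercase tokens to be used):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-roberta-wwm-ext-large")
model = AutoModel.from_pretrained("hfl/chinese-roberta-wwm-ext-large")

uppercase_letters = [chr(c) for c in range(ord("A"), ord("Z") + 1)]
num_added = tokenizer.add_tokens(uppercase_letters)   # extend the vocabulary
model.resize_token_embeddings(len(tokenizer))         # new embedding rows are randomly initialized
```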
| 03-10-2022 08:14:11 | 03-10-2022 08:14:11 | Hi @scaler2017 👋 We try to reserve GitHub issues for bugs and unexpected code behavior. For further other questions, please use our forum here: https://discuss.huggingface.co/ :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,038 | closed | what us the difference between Trainer and Seq2SeqTrainer ? | Anybody? | 03-10-2022 08:00:26 | 03-10-2022 08:00:26 | I advise you to use Google, it's pretty great.
<img width="892" alt="Screenshot 2022-03-10 at 09 41 21" src="https://user-images.githubusercontent.com/48327001/157622136-0a1d5ced-20dd-4d41-bbad-a1d0088f02f3.png">
=> answer explained here: https://discuss.huggingface.co/t/trainer-vs-seq2seqtrainer/3145 |
transformers | 16,037 | closed | The documentation of transformers.generation_utils.GenerationMixin.generate doesn't match the code of it | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux
- Python version: 3.8.12
- PyTorch version (GPU?): 1.7.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@sgugger@patrickvonplaten @narsil
## Information
The documentation of transformers.generation_utils.GenerationMixin.generate shows that many parameters have default values, but the code shows that all parameters are set to None by default.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
It is about documentation, just look at it.
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 03-10-2022 05:10:30 | 03-10-2022 05:10:30 | Hi @zhaowei-wang98 ,
The documentation is sort of correct I think.
The values will be defaulted from the config values of the model, which themselves have a default value which is the one provided in the docs: check here how those variables are initialized https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L1062
So even though, technically they are None by default, they will actually inherit the default of the config values (which can be overridden too).
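For example (a minimal illustration of the two ways these values can be set; the checkpoint is arbitrary):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")

# Without explicit arguments, generate() falls back to the model config
# (model.config.num_beams, model.config.max_length, ...).
default_output = model.generate(**inputs)

# Keyword arguments passed to generate() override the config values for this call only.
beam_output = model.generate(**inputs, num_beams=4, max_length=32)
```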
It seems `max_length` does say the default value comes from the config, whereas other values don't necessarily.
@patrickvonplaten Should we harmonize?
So I think the doc reflects reality as much as possible. https://huggingface.co/docs/transformers/v4.17.0/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate
But maybe some value is incorrectly reported in particular ?
Did you encounter a particular issue with the default values ?<|||||>We do write this in the docs:
> Apart from inputs, all the arguments below will default to the value of the attribute of the same name inside the [PretrainedConfig](https://huggingface.co/docs/transformers/v4.17.0/en/main_classes/configuration#transformers.PretrainedConfig) of the model. The default values indicated are the default values of those config.
Let's maybe wrap this message in a warning to make it clearer? Putting it in my generate doc refactor PR: https://github.com/huggingface/transformers/pull/15988<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,036 | closed | RuntimeError: [enforce fail at CPUAllocator.cpp:68] Out of memory during batched inference | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2
- Platform: p3.24xlarge
- Python version: 3.6
- PyTorch version (GPU?): 1.10.0+cu102
- Tensorflow version (GPU?): 2.6
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
@LysandreJik
## Information
I am using codebert model(RoBerta) for inference by wrapping it using huggingface. Below, I am trying to retrieve and store the embeddings that is output from the model.
Code:
```
from transformers import RobertaTokenizer, RobertaModel
import torch
tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
# this is the pretrained model which we plan to fine-tune/transfer-learn further
model = RobertaModel.from_pretrained("codebert")
batch_size = 8
# Data is already loaded
tokens = tokenizer(data, return_tensors="pt", truncation=True, max_length=768, padding=True)["input_ids"]
conc = torch.empty(size=(len(tokens), 768))
model.eval()
for i in range(0, len(tokens), batch_size):
conc[i:i+batch_size] = model(tokens[i:i+batch_size])[1]
```
The shape of tokens is 5000 * 768 .
System is a p3.24xlarge machine with about 700GB of RAM.
Why is the model giving "DefaultCPUAllocator: can't allocate memory" even after batching the input and the system having large RAM?
| 03-10-2022 04:05:46 | 03-10-2022 04:05:46 | Have you tried with a smaller amount of tokens to see where it fails?<|||||>Yes, I ran the "top" command in shell and observed that memory almost maxes out to 99% for just 2000. But, I was able to get over the memory issue by detaching the tensor and converting it to cpu() and then numpy() .
```model(tokens[i:i+batch_size])[1].detach().cpu().numpy()```. My problem is fixed but just wondering what was eating up the memory. |
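For reference, a likely explanation and a hedged sketch reusing the variable names from the snippet above: each `model(...)` call records an autograd graph, and storing the un-detached outputs in `conc` keeps every batch's graph and activations alive; running the loop under `torch.no_grad()` (or detaching, as done above) frees that memory per batch.
```python
import torch

conc = torch.empty(len(tokens), 768)
model.eval()
with torch.no_grad():  # no autograd graph is recorded, so per-batch activations are freed
    for i in range(0, len(tokens), batch_size):
        conc[i:i + batch_size] = model(tokens[i:i + batch_size])[1]
```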
transformers | 16,035 | closed | raise ValueError(f"Unrecognized tokenizer_type {tokenizer_type}") ValueError: Unrecognized tokenizer_type BertWordPieceCase | BertWordPieceCase is actually a builtin tokenizer inside Megatron, but when I use `convert_megatron_gpt2_checkpoint.py` to convert, I get this error:
```
raise ValueError(f"Unrecognized tokenizer_type {tokenizer_type}")
ValueError: Unrecognized tokenizer_type BertWordPieceCase
```
Just wondering, how can I make it supported? Does it also work if I use the default `gpt2` tokenizer? Is there any alternative in transformers in terms of `BertWordPieceCase`? | 03-10-2022 03:40:04 | 03-10-2022 03:40:04 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @jinfagang
Can you kindly tell me how you incorporated the BertWordPieceCase tokenizer into the Megatron model? I am trying to do the same.
Thanks! |
transformers | 16,034 | closed | Make BigBird model compatiable to fp16 dtype. | # What does this PR do?
Currently, the BigBird model doesn't support fp16 data type with a `.half()` call.
For example, the following code snippet:
```python
from transformers import *
import torch
if __name__ == "__main__":
device = 'cuda'
config = BigBirdConfig(attention_type="block_sparse",)
model = AutoModelForMaskedLM.from_config(config).to(device)
eval_context = torch.randint(0, config.vocab_size, (1, 4096)).to(device)
example_inputs = {'input_ids': eval_context, }
model = model.half()
model(**example_inputs)
```
will fail with the following error message:
```
Traceback (most recent call last):
File "1.py", line 11, in <module>
model(**example_inputs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2351, in forward
outputs = self.bert(
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 2081, in forward
encoder_outputs = self.encoder(
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1615, in forward
layer_outputs = layer_module(
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1467, in forward
self_attention_outputs = self.attention(
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1385, in forward
attention_output = self.output(self_outputs[0], hidden_states)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/transformers/models/big_bird/modeling_big_bird.py", line 1300, in forward
hidden_states = self.dense(hidden_states)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/xzhao9/data/miniconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: expected scalar type Float but found Half
```
It runs fine after merging this PR.
Other Hugging Face models, such as `hf_Bert`, don't have this problem. For example, `hf_Bert` supports fp16 via [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L314).
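For context, a hedged sketch of the general pattern referred to above (this is not the exact diff in this PR, and the helper name is ours): tensors built inside the forward pass are cast to the dtype of the hidden states so they follow the model after `.half()`.
```python
import torch

def additive_attention_mask(attention_mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
    # Hard-coding float32 here is what breaks `.half()`; using the hidden-state
    # dtype keeps the mask compatible with both fp32 and fp16 runs.
    mask = attention_mask[:, None, None, :].to(dtype)
    return (1.0 - mask) * torch.finfo(dtype).min

hidden_states = torch.randn(2, 8, 16, dtype=torch.float16)
attention_mask = torch.ones(2, 8, dtype=torch.long)
bias = additive_attention_mask(attention_mask, hidden_states.dtype)  # float16, no dtype mismatch
```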
| 03-10-2022 01:13:21 | 03-10-2022 01:13:21 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can I get a review, @patrickvonplaten ?<|||||>Addressed comments and rebased on master branch. Fix CI errors. Pending another review from @patrickvonplaten <|||||>Awesome thanks! |
transformers | 16,033 | closed | Fix dependency error message in ServeCommand | # What does this PR do?
Fix dependency error message in ServeComman, where "uvicorn" is misspelled as "unicorn".
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 03-10-2022 00:00:19 | 03-10-2022 00:00:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,032 | closed | how can we use the model output to predict? | Hi, I am using transformers to do generation, especially with the ```m2m100``` model. We can easily call ```model.generate()``` to generate sentences, but cannot do prediction using the raw model outputs. Can anyone help me? @patrickvonplaten @Narsil
For example, in the following code, what should we do after we have ```outputs``` from the model to generate the translation text?
```
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="fr")
src_text = "Life is like a box of chocolates."
tgt_text = "La vie est comme une boîte de chocolat."
model_inputs = tokenizer(src_text, return_tensors="pt")
with tokenizer.as_target_tokenizer():
labels = tokenizer(tgt_text, return_tensors="pt").input_ids
outputs = model(**model_inputs, labels=labels)
```
I mean, can we use the ```logits``` to generate the text? Any suggestions are welcome. | 03-09-2022 21:23:44 | 03-09-2022 21:23:44 | Hi
I've answered this question on our forum here: https://discuss.huggingface.co/t/generate-without-using-the-generate-method/11379?u=nielsr<|||||>For such questions, please use the forum rather than GitHub issues, which are meant for bugs/feature requests.
Thanks!<|||||>@NielsRogge Thanks for the reply, this really helped me and solved my problems. |
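For completeness, a hedged sketch of greedy decoding directly from the logits (illustrative only: `model`, `tokenizer` and `model_inputs` are as in the snippet above, and for M2M100 the target-language token is forced right after the decoder start token):
```python
import torch

# Seed the decoder with the start token followed by the forced target-language token.
decoder_input_ids = torch.tensor(
    [[model.config.decoder_start_token_id, tokenizer.get_lang_id("fr")]]
)

for _ in range(64):  # maximum number of new tokens
    logits = model(**model_inputs, decoder_input_ids=decoder_input_ids).logits
    next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)  # greedy pick
    decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
    if next_token.item() == model.config.eos_token_id:
        break

print(tokenizer.batch_decode(decoder_input_ids, skip_special_tokens=True))
```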
transformers | 16,031 | closed | Fix TFDebertaV2ConvLayer in TFDebertaV2Model | # What does this PR do?
Fix a [CI failure for TFDebertaV2Model](https://github.com/huggingface/transformers/runs/5474315111?check_suite_focus=true), caused by the mistake in `TFDebertaV2ConvLayer`.
## Remark
This test `test_inference_no_head` also fails with the version in #13120. I think this slow test was not run manually to ensure it works before being merged to master.
## Code to demonstrate the issue and the effect of this PR
This is adapted from [test_inference_no_head](https://github.com/huggingface/transformers/blob/a69e185074fff529ed60d936c6afe05580aee8ac/tests/deberta_v2/test_modeling_tf_deberta_v2.py#L269)
```python
########## Prep ##########
import numpy as np
import torch
import tensorflow as tf
from transformers import DebertaV2Model, TFDebertaV2Model
input_ids = np.array([[0, 31414, 232, 328, 740, 1140, 12695, 69, 46078, 1588, 2]], dtype=np.int32)
attention_mask = np.array([[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], dtype=np.int32)
########## PT ##########
pt_model = DebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")
input_ids_pt = torch.from_numpy(input_ids)
attention_mask_pt = torch.from_numpy(attention_mask)
pt_output = pt_model(input_ids_pt, attention_mask=attention_mask_pt)[0]
# compare the actual values for a slice.
pt_expected_slice = torch.tensor(
[[[0.2356, 0.1948, 0.0369], [-0.1063, 0.3586, -0.5152], [-0.6399, -0.0259, -0.2525]]]
)
pt_output_slice = pt_output[:, 1:4, 1:4]
pt_slice_diff = np.abs(pt_expected_slice.detach().to("cpu").numpy() - pt_output_slice.detach().to("cpu").numpy())
max_pt_slice_diff = np.amax(pt_slice_diff)
print(f"max_pt_slice_diff = {max_pt_slice_diff}")
########## TF ##########
tf_model = TFDebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")
input_ids_tf = tf.constant(input_ids)
attention_mask_tf = tf.constant(attention_mask)
tf_output = tf_model(input_ids_tf, attention_mask=attention_mask_tf)[0]
# compare the actual values for a slice.
tf_expected_slice = tf.constant(
[[[0.2356, 0.1948, 0.0369], [-0.1063, 0.3586, -0.5152], [-0.6399, -0.0259, -0.2525]]]
)
tf_output_slice = tf_output[:, 1:4, 1:4]
tf_slice_diff = tf_expected_slice.numpy() - tf_output_slice.numpy()
max_tf_slice_diff = np.amax(tf_slice_diff)
print(f"max_tf_slice_diff = {max_tf_slice_diff}")
########## PT-TF ##########
max_pt_tf_diff = np.amax(np.abs(pt_output.detach().to("cpu").numpy() - tf_output.numpy()))
print(f"maximal pt_tf_diff = {max_pt_tf_diff}")
```
This scripts gives
Before this PR
```python
max_pt_slice_diff = 5.037523806095123e-05
max_tf_slice_diff = 0.5608187317848206
maximal pt_tf_diff = 5.981985092163086
```
With this PR:
```python
max_pt_slice_diff = 5.037523806095123e-05
max_tf_slice_diff = 4.8374757170677185e-05
maximal pt_tf_diff = 0.000133514404296875
```
| 03-09-2022 20:55:21 | 03-09-2022 20:55:21 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,030 | closed | Framework split | # What does this PR do?
This PR prepares the Transformers repo for the new syntax for framework-specific content. It should be merged once https://github.com/huggingface/doc-builder/pull/130 and https://github.com/huggingface/doc-builder/pull/63 are merged.
Note to @stevhliu : This only concerns existing code with switches, not text (except in the cases where the text dependent on PT vs TF was just before the code). It's probably worth doing a pass on the code to see what paragraphs could be split in framework specific blocks once this PR is merged. For instance, all task tutorials could have the whole fine-tuning in two such blocks. | 03-09-2022 20:20:13 | 03-09-2022 20:20:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16030). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,029 | closed | Don't compute metrics in LM examples on TPU | # What does this PR do?
This PR removes the accuracy computation in the LM examples on TPU, as TPUs can't handle the accumulation of the model logits.
Patches #16005 A better fix would be to be able to gather those logits on TPU but waiting for advice from the pytorch XLA team on this. | 03-09-2022 19:25:37 | 03-09-2022 19:25:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thank you for the fix! |
transformers | 16,028 | closed | Update to 4.18.0dev0 | null | 03-09-2022 18:09:42 | 03-09-2022 18:09:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,027 | closed | Visual Attention Network (VAN) | # What does this PR do?
This PR adds [Visual Attention Network](https://arxiv.org/pdf/2202.09741.pdf).
Currently, the model can be used as follows
```python
import requests
from io import BytesIO
res = requests.get('https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/COCO/000000039769.png?raw=true')
image = Image.open(BytesIO(res.content))
feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/van-base")
model = VanForImageClassification.from_pretrained("zuppif/van-base").eval()
inputs = feature_extractor(image, return_tensors="pt")
outputs = model(**inputs)
print(model.config.id2label[torch.argmax(outputs.logits).item()])
# tabby, tabby cat
```
## TODO
- [x] modeling
- [x] weights
- [x] doc
- [x] tests
| 03-09-2022 18:07:13 | 03-09-2022 18:07:13 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the reviews. I've resolved all the comments that can be resolved and asked the authors if we can create an organization for them in the hub. |
transformers | 16,026 | closed | TypeError: forward() got an unexpected keyword argument 'return_dict' BERT CLASSIFICATION HUGGINGFACE with tuning | I'm stuck with this model; new errors come up in my code every day! Anyway, I'm trying to implement a BERT classifier to discriminate between 2 sequence classes (BINARY CLASSIFICATION), with Ax hyperparameter tuning.
This is all my code, preceded by a sample of my datasets (I have 3 CSVs: train, test and val). Thank you very much! Any help or suggestions will be very useful!
```
df_train=pd.read_csv('CLASSIFIER_train',sep=',',header=None)
df_train
0 1
M A T T D R P T P D G T D A I D L T T R V R R... 1
M K K L F Q T E P L L E L F N C N E L R I I G... 0
M L V A A A V C P H P P L L I P E L A A G A A... 1
M I V A W G N S G S G L L I L I L S L A V S A... 0
M V E E G R R L A A L H P N I V V K L P T T E... 1
M G S K V S K N A L V F N V L Q A L R E G L T... 1
M P S K E T S P A E R M A R D E Y Y M R L A M... 1
M V K E Y A L E W I D G Y R E R L V K V S D A... 1
M G T A A S Q D R A A M A E A A Q R V G D S F... 0
```
```
# Imports required by the snippet below; `tokenizer`, `BertModel`, `optimize`, etc. are defined further down or elsewhere.
import torch
from torch import nn, optim
from torch.utils.data import Dataset, DataLoader

class SequenceDataset(Dataset):
def __init__(self, sequences, targets, tokenizer, max_len):
self.sequences = sequences
self.targets = targets
self.tokenizer = tokenizer
self.max_len = max_len
def __len__(self):
return len(self.sequences)
def __getitem__(self, item):
sequences = str(self.sequences[item])
target = self.targets[item]
encoding = self.tokenizer.encode_plus(
sequences,
add_special_tokens=True,
max_length=self.max_len,
return_token_type_ids=False,
pad_to_max_length=True,
return_attention_mask=True,
return_tensors='pt',
)
return {
'sequences_text': sequences,
'input_ids': encoding['input_ids'].flatten(),
'attention_mask': encoding['attention_mask'].flatten(),
'targets': torch.tensor(target, dtype=torch.long)
}
def create_data_loader(df, tokenizer, max_len, batch_size):
ds = SequenceDataset(
sequences=df[0].to_numpy(),
targets=df[1].to_numpy(),
tokenizer=tokenizer,
max_len=max_len
)
return DataLoader(
ds,
batch_size=batch_size,
num_workers=2,
shuffle=True
)
def net_train(net, train_data_loader, parameters, dtype, device):
net.to(dtype=dtype, device=device)
# Define loss and optimizer
#criterion = nn.CrossEntropyLoss()
criterion = nn.NLLLoss()
optimizer = optim.SGD(net.parameters(), # or any optimizer you prefer
lr=parameters.get("lr", 0.001), # 0.001 is used if no lr is specified
momentum=parameters.get("momentum", 0.9)
)
scheduler = optim.lr_scheduler.StepLR(
optimizer,
step_size=int(parameters.get("step_size", 30)),
gamma=parameters.get("gamma", 1.0), # default is no learning rate decay
)
num_epochs = parameters.get("num_epochs", 3) # Play around with epoch number
# Train Network
# Train Network
for _ in range(num_epochs):
# Your dataloader returns a dictionary
# so access it as such
for batch in train_data_loader:
# move data to proper dtype and device
labels = batch['targets'].to(device=device)
attention_mask = batch['attention_mask'].to(device=device)
input_ids = batch['input_ids'].to(device=device)
#labels = labels.long()
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs,x= net(input_ids, attention_mask,return_dict=True)
#outputs,x= net(input_ids,atten_mask)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
scheduler.step()
return net
class BERT_Arch(nn.Module):
def __init__(self, bert):
super(BERT_Arch, self).__init__()
self.bert = bert
# dropout layer
self.dropout = nn.Dropout(0.1)
# relu activation function
self.relu = nn.ReLU()
# dense layer 1
self.fc1 = nn.Linear(1024,512)
# dense layer 2 (Output layer)
self.fc2 = nn.Linear(512,1)
#softmax activation function
self.softmax = nn.LogSoftmax(dim=1)
#define the forward pass
def forward(self, input_ids, attention_mask ):
#pass the inputs to the model
_, cls_hs = self.bert(input_ids, attention_mask,return_dict=False)
x = self.fc1(cls_hs)
x = self.relu(x)
x = self.dropout(x)
# output layer
x = self.fc2(x)
# apply softmax activation
x = self.softmax(x)
return x
from transformers import AutoModel
# import BERT-base pretrained model
bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
from transformers.models.bert.modeling_bert import BertForSequenceClassification
def init_net(parameterization):
model = BERT_Arch(bert) #pretrained ResNet50
# push the model to GPU
model = model.to(device)
# The depth of unfreezing is also a hyperparameter
for param in model.parameters():
param.requires_grad = False # Freeze feature extractor
return model # return untrained model
def train_evaluate(parameterization):
# constructing a new training data loader allows us to tune the batch size
train_data_loader=create_data_loader(df_train, tokenizer, MAX_LEN, batch_size=parameterization.get("batchsize", 32))
# Get neural net
untrained_net = init_net(parameterization)
# train
trained_net = net_train(net=untrained_net, train_data_loader=train_data_loader,
parameters=parameterization, dtype=dtype, device=device)
# return the accuracy of the model as it was trained in this run
return evaluate(
net=trained_net,
data_loader=test_data_loader,
dtype=dtype,
device=device,
)
dtype = torch.float
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
best_parameters, values, experiment, model = optimize(
parameters=[
{"name": "lr", "type": "range", "bounds": [1e-6, 0.4], "log_scale": True},
{"name": "batchsize", "type": "range", "bounds": [16, 128]},
{"name": "momentum", "type": "range", "bounds": [0.0, 1.0]},
#{"name": "max_epoch", "type": "range", "bounds": [1, 30]},
#{"name": "stepsize", "type": "range", "bounds": [20, 40]},
],
evaluation_function=train_evaluate,
objective_name='accuracy',
)
print(best_parameters)
means, covariances = values
print(means)
print(covariances)
```
```
File "<ipython-input-61-aa60b2f44317>", line 35, in net_train
outputs,x= net(input_ids, attention_mask,return_dict=True)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'return_dict'
``` | 03-09-2022 17:03:18 | 03-09-2022 17:03:18 | @Ch-rode
There are several issues in your current script, let's try to fix them one by one.
First, the `forward()` function of your model is raising this error because it takes only two arguments
```python3
def forward(self, input_ids, attention_mask)
```
so correct approach would be
```python3
def forward(self, input_ids, attention_mask, return_dict)
```
But this alone won't solve all the issues.
In that same method you have used this
```python3
#pass the inputs to the model
_, cls_hs = self.bert(input_ids, attention_mask,return_dict=False)
```
This will not work. So let's try to understand what you might want to achieve here. You are using `BERT` as your classification model and adding an `fcn` on top of it. `BERT` outputs embeddings of size `[batch_size, num_tokens, depth]`, and for each input the first vector you get is the `[CLS]` token vector with `[1, depth]` dimensions. So use that vector for classification.
```python3
cls_hs = self.bert(input_ids, attention_mask,return_dict=return_dict)['last_hidden_state'][:,0,:]
```
Now make sure the input size of the first dense layer matches the hidden size of the checkpoint you load (`768` for BERT-base models, `1024` for BERT-large models), i.e. check the depth value in
```python3
# dense layer 1
self.fc1 = nn.Linear(1024,512)
```
which, for a base-sized checkpoint, might need to be
```python3
# dense layer 1
self.fc1 = nn.Linear(768,512)
```
lastly change this
```python3
# forward + backward + optimize
outputs,x= net(input_ids, attention_mask,return_dict=True)
```
to this
```python3
# forward + backward + optimize
outputs= net(input_ids, attention_mask,return_dict=True)
```
Let me know if you face more issues,<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,025 | closed | [CI] switching CI to pytorch-1.11 | Heads up: pytorch-1.11 will be released tomorrow
@LysandreJik, I recommend we switch our CI to it once it's released - note that the nightly is 1.12-to-be, so it should remain as it is now.
I have already requested torch-scatter binary wheels for 1.11, which will be needed for the switch: https://github.com/rusty1s/pytorch_scatter/issues/276
we can use this issue to track if anything else is needed. | 03-09-2022 16:51:11 | 03-09-2022 16:51:11 | @LysandreJik, `pip install torch-scatter -f https://data.pyg.org/whl/torch-1.11.0+cu113.html` is already avaialble,
`pip install torch-scatter -f https://data.pyg.org/whl/torch-1.11.0+cu115.html` should be available on Friday.
So no show-stoppers to switch to pt-1.11 now as soon as it's released.
At your convenience of course. I was just trying to make sure that all components are ready.<|||||>CI was switched to PyTorch 1.11 on the release date, `pytorch-scatter` was updated with the link you shared above. Thank you, for helping track it! |
transformers | 16,024 | closed | Doc builder fix push 2 | # What does this PR do?
try fix #16020
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-09-2022 16:47:02 | 03-09-2022 16:47:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,023 | closed | Fix warning message in ElectraForCausalLM | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes warning message in ElectraForCausalLM
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-09-2022 16:43:41 | 03-09-2022 16:43:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,022 | closed | JAX 0.2.22: Replace all deprecated `jax.ops` operations with jnp's `at` | The `jax.ops.index_update` function used for in-place `jax.ndarray` operations was deprecated in [JAX 0.2.22](https://jax.readthedocs.io/en/latest/changelog.html?highlight=0.2.22#jax-0-2-22-oct-12-2021). It is advised that any in-place array modifications be made with [`jax.numpy.ndarray.at`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html). We should update the following scripts to conform to this new approach:
- [x] https://github.com/huggingface/transformers/blob/baab5e7cdf04c5b2cd209de4e9af6cb2c51a30d2/src/transformers/generation_flax_logits_process.py#L145
- [x] https://github.com/huggingface/transformers/blob/d3ae2bd3cf9fc1c3c9c9279a8bae740d1fd74f34/tests/generation/test_generation_flax_utils.py#L222
- [x] https://github.com/huggingface/transformers/blob/2596f95e8499bf350b18e1fa0492d38b6f8148fa/src/transformers/models/marian/modeling_flax_marian.py#L895
- [x] https://github.com/huggingface/transformers/blob/60b81dfa6faae3aa90c34a7df9304036f513d055/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py#L960
- [x] https://github.com/huggingface/transformers/blob/d25e25ee2b63ebfcd099deb689a5a7272574a10f/src/transformers/models/xglm/modeling_flax_xglm.py#L134
- [x] https://github.com/huggingface/transformers/blob/7732d0fe7a759c9844215920e9f1c5540eafb1a6/src/transformers/models/big_bird/modeling_flax_big_bird.py#L2127
- [x] https://github.com/huggingface/transformers/blob/2596f95e8499bf350b18e1fa0492d38b6f8148fa/src/transformers/models/blenderbot/modeling_flax_blenderbot.py#L886
- [x] https://github.com/huggingface/transformers/blob/2596f95e8499bf350b18e1fa0492d38b6f8148fa/src/transformers/models/blenderbot_small/modeling_flax_blenderbot_small.py#L898
- [x] https://github.com/huggingface/transformers/blob/2596f95e8499bf350b18e1fa0492d38b6f8148fa/src/transformers/models/bart/modeling_flax_bart.py#L925
- [x] https://github.com/huggingface/transformers/blob/2596f95e8499bf350b18e1fa0492d38b6f8148fa/src/transformers/models/mbart/modeling_flax_mbart.py#L949 | 03-09-2022 16:32:36 | 03-09-2022 16:32:36 | |
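For reference, the change at each call site above is mechanical; a generic sketch (not the exact lines being replaced):
```python
import jax.numpy as jnp

x = jnp.zeros((4, 4))

# Deprecated since JAX 0.2.22:
#   x = jax.ops.index_update(x, jax.ops.index[0, :], 1.0)
# Replacement using the functional .at[] API (returns a new array):
x = x.at[0, :].set(1.0)
x = x.at[1, 2].add(5.0)  # jax.ops.index_add -> .at[...].add(...)
```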
transformers | 16,021 | closed | Fix Bug in Flax Seq2Seq Models | This PR makes two changes to each of `FlaxEncoderDecoderModel` and `FlaxSpeechEncoderDecoderModel`:
1. Amends the input docstrings to remove incorrect information about the model "_shifting tokens right for denoising_". In Flax, `decoder_input_ids` are obtained by shifting the target labels right **outside** of the seq2seq model, not **within** as stated in the docstrings.
2. Raises a `ValueError` if `decoder_input_ids` are not provided. The current behaviour allows for `decoder_input_ids` to be omitted, in which case they default to `None`. This causes errors when `decoder_input_ids=None` is manipulated with JAX functions to build the `decoder_attention_mask` and `decoder_position_ids` should they be omitted from the arguments too.
The following code snippet throws the error aforementioned in 2:
```python
from transformers import FlaxSpeechEncoderDecoderModel
import jax.numpy as jnp
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained('hf-internal-testing/tiny-random-wav2vec2', 'hf-internal-testing/tiny-random-gpt2', encoder_from_pt=True, decoder_from_pt=True)
inputs = jnp.ones((2, 5000), dtype=jnp.float32)
outputs = model(inputs)
```
Output:
```python
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/sanchitgandhi/transformers/src/transformers/models/speech_encoder_decoder/modeling_flax_speech_encoder_decoder.py", line 688, in __call__
decoder_attention_mask = jnp.ones_like(decoder_input_ids)
File "/Users/sanchitgandhi/venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 3706, in ones_like
_check_arraylike("ones_like", a)
File "/Users/sanchitgandhi/venv/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py", line 570, in _check_arraylike
raise TypeError(msg.format(fun_name, type(arg), pos))
TypeError: ones_like requires ndarray or scalar arguments, got <class 'NoneType'> at position 0.
``` | 03-09-2022 16:06:42 | 03-09-2022 16:06:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,020 | closed | Build the doc in a separate folder then move it | # What does this PR do?
To fix the issue pointed out in #16019 this removes the need for stashing by:
- building the doc in a folder outside any repos
- pull the distant repos once the doc build is finished
- then move over the built doc to those repos and push (a rough sketch of this sequence is shown below)
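A minimal sketch of the new sequence (the build command and all paths here are placeholders, not the actual workflow code):
```bash
# 1. build the doc into a folder outside any repo (placeholder command)
build_docs --output /tmp/doc-build-output
# 2. pull the distant repo only once the build has finished
git clone https://github.com/huggingface/doc-build /tmp/doc-build
# 3. move the built doc over and push
cp -r /tmp/doc-build-output/. /tmp/doc-build/transformers/master/en/
cd /tmp/doc-build && git add . && git commit -m "Update transformers doc" && git push
```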
TODO: the dev job as well once the main job has been tested | 03-09-2022 16:05:56 | 03-09-2022 16:05:56 | |
transformers | 16,019 | closed | Doc build bug? | # What does this PR do?
If you compare doc-build [transformers](https://github.com/huggingface/doc-build/tree/main/transformers/master/en) vs [datasets](https://github.com/huggingface/doc-build/tree/main/datasets/master/en), you see that the transformers master version files are NOT being overwritten (they should be overwritten).
| tsfms | dtsts |
|-------|-------|
| <img width="1374" alt="Screenshot 2022-03-09 at 16 16 49" src="https://user-images.githubusercontent.com/11827707/157473832-8726ce0f-1ddd-4e6b-9052-20365f3fbb76.png"> | <img width="1477" alt="Screenshot 2022-03-09 at 16 18 19" src="https://user-images.githubusercontent.com/11827707/157473852-6b7294f4-db09-4c54-900b-0a27a7e349ec.png"> |
When I compared build_documentation.yml between [tsfms](https://github.com/huggingface/transformers/blob/master/.github%2Fworkflows%2Fbuild_documentation.yml#L100) & [datasets](https://github.com/huggingface/datasets/blob/master/.github%2Fworkflows%2Fbuild_documentation.yml#L72-L79), I see this diff:
```
git stash && git pull && git stash apply &&
```
I don't see https://github.com/huggingface/doc-builder/pull/131 being applied when I go to https://huggingface.co/docs/transformers/master/en/model_doc/bert
| 03-09-2022 15:28:09 | 03-09-2022 15:28:09 | Removing this line will cause an error each time the doc-build repo has been updated by another commit for another library.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16019). All of your documentation changes will be reflected on that endpoint.<|||||>Ugh, it does look like it prevents any update though, which is weird.<|||||>I see
However, in this img https://user-images.githubusercontent.com/11827707/157473832-8726ce0f-1ddd-4e6b-9052-20365f3fbb76.png
everything except _toctree.yml should say `updated 6 mins ago` because they should have been part of the new build (in each build [this section](https://github.com/huggingface/doc-build/blob/main/transformers/master/en/accelerate.html#L2-L10) of every .html file changes) |
transformers | 16,018 | closed | Choose framework for ONNX export | # What does this PR do?
This makes it possible to choose which framework to use, PyTorch or TensorFlow, for the ONNX export (see the example below).
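For instance, the export invocation could then look like this (the flag name and values are an assumption for illustration, not the definitive CLI):
```bash
# hypothetical invocation; --framework assumed to accept "pt" or "tf"
python -m transformers.onnx --model=bert-base-cased --framework=tf onnx_output/
```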
Fixes #15990
| 03-09-2022 15:01:58 | 03-09-2022 15:01:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,017 | closed | Update build_documentation.yml | Rm unnecessary step?
# What does this PR do?
| 03-09-2022 14:47:08 | 03-09-2022 14:47:08 | I don't see node being installed anywhere else in this file, and it's necessary to build the html files.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>oh totally missed that |
transformers | 16,016 | closed | what's the difference between Megatron-gpt2 and GPT2 inside transformers? | what's the difference between Megatron-gpt2 and GPT2 inside transformers? | 03-09-2022 14:24:15 | 03-09-2022 14:24:15 | Hi @jinfagang ! For such general questions please use the [forum](https://discuss.huggingface.co/). We use issues for bug reports and feature requests.
Megatron-gpt2 refers to models trained using the Megatron library. The architecture is similar to the GPT2 architecture. You can find more info in the doc https://huggingface.co/docs/transformers/model_doc/megatron_gpt2<|||||>@patil-suraj thanks. I'll move to the forum for further discussion; hope you can give me a hand with understanding.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,015 | closed | Make transformers.utils.fx. _SUPPORTED_MODELS unique | Currently this tuple contains duplicates(e.g. `RobertaForMaskedLM` or `GPT2LMHeadModel`): `AlbertModel, AlbertForPreTraining, AlbertForMaskedLM, AlbertForMultipleChoice, AlbertForQuestionAnswering, AlbertForSequenceClassification, AlbertForTokenClassification, BertModel, BertForPreTraining, BertForNextSentencePrediction, BertForMaskedLM, BertLMHeadModel, BertForMultipleChoice, BertForQuestionAnswering, BertForSequenceClassification, BertForTokenClassification, DistilBertModel, DistilBertForMaskedLM, DistilBertForMaskedLM, DistilBertForMultipleChoice, DistilBertForQuestionAnswering, DistilBertForSequenceClassification, DistilBertForTokenClassification, MobileBertModel, MobileBertForPreTraining, MobileBertForNextSentencePrediction, MobileBertForMaskedLM, MobileBertForMultipleChoice, MobileBertForQuestionAnswering, MobileBertForSequenceClassification, MobileBertForTokenClassification, ElectraModel, ElectraForPreTraining, ElectraForMaskedLM, ElectraForCausalLM, ElectraForMultipleChoice, ElectraForQuestionAnswering, ElectraForSequenceClassification, ElectraForTokenClassification, MegatronBertModel, MegatronBertForPreTraining, MegatronBertForNextSentencePrediction, MegatronBertForMaskedLM, MegatronBertForCausalLM, MegatronBertForMultipleChoice, MegatronBertForQuestionAnswering, MegatronBertForSequenceClassification, MegatronBertForTokenClassification, GPT2Model, GPT2LMHeadModel, GPT2LMHeadModel, GPT2ForSequenceClassification, GPT2ForTokenClassification, GPTJModel, GPTJForCausalLM, GPTJForQuestionAnswering, GPTJForSequenceClassification, GPTNeoModel, GPTNeoForCausalLM, GPTNeoForSequenceClassification, T5Model, T5ForConditionalGeneration, T5ForConditionalGeneration, RobertaModel, RobertaForMaskedLM, RobertaForMaskedLM, RobertaForCausalLM, RobertaForMultipleChoice, RobertaForQuestionAnswering, RobertaForSequenceClassification, RobertaForTokenClassification, GPT2DoubleHeadsModel`
@michaelbenayoun | 03-09-2022 13:53:22 | 03-09-2022 13:53:22 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging with offline approval from @LysandreJik |
transformers | 16,014 | closed | Swag example: Update doc format | # What does this PR do?
Updates the doc format in the swag examples (requested [here](https://github.com/huggingface/transformers/pull/15868#discussion_r816869590)) | 03-09-2022 12:27:20 | 03-09-2022 12:27:20 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16014). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,013 | closed | Learning rate finder for the trainer | Hi,
is it possible to integrate a learning rate finder with the trainer api?
Something like this:
https://github.com/davidtvs/pytorch-lr-finder
It's probably been done before, but I couldn't find it yet. | 03-09-2022 10:43:28 | 03-09-2022 10:43:28 | The LR finder does not give reliable results for Transformers models (it usually indicates a value in the 1e-3 range when the actual best value is in the 1e-5 range). That's why we don't have support for it.<|||||>@sgugger thanks for the explanation. Any other best practices?
transformers | 16,012 | closed | Fix MaskFormer failing test on master | # What does this PR do?
This PR fixes a failing test on master; the fix was easy. In the test I was checking for `transformer_decoder_hidden_states`, which only exists when `output_hidden_states == True`; what I needed to check was `transformer_decoder_last_hidden_state`.
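In test terms, the distinction is roughly the following (illustrative sketch; the attribute names are taken from the description above):
```python
outputs = model(pixel_values)  # output_hidden_states left at its default (False)
# always populated:
assert outputs.transformer_decoder_last_hidden_state is not None
# only populated when output_hidden_states=True is requested:
assert outputs.transformer_decoder_hidden_states is None
```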
I have also made `output_hidden_states` depend on `config.use_auxiliary_loss` for the `MaskFormerModel` inside `MaskFormerForInstanceSegmentation` since we need all the `hidden_states` when computing the auxiliary loss | 03-09-2022 10:37:13 | 03-09-2022 10:37:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,011 | closed | Removed an outdated check about hdf5_version | # What does this PR do?
Remove a check `self.assertTrue(h5py.version.hdf5_version.startswith("1.10"))` which is outdated and makes the CI daily test fail.
@LysandreJik @sgugger | 03-09-2022 10:05:48 | 03-09-2022 10:05:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16011). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,010 | closed | Beam search uses large amounts of VRAM even with depth of 1 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): 2.5.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten @narsil
## Information
Model I am using (KoboldAI/GPT-Neo-2.7B-AID):
The problem arises when using:
* A simple script that calls model.generate() with beam search of 50 beams and max_new_token of 1. When the prompt gets long enough, e.g. a few hundred tokens, CUDA will run out of memory with 8gb vram
The tasks I am working on is:
* I am simply attempting to generate all the probabilities of the next predicted token, similar to OpenAI's playground with the option "display probabilites: Full spectrum" enabled. For example, given "Jane and I went to the", the program should output something like "park: 0.2, store: 0.15, party: 0.12, etc..". According to https://github.com/huggingface/transformers/issues/10012 this is only possible to do via "beam search".
## To reproduce
Steps to reproduce the behavior:
1. Call model.generate with beam search of 50 beams, max_new_token of 1, and a prompt of about 300 tokens.
## Expected behavior
I do not understand why this task requires so much VRAM. It should be no more demanding than generating 50 new tokens serially. In fact, it should be even less demanding than that, because each iteration we stay with the same token size, whereas under normal operation, the input will keep growing. | 03-09-2022 09:42:10 | 03-09-2022 09:42:10 | Hi @monsieurpooh ,
I think it's because you're using `num_beams=50` quite simply.
We don't include optimizations to get only 1 beam when `max_new_token=1`, so even though for this specific case we don't need the `50 copies`, they are actually created (this actually speeds things up later when generating more tokens, since there's less data moving around).
So that's `100 tokens x 50 beams x Vocab_size` logits; it adds up quickly.
If you are only interested in the `top_k` logits on a single step, then I think you shouldn't use `num_beams`.
Two suggestions:
- Using a custom logits processor to intercept the logits while using generate.
```python
from transformers import LogitsProcessor, LogitsProcessorList
import torch

class LogitsInspector(LogitsProcessor):
    def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.FloatTensor:
        # Do what you want here: `scores` are the logits returned by the model at this step.
        return scores  # return them unchanged so generation proceeds normally

model.generate(**generate_kwargs, logits_processor=LogitsProcessorList([LogitsInspector()]))  # generate_kwargs = your usual arguments
```
- Using a simpler loop
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("KoboldAI/GPT-Neo-2.7B-AID")  # GPT-Neo is decoder-only
tokenizer = AutoTokenizer.from_pretrained("KoboldAI/GPT-Neo-2.7B-AID")
sentence = "Jane and I went to the"  # example prompt from the issue
inputs = tokenizer(sentence, return_tensors="pt")
outputs = model(**inputs)
# outputs.logits has shape (batch_size, sequence_length, vocab_size); the last position holds the next-token logits
```
Depending on your context either of these solutions might suit better.
The first one is more involved but allows you finer control, and you can even modify the logits on the fly if you want.
The second one is simpler, but you might need to re-add some of the stuff `.generate()` takes care of for you (all the information coming from the config, and selecting the logits if you want an iterative way to see the logits at each step).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Thank you @Narsil ; forgive me for the noob question but in the 2nd solution how do I get the corresponding tokens of the outputs.logits? It seems to be just a 2D tensor of floats.
Same question about the scores in the 1st solution; how do I get which tokens each score corresponds to?<|||||>Also I'm unable to get the code to work for the 1st solution. What should the LogitsInspector return?<|||||>I found out the answer to my first question. It's a 2d tensor of size 115, and 50000+. The 50000+ corresponds to the tokens, and the position in the list corresponds to the index of the token as determined by the tokenizer.
However, after being able to sort these by score, I'm not sure how to convert the score into the actual probabilities or logprobs.<|||||>I found the answer to my question; just call .softmax(-1) on logits to get the probabilities.
Oddly after finishing this implementation I got very slightly differing scores from the beam search implementation.
Beam search:
```
" she": 0.826171875
" it": 0.0556640625
" that": 0.04119873046875
" this": 0.0152435302734375
" her": 0.0082855224609375
" the": 0.00806427001953125
" your": 0.006633758544921875
" there": 0.0038547515869140625
" you": 0.003482818603515625
" one": 0.003108978271484375
```
Simple logits style:
```
" she": 0.849609375
" it": 0.057830810546875
" that": 0.042236328125
" this": 0.0157470703125
" her": 0.00847625732421875
" the": 0.00844573974609375
" your": 0.00679779052734375
" there": 0.0039825439453125
" you": 0.0035495758056640625
" one": 0.0031948089599609375
```
I'm not sure which one is more accurate; I presume the latter, because the former required a 1-line hack from https://github.com/huggingface/transformers/issues/10012.
In any case the 2nd version is wayyyyyy faster, so that's good.
EDIT: Actually, the 1st version is the correct one. I had to call .softmax(-1) _before_ filtering to topk, not after. Now the outputs are identical, and the 2nd version is still faster.
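For reference, a minimal sketch of that order of operations, building on the simple-loop approach above (variable names assumed from that snippet):
```python
probs = outputs.logits[0, -1].softmax(-1)  # softmax over the full vocabulary first
top = probs.topk(10)                       # then keep the top-k probabilities
for p, idx in zip(top.values, top.indices):
    print(repr(tokenizer.decode([int(idx)])), float(p))
```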
I'm still curious to know if there's any tutorial on how to use LogitsInspector, in case I need to do some more complex logic.<|||||>> I'm still curious to know if there's any tutorial on how to use LogitsInspector, in case I need to do some more complex logic.
Not really that's intended for power users.
But if you feel that displaying the entire probability distribution is worth it, feel free to open a PR, we'll take a look and could add it to the lib so that it's easier for others to use maybe.
When doing so, try to share as early as possible your design and intent so the community can give feedback before investing too much work and effort which might have some flaw.
Cheers,
Nicolas |
transformers | 16,009 | closed | Fix github actions comment | Fixes the name of the bot that comments. | 03-09-2022 09:22:10 | 03-09-2022 09:22:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Ready for merge, feel free to check the edited comment above to see how it was impacted by the jobs. |
transformers | 16,008 | closed | resize_token_embeddings() failed with GPT-J, after sync to the latest DeepSpeed 0.6.1 | ## Environment info
- DeepSpeed version: 0.6.1+097efeb7
- Transformers version: 4.18.0.dev0
- Platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
- Jax version: 0.3.1
- JaxLib version: 0.3.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@stas00 @patil-suraj @jeffra
## Information
Model I am using GPT-J and GPT2
The problem arises when using:
* [ ] the official example scripts: https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
## To reproduce
Steps to reproduce the behavior:
Replace line 360 in run_clm.py from ` model.resize_token_embeddings(len(tokenizer))` to ` model.resize_token_embeddings(50402); exit()`. Then run DeepSpeed + run_clm.py:
```
deepspeed --num_gpus 2 /home/meiyang/src/transformers_fork/examples/pytorch/language-modeling/run_clm.py --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 4 --per_device_eval_batch_size 4 --deepspeed zero3.json --output_dir /tmp/model_output --model_name_or_path ~/models/gpt-j-6B/
Traceback (most recent call last):
File "/home/meiyang/src/transformers_fork/examples/pytorch/language-modeling/run_clm.py", line 546, in <module>
main()
File "/home/meiyang/src/transformers_fork/examples/pytorch/language-modeling/run_clm.py", line 360, in main
model.resize_token_embeddings(54002)
File "/home/meiyang/src/transformers_fork/src/transformers/modeling_utils.py", line 744, in resize_token_embeddings
model_embeds = self._resize_token_embeddings(new_num_tokens)
File "/home/meiyang/src/transformers_fork/src/transformers/modeling_utils.py", line 765, in _resize_token_embeddings
new_lm_head = self._get_resized_lm_head(old_lm_head, new_num_tokens)
File "/home/meiyang/src/transformers_fork/src/transformers/modeling_utils.py", line 911, in _get_resized_lm_head
new_lm_head.bias.data[:num_tokens_to_copy] = old_lm_head.bias.data[:num_tokens_to_copy]
RuntimeError: The expanded size of the tensor (50400) must match the existing size (0) at non-singleton dimension 0. Target sizes: [50400]. Tensor sizes: [0]
```
1. The error was triggered by the following code and happened to GPT-J only, not other GPT models such as GPT2 or GPT-neo, probably because only GPT-J has_new_lm_head_bias.
2. The error didn't happen if I run run_clm.py along without DeepSpeed.
3. The error first occurred when I pulled the latest source code of DeepSpeed. I've tried to bring Transformers to the latest but no help.
https://github.com/huggingface/transformers/blob/5b7dcc73427d16218488846a365d10866dca9c3e/src/transformers/modeling_utils.py#L833
```
# Copy bias weights to new lm head
if has_new_lm_head_bias:
new_lm_head.bias.data[:num_tokens_to_copy] = old_lm_head.bias.data[:num_tokens_to_copy]
```
| 03-09-2022 09:19:51 | 03-09-2022 09:19:51 | Thank you for the report, @dunalduck0 and to @jeffra for the PR with the fix. |
transformers | 16,007 | closed | TrOCR Backslash problem | Hello, I use https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_Seq2SeqTrainer.ipynb (this Jupyter notebook) to fine-tune for my task, which is generating the LaTeX sequence for a handwritten math expression image.
but when I run prediction on an image, the model predicts an extra backslash, like this:

I would like to ask if this can be solved.
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-small-stage1")
The labels before training look fine. Do I need a custom tokenizer?

| 03-09-2022 07:58:27 | 03-09-2022 07:58:27 | I think that I get the problem's point.
Why is there no difference between Beam Search and greedy?
Do I need to customize tokenizers?


<|||||>@win5923 the double backslash quirk is a Python "feature" :D See this StackOverflow issue, which explains what's going on: https://stackoverflow.com/questions/24085680/why-do-backslashes-appear-twice
As for beam search / greedy generate, they can get the same results (beam search is an elaborated greedy generation). If you were looking for multiple outputs with beam search, try using the `num_return_sequences` argument. If you want to add more entropy to the mix, try with `do_sample=True` and explore other arguments.
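A minimal sketch of that (the argument values are illustrative):
```python
# return several beam candidates instead of only the best one
pixel_values = processor(image, return_tensors="pt").pixel_values
outputs = model.generate(pixel_values, num_beams=4, num_return_sequences=4)
for seq in outputs:
    print(processor.batch_decode([seq], skip_special_tokens=True)[0])
```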
Finally, the application you're trying to build is super interesting! We'd be super happy if you would share your fine-tuned model with us, and/or create a Space showcasing it <3<|||||>thank you so much! I will.<|||||>Let us know if this solves your problem (so we can close the issue), or if there are further bugs :) <|||||>I'm not sure how to upload the model from checkpoint, I upload the code to my repo.
https://github.com/win5923/TrOCR-Handwritten-Mathematical-Expression-Recognition
Not sure if there is something wrong with my code. The current model does not predict very well.
I'll consider adjusting the parameters of beam search to see if that improves performance or accuracy.
It's ok now. Thanks a lot.
<|||||>Hi,
You can easily upload a model to the hub using the `push_to_hub`[method](https://huggingface.co/docs/transformers/main_classes/model#transformers.file_utils.PushToHubMixin.push_to_hub). For example:
```
model.push_to_hub(
repo_path_or_name="trocr-handwritten-math",
organization="nielsr",
commit_message="Add model",
use_temp_dir=True,
)
```
This way, you'll be able to just do:
```
model = VisionEncoderDecoderModel.from_pretrained("nielsr/trocr-handwritten-math")
```<|||||>Closing this issue as I believe we've answered your questions. Feel free to re-open if you have further questions.<|||||>@NielsRogge @gante can you please explain how to annotate the below files for custom handwritten mathematical equation training. More importantly s^2





|
transformers | 16,006 | closed | Recommended way of exporting encoder-decoder model to ONNX with `transformers[onnx]` | I am looking for a way to export an encoder-decoder to ONNX to run inference. I followed the guide at [Exporting Transformers Models](https://huggingface.co/docs/transformers/serialization) but that only shows an example of an encoder-only model. Trying to accomplish this for the specific case of the [Helsinki-NLP/Opus-MT model for Spanish to English](https://huggingface.co/Helsinki-NLP/opus-mt-es-en), I did the following:
1. I exported the model with the following command: `python -m transformers.onnx --model=Helsinki-NLP/opus-mt-es-en --feature=seq2seq-lm --atol=2e-05 workspace/onnx/opus-mt-es-en `
The output of the model was successful.
2. Then, as in the docs, I tried running inference on the model with a code similar to the one below.
```
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-es-en")
session = InferenceSession("onnx/model.onnx")
inputs = tokenizer("Probando el uso de Marian despues de haberlo exportando a ONNX", return_tensors="np", padding=True)
outputs = session.run(output_names=["logits"], input_feed=dict(inputs))
```
This yields the following exception:
`ValueError: Model requires 4 inputs. Input Feed contains 2`.
------
I tried a similar thing with T5, and the same exception was raised. After some debugging, I realized that any encoder-decoder architecture expects the following 4 arguments: `input_ids`, `attention_mask`, `decoder_input_ids`, `decoder_attention_mask`.
After a thorough reading of the transformers code, my understanding is that any model in transformers inherits from `PreTrainedModel`, which defines models that are intended for training. This implies that the associated config requires inputs that are used during training, and that explains the need for 4 arguments instead of 2, contrary to the trivial case of encoder-only models that is documented on the website.
However, when working with a transformer model (no export to ONNX), one is able to use the `generate()` function that is added on top of seq2seq models by `GenerationMixin`. This function is the helper to perform seq2seq during inference.
------
The question is the following:
Is there a way (or maybe a recommended workaround) to export an encoder-decoder model to ONNX, such that it behaves as the `generate()` function from `GenerationMixin` and not as the `forward()` method in `PreTrainedModel`?
-----
I know that a possible workaround would be to export both the encoder and the decoder separately and programmatically connect the input/outputs of each Individual `InferenceSession`. Other than that, I cannot figure out an obvious solution to this problem using the out-of-the-box methods in `transformers`.
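For reference, a rough sketch of what that wiring could look like with greedy decoding (the file names and the decoder's input/output names below are assumptions; they depend on how each part is exported):
```python
import numpy as np
from onnxruntime import InferenceSession

encoder = InferenceSession("onnx/encoder.onnx")  # hypothetical file names
decoder = InferenceSession("onnx/decoder.onnx")

# input_ids / attention_mask: numpy arrays from the tokenizer; the *_token_id values come from the model config
encoder_hidden_states = encoder.run(None, {"input_ids": input_ids, "attention_mask": attention_mask})[0]

decoder_input_ids = np.array([[decoder_start_token_id]], dtype=np.int64)
for _ in range(max_length):
    logits = decoder.run(
        None,
        {
            "input_ids": decoder_input_ids,
            "encoder_hidden_states": encoder_hidden_states,
            "encoder_attention_mask": attention_mask,
        },
    )[0]
    next_token = logits[:, -1].argmax(-1).reshape(1, 1)
    decoder_input_ids = np.concatenate([decoder_input_ids, next_token], axis=1)
    if next_token.item() == eos_token_id:
        break
```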
Any help will be highly appreciated :)
| 03-09-2022 05:02:57 | 03-09-2022 05:02:57 | Hey @gomerudo thanks for raising this issue and taking the time to clearly explain the problem!
Your analysis is spot on regarding the need for 4 arguments instead of 2. We mention the role of `output_names` in the guide, but we should also mention that one can get the input names via:
```python
from transformers.models.marian import MarianConfig, MarianOnnxConfig
config = MarianConfig()
onnx_config = MarianOnnxConfig(config)
# Returns ['input_ids', 'attention_mask', 'decoder_input_ids', 'decoder_attention_mask']
print(list(onnx_config.inputs.keys()))
```
I'll update the guide to clarify this :)
Currently, the best solution is to export the encoder and decoder separately (e.g. as done in the `fastt5` project [here](https://github.com/Ki6an/fastT5/blob/2f73bd57ca3bab226952679b4381049eb09721a4/fastT5/onnx_models.py#L140)). We're investigating an approach to support text generation in our `optimum` [library](https://github.com/huggingface/optimum), but there's no timeline I can share yet<|||||>Hi @gomerudo, did you succeed at making this work?<|||||>@lewtun I used `fastT5` to export a T5 checkpoint to ONNX. The export finished to completion, and I was able to make inference using the `generate()` function, but was unable to extract hidden states. Please see code snippet below.
```
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer
model_path = "path/to/model/checkpoint"
model = export_and_get_onnx_model(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
input_terms = ["this is a sample input"]
token = tokenizer(input_terms, max_length=512 * 2, padding=True, truncation=True, return_tensors='pt')
tokens = model.generate(input_ids=token['input_ids'].to('cpu'),
attention_mask=token['attention_mask'].to('cpu'),
return_dict_in_generate=True,
max_length=512 * 2,
num_beams=1,
output_scores=True,
output_hidden_states=True)
```
The resulting dictionary returned by `model.generate()` is as follows:
```
GreedySearchEncoderDecoderOutput(
sequences=tensor([[ 0, 119, 114, 102, 108, 111, 108, 125, 120, 112, 100, 101, 35, 53,
...
...),
encoder_attentions=None,
encoder_hidden_states=None,
decoder_attentions=None,
cross_attentions=None,
decoder_hidden_states=(None, None,...
...
None)
)
```
Any idea why this is happening? How can I get the hidden states? Using the model before conversion with the above `generate()` function call generates the hidden states for both encoder and decoder.<|||||>Hey @vsoesanto I think that kind of question would be best asked on the `fastT5` repo as I believe they wrap the model classes in a special way. This might explain why you can no longer see the hidden states with their `generate()` method<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>HI all! Any updates on this? I've also tried to convert the OPUS-MT Marian Models to ONNX without success. I'm aware of the option to convert the encoder and decoder separately, but the issue is implementing Beam Search for the decoding process. Any changes the `optimum` package will be able to handle seq2seq models in the near future? Thanks in advance :) cc @lewtun <|||||>> HI all! Any updates on this? I've also tried to convert the OPUS-MT Marian Models to ONNX without success. I'm aware of the option to convert the encoder and decoder separately, but the issue is implementing Beam Search for the decoding process. Any changes the `optimum` package will be able to handle seq2seq models in the near future? Thanks in advance :) cc @lewtun
Hello Pedrogov! Did you successfully convert the encoder-decoder model to onnx format? I have encountered this problem now, and I plan to convert the transformer model written by pytorch into onnx, which feels very tricky. If you have achieved success, can you lend me a reference? Thanks in advance!<|||||>Hi @doubletfly! No, I have not yet converted neither the encoder nor the decoder to ONNX. The main issue I issue I faced was that the models seem to generate text using Beam Search instead of Greedy, so this would need to be implemented by hand in order to replace the model.<|||||>@doubletfly @Pedrohgv this worked for me, at least for the m2m_100: https://github.com/huggingface/optimum/blob/b8ea77029bb6fffddc175a540cc29f15efa0fe68/docs/source/onnxruntime/modeling_ort.mdx#export-and-inference-of-sequence-to-sequence-models<|||||>> Hi @doubletfly! No, I have not yet converted neither the encoder nor the decoder to ONNX. The main issue I issue I faced was that the models seem to generate text using Beam Search instead of Greedy, so this would need to be implemented by hand in order to replace the model.
Okay!Thanks for your reply. :)<|||||>> @doubletfly @Pedrohgv this worked for me, at least for the m2m_100: https://github.com/huggingface/optimum/blob/b8ea77029bb6fffddc175a540cc29f15efa0fe68/docs/source/onnxruntime/modeling_ort.mdx#export-and-inference-of-sequence-to-sequence-models
That's good, i'm going to study it, thanks for your advice. :)<|||||>@malloc-naski that's great, I tried Optimum once and it didn't work for the OPUS models, but it was some time ago and I don't remember exactly what it was I was doing. If I have time, I'll test this. Thank you!<|||||>Hi @Pedrohgv @doubletfly @malloc-naski we've recently added support for seq2seq models in `optimum`, so you can now do text generation with ONNX models directly via the `pipeline()` function from `transformers` 😎
Here's a simple demo:
```python
# Install `optimum` from source with ONNX Runtime backend
%pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]
from transformers import AutoTokenizer, pipeline
from optimum.onnxruntime import ORTModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
# `from_transformers=True` downloads the PyTorch weights and converts them to ONNX format
model = ORTModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de", from_transformers=True)
onnx_translation = pipeline("translation_en_to_de", model=model, tokenizer=tokenizer)
text = "My name is Lewis and I live in Switzerland."
pred = onnx_translation(text)
# Returns [{'translation_text': 'Mein Name ist Lewis und ich lebe in der Schweiz'}]
pred
```
Hope this helps!<|||||>@lewtun Thank you very much! <|||||>> Hi @Pedrohgv @doubletfly @malloc-naski we've recently added support for seq2seq models in `optimum`, so you can now do text generation with ONNX models directly via the `pipeline()` function from `transformers` 😎
>
> Here's a simple demo:
>
> ```python
> # Install `optimum` from source with ONNX Runtime backend
> %pip install git+https://github.com/huggingface/optimum.git#egg=optimum[onnxruntime]
>
> from transformers import AutoTokenizer, pipeline
> from optimum.onnxruntime import ORTModelForSeq2SeqLM
>
> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
> # `from_transformers=True` downloads the PyTorch weights and converts them to ONNX format
> model = ORTModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-de", from_transformers=True)
> onnx_translation = pipeline("translation_en_to_de", model=model, tokenizer=tokenizer)
>
> text = "My name is Lewis and I live in Switzerland."
> pred = onnx_translation(text)
> # Returns [{'translation_text': 'Mein Name ist Lewis und ich lebe in der Schweiz'}]
> pred
> ```
>
> Hope this helps!
Hi!
This works well. But I have problems when trying to load a saved onnx model.
The error is as follows:
`ValueError: Model requires 49 inputs. Input Feed contains 25`
Any suggestions?<|||||>Hi @ahmedbr! Can you provide an example script to reproduce this issue please?<|||||>https://colab.research.google.com/drive/14Gmc9xkkvJNCf7EiPYp6cjhThwAux7M6?usp=sharing
I am trying to get the resulted text from this model but I only get two results
last_hidden_state with shape (1, 2, 512) and onnx::MatMul_949 with shape of (1, 1500, 512)
how can I get token ids from this two outputs
this onnx was made by transformers.onnx as one output
I tried using optimum too and it gave me several models :
"decoder_model.onnx",
"decoder_model_quantized.onnx",
"encoder_model.onnx",
"decoder_model_merged.onnx",
"decoder_with_past_model.onnx",
"encoder_model_quantized.onnx",
"decoder_model_merged_quantized.onnx",
"decoder_with_past_model_quantized.onnx"
here is the code i used this models in :
https://colab.research.google.com/drive/167UnyEdzPfXFWg81m32efvBNqTHxR37Q?usp=sharing
I used the encoder and decoder and I was able to get logits that gave
valid token ids but only 2 tokens, the first I am sure it's correct<|||||>@AMF777 Optimum is the recommended tool to export models to ONNX as `transformers.onnx` is no longer maintained.
Trying to make the different exported decoders clearer:
- `decoder_model.onnx` is the decoder without key-value cache. It is usually used in the first generation iteration.
- `decoder_with_past_model.onnx` is the decoder with key-value cache. It is used after the first generation iteration, when a cache can be set to avoid recomputing intermediate key and values.
- `decoder_model_merged` is the merge of the two previous decoders, so it will automatically switch between both. That's the one you should use.
Let me know if that helps.<|||||>Thank you very much @regisss it helped |
transformers | 16,005 | closed | `mlm` training fails due to large message size for `nested_gather` on torch_xla | The `PyTorch/XLA/TPU` HF tests for `mlm-bert` and `mlm-roberta` fail as discussed below. I have extensively tested this issue on both 2VM and 1VM machines. On both machines, when I set `--num_core 1`, the test passes as expected, and when I set `--num_core 8` I get the error below.
This error suggests the [`mesh_reduce`](https://github.com/pytorch/xla/blob/master/torch_xla%2Fcore%2Fxla_model.py#L957-L979) API called by [evaluate()](https://github.com/huggingface/transformers/blob/master/src%2Ftransformers%2Ftrainer.py#L2271-L2279) > [evaluation_loop](https://github.com/huggingface/transformers/blob/master/src%2Ftransformers%2Ftrainer.py#L2460) > [nested_xla_mesh_reduce()](https://github.com/huggingface/transformers/blame/master/src/transformers/trainer_pt_utils.py#L155-L165) communicates larger than expected tensor payloads.
Reference to an older [issue](https://github.com/pytorch/xla/issues/1924) which sounds relevant here.
Repro command:
```
python3 examples/pytorch/xla_spawn.py --num_cores 8 examples/pytorch/language-modeling/run_mlm.py --logging_dir ./tensorboard-metric --cache_dir ./cache_dir --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --overwrite_output_dir --output_dir language-modeling --logging_steps 30 --save_steps 3000 --overwrite_cache --tpu_metrics_debug --model_type=bert --tokenizer=bert-base-cased --num_train_epochs 1 --per_device_train_batch_size 16 --per_device_eval_batch_size 4
```
Error message:
```
***** train metrics *****
epoch = 1.0
train_loss = 8.969
train_runtime = 0:02:58.03
train_samples = 4771
train_samples_per_second = 26.798
train_steps_per_second = 0.213
03/09/2022 03:22:36 - INFO - run_mlm - *** Evaluate ***
[INFO|trainer.py:570] 2022-03-09 03:22:36,278 >> The following columns in the evaluation set don't have a corresponding argument in `BertForMaskedLM.forward` and have been ignored: special_tokens_mask. If special_tokens_mask are not expected by `BertForMaskedLM.forward`, you can safely ignore this message.
[INFO|trainer.py:2403] 2022-03-09 03:22:36,281 >> ***** Running Evaluation *****
[INFO|trainer.py:2405] 2022-03-09 03:22:36,281 >> Num examples = 493
[INFO|trainer.py:2408] 2022-03-09 03:22:36,281 >> Batch size = 2
Exception in device=TPU:7: tensorflow/compiler/xla/xla_client/mesh_service.cc:377 : Failed to meet rendezvous 'nested_gather': Received message larger than max (950146944 vs. 4194304) (8)
Exception in device=TPU:2: tensorflow/compiler/xla/xla_client/mesh_service.cc:377 : Failed to meet rendezvous 'nested_gather': Received message larger than max (950146944 vs. 4194304) (8)
Exception in device=TPU:0: tensorflow/compiler/xla/xla_client/mesh_service.cc:377 : Failed to meet rendezvous 'nested_gather': Received message larger than max (950146944 vs. 4194304) (8)
Exception in device=TPU:3: tensorflow/compiler/xla/xla_client/mesh_service.cc:377 : Failed to meet rendezvous 'nested_gather': Received message larger than max (950146944 vs. 4194304) (8)
...
Traceback (most recent call last):
Traceback (most recent call last):
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/usr/local/lib/python3.8/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/miladmo/transformers/examples/pytorch/language-modeling/run_mlm.py", line 582, in _mp_fn
main()
File "/home/miladmo/transformers/examples/pytorch/language-modeling/run_mlm.py", line 545, in main
metrics = trainer.evaluate()
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2271, in evaluate
output = eval_loop(
File "/usr/local/lib/python3.8/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2460, in evaluation_loop
logits = self._nested_gather(logits)
File "/usr/local/lib/python3.8/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/usr/local/lib/python3.8/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2546, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/home/miladmo/transformers/examples/pytorch/language-modeling/run_mlm.py", line 582, in _mp_fn
main()
File "/usr/local/lib/python3.8/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
fn(gindex, *args)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 163, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/home/miladmo/transformers/examples/pytorch/language-modeling/run_mlm.py", line 545, in main
metrics = trainer.evaluate()
File "/home/miladmo/transformers/examples/pytorch/language-modeling/run_mlm.py", line 582, in _mp_fn
main()
File "/home/miladmo/transformers/examples/pytorch/language-modeling/run_mlm.py", line 545, in main
metrics = trainer.evaluate()
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 974, in mesh_reduce
xdata = rendezvous(tag, bio.getvalue())
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2271, in evaluate
output = eval_loop(
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2271, in evaluate
output = eval_loop(
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 926, in rendezvous
return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2460, in evaluation_loop
logits = self._nested_gather(logits)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2460, in evaluation_loop
logits = self._nested_gather(logits)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2546, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 163, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer.py", line 2546, in _nested_gather
tensors = nested_xla_mesh_reduce(tensors, name)
RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:377 : Failed to meet rendezvous 'nested_gather': Received message larger than max (950146944 vs. 4194304) (8)
File "/home/miladmo/.local/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 163, in nested_xla_mesh_reduce
return xm.mesh_reduce(name, tensors, torch.cat)
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 974, in mesh_reduce
xdata = rendezvous(tag, bio.getvalue())
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 974, in mesh_reduce
xdata = rendezvous(tag, bio.getvalue())
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 926, in rendezvous
return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas)
File "/usr/local/lib/python3.8/dist-packages/torch_xla/core/xla_model.py", line 926, in rendezvous
return torch_xla._XLAC._xla_rendezvous(get_ordinal(), tag, payload, replicas)
RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:377 : Failed to meet rendezvous 'nested_gather': Received message larger than max (950146944 vs. 4194304) (8)
RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:377 : Failed to meet rendezvous 'nested_gather': Received message larger than max (950146944 vs. 4194304) (8)
Traceback (most recent call last):
``` | 03-09-2022 04:20:50 | 03-09-2022 04:20:50 | cc @sgugger <|||||>Thanks for reporting this issue. Could you check if the problem persists when you comment out [this line](https://github.com/huggingface/transformers/blob/c1aaa439350051acdcd585946e91525502a6b063/examples/pytorch/language-modeling/run_mlm.py#L516) and the next (line 517) in the `Trainer` definition inside this example?<|||||>Just tried your recommendation. The error disappears after commenting out [these lines](https://github.com/huggingface/transformers/blame/c1aaa439350051acdcd585946e91525502a6b063/examples/pytorch/language-modeling/run_mlm.py#L516-L517).
@sgugger, what do you recommend as the solution to this issue?<|||||>Leaving those two lines commented out for now. I'm a bit surprised it's not possible to gather the logits of a language model. Will investigate.<|||||>Looks like the tensor payload size exceed the `XRT_MESH_MAX_MSGSIZE` requirements.
Our tests clone from HF `master`. While we can have a side patch that unblocks, I prefer to wait for a fix to `master`. Wdyt?<|||||>PR linked above patches the issue by removing the metric computation on TPU. I hope we can have a better fix to re-enable it in the future.<|||||>Thanks @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any progress on re-enabling compute_metrics? |
transformers | 16,004 | closed | Fix wav2vec2 export onnx model with attention_mask error | # What does this PR do?
This PR fixes a problem with wav2vec2 when converting to an ONNX model with attention_mask. See: https://github.com/huggingface/transformers/issues/10004, https://github.com/pytorch/fairseq/issues/3010#issuecomment-999821804
@patrickvonplaten
| 03-09-2022 03:22:47 | 03-09-2022 03:22:47 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks good to me! cc @lewtun <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Should this be merged?<|||||>Yes I think this can be merged since it's only touching the modelling code and the Wav2Vec2 ONNX export that I'm working on will anyway need this.
I'll merge once the CI passes<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten @lewtun we probably still need to merge this, if there are no objections :sweat_smile: <|||||>> Sorry just one question - why do we use `repeat` here instead of https://pytorch.org/docs/stable/generated/torch.broadcast_to.html ? Broadcasting should save some memory no?
>
> Also could we please run all the Wav2Vec2 slow tests ones to make sure there is no unexpected numerical difference?
Yes @anton-l - I'm waiting for a reply to Patrick's comment above. Alternatively, if Patrick agrees we can merge this "as is" (looks good to me)<|||||>Good to merge for me if all Wav2Vec2 slow tests pass (including the pretraining ones). @anton-l could you check it maybe otherwise?<|||||>Slow+pretraining tests are OK, thanks for the fix @nilboy! |
transformers | 16,003 | closed | Multiclass image classification with ViT - computer vision | Hi, are there building blocks available in the repo to extend current single class to **multilabel classification**?
It should not be very big of an issue. I guess. | 03-09-2022 01:53:05 | 03-09-2022 01:53:05 | #15978 <|||||>And also I would want to know if we can do prediction interpretation by visualizing attention maps or something like that. <|||||>@NielsRogge <|||||>Hi,
You can easily fine-tune ViT (or any other sequence classifier) using the `problem_type` argument.
```
from transformers import ViTForImageClassification
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16--224", problem_type="multi_label_classification")
```
This will make sure the appropriate loss function is used (i.e. `BCEWithLogitsLoss`).<|||||>> Hi,
>
> You can easily fine-tune ViT (or any other sequence classifier) using the `problem_type` argument.
>
> ```
> from transformers import ViTForImageClassification
>
> model = ViTForImageClassification.from_pretrained("google/vit-base-patch16--224", problem_type="multi_label_classification")
> ```
>
> This will make sure the appropriate loss function is used (i.e. `BCEWithLogitsLoss`).
And what about the dataset? How do I load it from an images folder (specifically for multilabel)? <|||||>
> And also I would want to know if we can do prediction interpretation by visualizing attention maps or something like that.
This as well please let me know it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>> > Hi,
> > You can easily fine-tune ViT (or any other sequence classifier) using the `problem_type` argument.
> > ```
> > from transformers import ViTForImageClassification
> >
> > model = ViTForImageClassification.from_pretrained("google/vit-base-patch16--224", problem_type="multi_label_classification")
> > ```
> >
> >
> > This will make sure the appropriate loss function is used (i.e. `BCEWithLogitsLoss`).
>
> And what about dataset? How do I load it from images folder (specifically for multilabel)
Hi, im also having trouble with loading the dataset. specifically - how does the model expect the labels for multilabel? did you manage to solve this?<|||||>Hi,
For multi-label classification, you need to make sure that you provide `pixel_values` of shape (batch_size, num_channels, height, width) and `labels` of shape (batch_size, num_labels). The latter contain the one-hot encoded labels.
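A minimal sketch of what such a batch and forward pass could look like (the checkpoint name and shapes here are purely illustrative):
```python
import torch
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",       # illustrative checkpoint
    problem_type="multi_label_classification",
    num_labels=4,
)
pixel_values = torch.randn(1, 3, 224, 224)     # (batch_size, num_channels, height, width)
labels = torch.tensor([[1.0, 1.0, 0.0, 0.0]])  # (batch_size, num_labels), one-hot encoded floats
outputs = model(pixel_values=pixel_values, labels=labels)
print(outputs.loss)                            # computed with BCEWithLogitsLoss
```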
So let's say you have an image that contains a cat and a remote and your labels are cat, remote, dog and house.
In that case, your labels need to be this: `torch.tensor([[1, 1, 0, 0]])`.<|||||>Thanks! :) |
transformers | 16,002 | closed | Translate to Spanish of training.mdx | null | 03-09-2022 01:22:21 | 03-09-2022 01:22:21 | |
transformers | 16,001 | closed | Update troubleshoot guide | This PR adds more content to the troubleshooting guide. I filtered the repo issues for errors and browsed the most viewed topics on the forums to find some of the most common issues users encounter and how to resolve them. Updates include:
- CUDA out of memory errors
- Unable to load a saved TF model (thanks @gante)
- ImportError
- CUDA error: device-side assert triggered
@ydshieh also mentioned users have trouble with [`resize_token_embeddings`](https://github.com/huggingface/transformers/issues?q=resize_token_embeddings), but I'm not sure what exactly users are having issues with (seems like it could be more than one thing?). Let me know if you have more context on this, or if it should be included! :) cc @patrickvonplaten @patil-suraj
Feel free to let me know if there are any other common issues that can be documented! | 03-08-2022 20:40:11 | 03-08-2022 20:40:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Another `resize_token_embeddings()` issue 👉 https://github.com/huggingface/transformers/issues/16008 |
transformers | 16,000 | closed | removing azureml-specific code - now covered by MLflow | Signed-off-by: Walter Martin <[email protected]>
# What does this PR do?
This PR removes AzureML code from the repo since AzureML tracking can all be done through MLflow.
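For context, MLflow tracking can be pointed at an AzureML workspace purely through configuration; a rough sketch (assumes `azureml-core`/`azureml-mlflow` are installed, and is illustrative rather than part of this PR):
```python
import mlflow
from azureml.core import Workspace

ws = Workspace.from_config()                        # reads the local AzureML config.json
mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
mlflow.set_experiment("transformers-finetuning")    # experiment name is just an example
```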
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-08-2022 20:32:59 | 03-08-2022 20:32:59 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16000). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Very nice to be able to handle it through MLFlow! What do you think @sgugger?<|||||>Removing the callback as is is breaking though. It should be left with a deprecation cycle.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,999 | closed | Sharded DDP returns extra predictions EvalPrediction | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.13.0-28-generic-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyTorch version (GPU?): 1.10.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes, trying DataParallel and ShardedDDP
### Who can help
I'm using Trainer in a multi-GPU setup for training a custom model. Tagging @sgugger for Trainer.
## Information
I'm using a custom Multi-instance learning model with a BERTweet submodule.
The problem arises when using:
* [N] the official example scripts: (give details below)
* [Y] my own modified scripts: (give details below)
The tasks I am working on is:
* [N] an official GLUE/SQUaD task: (give the name)
* [Y] my own task or dataset: (give details below)
## To reproduce
Settings passed to Trainer:
```bash
python -m torch.distributed.launch --nproc_per_node 2 "${MINERVA_HOME}/src/evaluation/mil_clean.py" \
--run_name "finetune-shuffle" \
--output_dir "${OUTPUT_DIR}" \
--logging_dir "${LOG_DIR}" \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 30 \
--learning_rate 1e-5 \
--do_train \
--do_eval \
--do_predict \
--logging_strategy "steps" \
--log_on_each_node 0 \
--logging_steps 50 \
--log_level "info" \
--save_strategy "steps" \
--save_steps 1000 \
--evaluation_strategy "steps" \
--eval_steps 1000 \
--optim "adamw_torch" \
--load_best_model_at_end True \
--metric_for_best_model "f1" \
--num_train_epochs 10 \
--dataloader_num_workers 2 \
--dataloader_drop_last True \
--num_tweets_per_day 100 \
--shuffle_samples True \
--seed 42 \
--gradient_accumulation_steps 8 \
--finetune_instance_model True \
--dataloader_pin_memory False \
--sharded_ddp "zero_dp_3"
```
```python
def compute_metrics(eval_prediction):
probs, label_ids = eval_prediction # Model returns probabilities instead of logits
probs = probs.reshape(-1)
predictions = (probs > 0.5).astype(np.uint8)
try:
prec, recall, f1, support = precision_recall_fscore_support(label_ids, predictions, zero_division=0, average="weighted")
except ValueError as err:
logger.error(f"{err=}\n{eval_prediction.predictions.shape=}\n{eval_prediction.label_ids.shape=}")
exit(1)
return {
"precision": prec,
"recall": recall,
"f1": f1
}
```
Output with ShardedDDP. No error is thrown with DataParallel so I think there is a collation issue.
```bash
eval_prediction.predictions.shape=(14, 30)
eval_prediction.label_ids.shape=(378,)
```
Also, I saw "padding" mentioned in the Trainer documentation, but I printed out all the `predictions` values and did not see anything other than my probabilities.
> If your predictions or labels have different sequence length (for instance because you’re doing dynamic padding in a token classification task) the predictions will be padded (on the right) to allow for concatenation into one array. The padding index is -100.
## Expected behavior
I expect DataParallel and Distributed Data Parallel setups to send the same number of predictions to `compute_metrics`
| 03-08-2022 18:56:04 | 03-08-2022 18:56:04 | There is little we can do to help without knowing the code that is run and having a clear reproducer.<|||||>Ah sorry for the lack of detail, here is my code.
https://gist.github.com/AADeLucia/86d5d75dde878cfa3dbd38a0230be2cf<|||||>Oh you're using custom models. Are they all returning loss then logits if labels are provided, logits only if no labels are provided? Also are the logits tensors with batch first dimension? While the `Trainer` works with other models, it has been optimized for Transformers models, so it has certain expectations.
It might also be specifically linked to sharded DDP. If you just used distributed training, is there still the issue?<|||||>I tried without the sharding and I received the same error (quick test with only 100 examples)
```bash
***** Running Evaluation *****
Num examples = 100
Batch size = 30
ERROR:root:err=ValueError('Found input variables with inconsistent numbers of samples: [100, 120]')4s/it]
eval_prediction.predictions.shape=(4, 30)
eval_prediction.label_ids.shape=(100,)
ERROR:root:err=ValueError('Found input variables with inconsistent numbers of samples: [100, 120]')
eval_prediction.predictions.shape=(4, 30)
eval_prediction.label_ids.shape=(100,)
```
So far I haven't omitted passing in labels, so it always assumes labels are there to compute loss. Logits are tensors with batch first dimension.
Would it be best if I did something like this, using a huggingface output class?
```python
from transformers.file_utils import ModelOutput
def forward(self, inputs, labels=None):  # placeholder signature for the custom model
    # ... model computation producing `logits` (and a `loss` value when labels are given) ...
    output = {}
    output["logits"] = logits
    if labels is not None:
        output["loss"] = loss
    return ModelOutput(output)
```
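For what it's worth, here is a self-contained toy of one of the ready-made output classes (shapes are invented), which keeps the loss-first ordering explicit:
```python
import torch
from transformers.modeling_outputs import SequenceClassifierOutput

logits = torch.rand(4, 1)                            # stand-in for the model's scores
labels = torch.randint(0, 2, (4, 1)).float()
loss = torch.nn.BCEWithLogitsLoss()(logits, labels)

out = SequenceClassifierOutput(loss=loss, logits=logits)
print(out.loss, out.logits.shape)                    # loss first, then logits
```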
<|||||>Not sure it will change anything to switch to a dictionary as long as you put your outputs in the right order (loss first if present) in your tuple. The `Trainer` can handle both. I'm curious to see what's being badly accumulated, could you try printing the shapes at the end of the forward pass of your model so we can try to figure out what's going on?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,998 | closed | Uncomment to see error | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-08-2022 18:30:15 | 03-08-2022 18:30:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15998). All of your documentation changes will be reflected on that endpoint.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15998). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,997 | closed | Freeze Feature Encoder in FlaxSpeechEncoderDecoder | This PR builds on #15873 by enabling the feature encoder of a Flax Wav2Vec2 model to be frozen when used in the FlaxSpeechEncoderDecoder framework. | 03-08-2022 17:31:44 | 03-08-2022 17:31:44 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Add one quick test here maybe as well? :-) |
transformers | 15,996 | closed | Update for doc-builder -> hf-doc-utils | # What does this PR do?
This PR deals with the upcoming rename of `doc-builder` to `hf-doc-utils`. Should be merged at the same time as the rename happens. | 03-08-2022 16:54:15 | 03-08-2022 16:54:15 | |
transformers | 15,995 | closed | Add FlaxBartForCausalLM | This PR contributes the Flax Bart model for causal language modelling (Causal LM). It adds:
- `FlaxBartPreTrainedDecoderModel`: a subclass of `FlaxPreTrainedModel` designed specifically for leveraging pre-trained Bart decoder checkpoints. This is as opposed to `FlaxBartPreTrainedModel`, which treats pre-trained Bart encoder-decoder models.
- `FlaxBartDecoderWrapper`: a wrapper class required to correctly load pre-trained checkpoints when the causal language model is used in combination with the `(Speech)EncoderDecoderModel` framework.
- `FlaxBartForCausalLM`: a Bart Model with a language modelling (LM) head on top. It facilitates encoder-decoder cross-attention layers, thus enabling it to be used in the `(Speech)EncoderDecoderModel` for seq2seq tasks.
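A short usage sketch of the new class (the checkpoint name is chosen only for illustration):
```python
from transformers import FlaxBartForCausalLM

# loads just the Bart decoder with an LM head; this is the same class the
# (Speech)EncoderDecoderModel framework instantiates for the decoder side
model = FlaxBartForCausalLM.from_pretrained("facebook/bart-base")
```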
The implementation of the Causal LM model is validated through use as a decoder module in two key `(Speech)EncoderDecoderModel` tests:
- `FlaxBartEncoderDecoderModelTest`: The Flax Bart Causal LM model is used as a decoder module in a `Flax-Bert-2-Bart` seq2seq configuration. The PyTorch-Flax cross-tests assert that all Flax hidden-states are to within a `1e-5` tolerance of their PyTorch equivalents.
- `FlaxWav2Vec2BartModelTest`: The Flax Bart Causal LM model is used as a decoder module in a` Flax-Wav2Vec2-2-Bart` speech-encoder-decoder configuration. The PyTorch-Flax cross-tests assert that all Flax output logits are to within a `4e-2` tolerance of their PyTorch equivalents.
The PyTorch-Flax cross-tests verify that the CausalLM model has been correctly implemented and that it works as expected in its use case as a decoder module in an encoder-decoder framework. | 03-08-2022 16:45:54 | 03-08-2022 16:45:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,994 | closed | Adding new train_step logic to make things less confusing for users | Change to `train_step` I discussed earlier with @gante, making the new TF approaches less confusing for users.
This PR is **extremely** experimental right now, and since it touches `train_step` it could break the entire TF side of the library, so please don't merge it until I'm done! | 03-08-2022 16:40:08 | 03-08-2022 16:40:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This should now be ready for review! The main changes are:
1) `model.fit()` with the internal loss is now tested correctly
2) The internal loss will now work no matter where you pass your labels, which should reduce user confusion by about 50%
3) Attempting to use Keras metrics with the internal loss will now throw an informative error, instead of just creating a huge mess like it used to<|||||>I see some failing tests which are possibly caused by an outdated version of TF in the Docker image that doesn't understand `steps_per_execution`. Will investigate tomorrow!<|||||>Tests are passing and I think we're ready for final review!
I have a couple of things I'm still unsure about, though:
@gante: I removed the `expand_1d` call in `train_step`. Keras by default expands 1D input arrays to 2D, e.g. shape (8,) becomes (8, 1). This is not what our models expect at all and the test revealed some failures, that were fixed by removing it. Can you think of anything that change might break, though?
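(For anyone following along, the shape change in question, sketched here with plain `tf.expand_dims` rather than the internal Keras helper:)
```python
import tensorflow as tf

labels = tf.constant([0, 1, 1, 0, 1, 0, 0, 1])   # rank-1 labels, shape (8,)
expanded = tf.expand_dims(labels, axis=-1)       # shape (8, 1), i.e. what the expansion produces
print(labels.shape, expanded.shape)
```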
@sgugger / @LysandreJik: Is there any way to get a list of all possible 'label' column names for our models? This probably seems like a strange question, but the reason I need it is that if the user passes a single tensor as the labels to Keras, I need to make that tensor a key in the input dict, which means figuring out which argument name is the 'label'. Right now I just hardcoded a list of common ones, and then I inspect the function signature to see which of those is present, but it would be a lot better if I had some kind of definitive list.<|||||>We don't have a definitive list and maintaining it is going to be difficult as well with new modalities arriving. Perhaps we should add an attribute `label_names` to all models to have this info stored somewhere? The `Trainer` would also benefit from it on the PyTorch side.
Wdyt @LysandreJik ?<|||||>I wouldn't mind having the models know their label names, otherwise inspecting the signature and looking for any argument that contains `label` should give the correct information as well. If doing that during inference/training, it might slow down the forward pass so if you feel like adding a `label_names` is cleaner that's fine for me.<|||||>The problem is that some models have labels without labels in their names (QA models, I'm looking at you!)<|||||>For TF models trained with Keras, the `call` (`forward`) method is usually only run once to build the graph, so expensive Python calls like `inspect` are totally okay!
That compilation step is really the source of a lot of different design decisions between the two frameworks - e.g. TF `einsum` takes a long time to figure out the optimal contraction during compilation and then saves it, so you can get very different runtimes as a result.<|||||>> The problem is that some models have labels without labels in their names (QA models, I'm looking at you!)
For consistency's sake, shouldn't we rename them to have `label` in their name? (with appropriate deprecation cycle of course)
Maybe a non-issue if no users have ever been misled, but if it makes everything clearer *and* helps programmatic handling of labels, then it might be worth it<|||||>I don't think making a change on the labels for the QA models is warranted as we can make an exception for those (double check the name "QuestionAnswering" and that it has `start_positions` and `end_positions` in its signature for instance). We can do the inspection once at init, so it's not a big deal in terms of performance.<|||||>> @gante: I removed the expand_1d call in train_step. Keras by default expands 1D input arrays to 2D, e.g. shape (8,) becomes (8, 1). This is not what our models expect at all and the test revealed some failures, that were fixed by removing it. Can you think of anything that change might break, though?
I can't think of any problem. It is also being tested, so if there are problems, we should be able to catch them quickly.<|||||>For reference, here are all of the arguments for any `forward()` method in the codebase on a subclass of `PreTrainedModel` that contain "label" in their name:
`{'sentence_order_label', 'mc_labels', 'next_sentence_label', 'obj_labels', 'mask_labels', 'sentence_image_labels', 'class_labels', 'labels', 'entity_labels', 'matched_label'}`
@sgugger @LysandreJik None of them look especially wrong to me, so I'm happy to assume this list + {"start_positions", "end_positions"} covers all of the possible labels and just do an `inspect` call if you think that's a good solution!<|||||>Added a utility in #16526 <|||||>Quick update: This PR is mostly ready, but I'm going to wait for `find_labels` to be merged, then rebase and use it here.<|||||>Rebased with `find_labels` and looks good in testing, so I'm going to merge! |
transformers | 15,993 | closed | Unclear message with `add-new-model-like` and no flax installed | ## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: macos high sierra 10.13.4
- Python version: 3.9
- PyTorch version (GPU?): 1.8.1 CPU
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@sgugger @LysandreJik
## Information
When trying to create a new model with `add-new-model-like` without having flax installed, an error message is raised.
While I think the code should work without flax, especially considering the numerous flags across the codebase (`is_*_available()`) to handle when one of the deep learning framework isn't installed.
The problem arises when using:
* [X] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. run `transformers-cli add-new-model-like`
2. ask for `deit`
3. see error message
```bash
$ transformers-cli add-new-model-like
What is the model you would like to duplicate? deit
Traceback (most recent call last):
File "/usr/local/bin/transformers-cli", line 33, in <module>
sys.exit(load_entry_point('transformers==4.18.0.dev0', 'console_scripts', 'transformers-cli')())
File "/usr/local/lib/python3.9/site-packages/transformers-4.18.0.dev0-py3.9.egg/transformers/commands/transformers_cli.py", line 52, in main
service = args.func(args)
File "/usr/local/lib/python3.9/site-packages/transformers-4.18.0.dev0-py3.9.egg/transformers/commands/add_new_model_like.py", line 1283, in add_new_model_like_command_factory
return AddNewModelLikeCommand(config_file=args.config_file, path_to_repo=args.path_to_repo)
File "/usr/local/lib/python3.9/site-packages/transformers-4.18.0.dev0-py3.9.egg/transformers/commands/add_new_model_like.py", line 1314, in __init__
) = get_user_input()
File "/usr/local/lib/python3.9/site-packages/transformers-4.18.0.dev0-py3.9.egg/transformers/commands/add_new_model_like.py", line 1419, in get_user_input
old_model_info = retrieve_info_for_model(old_model_type)
File "/usr/local/lib/python3.9/site-packages/transformers-4.18.0.dev0-py3.9.egg/transformers/commands/add_new_model_like.py", line 690, in retrieve_info_for_model
model_classes = retrieve_model_classes(model_type, frameworks=frameworks)
File "/usr/local/lib/python3.9/site-packages/transformers-4.18.0.dev0-py3.9.egg/transformers/commands/add_new_model_like.py", line 624, in retrieve_model_classes
"flax": auto_module.modeling_flax_auto,
File "/usr/local/lib/python3.9/site-packages/transformers-4.18.0.dev0-py3.9.egg/transformers/file_utils.py", line 2770, in __getattr__
raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module transformers.models.auto has no attribute modeling_flax_auto
```
## Expected behavior
Either work without having to install flax, or raise an error message telling me I need to install flax (which would be a bit annoying if I only intend to contribute to pytorch).
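For illustration only, a guard along these lines (not an actual patch; `framework_modules` is just a made-up name here) would avoid the hard failure:
```python
from transformers.file_utils import is_flax_available, is_tf_available, is_torch_available
from transformers.models import auto as auto_module

framework_modules = {}
if is_torch_available():
    framework_modules["pt"] = auto_module.modeling_auto
if is_tf_available():
    framework_modules["tf"] = auto_module.modeling_tf_auto
if is_flax_available():
    framework_modules["flax"] = auto_module.modeling_flax_auto
```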
Thank you very much for this library, it's great! ❤️
| 03-08-2022 16:30:18 | 03-08-2022 16:30:18 | Even if you only want to contribute the model for one framework only, you need to have a `dev` install (like for any PR to Transformers). This is all detailed in the [contributing guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests)<|||||>Noted, thanks. |
transformers | 15,992 | closed | Translation of documentation into Spanish | I translated "**Fine-tune a pretrained model**" into Spanish to file [training.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/training.mdx) and I'll make a pull request. | 03-08-2022 16:05:06 | 03-08-2022 16:05:06 | |
transformers | 15,991 | closed | Add DPT | # What does this PR do?
This PR adds [DPT](https://arxiv.org/abs/2103.13413), Dense Prediction Transformers, to the library. It's some very nice work from Intel Labs that applies Transformers for dense prediction tasks such as semantic segmentation and depth estimation.
Feel free to play around with the notebook [here](https://colab.research.google.com/drive/177o79Qm8qsjGDQk5TDSS4tawcFNSZsy8?usp=sharing).
I've defined 3 models:
* `DPTModel`
* `DPTForDepthEstimation`
* `DPTForSemanticSegmentation`.
DPTModel is the backbone only (ViT in this case). The head models use a neck (DPTNeck) combined with a task-specific head (either `DPTDepthEstimationHead` or `DPTSemanticSegmentationHead`).
Important here:
* a neck is an nn.Module that takes a list of tensors and produces another list of tensors.
* a head takes a list of tensors and returns `logits` (see the toy sketch below).
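A toy sketch of that contract (purely illustrative, not the actual `DPTNeck`/head code):
```python
from typing import List

import torch
from torch import nn

class ToyNeck(nn.Module):
    def forward(self, features: List[torch.Tensor]) -> List[torch.Tensor]:
        # list of feature maps in, list of processed feature maps out
        return [f + 0.0 for f in features]

class ToyHead(nn.Module):
    def forward(self, features: List[torch.Tensor]) -> torch.Tensor:
        # list of feature maps in, logits out
        return features[-1].mean(dim=1)
```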
To do:
- [x] make sure heads take a list of tensors as input
- [x] add tests for `DPTFeatureExtractor`
- [x] discuss `out_indices` and `in_index` names
- [x] transfer weights to `Intel` organization | 03-08-2022 14:56:41 | 03-08-2022 14:56:41 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for your reviews, addressed most comments. Main thing to update is:
* rename `out_indices` (which features to use from the backbone)
* rename `in_index` (which features to use in the head)
Ideally we have names that are going to be used by all vision models.<|||||>stop with all the emails I’m getting mad
> On Mar 11, 2022, at 8:37 AM, NielsRogge ***@***.***> wrote:
>
>
> @NielsRogge commented on this pull request.
>
> In src/transformers/models/dpt/modeling_dpt.py:
>
> > + if output_attentions:
> + all_self_attentions = all_self_attentions + (layer_outputs[1],)
> +
> + if output_hidden_states:
> + all_hidden_states = all_hidden_states + (hidden_states,)
> +
> + if not return_dict:
> + return tuple(v for v in [hidden_states, all_hidden_states, all_self_attentions] if v is not None)
> + return BaseModelOutput(
> + last_hidden_state=hidden_states,
> + hidden_states=all_hidden_states,
> + attentions=all_self_attentions,
> + )
> +
> +
> +class DPTReassembleBlocks(nn.Module):
> Renamed to DPTReassembleStage
>
|
transformers | 15,990 | closed | FeaturesManager assumes only one of Torch or TensorFlow is installed | ## Environment info
- `transformers` version: 4.12.5
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.10
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@michaelbenayoun @Albertobegue
## Information
When both Torch and TensorFlow are installed, `FeaturesManager` defaults to using `AutoModel`, so the model returned by `get_model_from_feature` is always Torch.
## To reproduce
Steps to reproduce the behavior:
1. Install Torch and TF
2. Call `FeatureManager.get_model_from_feature` with arbitrary but supported `features` and `model_name` arguments
3. The resulting model is always a Torch model
```python
features = "default" # randomly chosen, supported feature
model_name = "bert" # randomly chosen, supported model
model = FeaturesManager.get_model_from_feature(features, model_name)
```
## Expected behavior
Some test environments have both Torch and TensorFlow installed, because the immediate task is to ensure functionality is the same regardless of the framework. I would expect `FeaturesManager.get_model_from_feature` to allow TensorFlow to be used even when Torch is installed. This could be implemented by e.g. a keyword argument to `get_model_from_feature` with a default value of `None`. When the keyword argument is `None`, and both Torch and TensorFlow are installed, `FeatureManager` would default to Torch, as it does now. Otherwise, it would use the specified framework. | 03-08-2022 14:43:16 | 03-08-2022 14:43:16 | Also cc @lewtun <|||||>I'm on it!<|||||>> I'm on it!
Thanks! |
transformers | 15,989 | closed | Use tiny models for get_pretrained_model in TFEncoderDecoderModelTest | # What does this PR do?
Use tiny models for `get_pretrained_model` in `TFEncoderDecoderModelTest`.
This was originally intended to avoid **GPU OOM** for `TFRembertEncoderDecoderModelTest` in daily CI testing.
But @patrickvonplaten suggests that we should actually use the small model in the following quote:
_... think we can rename it to test_model_save_loaf_from_pretrained(...) :wink: I think this "real" name was propragated since the first encoder-decoder tests existed in PyTorch. **Since the test does no integration testing** (e.g. checking if the output corresponds to something reasonable) **it makes 0 difference whether we use dummy weights or no** dummy weights here ..._ | 03-08-2022 14:25:07 | 03-08-2022 14:25:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,988 | closed | [Docs] Improve PyTorch, Flax generate API | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR is the first step to make `generate` a 1st class citizen in the docs. It improves the generate API for PyTorch and Flax generate, improves the examples for PyTorch and adds PyTorch to the example doc tests.
Once the TF generate refactor is complete, its API can also be improved with better examples.
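For a sense of the style being targeted, a minimal greedy-generation example of the kind these docs use (the checkpoint choice here is only illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```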
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-08-2022 13:45:05 | 03-08-2022 13:45:05 | No doc-builder triggered here? :cry: <|||||>Docs live here: https://moon-ci-docs.huggingface.co/docs/transformers/pr_15988/en/index<|||||>> Docs live here: https://moon-ci-docs.huggingface.co/docs/transformers/pr_15988/en/index
The docs are not updated on the link if the PR is changed (or it takes too long). Will build the docs locally now, but I think it makes it quite difficult for the community to add/change docs.<|||||>The job updates the docs. Are they not up to date here? https://moon-ci-docs.huggingface.co/docs/transformers/pr_15988/en/main_classes/text_generation<|||||>> The job updates the docs. Are they not up to date here? https://moon-ci-docs.huggingface.co/docs/transformers/pr_15988/en/main_classes/text_generation
Nope |
transformers | 15,987 | closed | add doctests for bart like seq2seq models | # What does this PR do?
Enable doctests for bart like seq2seq models. | 03-08-2022 13:22:49 | 03-08-2022 13:22:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,986 | closed | Swin support for any input size | # What does this PR do?
This PR adds padding to Swin, allowing it to support any input size (if divisible by `32`).
Example:
```python
from transformers import SwinConfig, SwinModel
import torch
model = SwinModel(SwinConfig(image_size=384))
x = torch.randn((1, 3, 1024, 640))
out = model(x)
```
Moreover, it adds a new field to the outputs, `hidden_states_spatial_dimensions`, containing the spatial dimension of all the stages' inputs | 03-08-2022 13:21:36 | 03-08-2022 13:21:36 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15986). All of your documentation changes will be reflected on that endpoint.<|||||>Swin now returns a list of reshaped `hidden_states` (B, C, H, W). However, due to the `view` operation, the viewed tensor won't have `.grad` in it, so `test_retain_grad_hidden_states_attentions` fails. Not sure how to proceed
```python
from transformers import SwinConfig, SwinModel
import torch
model = SwinModel(SwinConfig(image_size=384))
x = torch.randn((1, 3, 1024, 640))
out = model(x, output_hidden_states=True)
[print(e.shape) for e in out.hidden_states]
torch.Size([1, 96, 256, 160])
torch.Size([1, 192, 128, 80])
torch.Size([1, 384, 64, 40])
torch.Size([1, 768, 32, 20])
torch.Size([1, 768, 32, 20])
```
Maybe I am missing something, kindly pinging @sgugger <|||||>Following @NielsRogge's suggestion, we now return the reshaped hidden states inside `reshape_hidden_sizes` in all four (`Encoder/Model/MaskedImage/ImageClassifier`) outputs<|||||>Thanks to all the reviewers. I've resolved all the conversations and renamed some layers to match our convention of `Stage` and `Layer` |
transformers | 15,985 | closed | Add the XTREME-S fine-tuning example | # What does this PR do?
This adds an example script and benchmark results for fine-tuning speech models on the [XTREME-S](https://huggingface.co/datasets/google/xtreme_s) benchmark tasks, namely Speech Recognition and Speech Classification.
The results so far:
| Task | Dataset | Result | Fine-tuned model & logs | Training time | GPUs |
|-----------------------|-----------|-----------------------|--------------------------------------------------------------------|---------------|--------|
| Speech Recognition | MLS | 30.33 WER | [here](https://huggingface.co/anton-l/xtreme_s_xlsr_300m_mls/) | 18:47:25 | 8xV100 |
| Speech Classification | Minds-14 | 94.74 F1 / 94.70 Acc. | [here](https://huggingface.co/anton-l/xtreme_s_xlsr_300m_minds14/) | 04:46:40 | 2xA100 |
| 03-08-2022 12:29:59 | 03-08-2022 12:29:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks very nice! I'm not 100% sure under which folder to put the script, but I tend to `audio-classification` as `run_glue.py` is under `text-classification`.
This xtreme-s is a new speech benchmark that has both speech recognition, speech translation and speech classification. Think we have the following options:
a) Put it under something like `speech-representation` / `speech-embeddings` because the benchmark is supposed to evaluate exactly this
b) Put it under `audio-classification` similar to `run_glue.py` under text classification
c) ...other ideas?
@sgugger - any preference? Here a short summary of the benchmark: https://huggingface.co/datasets/patrickvonplaten/xtreme-s (see diagram) |
transformers | 15,984 | closed | Add Document Image Transformer (DiT) | # What does this PR do?
This PR adds the conversion script used to convert DiT checkpoints from the [original repo](https://github.com/microsoft/unilm/tree/master/dit) to the hub.
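Since DiT reuses the BEiT implementation, loading one of the released checkpoints should look roughly like this (a sketch; the repo id is assumed from the release):
```python
from transformers import BeitFeatureExtractor, BeitForImageClassification

feature_extractor = BeitFeatureExtractor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = BeitForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
```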
It also adds a dedicated docs page (referring to the docs of BEiT). | 03-08-2022 12:17:24 | 03-08-2022 12:17:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failing test is unrelated, therefore merging. |
transformers | 15,983 | closed | TFEncoderDecoderModel generate() gives different results after #15562 | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.13.0-1015-gcp-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2+cu102 (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
- Jax version: 0.3.1
- JaxLib version: 0.3.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
`TFEncoderDecoderModel.generate()` for `ydshieh/bert2bert-cnn_dailymail-fp16` gives different results after #15562, see below.
## To reproduce
PT
```python
article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents."""
expected = """sae was founded in 1856, five years before the civil war. the fraternity has had to work hard to change recently. the university of oklahoma president says the university's affiliation with the fraternity is permanently done. the sae has had a string of members in recent months."""
from transformers import AutoTokenizer, EncoderDecoderModel
loc = "patrickvonplaten/bert2bert-cnn_dailymail-fp16"
model = EncoderDecoderModel.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
input_ids = tokenizer(article, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, use_cache=False)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
print(summary == expected)
```
On both commit `a3dbbc346` and `2e12b907a`, this gives `sae was founded in 1856, ...`
TF
```python
article = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents."""
expected = """sae was founded in 1856, five years before the civil war. the fraternity has had to work hard to change recently. the university of oklahoma president says the university's affiliation with the fraternity is permanently done. the sae has had a string of members in recent months."""
from transformers import AutoTokenizer, TFEncoderDecoderModel
loc = "ydshieh/bert2bert-cnn_dailymail-fp16"
model = TFEncoderDecoderModel.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
input_ids = tokenizer(article, return_tensors="tf").input_ids
output_ids = model.generate(input_ids, use_cache=False)
summary = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(summary)
print(summary == expected)
```
- commit `a3dbbc346` : `sae was founded in 1856, ` (same as the PyTorch version)
- commit `2e12b907a` : `sae's national chapter suspended students, ...`
### Who can help
@patrickvonplaten (generate) | 03-08-2022 11:17:24 | 03-08-2022 11:17:24 | @ydshieh - thanks for the PR!
Tiny feedback: when formatting code, use:
```python
class Example:
```
instead of
```
class Example:
```
it's a bit easier to quickly read the code this way.<|||||>@patrickvonplaten I will take a look of this issue if you haven't been able to find the time on it.
I think it would be great if we can fix this before the next release which will have exciting news about TF generate :-).
<|||||>This would be amazing if you find a bit of time for it @ydshieh <|||||>Fixed by #17426 by the changes in `generation_tf_utils.py`
```
is_pad_token_not_equal_to_eos_token_id = (eos_token_id is None) or (
(eos_token_id is not None) and (pad_token_id != eos_token_id)
)
``` |
transformers | 15,982 | closed | Marian cannot be fully serialized because it accesses the filesystem after the object instantiation | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: linux-64
- Python version: 3.8
- PyTorch version (GPU?): 1.10.2 (CPU-only)
- Tensorflow version (GPU?): none
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten
@SaulLu
@Narsil
## Information
Model I am using (Bert, XLNet ...): Marian (Helsinki-NLP/opus-mt-it-en)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I'm trying to parallelize inference through Apache Spark. While this works for other models/pipelines (e.g. https://huggingface.co/joeddav/xlm-roberta-large-xnli), it doesn't work for Marian (e.g. https://huggingface.co/Helsinki-NLP/opus-mt-it-en). The problem is that the tokenizer/model/pipeline object needs to be serialized and broadcasted to the worker nodes, so the tokenizer/model/pipeline object needs to include all the required data. However, for the Marian tokenizer, when the tokenizer/model/pipeline is unserialized and `__setstate__` is called, it tries to reload the tokenizer files (source.spm, target.spm, etc.) from the filesystem (see https://github.com/huggingface/transformers/blob/master/src/transformers/models/marian/tokenization_marian.py#L330), but those files aren't available anymore to the worker nodes, so it fails. The `__setstate__` method shouldn't access the filesystem anymore.
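For context, sentencepiece itself can round-trip a model in memory, which is roughly what a filesystem-free `__setstate__` would need (illustrative sketch only; the path below is a placeholder for wherever the file exists at save time):
```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("source.spm")                          # done once, where the file is available
serialized = sp.serialized_model_proto()       # bytes that can travel inside the pickle

restored = spm.SentencePieceProcessor()
restored.LoadFromSerializedProto(serialized)   # no filesystem access needed
```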
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: translation
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline
translator = pipeline("translation", model=model_dir)
broadcasted_translator = spark_session.sparkContext.broadcast(translator)
def compute_values(iterator):
for df in iterator:
batch_size = 32
sequences = df["text"].to_list()
res = []
for i in range(0, len(sequences), batch_size):
res += broadcasted_translator.value(sequences[i:i+batch_size])
df["translation"] = [item["translation_text"] for item in res]
yield df
schema = "text STRING, translation STRING"
sdf = spark_dataframe.mapInPandas(compute_values, schema=schema)
```
I get the following error:
```
File "/tmp/conda-78ffd793-e3a4-4b56-a869-cedd86c5eeaa/real/envs/conda-env/lib/python3.8/site-packages/pyspark/broadcast.py", line 129, in load
return pickle.load(file)
File "/tmp/conda-78ffd793-e3a4-4b56-a869-cedd86c5eeaa/real/envs/conda-env/lib/python3.8/site-packages/transformers/models/marian/tokenization_marian.py", line 330, in __setstate__
self.spm_source, self.spm_target = (load_spm(f, self.sp_model_kwargs) for f in self.spm_files)
File "/tmp/conda-78ffd793-e3a4-4b56-a869-cedd86c5eeaa/real/envs/conda-env/lib/python3.8/site-packages/transformers/models/marian/tokenization_marian.py", line 330, in <genexpr>
self.spm_source, self.spm_target = (load_spm(f, self.sp_model_kwargs) for f in self.spm_files)
File "/tmp/conda-78ffd793-e3a4-4b56-a869-cedd86c5eeaa/real/envs/conda-env/lib/python3.8/site-packages/transformers/models/marian/tokenization_marian.py", line 357, in load_spm
spm.Load(path)
File "/tmp/conda-78ffd793-e3a4-4b56-a869-cedd86c5eeaa/real/envs/conda-env/lib/python3.8/site-packages/sentencepiece/__init__.py", line 367, in Load
return self.LoadFromFile(model_file)
File "/tmp/conda-78ffd793-e3a4-4b56-a869-cedd86c5eeaa/real/envs/conda-env/lib/python3.8/site-packages/sentencepiece/__init__.py", line 171, in LoadFromFile
return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
OSError: Not found: "/container_e165_1645611551581_313304_01_000001/tmp/model_dir/source.spm": No such file or directory Error #2
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
As I've explained in the "Information" section, I should be able to serialize, broadcast, unserialize and apply the tokenizer/model/pipeline within the worker nodes. However, it fails because `__setstate__` is called and it tries to reload the tokenizer files (source.spm, target.spm, etc.) from a filesystem which is not available to the worker nodes. The `__setstate__` method shouldn't access the filesystem. | 03-08-2022 10:26:18 | 03-08-2022 10:26:18 | The reason seems to be due to the sentence piece library: https://github.com/google/sentencepiece -> should we maybe post the issue there?
Otherwise @candalfigomoro, could you maybe try to use the `tokenizers` libraries instead of `sentencepiece`? https://github.com/huggingface/tokenizers<|||||>> Otherwise @candalfigomoro, could you maybe try to use the `tokenizers` libraries instead of `sentencepiece`? https://github.com/huggingface/tokenizers
@patrickvonplaten Is there a specific tokenizer class in the `tokenizers` library that I can use as drop-in replacement for `MarianTokenizer` in a translation `pipeline`?
Something like `pipeline("translation", tokenizer=<TOKENIZER FROM TOKENIZERS LIBRARY>)`? Sorry but I'm new to huggingface's libraries.
<|||||><s>Yes you should be able to use:
```python
from transformers import MarianTokenizerFast
tokenizer = MarianTokenizerFast.from_pretrained(...)
```
</s>
Sorry thanks to @SaulLu , I just noticed that we don't have a fast implementation for MarianTokenizer :-/ We should work on adding this one though (I'll put it on my TODO list)<|||||>Let me know if you need help, I vaguely remember there was a reason it wasn't added, but I can't put my finger on it atm.<|||||>@Narsil @patrickvonplaten
IIRC the fast tokenizer was not added because Marian uses two sentencepiece models and two vocabs (for source and target) and it's not clear how to add a fast version for such tokenizers<|||||>@patil-suraj Can't we just have two different `tokenizer_{source, target}.json` files like you have 2 different `spm` files ?
Also ideally I wouldn't mutate the state of the tokenizer like `as_target_tokenizer()` does. If those 2 objects are different, they should stay 2 different objects.
This could be a different PR, but I feel it would also solve issues like the fact that `len(tokenizer)` doesn't match (There was a recent issue but I can't find it anymore). It's quite important for fast tokenizers too because they handle all the special and added tokens, which would be super confusing if it's the same "object" but they would have 2 different things added. (How do you deal with that in the slow tokenizer ?)<|||||>> Can't we just have two different tokenizer_{source, target}.json files like you have 2 different spm files ?
I actually don't know about this. I have the same question.
> It's quite important for fast tokenizers too because they handle all the special and added tokens, which would be super confusing if it's the same "object" but they would have 2 different things added
Aah, good catch! Yes, you are right. The current design is not ideal.
> How do you deal with that in the slow tokenizer ?
In Marian the spm files are only used to tokenize the text and then the `vocab` is used to convert token to id and vice-versa.
And before https://github.com/huggingface/transformers/pull/15831 marian used a joint vocab file, so we didn't need to handle this.
But now marian can also have two vocab files (for source and target) and adding tokens is actually not handled correctly. <|||||>> But now marian can also have two vocab files (for source and target) and adding tokens is actually not handled correctly.
I see ! Ok, let me know if you start working on this, I can help with the tokenizers part.<|||||>I've the same issue with this model https://huggingface.co/MoritzLaurer/mDeBERTa-v3-base-mnli-xnli used as Zero Shot Classifier (instead, this other model https://huggingface.co/joeddav/xlm-roberta-large-xnli works fine as Zero Shot Classifier).
So it seems like the `deberta_v2/tokenization_deberta_v2.py` tokenizer has the same problem.
Related to https://github.com/huggingface/transformers/pull/15529 ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten @patil-suraj @Narsil
So the solution would be to implement a fast tokenizer for Marian?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,981 | closed | [Env Command] Add hf hub to env version command | # What does this PR do?
Adds the HF hub to the env command
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-08-2022 10:10:43 | 03-08-2022 10:10:43 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15981). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15981). All of your documentation changes will be reflected on that endpoint.<|||||>Doc dev test is green so good to merge no? |
transformers | 15,980 | closed | Bad error message when downloading private model without being logged in. | Let's say an organization creates a private model and wants to share it with other team members which are less savy of `huggingface_hub` and `transformers`.
So e.g. I create: https://huggingface.co/NewT5/dummy_model
and want to share it with others.
Now if I run:
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```
I'm getting a very nice error message:
```
OSError: NewT5/dummy_model is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
After this error message I think people will have an easy time doing the correct thing which is passing **use_auth_token=True** and previously running `huggingface-cli login`.
Now what will often happen though in my opinion is that someone will share the following code with unsavvy coworkers / collaborators:
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model", use_auth_token=True)
```
Now **if you are not logged in**, you are getting the following error message:
```
OSError: Can't load config for 'NewT5/dummy_model'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'NewT5/dummy_model' is the correct path to a directory containing a config.json file
```
This error message is not great really, because the problem is not that the model doesn't exist, but it's because the user didn't run `huggingface-cli login`
I think it's worth fixing the error message here (maybe just the same as when passing `use_auth_token=True` is missing), because IMO it's a common case that people will share code with `use_auth_token=True`.
We probably need to do this in moon-landing though no?
## Env
- `transformers` version: 4.18.0.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu)
- Jax version: 0.2.25
- JaxLib version: 0.1.73
and hugging face hub version:
`0.4.0.dev0` | 03-08-2022 10:06:05 | 03-08-2022 10:06:05 | Related: https://github.com/huggingface/datasets/issues/3855<|||||>> We probably need to do this in moon-landing though no?
I don't think that requires changes to moon-landing
From what i understand, you need to catch [this error](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L2117) and display the appropriate message [here](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_utils.py#L629) (currently the underlying error message is suppressed)<|||||>To catch [this error](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L2117), the `huggingface_hub` would need to issue a proper subclass of `EnvironmentError` (for instance `HFNotLoggedInError`).
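A rough sketch of that idea (`HFNotLoggedInError` and the helper below are only illustrative -- no such class exists at the time of writing):
```python
class HFNotLoggedInError(EnvironmentError):
    """Hypothetical error a hub client could raise on a 401 for a private repo."""


def fetch_config_from_hub(repo_id, use_auth_token=None):
    # placeholder standing in for the real hub download call
    raise HFNotLoggedInError(f"401 Client Error for {repo_id}")


def load_config(repo_id, use_auth_token=None):
    try:
        return fetch_config_from_hub(repo_id, use_auth_token=use_auth_token)
    except HFNotLoggedInError:
        raise OSError(
            f"{repo_id} looks like a private repository but no valid token was found. "
            "Run `huggingface-cli login` and pass `use_auth_token=True`."
        ) from None
```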
Of course defaulting `use_auth_token` to `True` would be easier, but that's something that has been debated several times already.<|||||>In my mind `huggingface_hub` already had support for these errors, will fix the issue upstream in `huggingface_hub`.<|||||>Will be solved by https://github.com/huggingface/huggingface_hub/pull/878<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,979 | closed | Fix TFEncDecModelTest - Pytorch device | # What does this PR do?
Fix pytorch device issue for `test_pt_tf_equivalence` in `TFEncoderDecoderMixin`.
@patrickvonplaten @sgugger
## Remark:
There are other issues to fix for TFEncoderDecoder test pass:
1. use small models for `test_real_model_save_load_from_pretrained`
2. `test_bert2bert_summarization` test fails after #15562
Think these should be addressed in separate PRs (especially for 2.) | 03-08-2022 07:18:58 | 03-08-2022 07:18:58 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15979). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,978 | closed | Add custom classifcation dataset - computer vision | Hi, I recently read your amazing blog https://huggingface.co/blog/fine-tune-vit
There I could not find any reference to how to create our own dataset. Is there any class helper that we can use? Something like -
```python
from xyz import ImageClassificationDataset
dataset = ImageClassificationDataset.from_directory('/path/to/dataset')
```
For directory structured as
```
train
+- class0
+- class1
...
+- classn
valid
+- class0
+- class1
...
+- classn
test
+- class0
+- class1
...
+- classn
```
where every class folder has images
 | 03-08-2022 05:48:36 | 03-08-2022 05:48:36 | Or, if you don't have that kind of class, what would be the best way to generate a DatasetDict from the folder structure mentioned above?
```python
DatasetDict({
train: Dataset({
features: ['image_file_path', 'image', 'labels'],
num_rows: 1034
})
validation: Dataset({
features: ['image_file_path', 'image', 'labels'],
num_rows: 133
})
test: Dataset({
features: ['image_file_path', 'image', 'labels'],
num_rows: 128
})
})
```<|||||>Hi,
We've just added the [ImageFolder](https://huggingface.co/docs/datasets/master/en/loading#image-folders) to the Datasets library :)
This way, you can easily load your custom data as follows:
```
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/data")
``` |
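If the split folders don't use the exact names the loader auto-detects, the same layout from the question can also be mapped explicitly (the paths below are placeholders):
```python
from datasets import load_dataset

dataset = load_dataset(
    "imagefolder",
    data_files={
        "train": "/path/to/dataset/train/**",
        "validation": "/path/to/dataset/valid/**",
        "test": "/path/to/dataset/test/**",
    },
)
print(dataset)  # DatasetDict with train/validation/test splits, each with `image` and `label` columns
```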
transformers | 15,977 | closed | Add FlaxConvNext | # What does this PR do?
Adds a Flax ConvNext model to transformers.
## Who can review?
This is currently an in-progress PR as discussed with @patil-suraj
TODO: Fix docstrings, ensure proper parameter name mapping for inter-framework weight conversion and write tests | 03-08-2022 04:27:37 | 03-08-2022 04:27:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15977). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @DarshanDeshpande ! Sorry to only reply here now. This slipped through the cracks. LMK if you are still interested to add this model in Flax :)<|||||>Hey @patil-suraj, I am still interested in adding the Flax ConvNeXt but I will need some guidance with the TODOs mentioned in the first commit.<|||||>I will go through the PR tomorrow and do an initial review.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,976 | closed | Deadlock when loading the model in multiprocessing context | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes.
Models:
- BART EncoderDecoder: @patrickvonplaten
## Information
Model I am using (BART):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## To reproduce
```python
import torch
from pathlib import Path
import multiprocessing as mp
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
queue = mp.Queue()
def load_model(filename):
device = queue.get()
print('Loading')
model = AutoModelForSeq2SeqLM.from_pretrained('models/sqgen').to(device)
print('Loaded')
queue.put(device)
def parallel():
num_gpus = torch.cuda.device_count()
with mp.get_context('spawn').Pool(processes=num_gpus) as pool:
for gpu_id in range(num_gpus):
queue.put('cuda:{0}'.format(gpu_id))
pool = mp.Pool(processes=num_gpus)
flist = list(Path('data').glob('*.json'))
pool.map(
load_model,
flist,
)
pool.close()
pool.join()
if __name__ == '__main__':
parallel()
```
Steps to reproduce the behavior:
1. Run the above script.
2. Script hangs when loading the model.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
1. Script should not hang. | 03-08-2022 01:50:06 | 03-08-2022 01:50:06 | Hey @vikigenius,
Could you please use the forum: https://discuss.huggingface.co/ instead for this error? This seems to be quite a special case and we are trying to use Transformers issues only for issues that are directly related to Transformers models. We cannot guarantee that they work for every specific use case such as this one where the model loading is wrapped into Python's multiprocessing functions.
If you are looking into doing distributed training on GPU, does this doc maybe help: https://github.com/huggingface/transformers/tree/master/examples/pytorch#distributed-training-and-mixed-precision<|||||>@patrickvonplaten Thanks for the suggestion, I will post in the forum.
But to give you more context, I am not trying to do distributed training. I am trying to do distributed inference. I have multiple files that I want to generate questions on in parallel. So I am trying to assign each file to some GPU, and then once done, I release the GPU (to the queue) so that others use it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
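For anyone landing here, a rough sketch (not the author's script) of one common pattern for per-GPU inference -- each worker process loads the model once, on its own GPU:
```python
import multiprocessing as mp
from pathlib import Path

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

_model = None
_tokenizer = None


def init_worker(gpu_queue):
    global _model, _tokenizer
    device = gpu_queue.get()  # each worker takes one GPU id and keeps it
    _tokenizer = AutoTokenizer.from_pretrained("models/sqgen")
    _model = AutoModelForSeq2SeqLM.from_pretrained("models/sqgen").to(device)


def process_file(path):
    # ... run generation for one file with _model / _tokenizer ...
    return path


if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    num_gpus = torch.cuda.device_count()
    gpu_queue = ctx.Manager().Queue()
    for i in range(num_gpus):
        gpu_queue.put(f"cuda:{i}")
    files = list(Path("data").glob("*.json"))
    with ctx.Pool(processes=num_gpus, initializer=init_worker, initargs=(gpu_queue,)) as pool:
        pool.map(process_file, files)
```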
transformers | 15,975 | closed | add special tokens does not work in GPT2Tokenizer | ## Environment info
- `transformers` version: 3.5.0
- Platform: Pytorch
- Python version: 3.7.5
- PyTorch version (GPU?): torch-1.6.0 cuda102
Models: GPT-2 @patrickvonplaten
- Tokenizers: @SaulLu
## Information
I want to add some special tokens to the GPT2 vocab, but the function `add_special_tokens` does not work.
My code is just like this:
`tokenizer=GPT2Tokenizer.from_pretrained(gpt2path)`
`kb=torch.load(kbpath)`
`tokenizer.add_special_tokens({'additional_special_tokens':list(kb)})`
`print(tokenizer.tokenize("i'd like to book a table at resto_bombay_expensive_british_4stars_2",add_special_tokens=True))`
`print("resto_bombay_expensive_british_4stars_2" in list(kb))`
The output of last print is
**"True"**,
so I think the special tokens in KB has been added into GPT2 vocab.
But the output of the tokenizer.tokenize(...) is
**['i', "'d", 'Ġlike', 'Ġto', 'Ġbook', 'Ġa', 'Ġtable', 'Ġat', 'Ġrest', 'o', '_', 'bomb', 'ay', '_', 'expensive', '_', 'b', 'rit', 'ish', '_', '4', 'stars', '_', '2']**
The special token is split. Why?
## Expected behavior
**['i', "'d", 'Ġlike', 'Ġto', 'Ġbook', 'Ġa', 'Ġtable', 'Ġat', 'Ġresto_bombay_expensive_british_4stars_2']**
| 03-08-2022 01:17:08 | 03-08-2022 01:17:08 | Hey @ahxchzt,
Thanks for your issue.
Could you please provide a code snippet that one can copy-paste into a Python environment and execute directly? Your code snippet doesn't work because:
- 1) `GPT2Tokenizer` is not imported
- 2) No one has access to `gpt2path` nor `kbpath` so we can't reproduce the error at all even though we know how to do 1)
Could you make sure the code snippet is reproducible? <|||||>Thanks for your reply.@patrickvonplaten
I updated my transformers to 4.17.0 and the problem had been solved. I think the issue can be closed.
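For reference, a self-contained snippet of the kind requested above (using the public `gpt2` checkpoint instead of a local path) would be:
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_special_tokens({"additional_special_tokens": ["resto_bombay_expensive_british_4stars_2"]})

print(tokenizer.tokenize("i'd like to book a table at resto_bombay_expensive_british_4stars_2"))
# on a recent transformers version the added token comes back as a single piece
```
If the tokenizer is then used with a model, `model.resize_token_embeddings(len(tokenizer))` is also needed after adding tokens.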
|
transformers | 15,974 | closed | Models traced with HFTracer cannot be TorchScripted or serialized | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.0-1051-aws-x86_64-with-glibc2.27
- Python version: 3.9.5
- PyTorch version (GPU?): 1.11.0a0+git708f7b1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@michaelbenayoun
@sgugger
## Information
Model I am using (Bert, XLNet ...): BERT, but also happens e.g. for GPT-2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
import torch
from transformers import BertConfig, BertModel
from transformers.utils import fx
bert = BertModel(BertConfig())
bert.eval()
bs, seq_length = 20, 512
bert_input = torch.zeros(bs, seq_length, dtype=torch.long).random_(bert.config.vocab_size)
orig_out = bert(bert_input)
# Set-up: fx trace the model
bert_traced = fx.symbolic_trace(bert)
traced_out = bert_traced(bert_input)
torch.testing.assert_allclose(traced_out['last_hidden_state'], orig_out['last_hidden_state'])
# Issue 1: TorchScript breakage. Leaf function patching breaks TorchScript tracing, in this
# instance the generated wrapper for `torch.ones`. I believe this is because TorchScript is
# unable to
# scripted = torch.jit.script(bert_traced)
#
# The preceeding fails at pytorch/torch/_sources.py", line 22, in get_source_lines_and_file
# sourcelines, file_lineno = inspect.getsourcelines(obj). When printing out the object that
# is being resolved, `obj` is `<function _VariableFunctionsClass.ones at 0x7fbc8a6c9af0>`, the
# torch.ones wrapper that is programmatically generated in transformers.utils.fx._function_to_leaf
# Issue 2: Serialized model does not have metadata needed to re-trace on load path
import pickle, tempfile, os
with tempfile.TemporaryDirectory() as tmp_dir_name:
pkl_file_name = os.path.join(tmp_dir_name, "bert_model.pkl")
# with open(pkl_file_name, 'wb') as f:
# pickle.dump(bert_traced, f)
# with open(pkl_file_name, 'rb') as f:
# loaded = pickle.load(f)
# The previous fails with: torch.package.importer.ObjNotFoundError:
# <function _VariableFunctionsClass.ones at 0x7f4e46740ca0> was not
# found as transformers.utils.fx._VariableFunctionsClass.ones. This is
# because the ones wrapper was programmatically generated and cannot
# be resolved to a call target in a deserialization context, which
# only has references to target by qualified name (by virtue of needing
# to work across different processes).
# We can hack around this and replace the `torch.ones` wrapper with a wrapper
# that can be resolved by qualified name:
def ones_wrapper(*args, **kwargs):
return torch.ones(*args, **kwargs)
for node in bert_traced.graph.nodes:
if node.op == 'call_function' and node.target.__qualname__ == '_VariableFunctionsClass.ones':
node.target = ones_wrapper
bert_traced.recompile()
# This leads us to Issue 3: module does not have enough metadata to do re-tracing
# on the deserialization path.
with tempfile.TemporaryDirectory() as tmp_dir_name:
pkl_file_name = os.path.join(tmp_dir_name, "bert_model.pkl")
with open(pkl_file_name, 'wb') as f:
pickle.dump(bert_traced, f)
# with open(pkl_file_name, 'rb') as f:
# loaded = pickle.load(f)
#
# The above fails with:
#
# Traceback (most recent call last):
# File "/transformers_issue.py", line 64, in <module>
# loaded = pickle.load(f)
# File "/pytorch/torch/fx/graph_module.py", line 105, in reduce_graph_module
# return _deserialize_graph_module(forward, body)
# File "/pytorch/torch/fx/graph_module.py", line 163, in _deserialize_graph_module
# graph = KeepModules().trace(com)
# File "/transformers/src/transformers/utils/fx.py", line 467, in trace
# self.record(root, input_names, method_names=method_names)
# File "/transformers/src/transformers/utils/fx.py", line 418, in record
# inputs.update(self._generate_dummy_input(model, input_name, shape))
# File "/transformers/src/transformers/utils/fx.py", line 361, in _generate_dummy_input
# device = model.device
# File "/pytorch/torch/nn/modules/module.py", line 1186, in __getattr__
# raise AttributeError("'{}' object has no attribute '{}'".format(
# AttributeError: 'CodeOnlyModule' object has no attribute 'device'
# We can patch HF transformers to customize the serialization/deserialization process
# to include metadata like `device` and the input shapes that were generated during
# initial symbolic tracing: https://gist.github.com/jamesr66a/7304d8818c04abd49df7a70a2ae51c02
# The following should now pass:
with tempfile.TemporaryDirectory() as tmp_dir_name:
pkl_file_name = os.path.join(tmp_dir_name, "bert_model.pkl")
with open(pkl_file_name, 'wb') as f:
pickle.dump(bert_traced, f)
with open(pkl_file_name, 'rb') as f:
loaded = pickle.load(f)
loaded_outs = loaded(bert_input)
torch.testing.assert_allclose(loaded_outs['last_hidden_state'], orig_out['last_hidden_state'])
```
## Expected behavior
`torch.jit.script` or `pickle.dump/load` serialization/deserialization should work out-of-the box. I believe that a) switching leaf function to reference functions that can be resolved by qualified name and b) customizing HFTracer serialization to preserve the metadata needed during serialization should fix this issue
| 03-08-2022 00:39:09 | 03-08-2022 00:39:09 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,973 | closed | error invoking create_optimizer from Jupyter lab | - `transformers` version: 4.17.0
- Platform: Darwin-21.3.0-x86_64-i386-64bit
- Python version: 3.7.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@Rocketknight1, @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
import tensorflow as tf
import transformers
from transformers import BartConfig, TFBartForConditionalGeneration, BartTokenizerFast
create_optimizer(
init_lr=2e-4,
num_train_steps=18000,
num_warmup_steps=100,
)
** Error message **
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-100-7951434374ef> in <module>
3 init_lr=2e-4,
4 num_train_steps=18000,
----> 5 num_warmup_steps=100,
6 )
~/anaconda3/lib/python3.7/site-packages/transformers/utils/dummy_tf_objects.py in create_optimizer(*args, **kwargs)
2155
2156 def create_optimizer(*args, **kwargs):
-> 2157 requires_backends(create_optimizer, ["tf"])
2158
2159
~/anaconda3/lib/python3.7/site-packages/transformers/file_utils.py in requires_backends(obj, backends)
846 failed = [msg.format(name) for available, msg in checks if not available()]
847 if failed:
--> 848 raise ImportError("".join(failed))
849
850
ImportError:
create_optimizer requires the TensorFlow library but it was not found in your environment. Checkout the instructions on the
installation page: https://www.tensorflow.org/install and follow the ones that match your environment.
## Expected behavior
Error free create_optimizer invocation
| 03-08-2022 00:13:54 | 03-08-2022 00:13:54 | I could fix the issue by running `pip install importlib`. Shouldn't importlib be installed automatically as part of transformers installation? I had to go through the code to identify the importlib dependency.<|||||>Hi, I can't reproduce this issue. I made a [Colab notebook](https://colab.research.google.com/drive/1BmkheJNsw9tOyirgW0CJujdluVI0sOnJ?usp=sharing) to test and `create_optimizer` worked correctly. Is it possibly a Conda issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
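For reference, once TensorFlow is importable in the environment, the expected usage is along these lines (a sketch, not taken from the thread):
```python
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-4,
    num_train_steps=18_000,
    num_warmup_steps=100,
)
print(type(optimizer), type(lr_schedule))
```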
transformers | 15,972 | closed | Make `pos` optional in `PerceiverAudioPreprocessor` to avoid crashing `PerceiverModel` operation | Updates `PerceiverAudioPreprocessor` `forward()` implementation to match most other preprocessors / postprocessors.
Fixes #15971.
| 03-07-2022 20:27:59 | 03-07-2022 20:27:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Merging this to solve your issue in the meantime. |
transformers | 15,971 | closed | PerceiverAudioPreprocessor: forward() missing 1 required positional argument: 'pos' | The [`forward()` method implementation of `PerceiverAudioPreprocessor`](https://github.com/huggingface/transformers/blob/38cc35069c10d153e872162265288263bb7394b7/src/transformers/models/perceiver/modeling_perceiver.py#L3267) seems to be problematic because it has an extra required `pos` argument, whereas all other preprocessors do not (or it is optional). In fact, while it is passed to `_build_network_inputs()`, `pos` is not actually used in that method. Hence, I am not sure why it exists. Moreover, it breaks the default `PerceiverModel` operation, because that class assumes that there only one positional argument called `inputs` [when calling the preprocessor](https://github.com/huggingface/transformers/blob/38cc35069c10d153e872162265288263bb7394b7/src/transformers/models/perceiver/modeling_perceiver.py#L865). This seems to be a serious issue in my opinion because it crashes any instantiation of `PerceiverModel` with a `PerceiverAudioPreprocessor` as `input_preprocessor`. | 03-07-2022 20:19:08 | 03-07-2022 20:19:08 | Hi,
Thanks for raising this issue. This was a leftover from porting Deepmind's original repo. However, as I'm implementing the position embeddings differently in PyTorch (vs. the original implementation which was in Haiku), this can probably be deleted. I'll work on a PR to remove it everywhere.<|||||>Reopening this and marking it as a good first issue.<|||||>Is this issue open? I'd love to take it on. |
transformers | 15,970 | closed | Unigram tokenizer Result is Incorrect | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2
- Platform: Windows
- Python version: 3.7.0
- PyTorch version (GPU?): 1.10.2 (CPU)
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
LysandreJik & SaulLu
## Information
Model I am using (Bert, XLNet ...): [xlm-roberta-base](https://huggingface.co/xlm-roberta-base)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ * ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ * ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Following is the code I use to run the XLM-Roberta tokenizer:
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
line = "スナップリング SC-40"
print(tokenizer(line))
```
And following is hugging face's output:
```bash
{'input_ids': [0, 6, 3385, 17456, 13451, 17462, 113810, 75862, 246514, 17715, 41734, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
```
But when I run the same query with [google's sentence-piece](https://github.com/google/sentencepiece) (note that I saved the same query into a file and then used cat to send it to google's encoder):
```bash
cat /mnt/e/sample_query.txt | ./spm_encode --model=/mnt/e/sentencepiece.bpe.model --output_format=id
5 3384 17455 46404 76930 17714 41733
```
The results are not the same. And even if I take into account the [fairseq map mentioned at huggingface](https://huggingface.co/transformers/v2.4.0/_modules/transformers/tokenization_xlm_roberta.html)
```bash
# Original fairseq vocab and spm vocab must be "aligned":
# Vocab | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
# -------- | ------- | ------- | ------ | ------- | --- | --- | --- | ----- | ----- | ----
# fairseq | '<s>' | '<pad>' | '</s>' | '<unk>' | ',' | '.' | '▁' | 's' | '▁de' | '-'
# spm | '<unk>' | '<s>' | '</s>' | ',' | '.' | '▁' | 's' | '▁de' | '-' | '▁a'
```
The output still does not match. Basically speaking, when taking the map into account, google's output corresponds to:
```bash
6 3385 17456 46405 76931 17715 41734
```
which does not match huggingface's output
```bash
6, 3385, 17456, 13451, 17462, 113810, 75862, 246514, 17715, 41734
```
I think huggingface uses a fast implementation for the tokenization, but the fast implementation contains bugs in it.
BTW: If you really need a fast implementation with better parity, maybe I can provide one once my manager agrees.
## Expected behavior
Huggingface's output should be
{'input_ids': [0, 6 3385 17456 46405 76931 17715 41734, 2] ...} | 03-07-2022 19:06:49 | 03-07-2022 19:06:49 | I can reproduce it for this particular text, there indeed is some difference between fast and slow tokenizer.
```python
from transformers import XLMRobertaTokenizer, AutoTokenizer
tok_s = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
tok_f = AutoTokenizer.from_pretrained("xlm-roberta-base")
line = "スナップリング SC-40"
tok_s(line).input_ids
# [0, 6, 3385, 17456, 46405, 76931, 17715, 41734, 2]
tok_f(line).input_ids
# [0, 6, 3385, 17456, 13451, 17462, 113810, 75862, 246514, 17715, 41734, 2]
```
cc @SaulLu
<|||||>I was going to open a new issue, but it seems it may be related to this.
I am wondering if this is expected behavior? U+FF08 "(" and U+0028 "(" both encode to [0, 15, 2] using `XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")`.
CC: @patil-suraj <|||||>Hi @liweifriends126,
Thank you for bringing this problem to our attention! It is indeed a problem that the sequences of ids are not identical!
Investigating it, I think the problem lies in the encoding of u"\u30d5\u309a" (プ) and u"\u30af\u3099" (グ). Let me share with you my little test below:
```python
# Define texts to compare
text_1 = u"\u30d5\u309a" # プ
text_2 = u"\u30af\u3099" # グ
# Installations
!pip install transformers
!pip install sentencepiece
!git clone https://github.com/pytorch/fairseq
!cd fairseq
!pip install .
!wget https://dl.fbaipublicfiles.com/fairseq/models/xlmr.base.tar.gz
!tar -xzvf xlmr.base.tar.gz
# Load the model in fairseq
from fairseq.models.roberta import XLMRModel
xlmr = XLMRModel.from_pretrained('/content/data/xlmr.base', checkpoint_file='model.pt')
xlmr.eval()
# Load the model in transformers
from transformers import AutoTokenizer
tokenizer_f = AutoTokenizer.from_pretrained('xlm-roberta-base')
# Compare encoding
def compare(text):
faiseq_input_ids = xlmr.encode(text).tolist()
faiseq_ids_to_tokens = [xlmr.decode(torch.tensor([id])) for id in faiseq_input_ids]
faiseq_ids_to_tokens_unicode = [tok.encode('raw_unicode_escape') for tok in faiseq_ids_to_tokens]
trfs_input_ids = tokenizer_f.encode(text)
trfs_ids_to_tokens = tokenizer_f.convert_ids_to_tokens(trfs_input_ids)
trfs_ids_to_tokens_unicode = [tok.encode('raw_unicode_escape') for tok in trfs_ids_to_tokens]
print(f"{'Version':8}|{'Input ids':24}|{'Corresponding tokens':30}|Corresponding tokens in unicode format")
print(f"{'fairseq':8}|{repr(faiseq_input_ids):24}|{repr(faiseq_ids_to_tokens):30}|{repr(faiseq_ids_to_tokens_unicode)}")
print(f"{'trfs':8}|{repr(trfs_input_ids):24}|{repr(trfs_ids_to_tokens):30}|{repr(trfs_ids_to_tokens_unicode)}")
compare(text_1)
# Version |Input ids |Corresponding tokens |Corresponding tokens in unicode format
# fairseq |[0, 6, 16985, 2] |['', '', 'プ', ''] |[b'', b'', b'\\u30d7', b'']
# trfs |[0, 6, 17462, 113810, 2]|['<s>', '▁', 'フ', '゚', '</s>']|[b'<s>', b'\\u2581', b'\\u30d5', b'\\u309a', b'</s>']
compare(text_2)
# Version |Input ids |Corresponding tokens |Corresponding tokens in unicode format
# fairseq |[0, 6, 21300, 2] |['', '', 'グ', ''] |[b'', b'', b'\\u30b0', b'']
# trfs |[0, 6, 4758, 246514, 2] |['<s>', '▁', 'ク', '゙', '</s>']|[b'<s>', b'\\u2581', b'\\u30af', b'\\u3099', b'</s>']
```
What is surprising about this test is that sentencepiece transforms the `\u30d5\u309a` sequence into the composed `\u30d7` version (same for `\u30af\u3099`). The resulting character in both cases is identical but the unicode encoding is different: this is a problem for the consistency of the input for our model.
The bad news is that I don't know how sentencepiece manages to change a decomposed character into a composed character because we are taking the normalization operation directly from the sentencepiece proto model.
https://github.com/huggingface/transformers/blob/4975002df50c472cbb6f8ac3580e475f570606ab/src/transformers/convert_slow_tokenizer.py#L463-L470
Let me ping @Narsil who may have an idea of where the difference lies.
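For what it's worth, the composed form that sentencepiece produces here matches plain Unicode NFKC composition, which can be checked independently:
```python
import unicodedata

print(unicodedata.normalize("NFKC", "\u30d5\u309a") == "\u30d7")  # True: フ + ゚ composes to プ
print(unicodedata.normalize("NFKC", "\u30af\u3099") == "\u30b0")  # True: ク + ゙ composes to グ
```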
> BTW: If you really need a fast implementation with better parity, maybe I can provide one after agreed by my manager.
If you ever have time, I think this is indeed an important bug but most probably hard to solve! Thanks a lot for offering your help :pray: <|||||>Hi @kristjanArumae,
The case you report is in my opinion well expected because the encoding is identical between the code base of the authors of xlm-r and the fast implementation in transformers. This is a normalization of the text selected by the authors.
By reusing the functions defined in my previous comment, we can check that:
```python
def compare_ids(text):
faiseq_input_ids = xlmr.encode(text).tolist()
trfs_input_ids = tokenizer_f.encode(text)
print(f"{'Version':8}|Input ids")
print(f"{'fairseq':8}|{repr(faiseq_input_ids)}")
print(f"{'trfs':8}|{repr(trfs_input_ids):}")
compare_ids(text_3)
# Version |Input ids
# fairseq |[0, 15, 2]
# trfs |[0, 15, 2]
compare_ids(text_4)
# Version |Input ids
# fairseq |[0, 15, 2]
# trfs |[0, 15, 2]
```
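And for the parentheses specifically, the same normalization explains the identical ids:
```python
import unicodedata

print(unicodedata.normalize("NFKC", "\uff08") == "(")  # True: the full-width ( folds to the ASCII one
```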
<|||||>Hi @SaulLu:
Thanks for the response. For the normalization logic, I think you can check the following file:
https://raw.githubusercontent.com/google/sentencepiece/master/data/nmt_nfkc.tsv
This file defines the normalization logic. For example, for the following line:
```bash
41 302 300 1EA6 # Ầ => Ầ
```
It means that if "41 302 300" is encountered, it should be replaced to "1EA6"
Thanks
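That particular rule is indeed just Unicode (NFKC) composition, which is easy to verify:
```python
import unicodedata

decomposed = "\u0041\u0302\u0300"  # A + combining circumflex + combining grave
print(unicodedata.normalize("NFKC", decomposed) == "\u1ea6")  # True: matches the 1EA6 target in the tsv
```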
<|||||>Hello everyone.
When the fast tokenizer was implemented, extreme care was taken that there was no divergence between the algorithms.
With the exception of "AAA" -> ("AA", "A") vs ("A", "AA"), since both are valid, have the same score, and it's up to a float calculation divergence between both code bases (f64 vs f32).
The check was running ALL spm tokenizers against the entire XNLI database (it seemed not too big, yet provided an ample amount of weird unicode oddities to be a good testing ground).
That doesn't mean something couldn't have gone wrong or wasn't checked against.
Here is the code to replicate the `spm_precompiled` code: https://github.com/huggingface/spm_precompiled
And here is how it's tied to the tokenizer: https://github.com/huggingface/tokenizers/blob/main/tokenizers/src/normalizers/precompiled.rs
One definite potential suspect is the highly suspicious code defined here: https://github.com/huggingface/tokenizers/blob/main/tokenizers/src/normalizers/precompiled.rs#L46
As mentioned by past me, this code is super odd, but it seemed to really operate that way at the time.
Purely respecting the Trie on bytes didn't work, and working only on full graphemes didn't either; unfortunately I don't remember all the specifics.
It could have been bad implementations on my end when trying those solutions. (Offsets are also a source of headaches for this code)
IIRC all of the issues encountered were engraved in tests.
The bad grapheme here does seem to be of length `6` which magically makes it not being respected in our code: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=562ab464479995b315bcc585c24b2e0a
**I think we have to trust my past self, attempt to find the better code, but have a huge amount of testing to make sure we don't break anything else.**
I also noticed that `sentencepiece` itself has made some modifications to those files. They shouldn't have changed anything on the surface, but it's maybe something to keep in mind for this: https://github.com/google/sentencepiece/commit/fab966ad218c6d3449f7ebf088c8b891afbabec2
There's also a lot of details in the PR that originated this: https://github.com/huggingface/tokenizers/pull/401
Part of the explanation over there predates the `Precompiled` code, as I was attempting to use our own `normalizers` as first rules. Precompiled should match `spm` 1-1 (It's just rewritten in Rust, but it's really a big copy paste).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,969 | closed | [Doctests] Move doctests to new GPU & Fix bugs | # What does this PR do?
This fixes all the doc tests and moves them to a CPU runner.
99% of doc-tests are run on CPU anyways in PyTorch and it shouldn't make a difference in Tensorflow whether they are on CPU or GPU, so I think to save some $2000 per month we can safely run them on CPU.
Ok for you @sgugger @LysandreJik ?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-07-2022 15:55:06 | 03-07-2022 15:55:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15969). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger @LysandreJik - if you're fine with new CPU doc test runner, then if possible it would also be nice to report test failures just like we do the daily slow tests.
Would be happy to help set it up<|||||>> Thanks for fixing! I'm just not for making the examples more complex (especially in the quicktour) just for the sake of the doctests. Apart from that, it all looks good to me!
>
> Note that moving the job to a CPU means we won't test any of the training tutorials however, so not sure it's worth it. But we can switch back when those pass.
Yeah, I see the point here! I was thinking a bit too much about just the model docstring examples which all always run on CPU. Happy to switch it back once we have the training examples working! <|||||>Think everything works as expected now! Tests will run daily on GPU. Thanks for the help @LysandreJik <|||||>Cleaning up the rounding stuff in the docs in a follow-up PR |
transformers | 15,968 | closed | How to use GPT-2 for predicting the next word in batch | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:transformers 4.16.0.ver0
- Python version:3.7
- PyTorch version (GPU?):1.10.1
- Tensorflow version (GPU?):
- Using GPU in script?:yes
- Using distributed or parallel set-up in script?:no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.-->
@patrickvonplaten @LysandreJik
### Information
I'm using GPT2 for predicting the next word and I expect to get the loss. For running the model in batch, I should make paddings so that all the samples are of the same length. But when padding I got the significantly different loss values.
```
from transformers import GPT2LMHeadModel, GPT2Tokenizer
import torch
import numpy as np
device = 'cuda'
model_id = 'gpt2-medium'
model = GPT2LMHeadModel.from_pretrained(model_id).to(device)
model.eval()
tokenizer = GPT2Tokenizer.from_pretrained(model_id)
def get_loss(input,target,encoded=True):
input=input.to(device)
target=torch.tensor(target).to(device)
#print(final_input,target)
output=model(**input,return_dict=True)
output = output.logits[0][-1:]
print(output)
loss_fct = torch.nn.CrossEntropyLoss()
#print(output.view(-1, output.size(-1)).shape, target.shape)
loss = loss_fct(output.view(-1, output.size(-1)), target)
return loss.cpu().detach()
tokenizer.pad_token = '!'
tokenizer.padding_side = 'left'
text='I like playing the'
target=' piano'
target=tokenizer.encode(target)
text1=tokenizer(text, return_tensors="pt")
print(text1)
text2=tokenizer(text,padding='max_length',max_length=300, return_tensors="pt")
print(text2)
print(get_loss(text1,target))
print(get_loss(text2,target))
```
And I got the following results:
#### not padding
```
tensor([[-67.2690, -65.8415, -70.7127, ..., -70.0382, -72.1321, -67.0170]],
device='cuda:0', grad_fn=<SliceBackward>)
tensor(4.4344)
```
#### padding
```
tensor([[-205.1836, -207.1315, -211.1725, ..., -218.3117, -216.3883,
-205.4879]], device='cuda:0', grad_fn=<SliceBackward>)
tensor(16.0859)
```
The loss is much higher after padding. And the logits value is also strange.
I have to pad on the left to make sure logits[-1:] is always the encode state of the last input.
## Expected behavior
I think that GPT2 can do this task but I don't know how to use it in batch.
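One detail that may matter here: GPT-2 uses absolute position embeddings, so with left padding the logits at the last position are only comparable to the unpadded case if the position ids are built from the attention mask. A rough sketch of batched scoring along those lines (untested against this exact setup):
```python
enc = tokenizer(["I like playing the", "We all enjoy the"], padding=True, return_tensors="pt").to(device)
position_ids = enc.attention_mask.long().cumsum(-1) - 1
position_ids.clamp_(min=0)
with torch.no_grad():
    logits = model(**enc, position_ids=position_ids).logits
last_token_logits = logits[:, -1, :]  # with left padding this really is the last real token of each row
```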
<!-- A clear and concise description of what you would expect to happen. -->
| 03-07-2022 15:16:25 | 03-07-2022 15:16:25 | |
transformers | 15,967 | closed | Fix broken code blocks in README.md | # What does this PR do?
Fix broken code blocks in `README.md` at [`transformers/examples/pytorch/contrastive-image-text`](https://github.com/huggingface/transformers/tree/master/examples/pytorch/contrastive-image-text).
## Current screenshot
<img width="1030" alt="Screen Shot 2022-03-07 at 22 45 57" src="https://user-images.githubusercontent.com/31459778/157046074-24163384-b132-4e98-b179-40137e382d9a.png">
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-07-2022 13:48:02 | 03-07-2022 13:48:02 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,966 | closed | EvalPrediction does not allow for "sources" parameter which "sari" metric requires | https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/trainer_utils.py#L67
Hello, I have been following [this example](https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation.py) and would like to use the [sari metric](https://github.com/huggingface/datasets/tree/master/metrics/sari), which requires sources in addition to predictions and references.
Would it be possible to modify this to allow passing in source utterances so that the [compute_metrics parameter](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/trainer#transformers.Trainer) can successfully pass the appropriate information to my custom compute_metrics function? Thanks! | 03-07-2022 12:26:00 | 03-07-2022 12:26:00 | I've placed a similar request to this (I would say is the same :)), not sure if it is released. @mariosasko your advise on this?
https://github.com/huggingface/datasets/issues/3818
<|||||>I saw this! I think it's fixed in the datasets library but isn't fixed yet in this one.<|||||>Hey! We're happy to review PRs if any of you want to try your hand at contributing!<|||||>I might be interested in contributing, but I would have quite a few questions first.
I looked at the code, and seems like [transformers/src/transformers/trainer](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/trainer.py) is one of the main files that calls `compute_metrics` with `EvalPrediction`. I see the inputs variable in [this line](https://github.com/huggingface/transformers/blob/db7d6a80e82d66127b2a44b6e3382969fdc8b207/src/transformers/trainer.py#L2372). How do I get just the inputs from this variable and not the targets? One of the docstrings says `inputs (`Dict[str, Union[torch.Tensor, Any]]`): The inputs and targets of the model.`
My goal is to have a variable `all_inputs` just like `all_labels` and `all_preds` that contains the source utterances so that it can be used as a parameter for compute_metrics.<|||||>I've partially made it work with my code with the following changes, but more work is needed to actually have a solution in production since it depends on a lot of code.
Adding the inputs to `trainer.py`, from line 2419 (just a summary):
```
# losses/preds/labels on CPU (final containers)
..
all_labels = None
..
for step, inputs in enumerate(dataloader):
..
inputs = inputs.data['decoder_input_ids']
..
# Update containers on host
..
if inputs is not None:
inputs = self._pad_across_processes(inputs)
inputs = self._nested_gather(inputs)
inputs_host = inputs if inputs_host is None else nested_concat(inputs_host, inputs, padding_index=-100)
..
# Gather all tensors and put them back on the CPU if we have done enough accumulation steps.
..
if inputs_host is not None:
inputs = nested_numpify(inputs_host)
all_inputs = inputs if all_inputs is None else nested_concat(all_inputs, inputs, padding_index=-100)
..
# Gather all remaining tensors and put them back on the CPU
..
if inputs_host is not None:
inputs = nested_numpify(inputs_host)
all_inputs = inputs if all_inputs is None else nested_concat(all_inputs, inputs, padding_index=-100)
..
# Number of losses has been rounded to a multiple of batch_size and in a distributed training, the number of
..
if all_inputs is not None:
all_inputs = nested_truncate(all_inputs, num_samples)
..
# Metrics!
if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
metrics = self.compute_metrics(EvalPrediction(inputs=all_inputs, predictions=all_preds, label_ids=all_labels))
```
Then to file `trainer_utils.py`, from line 67:
```
class EvalPrediction(NamedTuple):
..
inputs: Union[np.ndarray, Tuple[np.ndarray]]
..
```
And more work has to be done in the `compute_metric()` function from the `trainer.py` class. For now, I'm using my metric directly in my transformers example file:
```
def compute_metrics(eval_preds):
inputs, preds, labels = eval_preds
if isinstance(preds, tuple):
preds = preds[0]
decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
if data_args.ignore_pad_token_for_loss:
# Replace -100 in the labels as we can't decode them.
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
inputs = np.where(inputs != -100, inputs, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
decoded_inputs = tokenizer.batch_decode(inputs, skip_special_tokens=True)
# Some simple post-processing
decoded_inputs, decoded_preds, decoded_labels = postprocess_text(decoded_inputs, decoded_preds,
decoded_labels, "sari")
sari_result = sari._compute(inputs=decoded_inputs, predictions=decoded_preds, references=decoded_labels)
```
I hope it helps with the refactoring :) <|||||>cc @sgugger <|||||>I don't really understand the changes you suggest @lmvasque since they use a variable `inputs_host` that is not defined anywhere in the `trainer.py` file. It would be easier to study the diff of what you suggest on a PR.
Note that a line such as
```
inputs = inputs.data['decoder_input_ids']
```
can't be accepted, since it's super specific to a model (not all models have `decoder_input_ids`) and also relies on the `data` field of the batch, which doesn't always exist. <|||||>Thanks for reviewing this @sgugger. Yes, I've just realized that this code is executed only when running on GPU. I had the chance to run it in this setting this week and yes, you are right about the changes you mention.
I've added all my changes as a pull request so you can easily review them (please use them as a reference not as a ready to go feature): https://github.com/huggingface/transformers/pull/16461
About these changes:
- These are definitely not enough for production, further changes are needed in the compute_metrics(), but the dependencies start to get messy.
- These changes work for me by using my own version of compute metrics in my external metrics file.
- For adding the inputs, I've replicated the code of the preds and labels across the code. However, I don't know if all of these transformations are necessary. I don't understand deeply this code to tell :)
<|||||>I had not realized you were adding a field to the `EvalPrediction` named tuple. That's unfortunately a breaking change we can't do, as it would break the code of every user doing evaluation with a `compute_metrics` function.<|||||>Can this be supported otherwise? Research in Text Simplification uses the inputs in its main evaluation metric [SARI](https://huggingface.co/metrics/sari), so we cannot use Huggingface pipeline (Datasets + Transformers) for our models (unless we hack the code for our purposes..).<|||||>I'll look into adding something that would be backward compatible and does the same as your PR, but it might take a bit of time. In the meantime, I'd advise using a subclass of the `Trainer` with your custom code.<|||||>That's sounds good, I'm happy to do that meanwhile. Thanks again for this! It would be a good step for the Simplification world :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I think this can be closed now, since your PR was merged @lmvasque <|||||>Thanks everyone! Closing.<|||||>For everyone to use the latest version of Transformer (>=v.4.21.0 (https://newreleases.io/project/github/huggingface/transformers/release/v4.21.0) simply define: ``` include_inputs_for_metrics = True ``` in the training arguments.
```
training_args = Seq2SeqTrainingArguments(
    include_inputs_for_metrics=True,
    # other arguments here
    # ...
)
```
Then in the `compute_metrics()` function, you can use `inputs`.
```
def compute_metrics(pred):
    # do something with pred.inputs
    ...
```
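For a fuller picture, here is a minimal sketch of such a `compute_metrics` (assuming a `tokenizer` and a loaded `sari` metric are in scope; this is illustrative, not the exact code from the thread):
```python
import numpy as np

def compute_metrics(pred):
    # Sketch only: decode inputs/preds/labels and feed them to SARI.
    inputs = np.where(pred.inputs != -100, pred.inputs, tokenizer.pad_token_id)
    labels = np.where(pred.label_ids != -100, pred.label_ids, tokenizer.pad_token_id)
    decoded_inputs = tokenizer.batch_decode(inputs, skip_special_tokens=True)
    decoded_preds = tokenizer.batch_decode(pred.predictions, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    return sari.compute(
        sources=decoded_inputs,
        predictions=decoded_preds,
        references=[[label] for label in decoded_labels],
    )
```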
Thanks to everyone above. Cheers!
|
transformers | 15,965 | closed | remove re-defination of FlaxWav2Vec2ForCTCModule | # What does this PR do?
Remove duplicate definations of `FlaxWav2Vec2ForCTCModule` | 03-07-2022 11:26:50 | 03-07-2022 11:26:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15965). All of your documentation changes will be reflected on that endpoint.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15965). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,964 | closed | Feature Extractor accepts `segmentation_maps` | # What does this PR do?
This PR modifies `MaskFormerFeatureExtractor` so that it accepts `segmentation_maps`. Under the hood, it converts each map to binary masks using `.convert_segmentation_map_to_binary_masks`. A usage example follows:
```python
from PIL import Image

from transformers import MaskFormerFeatureExtractor, MaskFormerForInstanceSegmentation

image = Image.open("./transformers/tests/fixtures/tests_samples/ADE20K/ADE_val_00000001.jpg")
segmentation_map = Image.open("./transformers/tests/fixtures/tests_samples/ADE20K/ADE_val_00000001.png")

feature_extractor = MaskFormerFeatureExtractor(num_labels=150)

inputs = feature_extractor(
    images=[image],
    segmentation_maps=[segmentation_map],
    return_tensors="pt",
)

mask_former = MaskFormerForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-small-ade").eval()
mask_former(**inputs)
```
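Conceptually, the conversion mentioned above does something like the following (an illustrative NumPy sketch, not the actual implementation):
```python
import numpy as np

def segmentation_map_to_binary_masks(segmentation_map: np.ndarray, num_labels: int):
    # One binary mask per label id that actually appears in the map.
    labels = np.unique(segmentation_map)
    labels = labels[labels < num_labels]
    masks = np.stack([(segmentation_map == label) for label in labels]).astype(np.float32)
    return masks, labels
```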
This simplifies and aligns the APIs to other models (e.g. `SegFormer`).
## Issues
We need to store the `num_labels` somewhere. Currently, I've added it as an `__init__` param. However, we may want to avoid storing dataset-related information in the feature extractor. Open for discussion :)
## TODO
- [ ] re-upload the feature extractors config to the hub | 03-07-2022 09:49:11 | 03-07-2022 09:49:11 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the comments. All the conversations have been resolved, the feature extractors config have to be re-uploaded to the hub.
I think passing the `num_labels` in the `__init__` is not the best solution (but it's a solution) since:
- the user must know and remember to override it later on
- the feature extractor is coupled with the dataset used for pretraining
Let me know if you have a better solution in mind :) <|||||>We need the number of labels stored somewhere to be able to run the inference widgets on those models, so having it in the feature extractor is the best solution I can think of.<|||||>Added `ignore index` and `reduce labels` following `SegFormer` APIs<|||||>Updated the code base, now the feature extractor will return two list when `segmentation_maps` is passed:
- **mask_labels** -- a list of tensors of shape `(labels, height, width)`
- **class_labels** -- a list of tensors of shape `(labels, num_labels)`
They identify the labels of `mask_labels`, e.g. the label of `mask_labels[i][j]` is `class_labels[i][j]`. Due to different sizes in the first dimension, they cannot be stacked together.
We could pad them but this will add more development time<|||||>Thanks
- `ignore_index` is needed for some cases in which we want to deal with, well, indices we want to ignore :)
<|||||>Update the doc in the feature extractor with an example<|||||>Update the documentation by changing `labels -> num_labels` and providing a clear example on how `segmentation_maps` are preprocessed for maskformer<|||||>Update docstring in modeling masksformer to reflect the changes in the inputs |
transformers | 15,963 | closed | Speedup T5 Flax training by using Numpy instead of JAX for batch shuffling | # What does this PR do?
The example Flax T5 MLM script was modified to not use JAX for batch shuffling.
With this change, training time per step decreased from 0.246 s to 0.185 s, a speedup of 1.3 (t5-base model trained on TPU v3-8 VM, batch size 64, Adafactor optimizer).
Shuffling on CPU also prevents using memory on the accelerator devices for the batch index array, which is significant for the large datasets that are typically used with pre-training. For instance, the batch index array size for a 30GB dataset with sequence length 512 is 500MB.
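For illustration, the host-side shuffling amounts to something like this (a sketch with hypothetical names, not the exact script code):
```python
import numpy as np

def host_side_batch_indices(num_samples: int, batch_size: int, rng: np.random.Generator):
    # Hypothetical helper: shuffle indices on the host instead of with jax.random,
    # so the index array never occupies accelerator memory.
    indices = rng.permutation(num_samples)
    num_batches = num_samples // batch_size  # drop the last partial batch
    return np.split(indices[: num_batches * batch_size], num_batches)

batches = host_side_batch_indices(num_samples=1024, batch_size=64, rng=np.random.default_rng(42))
```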
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR. @patrickvonplaten | 03-07-2022 08:26:23 | 03-07-2022 08:26:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15963). All of your documentation changes will be reflected on that endpoint.<|||||>Think I'm in favor of this. However could we also set a numpy random seed in the script to make the runs reproducible?
cc @patil-suraj - what do you think here? <|||||>> However could we also set a numpy random seed in the script to make the runs reproducible?
The script calls trainer_utils.py's set_seed() which amongst others sets numpy's seed. |
transformers | 15,962 | closed | longformer-large's hidden_states and last_hidden_states have different size in sequence_length | ## Environment info
- `transformers` version:
- Platform:
- Python version: 3.7
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...): Longformer-large
The problem arises when using: student's homework
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [1 ] my own task or dataset: (give details below): NER
## To reproduce
Steps to reproduce the behavior:
1. input the sequence into Longformer-Large
2. output hidden_states[-1].shape and last_hidden_states.shape
3. find out that they have different lengths in the sequence_length dimension
## Expected behavior
| 03-07-2022 07:35:09 | 03-07-2022 07:35:09 | sorry,should be the hidden_states[-1].shape and last_hidden_states.shape<|||||>Hi @ddf62 ! Could you post a shot code snippet so we can reproduce this ? Thanks!<|||||>```python
class MyModel(tez.Model):
    def __init__(self, freeze_bert=False, model_name='longformer-large', hidden_size=1024, num_classes=2):
        super().__init__()
        config = AutoConfig.from_pretrained(model_name)
        hidden_dropout_prob: float = 0.22
        layer_norm_eps: float = 17589e-7
        config.update(
            {
                "output_hidden_states": True,
                "hidden_dropout_prob": hidden_dropout_prob,
                "layer_norm_eps": layer_norm_eps,
                "add_pooling_layer": False,
            }
        )
        self.automodel = AutoModel.from_config(config)
        self.fc = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(hidden_size * 4, num_classes, bias=False),
        )
        if freeze_bert:
            for p in self.automodel.parameters():
                p.requires_grad = False

    def forward(self, ids, mask):
        outputs = self.automodel(ids, mask)
        k = ids.shape[1]
        hidden_states = torch.cat(
            tuple([outputs.hidden_states[i] for i in [-1, -2, -3, -4]]), dim=-1
        )  # [bs, seq_len, hidden_dim*4]
        first_hidden_states = hidden_states[:, :k, :]  # [bs, hidden_dim*4]
        print(outputs.hidden_states[-1].shape, outputs.last_hidden_states.shape)
        logits = self.fc(first_hidden_states)
        logits = torch.softmax(logits, dim=-1)
        return logits, 0, {}
```
In `forward`, the output of `print(outputs.hidden_states[-1].shape, outputs.last_hidden_states.shape)` is shown below; the second dim differs between the two tensors.
<|||||>the output is:
torch.Size([16, 1536, 1024]) torch.Size([16, 1304, 1024])<|||||>cc @patrickvonplaten <|||||>@ddf62 - Could you please post a formatted code snippet in:
```python
class MyModel(tez.Model):
```
and one that we can reproduce? What is `tez` ? This issue also doesn't seem to be a core issue of Transformers so maybe the forum https://discuss.huggingface.co/ is the better place?<|||||>sorry,you can use this code and their outputs are same:
```py
from transformers import AutoTokenizer, AutoModel, LongformerTokenizerFast
import torch
from torch import nn


class MyModel(nn.Module):
    def __init__(self, freeze_bert=False, model_name='longformer-large', hidden_size=1024, num_classes=15):
        super(MyModel, self).__init__()
        self.automodel = AutoModel.from_pretrained(model_name, output_hidden_states=True, return_dict=True)
        self.fc = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(hidden_size * 4, num_classes, bias=False),
        )
        if freeze_bert:
            for p in self.automodel.parameters():
                p.requires_grad = False

    def forward(self, input_ids, attn_masks):
        outputs = self.automodel(input_ids, attention_mask=attn_masks)
        print(outputs.hidden_states[-1].shape, outputs.last_hidden_state.shape)  # fixed: `outputs`, not `output`
        hidden_states = torch.cat(
            tuple([outputs.hidden_states[i] for i in [-1, -2, -3, -4]]), dim=-1
        )  # [bs, seq_len, hidden_dim*4]
        first_hidden_states = hidden_states[:, :, :]  # [bs, hidden_dim*4]
        logits = self.fc(first_hidden_states)
        return logits
```<|||||>Gently pinging @ydshieh here since he is becoming our longformer expert. If you have some time it would be amazing if you could take a look here :-)<|||||>OK, added to my TODO list :)<|||||>Hi @ddf62
Could you also include the code that you used to create an instance of `MyModel`, prepare the inputs, and pass the inputs to the model in your above code snippet , please? This way, I can run the code to get the output directly and to investigate it.
Thank you.<|||||>```
from transformers import AutoTokenizer, AutoModel, LongformerTokenizerFast
import torch
from torch import nn


class MyModel(nn.Module):
    def __init__(self, freeze_bert=False, model_name='longformer-large', hidden_size=1024, num_classes=15):
        super(MyModel, self).__init__()
        self.automodel = AutoModel.from_pretrained(model_name, output_hidden_states=True, return_dict=True)
        self.fc = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(hidden_size * 4, num_classes, bias=False),
        )
        if freeze_bert:
            for p in self.automodel.parameters():
                p.requires_grad = False

    def forward(self, input_ids, attn_masks):
        outputs = self.automodel(input_ids, attention_mask=attn_masks)
        print(outputs.hidden_states[-1].shape, outputs.last_hidden_state.shape)
        hidden_states = torch.cat(
            tuple([outputs.hidden_states[i] for i in [-1, -2, -3, -4]]), dim=-1
        )  # [bs, seq_len, hidden_dim*4]
        first_hidden_states = hidden_states[:, :, :]  # [bs, hidden_dim*4]
        logits = self.fc(first_hidden_states)
        return logits


model = MyModel(model_name='../model/longformer-large', num_classes=15, freeze_bert=False)
tokenizer = LongformerTokenizerFast.from_pretrained('../model/longformer-large', add_prefix_space=True)
encoding = tokenizer(
    '80% of Americans believe seeking multiple opinions can help them make better choices, and for good reason.'.split(),
    is_split_into_words=True,
    # return_offsets_mapping=True,
    truncation=True,
)
model(torch.tensor([encoding['input_ids']]), torch.tensor([encoding['attention_mask']]))
```
the output is :torch.Size([1, 512, 1024]) torch.Size([1, 22, 1024])
But i think the shape of hidden_states[-1] should also be [1, 22, 1024].<|||||>Hi @ddf62
- Could you run the command `transformers-cli env` and copy-and-paste its output below. We need this information in order to reproduce the issue.
- I could not find any pretrained model with the name `longformer-large`. In your code snippet, you have `'../model/longformer-large'`, which is a local file we don't have. Could you check where your model comes from?
With `allenai/longformer-large-4096` and a dev version `version: 4.18.0.dev0`, I can't reproduce the issue. The output I got is:
```python
torch.Size([1, 22, 1024]) torch.Size([1, 22, 1024])
```<|||||>```
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.15.0
- Platform: Linux-5.4.0-92-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 3090<fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
'../model/longformer-large' is the model i download from [allenai](https://huggingface.co/allenai)
/
[longformer-large-4096](https://huggingface.co/allenai/longformer-large-4096)<|||||>> ```
> Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
>
> - `transformers` version: 4.15.0
> - Platform: Linux-5.4.0-92-generic-x86_64-with-debian-buster-sid
> - Python version: 3.7.11
> - PyTorch version (GPU?): 1.10.1 (True)
> - Tensorflow version (GPU?): not installed (NA)
> - Flax version (CPU?/GPU?/TPU?): not installed (NA)
> - Jax version: not installed
> - JaxLib version: not installed
> - Using GPU in script?: 3090<fill in>
> - Using distributed or parallel set-up in script?: <fill in>
> ```
>
> '../model/longformer-large' is the model i download from [allenai](https://huggingface.co/allenai) / [longformer-large-4096](https://huggingface.co/allenai/longformer-large-4096)
Thank you! I am able to reproduce the issue. This issue is already fixed in a pull request #15537 and is included in the latest version [v4.17.0](https://github.com/huggingface/transformers/releases/tag/v4.17.0).
I am going to close this issue, but don't hesitate if you have further question. Thanks! |
transformers | 15,961 | closed | Seed _get_train_sampler's generator with arg seed to improve reproducibility | ... and make the world_size<=1 code path more similar to the others
# What does this PR do?
Seed the generator that is (sometimes) used in _get_train_sampler with self.args.seed to improve reproducibility/control over randomness.
https://github.com/huggingface/transformers/pull/11582/files introduced using torch generators (when possible) to improve reproducibility in trainer's _get_train_sampler. However, the seed in TrainingArguments isn't used explicitly. This is usually fine, but if there's any non-determinism in the number of calls to random generators in, e.g., model_init, this doesn't do what you want. (For instance, let's say you wanted to play with differently sized models but leave all else equal including data loading order.)
The other branches of this method appear (I think) to all use this same seed, so that helps bring this in line too, but that's probably not important.
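For context, the change amounts to something like the following inside `_get_train_sampler` (a sketch, not the exact diff):
```python
import torch
from torch.utils.data import RandomSampler

# Sketch: tie the sampler's RNG to the `seed` training argument instead of the
# global torch RNG state at the time the sampler is built.
generator = torch.Generator()
generator.manual_seed(self.args.seed)
sampler = RandomSampler(self.train_dataset, generator=generator)
```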
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
No, hope that's ok.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
internal bugfix
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Suggesting @sgugger (cc @siddk)
| 03-07-2022 06:53:54 | 03-07-2022 06:53:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15961). All of your documentation changes will be reflected on that endpoint.<|||||>No. As mentioned in the other issues, this code is intended to be this way. The torch RNG has been seeded so this is completely deterministic.<|||||>Ok, I think it depends on what guarantees you want to provide for
reproducibility. As I say in the pr, if the seed should guarantee that you
get the same data in the same order, even for different models, then you
need this. Otherwise you're of course right.
It's of course your call, but the test provided fails against master.
On Mon, Mar 7, 2022, 4:46 AM Sylvain Gugger ***@***.***>
wrote:
> No. As mentioned in the other issues, this code is intended to be this
> way. The torch RNG has been seeded so this is completely deterministic.
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/15961#issuecomment-1060650001>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AAACLIJ6VJKQGDR4VNWCJJDU6X3BLANCNFSM5QCMI7JA>
> .
> Triage notifications on the go with GitHub Mobile for iOS
> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>
> or Android
> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.
>
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>Oh, I read too fast and didn't catch your whole point. Thanks for expanding :-)
How about we turn this on and off with a flag? Since the `seed` is set by default, I'm afraid this will hinder users launching the train method several times (they'll get the same shuffle then, instead of different ones).<|||||>That makes sense to me! How about:
* we add a new flag called "sampler_seed" (or "data_seed" or whatever you
want) which defaults to None, and
* we use the old behavior if it's None and otherwise use sampler_seed for
this generator and for all the other seeds in that method otherwise.
On Mon, Mar 7, 2022 at 7:54 AM Sylvain Gugger ***@***.***>
wrote:
> Oh, I read too fast and didn't catch your whole point. Thanks for
> expanding :-)
> How about we turn this on and off with a flag? Since the seed is set by
> default, I'm afraid this will hinder users launching the train method
> several times (they'll get the same shuffle then, instead of different
> ones).
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/15961#issuecomment-1060838414>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AAACLILPQPQTGMDV4GY35D3U6YRDTANCNFSM5QCMI7JA>
> .
> Triage notifications on the go with GitHub Mobile for iOS
> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>
> or Android
> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.
>
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>Sounds good to me!<|||||>Fwiw, in messing with our downstream project mistral, I discovered another
place where the existing behavior could prove problematic, though
realistically mostly in unit tests
```python
trainerA = Trainer(args=TrainingArguments(seed=7), ...)
trainerB = Trainer(args=TrainingArguments(seed=7))
loaderA = trainerA.get_train_dataloader()
loaderB = trainerB.get_train_dataloader()
```
loaderA and loaderB will be initialized with different seeds because the
calls to set_seed are in the constructors for trainerA and trainerB. This
seems surprising and undesirable to me.
Happy to stick with the plan, but if that changes your mind about the
default, please let me know!
On Mon, Mar 7, 2022 at 10:14 AM Sylvain Gugger ***@***.***>
wrote:
> Sounds good to me!
>
> —
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/15961#issuecomment-1060983404>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AAACLIIK6IOJE7KF6D6LMU3U6ZBRNANCNFSM5QCMI7JA>
> .
> Triage notifications on the go with GitHub Mobile for iOS
> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>
> or Android
> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.
>
> You are receiving this because you authored the thread.Message ID:
> ***@***.***>
>
<|||||>not sure what's going on with the doc build. it looks unrelated to me, but please let know if I'm wrong.
Otherwise, I made data_seed along the lines discussed, so I think it's RFAL.<|||||>Thanks again for your PR!<|||||>Thanks for reviewing and accepting! |
transformers | 15,960 | closed | How to get T5 encoder&decoder's hidden states and keep requires_grad=True | transformers version:
4.15.0
Hi! I am making a custom T5 generation model, and I have some problems. In my model, I want to take out the decoder's last hidden states and encoder's last hidden states to do some other operations, and these operations will be used to fine-tune the encoder and decoder. So I need last_hidden_state's requires_grad=True and is_leaf=False.
I get the encoder_last_hidden_states and decoder_last_hidden_states in the following code:
```python
outputs1 = self.T5model1(
    input_ids=batch["source_ids"],
    attention_mask=batch["source_mask"],
    labels=lm_labels,
    decoder_attention_mask=batch['target_mask'],
    output_hidden_states=True,
)
decoder_last_hidden_states = outputs1['decoder_hidden_states'][-1]
encoder_last_hidden_states = outputs1['encoder_last_hidden_state']
```
But I found that the requires_grad of decoder_last_hidden_states and encoder_last_hidden_states are False and is_leaf of them is True. It looks like the gradient doesn't propagate.
How do I get these vectors so that I can finetune the full six-layer encoder and six-layer decoder.
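For reference, a quick check on the snippet above (sketch only, reusing the variables already defined there):
```python
# If this forward pass runs inside a torch.no_grad() (or torch.inference_mode())
# block, the outputs will have requires_grad=False and is_leaf=True, exactly as
# reported, and no gradient can propagate back into the encoder/decoder.
print(decoder_last_hidden_states.requires_grad, decoder_last_hidden_states.is_leaf)
print(encoder_last_hidden_states.requires_grad, encoder_last_hidden_states.is_leaf)
```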
| 03-07-2022 06:45:36 | 03-07-2022 06:45:36 | |
transformers | 15,959 | closed | ValueError: DebertaV2Model does not support gradient checkpointing. | ValueError: DebertaV2Model does not support gradient checkpointing! | 03-07-2022 06:33:53 | 03-07-2022 06:33:53 | Hi,
DebertaV2 supports gradient checkpointing as per #14175.
Can you provide a code snippet to reproduce your issue?<|||||>```python
config = AutoConfig.from_pretrained(model_name)
config.update(
    {
        "gradient_checkpointing": True,
        "output_hidden_states": True,
        "hidden_dropout_prob": hidden_dropout_prob,
        "layer_norm_eps": layer_norm_eps,
        "add_pooling_layer": False,
        "num_labels": self.num_labels,
    }
)
self.transformer = AutoModel.from_pretrained(model_name, config=config)
```
Hello, the above is my code snippet<|||||>I tried to reproduce the error, but I couldn't. @shilida would you please mention the `model_name` and which transformer version you have currently installed? <|||||>> I tried to reproduce the error, but I couldn't. @shilida would you please mention the `model_name` and which transformer version you have currently installed?
Thanks! The model_name is debert-v2-xlarge. And the transformer version is 4.11.3<|||||>You should try something like that (worked for me with debertaV3):
```
import torch.utils.checkpoint

config = AutoConfig.from_pretrained(model_name)
config.update(
    {
        "output_hidden_states": True,
        "hidden_dropout_prob": hidden_dropout_prob,
        "layer_norm_eps": layer_norm_eps,
        "add_pooling_layer": False,
        "num_labels": self.num_labels,
    }
)
self.transformer = AutoModel.from_pretrained(model_name, config=config)
self.transformer.gradient_checkpointing_enable()
``` |
transformers | 15,958 | closed | Blenderbot 1.0B Distilled eats up memory over many inferences | Hi, I've noticed that over the course of many inferences, the Blenderbot 1.0B Distilled model continuously allocates GPU memory and eventually causes the GPU to crash. My project only uses single-turn inferences, and I was wondering how to prevent Blenderbot from continuously allocating memory. Thanks
<img width="1209" alt="Screen Shot 2022-03-06 at 11 27 45 PM" src="https://user-images.githubusercontent.com/12601917/156968067-425cf072-a182-4f85-87f7-64517ef2b93b.png">
! | 03-07-2022 04:28:02 | 03-07-2022 04:28:02 | It's hard to say anything without looking at the code. Also for such general question, [forum](https://discuss.huggingface.co/) would be the best place to ask. Thank you!<|||||>Hi, thank you! I'll definitely post in the forum. Also some extra information, the inferences work for the first few hundred posts, but then I run into the error above and I can't get any more inferences after.
My code is below. It's just basic generation (with somewhat high volume). Thanks!
<img width="1244" alt="Screen Shot 2022-03-07 at 4 21 31 PM" src="https://user-images.githubusercontent.com/12601917/157120053-fa9b94dd-2931-4c03-9c8a-4bc747bebba0.png">
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,957 | closed | mBART tokenizer not following expected target token order | ## Environment info
- `transformers` version: 4.17
- Platform: linux
- Python version: 3.8.12
- PyTorch version (GPU?): 1.10.2+cu113
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Maybe @patil-suraj @SaulLu @patrickvonplaten ?
## Information
Model I am using (Bert, XLNet ...): mBART
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce & Expected behavior
```
from transformers import MBartTokenizerFast, MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro')
tokenizer.src_lang = 'en_XX'
tokenizer.tgt_lang = 'ro_RO'
print(tokenizer('haha'))
with tokenizer.as_target_tokenizer():
    print(tokenizer('haha'))
```
which returns
```
{'input_ids': [22010, 2, 250004], 'attention_mask': [1, 1, 1]}
{'input_ids': [22010, 2, 250020], 'attention_mask': [1, 1, 1]}
```
However, per the [official doc](https://huggingface.co/transformers/v4.1.1/model_doc/mbart.html#training), the target sequence should follow the order of `[tgt_lang_code] X [eos]`, which is `[250020, 2, 22010]` in the case above.
| 03-07-2022 04:03:24 | 03-07-2022 04:03:24 | Hey @zijwang ! Thank you for reporting this.
Actually, the doc is wrong here: the target tokens refer to `decoder_input_ids`, for which the format should be
`[tgt_lang_code] x [eos]`
But it's not required to directly pass `decoder_input_ids`, we can instead just pass `labels` which is of the format `x [eos][tgt_lang_code]`. And the `decoder_input_ids` are prepared inside the model by shifting `labels` to the right, so we get the correct format for `decoder_input_ids` i.e `[tgt_lang_code] x [eos]`.
The `with tokenizer.as_target_tokenizer():` is used to prepare `labels`, so the order is correct. I will fix the docs and add better examples.
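To make the two formats concrete, using the ids from this issue (en -> ro, "haha"):
```python
# Illustration only, based on the output printed above.
labels            = [22010, 2, 250020]  # x, </s>, ro_RO  <- what the target tokenizer produces
decoder_input_ids = [250020, 22010, 2]  # ro_RO, x, </s>  <- built inside the model by shifting `labels` right
```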
<|||||>Thanks, @patil-suraj . I see your point here. Having the docs updated will be helpful :)
Meanwhile, do we still need to set `decoder_start_token_id` (something like https://github.com/huggingface/transformers/blob/master/examples/pytorch/translation/run_translation_no_trainer.py#L365-L372) given `decoder_input_ids` is right-shifted input?<|||||>> do we still need to set `decoder_start_token_id`
It's not required to set it, if you pass `labels` those will be created inside the model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,956 | closed | clean_up_tokenization_spaces=False does not behave correctly in AutoTokenizer's decode function | input example:
```python
tok_ids = [ 101, 2632, 1011, 7110, 1010, 2142, 5424, 14041, 2727, 1011, 2260, 1011, 5757, 102, 0]
```
if we do decode
```python
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.decode(tok_ids, clean_up_tokenization_spaces=False)
# output is
'[CLS] al - ain , united arab emirates 1996 - 12 - 06 [SEP] [PAD]'
# we can see here the ',' is separated from word 'ain'
```
if we do decode
```python
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.decode(tok_ids, clean_up_tokenization_spaces=False)
# output is
'[CLS] al - ain, united arab emirates 1996 - 12 - 06 [SEP] [PAD]'
# the space between ',' and 'ain' is removed although we set clean_up_tokenization_spaces as False
```
following the above question, I find another inconsistent behavior:
```python
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.decode(tokenizer.encode("al-ain"))
# I expect decode result is "al-ain" but get "al - ain", so spaces are inserted
```
Is this designed as encoding and decoding the same word can lead to different output or it is a bug?
more precisely, if we do
```python
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
tokenizer.tokenize("al-ain")
# output is ['al', '-', 'ain']
```
but I expect it generates subwords not inserting spaces. | 03-06-2022 21:45:51 | 03-06-2022 21:45:51 | cc @SaulLu <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@SaulLu is this issue resolved or WIP ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @bugface,
> Is this designed as encoding and decoding the same word can lead to different output or it is a bug?
Short answer: yes :smiley:
In general, the `decode` method is a best-effort method because in the `text -> encode -> sequence of ids` operation there may be some loss of information (and this is the case for Bert's tokenizer by design). In general this is not a big deal as Bert is not a generative model and therefore does not need to decode a purely generated id sequence (I'd be very interested to know what use case you use decode for - as this is the subject of quite a few issues on our side! :pray: )
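For example, with Bert's tokenizer the round trip is lossy by construction:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("al-ain"))                            # ['al', '-', 'ain']  (punctuation split, lossy)
print(tokenizer.convert_tokens_to_string(['al', '-', 'ain']))  # 'al - ain'  (tokens re-joined with spaces)
```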
Also regarding the `clean_up_tokenization_spaces`, this argument targets specific cases as you can see in the method used to perform this cleaning:
https://github.com/huggingface/transformers/blob/f394a2a50d8729cd1ca9b368e330ec50664c3292/src/transformers/tokenization_utils_base.py#L3392-L3415
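Roughly, that method only applies a fixed list of replacements (simplified sketch, not the exact code), and `" - "` is not among them:
```python
def clean_up_tokenization(out_string: str) -> str:
    # Simplified: collapse spaces before common punctuation and contractions only.
    for pattern, replacement in [(" .", "."), (" ,", ","), (" !", "!"), (" ?", "?"),
                                 (" n't", "n't"), (" 'm", "'m"), (" 's", "'s"),
                                 (" 've", "'ve"), (" 're", "'re")]:
        out_string = out_string.replace(pattern, replacement)
    return out_string
```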
IMO, the `-` case is less "common" than the one listed in the current cleaning script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,955 | closed | Unable to run PPLM potato example | I am following the instructions mentioned in the below page to run PPLM examples. Unfortunately the code is throwing error.
PPLM Page : https://github.com/huggingface/transformers/tree/master/examples/research_projects/pplm
**Environment:**
Python 3.9.7
pytorch-lightning==1.0.4
Error Log:
```
python run_pplm.py -B military --cond_text "The potato" --length 50 --gamma 1.5 --num_iterations 3 --num_samples 10 --stepsize 0.03 --window_length 5 --kl_scale 0.01 --gm_scale 0.99 --colorama --sample
= Prefix of sentence =
<|endoftext|>The potato
Using PPLM-BoW
0%| | 0/50 [00:00<?, ?it/s<|endoftext|>The potato has
2%|###3 | 1/50 [00:00<00:29, 1.63it/s]<|endoftext|>The potato has been
4%|######7 | 2/50 [00:00<00:21, 2.28it/s]<|endoftext|>The potato has been in
6%|##########1 | 3/50 [00:01<00:19, 2.39it/s]<|endoftext|>The potato has been in the
8%|#############5 | 4/50 [00:01<00:16, 2.73it/s]<|endoftext|>The potato has been in the news
10%|################9 | 5/50 [00:01<00:15, 2.89it/s]<|endoftext|>The potato has been in the news lately
12%|####################2 | 6/50 [00:02<00:14, 2.99it/s]<|endoftext|>The potato has been in the news lately for
14%|#######################6 | 7/50 [00:02<00:14, 3.01it/s]<|endoftext|>The potato has been in the news lately for its
16%|########################### | 8/50 [00:02<00:13, 3.01it/s]<|endoftext|>The potato has been in the news lately for its alleged
18%|##############################4 | 9/50 [00:03<00:13, 3.01it/s]<|endoftext|>The potato has been in the news lately for its alleged role
20%|#################################6 | 10/50 [00:03<00:13, 3.00it/s]<|endoftext|>The potato has been in the news lately for its alleged role in
22%|####################################9 | 11/50 [00:03<00:13, 2.89it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a
24%|########################################3 | 12/50 [00:04<00:13, 2.83it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly
26%|###########################################6 | 13/50 [00:04<00:13, 2.79it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague
28%|############################################### | 14/50 [00:05<00:13, 2.76it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that
30%|##################################################4 | 15/50 [00:05<00:12, 2.72it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed
32%|#####################################################7 | 16/50 [00:05<00:12, 2.70it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1
34%|#########################################################1 | 17/50 [00:06<00:12, 2.64it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,
36%|############################################################4 | 18/50 [00:06<00:12, 2.57it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400
38%|###############################################################8 | 19/50 [00:07<00:12, 2.53it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people
40%|###################################################################2 | 20/50 [00:07<00:11, 2.50it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across
42%|######################################################################5 | 21/50 [00:07<00:11, 2.47it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe
44%|#########################################################################9 | 22/50 [00:08<00:11, 2.44it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe.
46%|#############################################################################2 | 23/50 [00:08<00:11, 2.36it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It
48%|################################################################################6 | 24/50 [00:09<00:11, 2.31it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's
50%|#################################################################################### | 25/50 [00:09<00:11, 2.25it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a
52%|#######################################################################################3 | 26/50 [00:10<00:10, 2.23it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato
54%|##########################################################################################7 | 27/50 [00:10<00:10, 2.21it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with
56%|############################################################################################## | 28/50 [00:11<00:10, 2.19it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an
58%|#################################################################################################4 | 29/50 [00:11<00:09, 2.14it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate
60%|####################################################################################################8 | 30/50 [00:12<00:09, 2.08it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history
62%|########################################################################################################1 | 31/50 [00:12<00:10, 1.83it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
64%|###########################################################################################################5 | 32/50 [00:13<00:10, 1.73it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
66%|##############################################################################################################8 | 33/50 [00:13<00:09, 1.79it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
68%|##################################################################################################################2 | 34/50 [00:14<00:09, 1.72it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In
70%|#####################################################################################################################6 | 35/50 [00:15<00:09, 1.61it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact
72%|########################################################################################################################9 | 36/50 [00:15<00:08, 1.66it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact,
74%|############################################################################################################################3 | 37/50 [00:16<00:08, 1.54it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes
76%|###############################################################################################################################6 | 38/50 [00:17<00:07, 1.62it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were
78%|################################################################################################################################### | 39/50 [00:17<00:06, 1.66it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually
80%|######################################################################################################################################4 | 40/50 [00:19<00:08, 1.21it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred
82%|#########################################################################################################################################7 | 41/50 [00:19<00:07, 1.18it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to
84%|#############################################################################################################################################1 | 42/50 [00:21<00:07, 1.09it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill
86%|################################################################################################################################################4 | 43/50 [00:22<00:06, 1.01it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill people
88%|###################################################################################################################################################8 | 44/50 [00:22<00:05, 1.08it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill people.
90%|#######################################################################################################################################################2 | 45/50 [00:23<00:04, 1.06it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill people. But
92%|##########################################################################################################################################################5 | 46/50 [00:24<00:03, 1.12it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill people. But in
94%|#############################################################################################################################################################9 | 47/50 [00:25<00:02, 1.09it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill people. But in the
96%|#################################################################################################################################################################2 | 48/50 [00:26<00:01, 1.19it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill people. But in the early
98%|####################################################################################################################################################################6 | 49/50 [00:27<00:00, 1.17it/s]<|endoftext|>The potato has been in the news lately for its alleged role in a deadly plague that killed 1,400 people across Europe. It's a potato with an unfortunate history.
In fact, potatoes were actually bred to kill people. But in the early 1800
100%|########################################################################################################################################################################| 50/50 [00:27<00:00, 1.79it/s]
0%| | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/Users/apple/PycharmProjects/streamlit-pplm/transformers/examples/research_projects/pplm/run_pplm.py", line 827, in <module>
run_pplm_example(**vars(args))
File "/Users/apple/PycharmProjects/streamlit-pplm/transformers/examples/research_projects/pplm/run_pplm.py", line 663, in run_pplm_example
unpert_gen_tok_text, pert_gen_tok_texts, _, _ = full_text_generation(
File "/Users/apple/PycharmProjects/streamlit-pplm/transformers/examples/research_projects/pplm/run_pplm.py", line 389, in full_text_generation
pert_gen_tok_text, discrim_loss, loss_in_time = generate_text_pplm(
File "/Users/apple/PycharmProjects/streamlit-pplm/transformers/examples/research_projects/pplm/run_pplm.py", line 499, in generate_text_pplm
pert_past, _, grad_norms, loss_this_iter = perturb_past(
File "/Users/apple/PycharmProjects/streamlit-pplm/transformers/examples/research_projects/pplm/run_pplm.py", line 115, in perturb_past
grad_accumulator = [(np.zeros(p.shape).astype("float32")) for p in past]
File "/Users/apple/PycharmProjects/streamlit-pplm/transformers/examples/research_projects/pplm/run_pplm.py", line 115, in <listcomp>
grad_accumulator = [(np.zeros(p.shape).astype("float32")) for p in past]
AttributeError: 'tuple' object has no attribute 'shape'
``` | 03-06-2022 18:48:36 | 03-06-2022 18:48:36 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,954 | closed | Make is_thing_map in Feature Extractor post_process_panoptic_segmentation defaults to all instances | # What does this PR do?
Following a discussion with @Narsil, the `is_thing_map` argument in `FeatureExtractor.post_process_panoptic_segmentation` will default to considering all instances as `thing`. Thus, it won't perform instance merging by default.
The user should always provide a correct `is_thing_map` if he wants to merge instances, currently, the code will default to `COCO` that may be unwanted. | 03-06-2022 12:24:44 | 03-06-2022 12:24:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15954). All of your documentation changes will be reflected on that endpoint.<|||||>> Not particularly fond of the names `thing` vs `stuff` but I imagine this is the standards ?
More or less :)
<|||||>Thanks @Narsil for the very nice feedback. Following your amazing set of examples, I believe `class_ids_to_fuse` (or `label_ids_to_fuse`) is the most descriptive name we can use.
As you said, a `Dict` is not necessary. My implementation followed more or less what was done by the authors, but we can improve it.
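A sketch of the signature being discussed (argument name not final, illustrative only):
```python
# Default: fuse nothing; pass a set of label ids to opt in to instance merging.
segmentation = feature_extractor.post_process_panoptic_segmentation(
    outputs, label_ids_to_fuse=set()
)
```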
I'll update the code. |
transformers | 15,953 | closed | add simple multi gpu complet | # What does this PR do?
Multi-GPU evaluation
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-05-2022 22:01:30 | 03-05-2022 22:01:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15953). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,952 | closed | Set scale_embedding to False in some TF tests | # What does this PR do?
This PR sets `scale_embedding=False` in `TFSpeech2TextModelTester` to avoid `inputs_embeds` and the PT/TF difference being scaled by 4 - the objective is to keep a low tolerance `1e-5` in the PT/TF test, as we have seen several times that this is a strong safeguard!
(This is not a real bug. It's also similar to #15684, where we got larger differences between PT/TF simply because the model weights are initialized with larger values).
TF: @gante @Rocketknight1
Speech: @patrickvonplaten
## More context
Set `scale_embedding` to `False` in some TF tests.
The current `Speech2TextConfig` defaults to [scale_embedding=True](https://github.com/huggingface/transformers/blob/9932ee4b4bca9045d941af6687ef69eedcf68483/src/transformers/models/speech_to_text/configuration_speech_to_text.py#L137). Therefore we have:
- [self.embed_scale = tf.math.sqrt(float(embed_dim))](https://github.com/huggingface/transformers/blob/9932ee4b4bca9045d941af6687ef69eedcf68483/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py#L742).
- The tests for `speech_to_text` use [hidden_size=16](https://github.com/huggingface/transformers/blob/9932ee4b4bca9045d941af6687ef69eedcf68483/tests/speech_to_text/test_modeling_tf_speech_to_text.py#L75), and therefore `inputs_embeds` will be scaled by 4.
https://github.com/huggingface/transformers/blob/9932ee4b4bca9045d941af6687ef69eedcf68483/src/transformers/models/speech_to_text/modeling_tf_speech_to_text.py#L843-L844
Since `inputs_embeds` here is obtained from a (convolutional) layer instead of via a look-up table, it contains a tiny difference between PT/TF. This difference is scaled by 4 through `self.embed_scale`.
This makes `TFSpeech2TextModel` **the only** model that will fail the aggressive PT/TF test introduced in #15839 (with the low tolerance `1e-5`). More precisely, the output tensors that fail are `encoder_hidden_states_0` and `encoder_hidden_states_1`.
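A toy numeric illustration of this amplification; the discrepancy value below is made up and not taken from the actual test:

```python
import math

import numpy as np

# Hypothetical per-element PT/TF discrepancy that would pass a 1e-5 tolerance on its own
diff = np.full((2, 3), 3e-6)

# embed_scale = sqrt(hidden_size) = sqrt(16) = 4, as in the test config discussed above
embed_scale = math.sqrt(16)

print(np.abs(diff).max() < 1e-5)                # True: raw difference is within tolerance
print(np.abs(diff * embed_scale).max() < 1e-5)  # False: the scaled difference exceeds it
```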
## Results
**With this PR, the tolerance `1e-5` works for all TF models' `test_pt_tf_model_equivalence` (in #15839), both on GPU / CPU, tested 100 times per model.** | 03-05-2022 16:37:26 | 03-05-2022 16:37:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15952). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,951 | closed | Support modern list type hints in HfArgumentParser | # What does this PR do?
Support modern list type hint syntax ([PEP 585](https://www.python.org/dev/peps/pep-0585/), introduced in Python 3.9) in the HfArgumentParser. This change keeps backward compatibility: the old `typing.List[...]` works just as well as the newer `list[...]`.
Fixes #15950
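A hedged sketch of why both spellings can be handled uniformly; this shows the `typing` introspection involved and is not the exact check added in this PR:

```python
import typing

# On Python 3.9+, the old and new spellings resolve to the same origin and arguments,
# so a parser can treat them the same way.
print(typing.get_origin(typing.List[str]) is list)  # True
print(typing.get_origin(list[str]) is list)         # True
print(typing.get_args(typing.List[str]))            # (<class 'str'>,)
print(typing.get_args(list[str]))                   # (<class 'str'>,)
```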
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-05-2022 15:19:16 | 03-05-2022 15:19:16 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15951). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,950 | closed | HfArgumentParser doesn't recognize new list type hint syntax | Since Python 3.9 we can use built-in collection types as generic type hints (e.g. `list[str]` instead of `typing.List[str]`); see also [PEP 585](https://www.python.org/dev/peps/pep-0585/). Currently this does not seem to work with the `HfArgumentParser`, but using the old `typing.List[str]` works.
Minimal example for an arguments data class:
```python
import typing
from dataclasses import dataclass

@dataclass
class Args:
my_boring_argument_list: typing.List[str] # works
my_cool_argument_list: list[str] # fails
``` | 03-05-2022 15:02:09 | 03-05-2022 15:02:09 | I was able to pretty easily find the relevant piece of code and submitted a PR to fix it #15951 |
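A short usage sketch of how such a data class is typically parsed; the field name comes from the example above, while the default and CLI values are illustrative and `parse_args_into_dataclasses` is the standard entry point:

```python
import typing
from dataclasses import dataclass, field

from transformers import HfArgumentParser


@dataclass
class Args:
    my_boring_argument_list: typing.List[str] = field(default_factory=list)


parser = HfArgumentParser(Args)
(args,) = parser.parse_args_into_dataclasses(args=["--my_boring_argument_list", "a", "b"])
print(args.my_boring_argument_list)  # ['a', 'b']
```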
transformers | 15,949 | closed | [Tests] Fix ViTMAE integration test | # What does this PR do?
This PR fixes the integration test of ViTMAE, which only passed on CPU.
ViTMAE uses randomness internally (it creates a random boolean mask to indicate which patches to mask). I used `torch.manual_seed(2)` to make this deterministic; however, I [learned](https://discuss.pytorch.org/t/random-seed-that-spans-across-devices/19735) that using the same seed across CPU/GPU will not result in the same random numbers (as each uses a different random number generator).
Hence, this PR sets the expected slice depending on the device.
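A small sketch of the underlying behaviour, illustrative only and not the test code itself:

```python
import torch

torch.manual_seed(2)
cpu_noise = torch.rand(3)  # drawn from the CPU generator

if torch.cuda.is_available():
    torch.manual_seed(2)
    gpu_noise = torch.rand(3, device="cuda")  # drawn from the CUDA generator
    # Even with the same seed, the two draws generally differ, which is why the
    # expected slice has to be chosen per device.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # typically False
```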
Huge thanks to @ydshieh for helping me out finding this. | 03-05-2022 09:09:06 | 03-05-2022 09:09:06 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15949). All of your documentation changes will be reflected on that endpoint. |
transformers | 15,948 | closed | is_index_masked and is_index_global_attn in Longformer | I noticed that in [LongformerEncoder](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py#L1255), is_index_masked and is_index_global_attn are set like this:
```
is_index_masked = attention_mask < 0
is_index_global_attn = attention_mask > 0
```
Can anyone help me understand what these two variables do and why they are set this way?
Note that in [LongformerModel](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py#L1599), the final attention_mask uses 0 for no attention, 1 for local attention, and 2 for global attention.
This would mean that in LongformerEncoder, is_index_masked comes out as all False, and is_index_global_attn as all True except for no-attention tokens. This is quite confusing to me.
It looks to me that the correct values should be like this, though I could be wrong:
```
is_index_masked = attention_mask == 0
is_index_global_attn = attention_mask == 2
```
Any help is appreciated! | 03-05-2022 07:36:28 | 03-05-2022 07:36:28 | cc @patrickvonplaten <|||||>Hey @lightxu,
What we usually do in `transformers` is to convert an attention_mask which looks like:
```python
attention_mask = [[1, 1, 1, 1], [1, 0, 0, 0]]
```
to something like:
```python
attention_mask = torch.broadcast_to(torch.tensor([[0.0, 0.0, 0.0, 0.0], [0.0, -10000.0, -10000.0, -10000.0]])[:, None, None, :], (2, 1, 4, 4))
```
since a `1` means that the vector at this time step of the `QK^T` matrix is **not** masked (0.0 is added) and `0` means that it is masked (-10000.0 is added). This happens for all models and is standard practice for transformer models.
Now Longformer is special because it requires 2 types of attention mechanisms internally, meaning that in addition to `0.0` and `-10000.0` we also add `10000.0`, which is why we have the statement:
```python
is_index_masked = attention_mask < 0
is_index_global_attn = attention_mask > 0
```
Does that make sense?<|||||>Hi @patrickvonplaten
This does make sense, since I also see corresponding comments in [LongformerSelfAttention](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py#L561). I am trying to understand where this conversion happens; is it happening here in [LongformerModel](https://github.com/huggingface/transformers/blob/master/src/transformers/models/longformer/modeling_longformer.py#L1699)?
For context, I am trying to adapt the DistilBERT model to use Longformer self-attention. The structure of the [distilbert model](https://github.com/huggingface/transformers/blob/v4.3.0/src/transformers/models/distilbert/modeling_distilbert.py#L474) looks pretty simple, and I couldn't find anything that broadcasts the attention_mask.<|||||>Found the conversion in the [`get_extended_attention_mask`](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L311) method. So I think the logic in Longformer is correct; I will try to make similar changes for my long version of the DistilBERT model. Thanks! |
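A minimal sketch tying the thread above together, assuming the simplified conversion `(1.0 - mask) * -10000.0` that `get_extended_attention_mask` effectively applies (the real method also handles dtypes and broadcasting):

```python
import torch

# 0 = padding (no attention), 1 = local attention, 2 = global attention
attention_mask = torch.tensor([[1, 2, 1, 0]])

# Simplified view of what get_extended_attention_mask does to these values
extended = (1.0 - attention_mask) * -10000.0
print(extended)  # tensor([[     0.,  10000.,      0., -10000.]])

is_index_masked = extended < 0       # tensor([[False, False, False,  True]]) -> padding token
is_index_global_attn = extended > 0  # tensor([[False,  True, False, False]]) -> global token
```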