repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 13,429 | closed | Illegal Instruction Error on `prepare_inputs_for_generation` -> gpt neo/ j | ## Environment info
- `transformers` version: 4.10.0
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.32
- Python version: 3.8.10
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
@StellaAthena
Library:
- text generation: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): GPT-neo-2.7B / GPT-J-6B
The problem arises when using:
* [ ] the official example scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
input_ids = tokenizer.encode("the universe is most dense at", return_tensors="pt")
output = model.generate(input_ids, max_length=50)
output = tokenizer.decode(output[0], skip_special_tokens=True)
```
```
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
Fatal Python error: Illegal instruction
Current thread 0x00007f3ce013e640 (most recent call first):
File "/home/korny/.local/lib/python3.8/site-packages/transformers/models/gpt_neo/modeling_gpt_neo.py", line 929 in prepare_inputs_for_generation
File "/home/korny/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1288 in greedy_search
File "/home/korny/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 989 in generate
File "/home/korny/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 15 in decorate_context
File "try.py", line 20 in worker
File "/usr/lib/python3.8/threading.py", line 870 in run
File "/usr/lib/python3.8/threading.py", line 932 in _bootstrap_inner
File "/usr/lib/python3.8/threading.py", line 890 in _bootstrap
Thread 0x00007f426b06b740 (most recent call first):
File "/usr/lib/python3.8/threading.py", line 302 in wait
File "/usr/lib/python3.8/queue.py", line 89 in join
File "try.py", line 36 in <module>
Illegal instruction (core dumped)
```
I'm getting the same error when using
```python
>>> from transformers import pipeline
>>> generator = pipeline('text-generation', model='EleutherAI/gpt-neo-2.7B')
>>> generator("EleutherAI has", do_sample=True, min_length=50)
```
## Expected behavior
No errors
| 09-05-2021 07:16:27 | 09-05-2021 07:16:27 | this seems connected to `torch==1.6.0` - the generator works fine with `torch==1.9.0`
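For reference, a quick way to confirm which build is actually active before calling `generate` (nothing transformers-specific, just a sanity check):
```python
# Sanity check: the crash above went away after moving from torch 1.6.0 to 1.9.0,
# so print the versions that are actually loaded in the current environment.
import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("CUDA available:", torch.cuda.is_available())
```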
## BTW
> the universe is most dense at the center of the galaxy, and the density decreases with distance from the center.
> The density of the universe is measured in units of the critical density, which is the density at which the universe would collapse
## EDIT
feel free to close this if the version requirement was stated in the docs<|||||>cc @patil-suraj <|||||>Hi there! I just tried this on colab with torch 1.6 but couldn't reproduce, here's the colab if you want to take a look
https://colab.research.google.com/drive/1weoJYgJaVme7LgpiWRxprr-BY71GG3Cq?usp=sharing<|||||>I'll try to reproduce in a fresh conda environment - not the first time having strange errors with it<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,428 | closed | Please add the GPT-J model for use directly on the Transformers website | Please add the GPT-J AI model so it can be used directly on the Transformers website to write text. This model has 6 billion parameters, and you can scale it up to 40 billion parameters. To find this model, go to GitHub and search for GPT-J. | 09-05-2021 05:16:35 | 09-05-2021 05:16:35 | GPT-J is already part of the Transformers library https://huggingface.co/EleutherAI/gpt-j-6B - you may need to install the master branch (`pip install git+https://github.com/huggingface/transformers` ) if it's not yet in a release<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,427 | closed | Where should I put the processor in training code? | Hi @lycfight could you please open an issue with a minimal code snippet so we could take a look. Thanks :)
_Originally posted by @patil-suraj in https://github.com/huggingface/transformers/issues/11445#issuecomment-909943670_ | 09-05-2021 04:52:11 | 09-05-2021 04:52:11 | I have a problem while writing training code in PyTorch. I want to create a custom Dataset for the COCO image-caption dataset, as follows:
1. Inherit from `torch.utils.data.Dataset`:
```python
from torch.utils.data import Dataset

class Image_textDataset(Dataset):
```
2. Then override `__getitem__(self, idx)`, where I use the processor on each (image, text) sample,
but it seems that CLIPProcessor can't process an (image, text) sample into a consistent shape for the dataloader to build a batch, as follows:
```python
def __getitem__(self, idx):
    img_id = self.img_ids[idx]
    # randomly pick one caption from the image captions
    text = random.choice(self.img_id_to_captions[img_id])
    img_filename = self.img_id_to_filename[img_id]
    img_path = op.join(self.img_dir, img_filename)
    img = Image.open(img_path)
    inputs = processor(text=text, images=img, return_tensors="pt", padding="max_length", truncation=True)
    return inputs
```
3. Or have `__getitem__` return a raw (image, text) pair and use the processor in a custom `collate_fn`, like:
```python
def collate_fn(examples):
    images = [example[0] for example in examples]
    captions = [example[1] for example in examples]
    inputs = processor(
        text=captions,
        images=images,
        max_length=77,
        padding="max_length",
        truncation=True,
        return_tensors="pt",
    )
    batch = {
        "pixel_values": inputs["pixel_values"],
        "input_ids": inputs["input_ids"],
        "attention_mask": inputs["attention_mask"],
    }
    return batch
```
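A minimal sketch of wiring this `collate_fn` into a PyTorch `DataLoader` (the dataset variable and batch size are placeholders, assuming `__getitem__` returns a raw (image, text) pair as in option 3):
```python
# Placeholder wiring: `train_dataset` is an Image_textDataset whose
# __getitem__ returns a raw (image, text) pair; collate_fn is defined above.
from torch.utils.data import DataLoader

train_loader = DataLoader(
    train_dataset,
    batch_size=8,
    shuffle=True,
    collate_fn=collate_fn,
)

for batch in train_loader:
    pixel_values = batch["pixel_values"]
    input_ids = batch["input_ids"]
    attention_mask = batch["attention_mask"]
    break  # just inspect the first batch
```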
then pass `collate_fn` to the `DataLoader`, as sketched above<|||||>This issue is not relevant to the transformers repository, please post it in the PyTorch forums for quick help<|||||>> This issue is not relevant to the transformers repository, please post it in the PyTorch forums for quick help
I think the processor is a core piece of transformers, so this should be covered in the transformers tutorials<|||||>You could put the processor anywhere you want, either in the dataset or in the `collate_fn`. If processing on the fly, I would put it in the `collate_fn`, as it processes the whole batch with a single call, which is usually faster than processing single examples.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,426 | closed | convert pytorch checkpoints to TF1.x checkpoints (reverse of transformers-cli convert) | # Feature request
Hello,
Some BERT models were pre-trained using Hugging Face/Facebook/NVIDIA's implementations and only have a `pytorch_model.bin`. I am wondering if I can convert these checkpoints to TF 1.x checkpoints (compatible with Google's BERT https://github.com/google-research/bert).
| 09-05-2021 04:40:11 | 09-05-2021 04:40:11 | Hello! We don't have a conversion script capable of doing that. Is there a feature of google-research/bert we could implement to make your life easier with `transformers` ?<|||||>This is simply for comparison reasons. We want to see if there is a performance difference due to the implementation.
By the way, I also tried to convert the PyTorch checkpoint to TF2 using the `convert_pytorch_checkpoint_to_tf2.py` script. But I got this error:
```
$ python convert_pytorch_checkpoint_to_tf2.py
Traceback (most recent call last):
File "convert_pytorch_checkpoint_to_tf2.py", line 21, in <module>
from . import (
ImportError: attempted relative import with no known parent package
```
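That `ImportError` comes from the script's relative imports (`from . import ...`), so it cannot be run as a standalone file; invoking it as a module of the installed package should avoid it. A minimal sketch (the `--help` call only prints the expected arguments, nothing here points at a real checkpoint):
```python
# Sketch: run the converter through the installed transformers package so its
# relative imports resolve; replace "--help" with the real CLI arguments.
import subprocess
import sys

subprocess.run(
    [sys.executable, "-m", "transformers.convert_pytorch_checkpoint_to_tf2", "--help"],
    check=True,
)
```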
I am using transformers v4.10.0 and I have installed this package.
Thanks!
Li
<|||||>By the way, my python version is 3.8.11
I am using ubuntu 18.04<|||||>> Is there a feature of google-research/bert we could implement to make your life easier with transformers?
The `run_classifier.py` script supports TSV input. The `datasets` library from Hugging Face only supports CSV format. TSV format is easier to parse than CSV format. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,425 | closed | [Benchmark] | # Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | 09-04-2021 23:05:41 | 09-04-2021 23:05:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,424 | closed | Error with T5 model: Output is always getting truncated with 20 tokens | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.0
- Platform: Google Colab
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102
- Tensorflow version (GPU?): I am not using tensorflow
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patil-suraj @patrickvonplaten @sgugger
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I have fine-tuned a T5-small model for key-phrase extraction with the Trainer API by passing it an input paragraph and training it to output the same paragraph, but with the key-phrases surrounded by '|||' (e.g. |||George Washington||| was a president and ....). However, when I try to make a prediction with the model, the output always contains exactly 20 tokens, and as a result the output is cut off mid-sentence. When tokenizing the training, validation, and testing sets, I set the `max_length` parameter to 512. I do not know why every single output is only 20 tokens (my input data is much longer). Aside from the output being chopped off, the model seems to be fine (the '|||' is showing up in some of the predicted outputs despite the output length being only 20).
Steps to reproduce the behavior:
The function I use to tokenize my custom dataset (this is not using the HuggingFace Dataset class):
```python
def tokenize_dataset(dataset):
    tokenized_dataset = []
    for input, output in tqdm(dataset.items()):
        processed_input = t5_tokenizer(f"input: {input} </s>", padding='max_length', truncation=True, return_tensors="pt", max_length=512)
        processed_output = t5_tokenizer(f"output: {output} </s>", padding='max_length', truncation=True, return_tensors="pt", max_length=512)
        labels = copy.deepcopy(processed_output['input_ids'].squeeze())
        labels[labels == 0] = -100
        tokenized_dataset.append({'input_ids': processed_input['input_ids'].squeeze(),
                                  'attention_mask': processed_input['attention_mask'].squeeze(),
                                  'labels': labels})
    return tokenized_dataset
```
This is my code to train the model:
```python
training_args = TrainingArguments(
    output_dir="t5smallcav12",
    logging_dir="t5smallcav12/runs",
    evaluation_strategy="steps",
    logging_strategy="steps",
    save_strategy="steps",
)
trainer = Trainer(
    t5_model,
    training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_val,
    tokenizer=t5_tokenizer,
)
trainer.train()
trainer.save_model("t5smallcav12")

mymod = T5ForConditionalGeneration.from_pretrained(pretrained_model_name_or_path="t5smallcav12/")
toker = T5Tokenizer.from_pretrained(pretrained_model_name_or_path="t5smallcav12/")

def tokenize_one(input):
    processed_input = toker(f"input: {input} </s>", padding='max_length', truncation=True, return_tensors="pt", max_length=512)
    processed_input.to('cuda')
    return processed_input

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("device ", device)
mymod = mymod.to(device)

with torch.no_grad():
    input, output = list(val_mapping.items())[302]
    print(input)
    print(output)
    print(toker.decode(mymod.generate(input_ids=tokenize_one(input)['input_ids'], attention_mask=tokenize_one(input)['attention_mask'])[0], skip_special_tokens=True, clean_up_tokenization_spaces=True))
    print(len(mymod.generate(input_ids=tokenize_one(input)['input_ids'], attention_mask=tokenize_one(input)['attention_mask'])[0]))
```
The output from the last line is always 20, no matter which sample I use from `val_mapping`.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## EDIT
After looking at this [issue](https://discuss.huggingface.co/t/t5-for-conditional-generation-getting-started/1284/9?u=nr1) on the HuggingFace forum, I found out that my `mymod.config.max_length` was 20. After manually reassigning this value to 512, my problem was solved. However, I still have no idea why this was 20 in the first place (I didn't add this). Feel free to close this issue. I only leave it open so that the general issue can be addressed. | 09-04-2021 21:15:18 | 09-04-2021 21:15:18 | Glad you found the solution.
The default `max_length` for `generate` is set to 20. The `max_length` really depends on the task/problem, so it should be set either in `config` or passed to `generate`. |
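For example, a minimal sketch using the objects defined earlier in this issue (`mymod`, `toker`, `tokenize_one`; 512 is simply the value that worked here):
```python
# Either override the length per call ...
inputs = tokenize_one(input)  # helper defined above
output_ids = mymod.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=512,
)
print(toker.decode(output_ids[0], skip_special_tokens=True))

# ... or set it once on the config so every generate() call picks it up.
mymod.config.max_length = 512
```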
transformers | 13,423 | closed | Huggingface Inference API | Hello,
I recently bought the startup subscription for my organization. However, it appears I cannot add large models to the Inference API. I am trying to serve the GPT-J-6B model on the Inference API. When I emailed "[email protected]", it returned an error message:
```host aspmx.l.google.com[142.251.4.27] said:
The email account that you tried to reach does not exist.
Please try double-checking the recipient's email address for typos or unnecessary spaces.
```
What is the updated support email for startup plan users? | 09-04-2021 21:02:54 | 09-04-2021 21:02:54 | Hi, it is `[email protected]` - I think there was an incorrect reference to the other email address somewhere that we fixed recently (cc @jeffboudier)<|||||>I sent this address an email but have not received a response. Is there another way to contact the engineers in charge of setting up large models for inference api endpoints?<|||||>I'm just going to cancel the sub. Sorry for taking up anyone's time.
transformers | 13,422 | closed | Fix scheduled tests for `SpeechEncoderDecoderModel` | ```
FAILED tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2BertModelTest::test_real_model_save_load_from_pretrained
FAILED tests/test_modeling_speech_encoder_decoder.py::Speech2TextBertModelTest::test_real_model_save_load_from_pretrained
FAILED tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2Speech2Text2::test_real_model_save_load_from_pretrained
```
**Problem**
`test_real_model_save_load_from_pretrained` was using a missing `self.get_inputs()` and failing for all three flavors of encoder-decoder models. CI logs: https://github.com/huggingface/transformers/actions/runs/1199765292
**Solution**
Pass pretrained models together with suitably-shaped inputs. | 09-04-2021 17:26:43 | 09-04-2021 17:26:43 | |
transformers | 13,421 | closed | Update setup.py | update classifiers with the new versions of python
| 09-04-2021 16:02:28 | 09-04-2021 16:02:28 | |
transformers | 13,420 | closed | [Flax] Addition of FlaxPegasus | # What does this PR do?
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [link of PR](https://github.com/huggingface/transformers/pull/12402)
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@patil-suraj | 09-04-2021 15:02:47 | 09-04-2021 15:02:47 | Hi @patil-suraj and @patrickvonplaten,
I was not able to figure out how to add `PegasusSinusoidalPositionalEmbedding` in the Flax version, and the `QuestionAnswering` and `Classification` classes are not added yet since the original PyTorch version doesn't have them. Shall we add them?
Please let me know your review on this PR. <|||||>You could find the flax version of `SinusoidalPositionalEmbedding` in the `FlaxMarian`
https://github.com/huggingface/transformers/blob/76c4d8bf26de3e4ab23b8afeed68479c2bbd9cbd/src/transformers/models/marian/modeling_flax_marian.py#L752
Also, Pegasus isn't really intended for QA and classification so it's okay to not add those heads yet.<|||||>Thanks a lot for more or less completing the PR - great job @bhadreshpsavani !
It seems like there are some small differences between the PyTorch & Flax Model. This could be due to slightly different activation functions or small differences with the position ids....
It would be awesome if you could try to debug layer by layer what might be the problem there @bhadreshpsavani
Another possibility is that there is no difference and it's just the framework that causes the difference. In this case, we'll just have to accept it and change the tolerance.<|||||>Sure @patrickvonplaten,
I will compare the code and debug it :)<|||||>Hi @patil-suraj and @patrickvonplaten,
Thanks for the review and suggestions. Please let me know if anything missing in the PR.<|||||>Thanks a lot for fixing the issues, looks good now. If you could give me access to this branch I would like to update the slow tests.<|||||>Done!
Please go ahead with the fix for slow tests.
Once this is merged, I will create another PR for that Typo Fix in BART and PEGASUS that I come across |
transformers | 13,419 | closed | JAX/Flax models should be `jax.jit`ed by default? Or code examples should use jax.jit (~200x speedup) | I noticed that the Flax models were running really slow, and it took me a little while to realise that it was simply because they hadn't been `jit`ed. I'm new to JAX, so I could be missing something here, but wouldn't it make sense for `Flax<ModelName>.from_pretrained(...)` to return a pre-`jit`ed model?
If not, I wonder if it'd be a good idea to update the code examples so that newbies like me know that the model hasn't been `jit`ed?
## Environment info
- `transformers` version: 4.10.0
- Platform: Linux-5.11.0-7620-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.10.0.dev20210622+cu111 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (gpu)
- Jax version: 0.2.17
- JaxLib version: 0.1.68
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj @patrickvonplaten
## Information
Model I am using: FlaxDistilBertForMaskedLM, FlaxCLIPModel, and other Flax models.
The problem arises when using the official example scripts.
## To reproduce
```python
!pip install --upgrade pip
!pip install transformers
!pip install --upgrade "jax[cuda111]" flax -f https://storage.googleapis.com/jax-releases/jax_releases.html
from transformers import DistilBertTokenizer, FlaxDistilBertForMaskedLM
import jax
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = FlaxDistilBertForMaskedLM.from_pretrained('distilbert-base-uncased')
def test():
inputs = tokenizer("The capital of France is [MASK].", return_tensors='jax')
outputs = model(**inputs)
logits = outputs.logits
return logits
def test_jit():
inputs = tokenizer("The capital of France is [MASK].", return_tensors='jax')
outputs = jax.jit(model)(**inputs)
logits = outputs.logits
return logits
%timeit test()
# 623 ms Β± 1.92 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit test_jit()
# 2.12 ms Β± 329 Β΅s per loop (mean Β± std. dev. of 7 runs, 1 loop each)
```
```python
!pip install --upgrade pip
!pip install transformers Pillow requests
!pip install --upgrade "jax[cuda111]" flax -f https://storage.googleapis.com/jax-releases/jax_releases.html
import jax
from PIL import Image
import requests
from transformers import CLIPProcessor, FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
def test_jit():
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="jax", padding=True)
outputs = jax.jit(model)(**inputs)
logits_per_image = outputs.logits_per_image
probs = jax.nn.softmax(logits_per_image, axis=1)
return probs
def test():
inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="jax", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image
probs = jax.nn.softmax(logits_per_image, axis=1)
return probs
%timeit test()
# 2.5 s Β± 58.3 ms per loop (mean Β± std. dev. of 7 runs, 1 loop each)
%timeit test_jit()
# 15.4 ms Β± 29.7 Β΅s per loop (mean Β± std. dev. of 7 runs, 100 loops each)
``` | 09-04-2021 10:29:01 | 09-04-2021 10:29:01 | Hi @josephrocca
Thanks for the detailed issue. The Flax models are not jitted by default. This is by design: the user should be aware of JAX transformations like `jit`, `pmap`, `grad`, etc., rather than having the library abstract them away. This gives you maximum control over what part of the code you want to `jit`. Also, `jit` is for a single device; for distributed training `pmap` is required, so the user should be aware of that, hence Flax models are not transformed by default. And usually, one would `jit`/`pmap` as big a chunk of code as possible, for example the complete training step instead of just the forward call.
Another reason is that the first call to a jitted function is usually slow, which could, in turn, make the modeling tests really slow.
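As a concrete illustration of that point for the snippet in this issue, a minimal single-device sketch is to wrap the whole forward pass in `jax.jit` once and reuse that callable, so tracing/compilation is paid only on the first call (this reuses the `model`, `processor`, and `image` objects from the FlaxCLIP example above):
```python
# Sketch: define the jitted forward once, then call it repeatedly.
@jax.jit
def clip_forward(pixel_values, input_ids, attention_mask):
    outputs = model(
        pixel_values=pixel_values,
        input_ids=input_ids,
        attention_mask=attention_mask,
    )
    return outputs.logits_per_image

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="jax",
    padding=True,
)
# First call compiles; subsequent calls run the cached executable.
logits = clip_forward(inputs["pixel_values"], inputs["input_ids"], inputs["attention_mask"])
```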
But you are right, the docs should be updated to make this clear and also provide examples of how to jit the model. Feel free to open a PR to update the docs if you are interested, happy to help with it :) <|||||>@patil-suraj Ah, I see, thank you! Would it make sense for me to simply replace all instances of:
```python
outputs = model(**inputs)
```
with something like
```python
outputs = model(**inputs) # or use jax.jit(model)(**inputs) for faster inference
```
for all the Flax model doc examples? Else could you suggest an appropriate way to edit the examples?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,418 | closed | Cannot Replicate xlm-roberta-large-xnli Results | ## Environment info
- `transformers` version: 4.9.0
- Platform: Linux-5.4.0-1055-aws-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- albert, bert, xlm: @LysandreJik
- xlm-roberta-large-xnli @joeddav
## Information
Model I am using (Bert, XLNet ...): joeddav/xlm-roberta-large-xnli
The problem arises when using:
* [X] the official example scripts:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline # no difference in manually encoding sentences and passing them through the model
import torch # the use of torch or tensorflow make absolutely no difference at all
device = 0 if torch.cuda.is_available() else -1
tokenizer = AutoTokenizer.from_pretrained("joeddav/xlm-roberta-large-xnli")
model = AutoModelForSequenceClassification.from_pretrained("joeddav/xlm-roberta-large-xnli")
sequence_to_classify = "Seriously, ANY possible sentence in any language."
candidate_labels = ["tecnologia", "cibo", "bevande", "finanza", "cinema", "giochi"]
classifier = pipeline("zero-shot-classification",
model=model, tokenizer=tokenizer, device=device)
# hypothesis_template = "This text is about {}."  # with this template added, the values differ even more
classifier(sequence_to_classify, candidate_labels, multi_class=False)  # optionally pass hypothesis_template=hypothesis_template
```
* [ ] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset:
just trying to classify sentences with zero-shot classification on a not-finetuned model
## To reproduce
Steps to reproduce the behavior:
1. Use joeddav/xlm-roberta-large-xnli model for zero-shot classification (with or without pipeline, with any sentence in any language, using either torch or tensorflow)
2. Notice that the results are different from the **Hosted inference API** of that model; you can try it either via the model page interface or remote API calls
## Expected behavior
Shouldn't the results coming from using the model correspond to the ones coming from the model web page?
| 09-04-2021 10:27:11 | 09-04-2021 10:27:11 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,417 | closed | No log output to console | When I debug a demo like the one below:
```
from transformers import AutoTokenizer
import logging
logging.basicConfig(level=logging.INFO)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer('i love transformers', padding="max_length", truncation=True)
```
but there is no log output to the console. What can I do to get the logs to show up in the console? | 09-04-2021 08:39:24 | 09-04-2021 08:39:24 | Could you elaborate on what log outputs you expect to get?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
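For the snippet above, one likely missing piece is that transformers routes its own messages through its internal logging helper, so that verbosity has to be raised in addition to `logging.basicConfig`. A minimal sketch (assuming nothing beyond a stock transformers install):
```python
# Sketch: raise the library's verbosity so the from_pretrained() loading
# messages show up on the console alongside the root-logger configuration.
import logging

import transformers
from transformers import AutoTokenizer

logging.basicConfig(level=logging.INFO)
transformers.logging.set_verbosity_info()

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
tokenizer("i love transformers", padding="max_length", truncation=True)
```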
transformers | 13,416 | closed | RuntimeError: Unknown: CUDNN_STATUS_EXECUTION_FAILED | ## Environment info
(See reproduction steps for the docker image to get exact environment)
- `transformers` version: 4.10.0
- Platform: Linux-5.11.0-7620-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.4.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (gpu)
- Jax version: 0.2.17
- JaxLib version: 0.1.69
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
## Information
Model I am using: **FlaxCLIPModel**
The problem arises when using the official example script.
## To reproduce
Steps to reproduce the behavior:
### EDIT: Please use the more rigorous reproduction instructions in [my comment below](https://github.com/huggingface/transformers/issues/13416#issuecomment-912936835).
Start with a docker image like this one:
```bash
docker run --rm -it --gpus all tensorflow/tensorflow:2.4.0-gpu
```
Install `transformers` and jax/flax:
```bash
pip install --upgrade transformers jax flax jaxlib==0.1.69+cuda111 -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
Run this code:
```python
import jax
from transformers import CLIPProcessor, FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
```
It produces the following error:
```
Downloading: 100%|██████████| 3.98k/3.98k [00:00<00:00, 4.33MB/s]
Downloading: 100%|██████████| 605M/605M [00:10<00:00, 55.6MB/s]
INFO:absl:Starting the local TPU driver.
INFO:absl:Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
INFO:absl:Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available.
2021-09-04 06:57:54.998764: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_conv_algorithm_picker.cc:691] Failed to determine best cudnn convolution algorithm: Internal: All algorithms tried for %custom-call = (f32[1,7,7,768]{2,1,3,0}, u8[0]{0}) custom-call(f32[1,224,224,3]{2,1,3,0} %copy.3, f32[32,32,3,768]{1,0,2,3} %copy.4), window={size=32x32 stride=32x32}, dim_labels=b01f_01io->b01f, custom_call_target="__cudnn$convForward", metadata={op_type="conv_general_dilated" op_name="conv_general_dilated[ batch_group_count=1\n dimension_numbers=ConvDimensionNumbers(lhs_spec=(0, 3, 1, 2), rhs_spec=(3, 2, 0, 1), out_spec=(0, 3, 1, 2))\n feature_group_count=1\n lhs_dilation=(1, 1)\n lhs_shape=(1, 224, 224, 3)\n padding=((0, 0), (0, 0))\n precision=None\n preferred_element_type=None\n rhs_dilation=(1, 1)\n rhs_shape=(32, 32, 3, 768)\n window_strides=(32, 32) ]"}, backend_config="{\"algorithm\":\"0\",\"tensor_ops_enabled\":false,\"conv_result_scale\":1,\"activation_mode\":\"0\",\"side_input_scale\":0}" failed. Falling back to default algorithm.
Convolution performance may be suboptimal.
2021-09-04 06:57:55.109797: E external/org_tensorflow/tensorflow/compiler/xla/pjrt/pjrt_stream_executor_client.cc:2040] Execution of replica 0 failed: Unknown: CUDNN_STATUS_EXECUTION_FAILED
in external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_dnn.cc(3990): 'cudnnConvolutionForward( cudnn.handle(), alpha, input_nd.handle(), input_data.opaque(), filter_nd.handle(), filter_data.opaque(), conv.handle(), ToConvForwardAlgo(algorithm_desc), scratch_memory.opaque(), scratch_memory.size(), beta, output_nd.handle(), output_data.opaque())'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 343, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 727, in __init__
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 105, in __init__
random_params = self.init_weights(self.key, input_shape)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 740, in init_weights
return self.module.init(rngs, input_ids, pixel_values, attention_mask, position_ids)["params"]
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 1000, in init
method=method, mutable=mutable, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 969, in init_with_output
{}, *args, rngs=rngs, method=method, mutable=mutable, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 939, in apply
)(variables, *args, **kwargs, rngs=rngs)
File "/usr/local/lib/python3.6/dist-packages/flax/core/scope.py", line 687, in wrapper
y = fn(root, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 1178, in scope_fn
return fn(module.clone(parent=scope), *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 1064, in __call__
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 563, in __call__
hidden_states = self.embeddings(pixel_values)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 217, in __call__
patch_embeds = self.patch_embedding(pixel_values)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/linear.py", line 279, in __call__
precision=self.precision)
File "/usr/local/lib/python3.6/dist-packages/jax/_src/lax/lax.py", line 633, in conv_general_dilated
preferred_element_type=preferred_element_type)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 264, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 603, in process_primitive
return primitive.impl(*tracers, **params)
File "/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py", line 249, in apply_primitive
return compiled_fun(*args)
File "/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py", line 365, in _execute_compiled_primitive
out_bufs = compiled.execute(input_bufs)
RuntimeError: Unknown: CUDNN_STATUS_EXECUTION_FAILED
in external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_dnn.cc(3990): 'cudnnConvolutionForward( cudnn.handle(), alpha, input_nd.handle(), input_data.opaque(), filter_nd.handle(), filter_data.opaque(), conv.handle(), ToConvForwardAlgo(algorithm_desc), scratch_memory.opaque(), scratch_memory.size(), beta, output_nd.handle(), output_data.opaque())'
```
## Other notes
Here's my `nvidia-smi`:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.57.02 Driver Version: 470.57.02 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 On | N/A |
| N/A 57C P5 22W / N/A | 1229MiB / 7982MiB | 1% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
Does `FlaxCLIPModel` need more than ~7GB of GPU memory for some reason? I wouldn't have expected it to need any more than a GB or two at the most, given the CLIP model's parameter count.
Also worth noting that the model works fine when using the CPU on my machine, and it works fine with both TPU and GPU when running in a Google Colab notebook. I've also tested with the `ufoym/deepo:all-py36-cu111` docker image, but I get the same error. | 09-04-2021 07:34:25 | 09-04-2021 07:34:25 | Hi there,
> Does FlaxCLIPModel need more than ~7GB of GPU memory for some reason?
No, it does not, it takes ~600M in fp32.
Also, this does not look like a memory error. This most probably is related to JAX GPU installation, as you can find [here](https://github.com/google/jax#installation), JAX needs the right version of CUDA and CuDNN installed. So maybe there is a version mismatch between the docker image and the required version by JAX. Could you please verify if the right version of CUDA and CuDNN is available?<|||||>(**Edit**: Solved - please skip this and see follow-up comments)
@patil-suraj Thanks for your fast reply! Here are the exact reproduction steps I just took to confirm that the right versions of CUDA and CuDNN are available:
```
docker run --rm --gpus all -it --ipc=host ufoym/deepo:all-py36-cu111
```
```
nvcc --version
```
Output:
```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0
```
For CUDA v11.1, CuDNN must be version 8 as specified in the instructions you linked:
```
cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
```
Output:
```
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 0
#define CUDNN_PATCHLEVEL 5
```
Confirm that `/usr/local/cuda-11.1` exists per instructions. β
Install jax and jaxlib:
```
pip install --upgrade pip
pip install --upgrade "jax[cuda111]" -f https://storage.googleapis.com/jax-releases/jax_releases.html
```
Install `transformers` and `flax`:
```
pip install --upgrade transformers flax
```
Run `python3` and then paste this:
```python
import jax
from transformers import CLIPProcessor, FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
```
And I get the following error (pasting again in case there are any slight but important differences):
```
Downloading: 100%|██████████| 3.98k/3.98k [00:00<00:00, 4.07MB/s]
Downloading: 100%|██████████| 605M/605M [00:11<00:00, 53.6MB/s]
INFO:absl:Starting the local TPU driver.
INFO:absl:Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
INFO:absl:Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available.
2021-09-04 08:46:47.901780: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/gpu_conv_algorithm_picker.cc:691] Failed to determine best cudnn convolution algorithm: Internal: All algorithms tried for %custom-call = (f32[1,7,7,768]{2,1,3,0}, u8[0]{0}) custom-call(f32[1,224,224,3]{2,1,3,0} %copy.3, f32[32,32,3,768]{1,0,2,3} %copy.4), window={size=32x32 stride=32x32}, dim_labels=b01f_01io->b01f, custom_call_target="__cudnn$convForward", metadata={op_type="conv_general_dilated" op_name="conv_general_dilated[ batch_group_count=1\n dimension_numbers=ConvDimensionNumbers(lhs_spec=(0, 3, 1, 2), rhs_spec=(3, 2, 0, 1), out_spec=(0, 3, 1, 2))\n feature_group_count=1\n lhs_dilation=(1, 1)\n lhs_shape=(1, 224, 224, 3)\n padding=((0, 0), (0, 0))\n precision=None\n preferred_element_type=None\n rhs_dilation=(1, 1)\n rhs_shape=(32, 32, 3, 768)\n window_strides=(32, 32) ]"}, backend_config="{\"algorithm\":\"0\",\"tensor_ops_enabled\":false,\"conv_result_scale\":1,\"activation_mode\":\"0\",\"side_input_scale\":0}" failed. Falling back to default algorithm.
Convolution performance may be suboptimal.
2021-09-04 08:46:48.011984: E external/org_tensorflow/tensorflow/compiler/xla/pjrt/pjrt_stream_executor_client.cc:2036] Execution of replica 0 failed: Unknown: CUDNN_STATUS_EXECUTION_FAILED
in external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_dnn.cc(3956): 'cudnnConvolutionForward( cudnn.handle(), alpha, input_nd.handle(), input_data.opaque(), filter_nd.handle(), filter_data.opaque(), conv.handle(), ToConvForwardAlgo(algorithm_desc), scratch_memory.opaque(), scratch_memory.size(), beta, output_nd.handle(), output_data.opaque())'
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 343, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 727, in __init__
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 105, in __init__
random_params = self.init_weights(self.key, input_shape)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 740, in init_weights
return self.module.init(rngs, input_ids, pixel_values, attention_mask, position_ids)["params"]
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 1000, in init
method=method, mutable=mutable, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 969, in init_with_output
{}, *args, rngs=rngs, method=method, mutable=mutable, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 939, in apply
)(variables, *args, **kwargs, rngs=rngs)
File "/usr/local/lib/python3.6/dist-packages/flax/core/scope.py", line 687, in wrapper
y = fn(root, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 1178, in scope_fn
return fn(module.clone(parent=scope), *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 1064, in __call__
return_dict=return_dict,
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 563, in __call__
hidden_states = self.embeddings(pixel_values)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/clip/modeling_flax_clip.py", line 217, in __call__
patch_embeds = self.patch_embedding(pixel_values)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/module.py", line 275, in wrapped_module_method
y = fun(self, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/flax/linen/linear.py", line 279, in __call__
precision=self.precision)
File "/usr/local/lib/python3.6/dist-packages/jax/_src/lax/lax.py", line 633, in conv_general_dilated
preferred_element_type=preferred_element_type)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 264, in bind
out = top_trace.process_primitive(self, tracers, params)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 603, in process_primitive
return primitive.impl(*tracers, **params)
File "/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py", line 249, in apply_primitive
return compiled_fun(*args)
File "/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py", line 365, in _execute_compiled_primitive
out_bufs = compiled.execute(input_bufs)
RuntimeError: Unknown: CUDNN_STATUS_EXECUTION_FAILED
in external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_dnn.cc(3956): 'cudnnConvolutionForward( cudnn.handle(), alpha, input_nd.handle(), input_data.opaque(), filter_nd.handle(), filter_data.opaque(), conv.handle(), ToConvForwardAlgo(algorithm_desc), scratch_memory.opaque(), scratch_memory.size(), beta, output_nd.handle(), output_data.opaque())'
```<|||||>I also just tried `distilbert-base-uncased` using that exact same environment (the same docker container instance, I mean) and got a `RuntimeError: CUDA operation failed: out of memory` despite having around 7GB of memory free according to `nvidia-smi`:
```python
from transformers import DistilBertTokenizer, FlaxDistilBertForMaskedLM
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
model = FlaxDistilBertForMaskedLM.from_pretrained('distilbert-base-uncased')
```
Output:
```
2021-09-04 08:51:12.465635: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
Downloading: 100%|██████████| 232k/232k [00:00<00:00, 337kB/s]
Downloading: 100%|██████████| 28.0/28.0 [00:00<00:00, 13.8kB/s]
Downloading: 100%|██████████| 466k/466k [00:01<00:00, 397kB/s]
Downloading: 100%|██████████| 483/483 [00:00<00:00, 536kB/s]
Downloading: 100%|██████████| 268M/268M [00:05<00:00, 52.6MB/s]
INFO:absl:Starting the local TPU driver.
INFO:absl:Unable to initialize backend 'tpu_driver': Not found: Unable to find driver in registry given worker: local://
INFO:absl:Unable to initialize backend 'tpu': Invalid argument: TpuPlatform is not available.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 343, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/distilbert/modeling_flax_distilbert.py", line 438, in __init__
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 105, in __init__
random_params = self.init_weights(self.key, input_shape)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/distilbert/modeling_flax_distilbert.py", line 445, in init_weights
params_rng, dropout_rng = jax.random.split(rng)
File "/usr/local/lib/python3.6/dist-packages/jax/_src/random.py", line 262, in split
return _split(key, int(num)) # type: ignore
File "/usr/local/lib/python3.6/dist-packages/jax/_src/traceback_util.py", line 183, in reraise_with_filtered_traceback
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/jax/_src/api.py", line 427, in cache_miss
donated_invars=donated_invars, inline=inline)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 1560, in bind
return call_bind(self, fun, *args, **params)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 1551, in call_bind
outs = primitive.process(top_trace, fun, tracers, params)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 1563, in process
return trace.process_call(self, fun, tracers, params)
File "/usr/local/lib/python3.6/dist-packages/jax/core.py", line 606, in process_call
return primitive.impl(f, *tracers, **params)
File "/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py", line 595, in _xla_call_impl
return compiled_fun(*args)
File "/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py", line 893, in _execute_compiled
out_bufs = compiled.execute(input_bufs)
jax._src.traceback_util.UnfilteredStackTrace: RuntimeError: CUDA operation failed: out of memory
The stack trace below excludes JAX-internal frames.
The preceding is the original exception that occurred, unmodified.
--------------------
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 343, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/distilbert/modeling_flax_distilbert.py", line 438, in __init__
super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
File "/usr/local/lib/python3.6/dist-packages/transformers/modeling_flax_utils.py", line 105, in __init__
random_params = self.init_weights(self.key, input_shape)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/distilbert/modeling_flax_distilbert.py", line 445, in init_weights
params_rng, dropout_rng = jax.random.split(rng)
File "/usr/local/lib/python3.6/dist-packages/jax/_src/random.py", line 262, in split
return _split(key, int(num)) # type: ignore
File "/usr/local/lib/python3.6/dist-packages/jax/interpreters/xla.py", line 893, in _execute_compiled
out_bufs = compiled.execute(input_bufs)
RuntimeError: CUDA operation failed: out of memory
```<|||||>I'm not sure if they're related, but there are mentions of the `CUDNN_STATUS_EXECUTION_FAILED` error here:
* https://github.com/google/jax/discussions/6332
* https://github.com/google/jax/issues/6039
In the latter issue hawkinsp mentions that 2GB og GPU memory is too little:
> The issue is that your GPU doesn't have very much memory. Both CuDNN and JAX need some memory to work, and by default JAX allocates too much. See: https://jax.readthedocs.io/en/latest/gpu_memory_allocation.html for more details. We might be able to tweak the defaults to make things work a little better on low-memory configurations, but it's a niche use case (2GB is pretty small for a current GPU).
So I wonder if ~6.5 GB is also too little? Seems unlikely, but maybe @hawkinsp could comment?
**Edit**: Oh, setting [`XLA_PYTHON_CLIENT_MEM_FRACTION`](https://jax.readthedocs.io/en/latest/gpu_memory_allocation.html ) to something like 0.7 solves it! By default JAX pre-allocates 90% of memory.
```
$ export XLA_PYTHON_CLIENT_MEM_FRACTION=.7
$ python3
>>> import jax
>>> from transformers import CLIPProcessor, FlaxCLIPModel
>>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
(no errors!)
```
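An equivalent way to apply the same fix without touching the shell is to set the variable from Python before JAX initializes its GPU backend; a minimal sketch (0.7 is simply the fraction that worked here):
```python
# Sketch: must run before the first `import jax` in the process, because the
# preallocation setting is read when the GPU backend is initialized.
import os

os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.7"
# or: os.environ["XLA_PYTHON_CLIENT_PREALLOCATE"] = "false"

import jax
from transformers import CLIPProcessor, FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
```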
Unless this error can be displayed (or averted) in a more user-friendly/helpful way, I think this issue can be closed. I guess it's probably something that would need to be done in JAX rather than transformers anyway? |
transformers | 13,415 | closed | 13134 | 13134
_Originally posted by @Hecim1984 in https://github.com/huggingface/transformers/issues/13134#issuecomment-912878040_ | 09-04-2021 00:47:51 | 09-04-2021 00:47:51 | |
transformers | 13,414 | closed | Fixed the MultilabelTrainer document, which would cause a potential bug when executing the code originally documented. | If you train with the `MultilabelTrainer` documented in the original documentation
```python
from torch import nn
from transformers import Trainer


class MultilabelTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        loss_fct = nn.BCEWithLogitsLoss()
        loss = loss_fct(logits.view(-1, self.model.config.num_labels),
                        labels.float().view(-1, self.model.config.num_labels))
        return (loss, outputs) if return_outputs else loss
```
a bug like the following appears:
```
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer.py", in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer.py", in _maybe_log_save_evaluate
metrics = self.evaluate()
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer.py", in evaluate
output = self.prediction_loop(
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer.py", in prediction_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer.py", in prediction_step
labels = nested_detach(tuple(inputs.get(name) for name in self.label_names))
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", in nested_detach
return type(tensors)(nested_detach(t) for t in tensors)
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", in <genexpr>
return type(tensors)(nested_detach(t) for t in tensors)
File "~/anaconda3/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", in nested_detach
return tensors.detach()
AttributeError: 'NoneType' object has no attribute 'detach'
```
Changing the original code as below avoids this bug: `inputs.get("labels")` leaves the `labels` key in `inputs`, so the Trainer's `prediction_step` can still fetch the labels during evaluation, whereas `pop` removes them and `nested_detach` then receives `None`.
```python
from torch import nn
from transformers import Trainer


class MultilabelTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.get("labels")
        outputs = model(**inputs)
        logits = outputs.get('logits')
        loss_fct = nn.BCEWithLogitsLoss()
        loss = loss_fct(logits.view(-1, self.model.config.num_labels),
                        labels.float().view(-1, self.model.config.num_labels))
        return (loss, outputs) if return_outputs else loss
```
| 09-04-2021 00:13:43 | 09-04-2021 00:13:43 | |
transformers | 13,413 | closed | Possibility of disabling add_pooling_layer that works for all models | # Feature request
A way to ensure that no additional pooling layers are added that works for all model types.
## Motivation
I just want a barebone transformer and I need the per-token representation. Hence the pooling layer that is added in BERT-based models is not necessary for me, but it still occupies memory and forward pass time. Now, I think this can be disabled via `AutoModel.from_pretrained(name, add_pooling_layer=False)` (right?), but I don't think this would work for all models since some don't have this flag (e.g., GPT-2). | 09-04-2021 00:02:13 | 09-04-2021 00:02:13 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Commenting for activity<|||||>The GPT-2 doesn't have a pooling layer, so what would you like to disable?<|||||>The issue is that if I pass `add_pooling_layer=False` to GPT-2, it will break. But I want a model-agnostic way to create a base model and ensure that there is no pooling layer added.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Commenting for activity<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
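Coming back to the model-agnostic request above, a minimal sketch of a helper along those lines (the helper name is made up, and it assumes that passing the unsupported `add_pooling_layer` flag to a model such as GPT-2 surfaces as a `TypeError`):
```python
from transformers import AutoModel

def load_backbone(name: str):
    """Load a bare backbone, skipping the pooling layer when the architecture supports the flag."""
    try:
        return AutoModel.from_pretrained(name, add_pooling_layer=False)
    except TypeError:
        # Architectures without a pooling layer (e.g. GPT-2) do not accept the flag.
        return AutoModel.from_pretrained(name)

encoder = load_backbone("bert-base-cased")  # pooling layer skipped
decoder = load_backbone("gpt2")             # falls back to the plain call
```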
transformers | 13,412 | closed | Sentencepiece Unigram tokenizer add tokens | I use a SentencePiece unigram tokenizer that was trained with the `tokenizers` library.
I'm trying to combine the vocabularies of two tokenizers for different languages.
So I tried the following:
```python
from transformers import AutoTokenizer
tokenizer1 = AutoTokenizer.from_pretrained('TOKENIZER1')
tokenizer2 = AutoTokenizer.from_pretrained('TOKENIZER2')
vocab1 = tokenizer1.vocab.keys()
vocab2 = tokenizer2.vocab.keys()
new_tokens = list()
for v in vocab2:
if v not in vocab1:
new_tokens.append(v)
tokenizer1.add_tokens(new_tokens)
tokenizer1.save_pretrained('NEW_TOKENIZER')
```
Will there be a problem when combined like this?
Thank you. | 09-03-2021 21:22:19 | 09-03-2021 21:22:19 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,411 | closed | AttributeError: type object 'Wav2Vec2ForCTC' has no attribute 'from_pretrained' | ## Environment info
- `transformers` version: 4.6.1
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): 2.3.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: distributed
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): wav2vec2
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Note: following the tutorial: https://huggingface.co/blog/fine-tune-wav2vec2-english
1. Follow all the steps in the tutorial until you get to 'Set-up Trainer' under 'Training & Evaluation'
2. Follow the code written until you load the pretrained Wav2Vec2 checkpoint
3. Run this code segment
Code segment:
```python
from transformers import Wav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-base",
gradient_checkpointing=True,
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
)
```
Stack trace:
```python
AttributeError: type object 'Wav2Vec2ForCTC' has no attribute 'from_pretrained'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/wp/3f0wwp4n2yzg421rx1jgph980000gn/T/ipykernel_18308/401638209.py in <module>
1 from transformers import Wav2Vec2ForCTC
2
----> 3 model = Wav2Vec2ForCTC.from_pretrained(
4 "facebook/wav2vec2-base",
5 gradient_checkpointing=True,
AttributeError: type object 'Wav2Vec2ForCTC' has no attribute 'from_pretrained'
```
## Expected behavior
Code runs without error and produces the following Log Output:
```python
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base and are newly initialized: ['lm_head.weight', 'lm_head.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
| 09-03-2021 18:26:53 | 09-03-2021 18:26:53 | Also having this issue!<|||||>Hey @margotwagner - could you provide a google colab that reproduces the error? Usually it's a missing dependency that triggers that problem. You can try to reinstall `transformers` with
```
pip install transformers -e ".[speech]"
```
and I think it should work<|||||>This fixed it, thanks! |
transformers | 13,410 | closed | Make data shuffling in `run_clm_flax.py` respect global seed | # What does this PR do?
Use `jax.random.permutation` instead of `np.random.permutation` in the `data_loader` function of `run_clm_flax.py` to make it use the global seed. Currently batch order would probably vary across runs, regardless of the global seed.
Also changes `np.arange` to `jnp.arange` and `np.array` to `jnp.array` and removes the numpy import, although that would not be strictly necessary.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patil-suraj
| 09-03-2021 17:12:12 | 09-03-2021 17:12:12 | Thanks a lot for the PR @bminixhofer, good catch!
However, initially, we actually used `jax.random.permutation` and `jnp.arange` in the data loader but then switched to numpy as we observed it was causing some issues with JAX's asynchronous dispatch. Since JAX by default puts everything on the device it could cause some issues (especially on TPU) if the dataloader/collator is used with multiple threads to do background fetching. This also leads to major slowdowns. So all flax examples now don't use JAX functions in pre-processing, loading/collating etc. With this, the TPU can be busy all the time doing the actual computation and won't be blocked by processing and loading.
But I see the problem, so maybe we could use the seed with numpy to make data shuffling reproducible.
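A rough sketch of that option (hypothetical, not the code in this PR; it mirrors the shape of `data_loader` in `run_clm_flax.py` but drives the shuffle with a seeded NumPy generator, and the `.tolist()` call is only there to keep the `datasets` indexing unambiguous):
```python
import numpy as np

def data_loader(seed: int, dataset, batch_size: int, shuffle: bool = False):
    """Yield fixed-size batches of NumPy arrays; the same seed gives the same batch order."""
    rng = np.random.default_rng(seed)
    steps_per_epoch = len(dataset) // batch_size

    if shuffle:
        batch_idx = rng.permutation(len(dataset))
    else:
        batch_idx = np.arange(len(dataset))

    batch_idx = batch_idx[: steps_per_epoch * batch_size]
    batch_idx = batch_idx.reshape((steps_per_epoch, batch_size))

    for idx in batch_idx:
        batch = dataset[idx.tolist()]
        yield {k: np.array(v) for k, v in batch.items()}
```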
cc @patrickvonplaten .<|||||>Interesting, what about `run_mlm_flax.py`? I was having a look prior to this PR, and it seems `jax.random.permutation` is also used for data shuffling, or am I missing something?
https://github.com/huggingface/transformers/blob/76c4d8bf26de3e4ab23b8afeed68479c2bbd9cbd/examples/flax/language-modeling/run_mlm_flax.py#L624-L626<|||||>That should also be changed. All flax examples are still not completely consistent with each other, which needs to be fixed.<|||||>@patil-suraj @patrickvonplaten Is this something you want to change or should I close this PR?<|||||>Hey @bminixhofer - sorry for being so slow here. @patil-suraj I'm happy to merge the PR. Think it's good to have 100% reproduciblity with JAX's random seed. I don't think this slows down the script as it's called just once per epoch. If you're ok with the changes feel free to merge <|||||>@patil-suraj - can you take a look here and leave your opinion so that we can resolve the PR? :-)<|||||>ping @patil-suraj again<|||||>ping @patil-suraj again <|||||>Thanks for merging this! I vaguely remember having problems with batch order with the code as it was previously, but I am not completely sure (it's been some time :sweat_smile: ). |
transformers | 13,409 | closed | git.exc.InvalidGitRepositoryError when running finetune_rag.py | ## Environment info
```
- `transformers` version: 4.10.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
```
I'm on Colab.
### Who can help
research_projects/rag: @patrickvonplaten, @lhoestq
## To reproduce
Steps to reproduce the behavior:
1.
```
!python transformers/examples/research_projects/rag/consolidate_rag_checkpoint.py \
--model_type rag_token \
--generator_name_or_path facebook/mbart-large-cc25 \
--question_encoder_name_or_path voidful/dpr-question_encoder-bert-base-multilingual \
--dest /content/checkpoint
```
2.
```
!python transformers/examples/research_projects/rag/finetune_rag.py \
--data_dir /content/transformers/examples/research_projects/rag-end2end-retriever/test_run/dummy-train-data \
--output_dir /content/finetune_output \
--model_name_or_path /content/checkpoint \
--model_type rag_token \
--fp16 \
--use_dummy_dataset True
```
# Error
```
loading configuration file /content/checkpoint/config.json
Model config RagConfig {
"architectures": [
"RagTokenForGeneration"
],
"dataset": "wiki_dpr",
"dataset_split": "train",
"do_deduplication": true,
"do_marginalize": false,
"doc_sep": " // ",
"exclude_bos_score": false,
"forced_eos_token_id": 2,
"generator": {
"_name_or_path": "",
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_cross_attention": false,
"add_final_layer_norm": true,
"architectures": [
"MBartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 12,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.1,
"early_stopping": false,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": 2,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_decoder": false,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"length_penalty": 1.0,
"max_length": 1024,
"max_position_embeddings": 1024,
"min_length": 0,
"model_type": "mbart",
"no_repeat_ngram_size": 0,
"normalize_before": true,
"normalize_embedding": true,
"num_beam_groups": 1,
"num_beams": 5,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"output_scores": false,
"pad_token_id": 1,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"scale_embedding": true,
"sep_token_id": null,
"static_position_embeddings": false,
"task_specific_params": {
"translation_en_to_ro": {
"decoder_start_token_id": 250020
}
},
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.10.0",
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 250027
},
"index_name": "exact",
"index_path": null,
"is_encoder_decoder": true,
"label_smoothing": 0.0,
"max_combined_length": 300,
"model_type": "rag",
"n_docs": 5,
"output_retrieved": false,
"passages_path": null,
"question_encoder": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": [
"DPRQuestionEncoder"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"language": "multilingual",
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "dpr",
"name": "DPRQuestionEncoder",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"prefix": null,
"problem_type": null,
"projection_dim": 0,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"revision": null,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.10.0",
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 119547
},
"reduce_loss": false,
"retrieval_batch_size": 8,
"retrieval_vector_size": 768,
"title_sep": " / ",
"torch_dtype": "float32",
"transformers_version": null,
"use_cache": true,
"use_dummy_dataset": false,
"vocab_size": null
}
Could not locate the tokenizer configuration file, will try to use the model config instead.
Didn't find file /content/checkpoint/question_encoder_tokenizer/added_tokens.json. We won't load it.
loading file /content/checkpoint/question_encoder_tokenizer/vocab.txt
loading file /content/checkpoint/question_encoder_tokenizer/tokenizer.json
loading file None
loading file /content/checkpoint/question_encoder_tokenizer/special_tokens_map.json
loading file /content/checkpoint/question_encoder_tokenizer/tokenizer_config.json
Could not locate the tokenizer configuration file, will try to use the model config instead.
Didn't find file /content/checkpoint/generator_tokenizer/sentencepiece.bpe.model. We won't load it.
Didn't find file /content/checkpoint/generator_tokenizer/added_tokens.json. We won't load it.
loading file None
loading file /content/checkpoint/generator_tokenizer/tokenizer.json
loading file None
loading file /content/checkpoint/generator_tokenizer/special_tokens_map.json
loading file /content/checkpoint/generator_tokenizer/tokenizer_config.json
Assigning ['ar_AR', 'cs_CZ', 'de_DE', 'en_XX', 'es_XX', 'et_EE', 'fi_FI', 'fr_XX', 'gu_IN', 'hi_IN', 'it_IT', 'ja_XX', 'kk_KZ', 'ko_KR', 'lt_LT', 'lv_LV', 'my_MM', 'ne_NP', 'nl_XX', 'ro_RO', 'ru_RU', 'si_LK', 'tr_TR', 'vi_VN', 'zh_CN'] to the additional_special_tokens key of the tokenizer
Loading passages from wiki_dpr
WARNING:datasets.builder:Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False
WARNING:datasets.builder:Reusing dataset wiki_dpr (/root/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/91b145e64f5bc8b55a7b3e9f730786ad6eb19cd5bc020e2e02cdf7d0cb9db9c1)
loading weights file /content/checkpoint/pytorch_model.bin
All model checkpoint weights were used when initializing RagTokenForGeneration.
All the weights of RagTokenForGeneration were initialized from the model checkpoint at /content/checkpoint.
If your task is similar to the task the model of the checkpoint was trained on, you can already use RagTokenForGeneration for predictions without further training.
loading configuration file /content/checkpoint/config.json
Model config RagConfig {
"architectures": [
"RagTokenForGeneration"
],
"dataset": "wiki_dpr",
"dataset_split": "train",
"do_deduplication": true,
"do_marginalize": false,
"doc_sep": " // ",
"exclude_bos_score": false,
"forced_eos_token_id": 2,
"generator": {
"_name_or_path": "",
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_bias_logits": false,
"add_cross_attention": false,
"add_final_layer_norm": true,
"architectures": [
"MBartForConditionalGeneration"
],
"attention_dropout": 0.0,
"bad_words_ids": null,
"bos_token_id": 0,
"chunk_size_feed_forward": 0,
"classif_dropout": 0.0,
"classifier_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 12,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"dropout": 0.1,
"early_stopping": false,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": 2,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": 2,
"gradient_checkpointing": false,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_decoder": false,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"length_penalty": 1.0,
"max_length": 1024,
"max_position_embeddings": 1024,
"min_length": 0,
"model_type": "mbart",
"no_repeat_ngram_size": 0,
"normalize_before": true,
"normalize_embedding": true,
"num_beam_groups": 1,
"num_beams": 5,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_past": true,
"output_scores": false,
"pad_token_id": 1,
"prefix": null,
"problem_type": null,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"scale_embedding": true,
"sep_token_id": null,
"static_position_embeddings": false,
"task_specific_params": {
"translation_en_to_ro": {
"decoder_start_token_id": 250020
}
},
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.10.0",
"use_bfloat16": false,
"use_cache": true,
"vocab_size": 250027
},
"index_name": "exact",
"index_path": null,
"is_encoder_decoder": true,
"label_smoothing": 0.0,
"max_combined_length": 300,
"model_type": "rag",
"n_docs": 5,
"output_retrieved": false,
"passages_path": null,
"question_encoder": {
"_name_or_path": "",
"add_cross_attention": false,
"architectures": [
"DPRQuestionEncoder"
],
"attention_probs_dropout_prob": 0.1,
"bad_words_ids": null,
"bos_token_id": null,
"chunk_size_feed_forward": 0,
"decoder_start_token_id": null,
"diversity_penalty": 0.0,
"do_sample": false,
"early_stopping": false,
"encoder_no_repeat_ngram_size": 0,
"eos_token_id": null,
"finetuning_task": null,
"forced_bos_token_id": null,
"forced_eos_token_id": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1"
},
"initializer_range": 0.02,
"intermediate_size": 3072,
"is_decoder": false,
"is_encoder_decoder": false,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1
},
"language": "multilingual",
"layer_norm_eps": 1e-12,
"length_penalty": 1.0,
"max_length": 20,
"max_position_embeddings": 512,
"min_length": 0,
"model_type": "dpr",
"name": "DPRQuestionEncoder",
"no_repeat_ngram_size": 0,
"num_attention_heads": 12,
"num_beam_groups": 1,
"num_beams": 1,
"num_hidden_layers": 12,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"prefix": null,
"problem_type": null,
"projection_dim": 0,
"pruned_heads": {},
"remove_invalid_values": false,
"repetition_penalty": 1.0,
"return_dict": true,
"return_dict_in_generate": false,
"revision": null,
"sep_token_id": null,
"task_specific_params": null,
"temperature": 1.0,
"tie_encoder_decoder": false,
"tie_word_embeddings": true,
"tokenizer_class": null,
"top_k": 50,
"top_p": 1.0,
"torch_dtype": null,
"torchscript": false,
"transformers_version": "4.10.0",
"type_vocab_size": 2,
"use_bfloat16": false,
"vocab_size": 119547
},
"reduce_loss": false,
"retrieval_batch_size": 8,
"retrieval_vector_size": 768,
"title_sep": " / ",
"torch_dtype": "float32",
"transformers_version": null,
"use_cache": true,
"use_dummy_dataset": false,
"vocab_size": null
}
Could not locate the tokenizer configuration file, will try to use the model config instead.
Didn't find file /content/checkpoint/question_encoder_tokenizer/added_tokens.json. We won't load it.
loading file /content/checkpoint/question_encoder_tokenizer/vocab.txt
loading file /content/checkpoint/question_encoder_tokenizer/tokenizer.json
loading file None
loading file /content/checkpoint/question_encoder_tokenizer/special_tokens_map.json
loading file /content/checkpoint/question_encoder_tokenizer/tokenizer_config.json
Could not locate the tokenizer configuration file, will try to use the model config instead.
Didn't find file /content/checkpoint/generator_tokenizer/sentencepiece.bpe.model. We won't load it.
Didn't find file /content/checkpoint/generator_tokenizer/added_tokens.json. We won't load it.
loading file None
loading file /content/checkpoint/generator_tokenizer/tokenizer.json
loading file None
loading file /content/checkpoint/generator_tokenizer/special_tokens_map.json
loading file /content/checkpoint/generator_tokenizer/tokenizer_config.json
Assigning ['ar_AR', 'cs_CZ', 'de_DE', 'en_XX', 'es_XX', 'et_EE', 'fi_FI', 'fr_XX', 'gu_IN', 'hi_IN', 'it_IT', 'ja_XX', 'kk_KZ', 'ko_KR', 'lt_LT', 'lv_LV', 'my_MM', 'ne_NP', 'nl_XX', 'ro_RO', 'ru_RU', 'si_LK', 'tr_TR', 'vi_VN', 'zh_CN'] to the additional_special_tokens key of the tokenizer
Traceback (most recent call last):
File "transformers/examples/research_projects/rag/finetune_rag.py", line 617, in <module>
main(args)
File "transformers/examples/research_projects/rag/finetune_rag.py", line 554, in main
model: GenerativeQAModule = GenerativeQAModule(args)
File "transformers/examples/research_projects/rag/finetune_rag.py", line 157, in __init__
save_git_info(self.hparams.output_dir)
File "/content/transformers/examples/research_projects/rag/utils_rag.py", line 145, in save_git_info
repo_infos = get_git_info()
File "/content/transformers/examples/research_projects/rag/utils_rag.py", line 160, in get_git_info
repo = git.Repo(search_parent_directories=True)
File "/usr/local/lib/python3.7/dist-packages/git/repo/base.py", line 220, in __init__
self.working_dir = self._working_tree_dir or self.common_dir # type: Optional[PathLike]
File "/usr/local/lib/python3.7/dist-packages/git/repo/base.py", line 303, in common_dir
raise InvalidGitRepositoryError()
git.exc.InvalidGitRepositoryError
```
## Expected behavior
Not having this error.
 | 09-03-2021 16:05:12 | 09-03-2021 16:05:12 | A workaround could be creating a git repository and making at least one commit.<|||||>Hi! Can you try running the command from inside the `transformers` directory? This way it will be able to find the git info of the `transformers` repo
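If the script has to run outside the `transformers` checkout, a minimal sketch of the first workaround with GitPython (the path is illustrative for Colab and the commit message is arbitrary):
```python
import git  # GitPython, the same package finetune_rag.py uses in utils_rag.get_git_info()

# git.Repo(search_parent_directories=True) needs some ancestor of the working directory
# to be a git repository with at least one commit.
repo = git.Repo.init("/content")
repo.index.commit("placeholder commit so save_git_info() can read repo metadata")
```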
transformers | 13,408 | closed | Add TAPAS MLM-only models | # What does this PR do?
As requested by #12916, I've converted the TAPAS checkpoints which were pre-trained on masked language modeling (MLM) only. `TapasForMaskedLM` was previously defined, but there were no actual checkpoints available with the language modeling head. They're available now on the hub:
* [google/tapas-large-masklm](https://huggingface.co/google/tapas-large-masklm)
* [google/tapas-base-masklm](https://huggingface.co/google/tapas-base-masklm)
* [google/tapas-medium-masklm](https://huggingface.co/google/tapas-medium-masklm)
* [google/tapas-small-masklm](https://huggingface.co/google/tapas-small-masklm)
* [google/tapas-mini-masklm](https://huggingface.co/google/tapas-mini-masklm)
* [google/tapas-tiny-masklm](https://huggingface.co/google/tapas-tiny-masklm).
It also cleans up the conversion script of TAPAS a bit.
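A rough usage sketch for these MLM checkpoints (illustrative only: the table, the query and the decoding step are assumptions, and TAPAS additionally needs `torch-scatter` installed):
```python
import pandas as pd
from transformers import TapasTokenizer, TapasForMaskedLM

tokenizer = TapasTokenizer.from_pretrained("google/tapas-base-masklm")
model = TapasForMaskedLM.from_pretrained("google/tapas-base-masklm")

table = pd.DataFrame({"Actor": ["Brad Pitt", "Leonardo Di Caprio"], "Age": ["56", "45"]})
inputs = tokenizer(table=table, queries=["Brad [MASK] is 56 years old."], return_tensors="pt")

logits = model(**inputs).logits
mask_position = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
predicted_id = int(logits[0, mask_position].argmax(-1))
print(tokenizer.decode([predicted_id]))
```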
Fixes #12916 | 09-03-2021 14:36:21 | 09-03-2021 14:36:21 | Thanks for this @NielsRogge ! |
transformers | 13,407 | closed | How to use multiple PreTrainedModel models in a custom model? | ## Details
I am using the Trainer to train a custom model, like this:
```python
class MyModel(nn.Module):
def __init__(self,):
super(MyModel, self).__init__()
# I want the code to be clean so I load the pretrained model like this
self.bert_layer_1 = transformers.AutoModel.from_pretrained("hfl/chinese-roberta-wwm-ext")
self.bert_layer_2 = transformers.AutoModel.from_pretrained("bert-base-chinese")
self.other_layers = ... # not important
def forward(self,):
pass # not important
```
When running `trainer.save_model()`, it only saves the model's state dict, as the custom model is not a `PreTrainedModel` (see the terminal output below).
```shell
Trainer.model is not a `PreTrainedModel`, only saving its state dict.
```
And when reloading the saved model on production, I need to initialize a new `MyModel` and load its states, which is not so convenient. I hope to load this model using `transformers.AutoModel.from_pretrained('MODEL_PATH')` like other `PreTrainedModel`s.
I tried to change `class MyModel(nn.Module)` to `class MyModel(PreTrainedModel)`, but `PreTrainedModel` needs a `PretrainedConfig` when initialized. I don't have one in the current implementation, and I don't know how to manage the config when using multiple `PreTrainedModel` models. I want to keep `self.bert_layer_1` and `self.bert_layer_2` as simple as `from_pretrained`, not `= BertModel(config)`.
Is there a way to do that?
## Environment info
- `transformers` version: 4.9.2
- Platform: macOS / Ubuntu
- Python version: 3.8.6
- PyTorch version (GPU?): 1.8.1 (False) / (yes)
- Tensorflow version (GPU?): 2.4.1 (False) / (yes)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: parallel
| 09-03-2021 13:19:21 | 09-03-2021 13:19:21 | Please help. @LysandreJik @sgugger <|||||>A model that is not inside the `transformers` library won't work with the AutoModel API.
To properly use the save/from pretrained methods, why not subclassing `PreTrainedModel` instead of `nn.Module`?<|||||>Thanks for your reply! I will try.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> A model that is not inside the `transformers` library won't work with the AutoModel API. To properly use the save/from pretrained methods, why not subclassing `PreTrainedModel` instead of `nn.Module`?
@sgugger Could you give an example on how to subclass PreTrainedModel? I would also like to integrate my model at https://huggingface.co/maxpe/twitter-roberta-base_semeval18_emodetection better with the transformer library:
```python
def loss_fn(outputs, targets):
    return torch.nn.BCEWithLogitsLoss()(outputs, targets)

class RobertaClass(torch.nn.Module):
    def __init__(self):
        super(RobertaClass, self).__init__()
        self.l1 = AutoModel.from_pretrained("cardiffnlp/twitter-roberta-base", return_dict=False)
        self.l2 = torch.nn.Dropout(0.3)
        self.l3 = torch.nn.Linear(768, 11)

    def forward(self, input_ids, attention_mask, labels):
        _, output_1 = self.l1(input_ids=input_ids, attention_mask=attention_mask)
        output_2 = self.l2(output_1)
        output = self.l3(output_2)
        # note: BCEWithLogitsLoss expects (logits, targets) in that order
        return (loss_fn(output, labels.float()), output)

model = RobertaClass()
model.train()
...
model = RobertaClass()
model.load_state_dict(torch.load(path))
model.eval()
```
My attempt with `PyTorchModelHubMixin` didn't work well.<|||||>@iamlockelightning did you save the model properly?? |
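On the subclassing question above, a rough sketch of one way to do it (class names, sizes and config fields are made up, and this is not an official recipe):
```python
import torch
from transformers import AutoModel, PretrainedConfig, PreTrainedModel

class MultiLabelConfig(PretrainedConfig):
    model_type = "roberta-multilabel"  # hypothetical identifier

    def __init__(self, backbone="cardiffnlp/twitter-roberta-base", hidden_size=768, num_labels=11, **kwargs):
        super().__init__(**kwargs)
        self.backbone = backbone
        self.hidden_size = hidden_size
        self.num_labels = num_labels

class MultiLabelModel(PreTrainedModel):
    config_class = MultiLabelConfig

    def __init__(self, config):
        super().__init__(config)
        self.encoder = AutoModel.from_pretrained(config.backbone)
        self.dropout = torch.nn.Dropout(0.3)
        self.classifier = torch.nn.Linear(config.hidden_size, config.num_labels)

    def _init_weights(self, module):
        pass  # the backbone is already pretrained; the classifier keeps its default init

    def forward(self, input_ids, attention_mask=None, labels=None):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        logits = self.classifier(self.dropout(hidden[:, 0]))
        loss = None
        if labels is not None:
            loss = torch.nn.BCEWithLogitsLoss()(logits, labels.float())
        return {"loss": loss, "logits": logits}

model = MultiLabelModel(MultiLabelConfig())
model.save_pretrained("my-multilabel-model")                       # writes config.json + weights
reloaded = MultiLabelModel.from_pretrained("my-multilabel-model")  # __init__ runs first, then the saved weights are loaded
```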
transformers | 13,406 | closed | Fix tests without any real effect in EncoderDecoderMixin | # What does this PR do?
In `test_modeling_encoder_decoder.py`, there are 2 places like
```
enc_dec_model.save_pretrained(tmpdirname)
EncoderDecoderModel.from_pretrained(tmpdirname)
after_outputs = enc_dec_model(...)
```
Therefore `after_outputs` will be exactly the same as `outputs`, since `enc_dec_model` never receives the reloaded model.
For the test to be meaningful, we need to do
```
enc_dec_model = EncoderDecoderModel.from_pretrained(tmpdirname)
```
(Hope I get it right)
## Who can review?
@patrickvonplaten @sgugger
| 09-03-2021 12:56:34 | 09-03-2021 12:56:34 | Thanks a lot @ydshieh !<|||||>The model should have been cast to the `torch_device` as otherwise the inputs/model are on different devices :)
https://github.com/huggingface/transformers/runs/3524847938?check_suite_focus=true<|||||>I'll take care of it :-) <|||||>Hey, thank you guys! Sorry about the device issue, I am less familiar with PyTorch, and it seems the checks I had when I opened this PR don't have that test above (probably it only runs when we merge to master?)<|||||>Don't worry about it @ydshieh :-) |
transformers | 13,405 | closed | about 'text-generation': how can I generate sentences with multiple words? | For example, given "apple" and "table", I want to generate a sentence like "an apple on the table".
| 09-03-2021 12:36:55 | 09-03-2021 12:36:55 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,403 | closed | Fix scheduled TF Speech tests | CI logs: https://github.com/huggingface/transformers/runs/3501334018?check_suite_focus=true
**Context**
Doing `import soundfile as sf` was causing `OSError: sndfile library not found` on all integration tests for `TFWav2Vec2` and `TFHubert`
**Solution**
This PR adds `libsndfile` as a dependency for the TF jobs, like it was done for the Pytorch counterparts.
CC @patrickvonplaten | 09-03-2021 12:23:48 | 09-03-2021 12:23:48 | Thanks a lot for taking care of it |
transformers | 13,402 | closed | TrainingArguments default parameters throw error (evaluation_strategy, save_strategy) |
So the default value of `evaluation_strategy` is `"no"`, while the default for `save_strategy` is `"steps"`. This mismatch results in an error:
https://github.com/huggingface/transformers/blob/76c4d8bf26de3e4ab23b8afeed68479c2bbd9cbd/src/transformers/training_args.py#L684
### Who can help
- trainer: @sgugger
I could make a PR but changing the default values of something is a breaking change, I don't know if you have any specific procedure to do this. | 09-03-2021 11:23:34 | 09-03-2021 11:23:34 | That's only when using `load_best_model_at_end` though, so not the real defaults. We can't change the defaults of other arguments without making a breaking change, so yes, when using `load_best_model_at_end=True` you need to set an `evaluation_strategy` and a `save_strategy` that match.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
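For reference, a minimal combination that satisfies that check (the values are illustrative):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",   # must match save_strategy when load_best_model_at_end=True
    save_strategy="steps",
    eval_steps=500,
    save_steps=500,                # kept in lockstep with eval_steps here
    load_best_model_at_end=True,
    metric_for_best_model="loss",
)
```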
transformers | 13,401 | closed | Better error raised when cloned without lfs | Raise a better error when cloning a repository without having git-lfs installed.
The pointer files contain data such as the following:
```
version https://git-lfs.github.com/spec/v1
oid sha256:c393f632913cdc64c13fcd1b039a74f17dd83cc3029c556802c0f2f8792b46f9
size 557941479
```
This tries to parse the file and check that the first string is `version`; if so, it detects it as a pointer file.
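A simplified sketch of that kind of check (illustrative only, not the exact code added in this PR; the function name is made up):
```python
def looks_like_lfs_pointer(path: str) -> bool:
    """git-lfs pointer files are tiny text files whose first whitespace-separated token is `version`."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            first_tokens = f.readline().split()
    except (OSError, UnicodeDecodeError):
        return False  # real weight files are binary and typically cannot be decoded as text
    return len(first_tokens) > 0 and first_tokens[0] == "version"
```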
Closes https://github.com/huggingface/transformers/issues/8497 | 09-03-2021 10:26:02 | 09-03-2021 10:26:02 | |
transformers | 13,400 | closed | Fixing #13381 | # What does this PR do?
Fixes #13381
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | 09-03-2021 10:11:58 | 09-03-2021 10:11:58 |
transformers | 13,399 | open | Unified freezing interface | # 🚀 Feature request
We can freeze the backbone of individual models by finding the field name of the backbone (e.g., `bert` in `BertForSequenceClassification`) and setting `.requires_grad`. However, because the name of this field differs for different models, I don't think there's currently an easy way to do this that works for all models. This contradicts the philosophy that `AutoModel(ForXXX)` should hide all the implementation details.
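Concretely, the per-model freezing described above looks like the snippet below; a model-agnostic variant has to go through `base_model_prefix` (see the workaround further down):
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")

# BERT-specific: the backbone happens to live under `model.bert`
for param in model.bert.parameters():
    param.requires_grad = False

# model-agnostic: resolve the backbone attribute via `base_model_prefix`
backbone = getattr(model, model.base_model_prefix, model)
for param in backbone.parameters():
    param.requires_grad = False
```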
Ideally, we can pass a `trainable_backbone` flag in the config or to the `__init__` function directly that controls this behavior. | 09-03-2021 07:13:01 | 09-03-2021 07:13:01 | Hello, thanks for opening an issue! You're right that this is not as accessible as it could/should be. As a workaround, in PyTorch the base model name should always be set according to the `base_model_prefix`. In order to retrieve the base model, it should be possible to do:
```py
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
base_model = getattr(model, model.base_model_prefix)
```
If you want this to work with `AutoModel` too where the model is the backbone, you can do this instead:
```py
base_model = getattr(model, model.base_model_prefix, model)
```<|||||>Thanks! I will keep this issue open since I think this can be more elegantly supported, or at least this logic can be encapsulated with a flag. But feel free to close it.<|||||>I find this super relevant honestly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It would be nice to have a property to access the base model, similar to the `get_input_embeddings` and `get_output_embeddings` methods. Putting this as a "good second issue" if anyone wants to work on it!<|||||>I believe I can take a shot at it if its OK :)<|||||>Bringing awareness to a related issue #13413, which argues for a unified interface for something else -- if all models will be edited for this issue anyway, maybe both can be fixed at the same time.<|||||>I think the two are separate issues - let's start with this issue! Feel free to go ahead and open a PR @shabie.<|||||>Hey @LysandreJik
So I started to see how it could be implemented and saw this:
https://github.com/huggingface/transformers/blob/91758e399f8c4bf81820a8af6a257682ccea0223/src/transformers/modeling_utils.py#L547-L552
Doesn't this do what you mentioned?<|||||>Indeed, good catch @shabie, my bad! <|||||>So nothing for me to do actually :) and it does solve your problem @ZhaofengWu doesn't it?
On another note, I find the `base_model` still a bit problematic. Take for example a custom token classifier class (it is borrowed from the upcoming book by @lewtun I hope he doesn't mind me quoting this) I am defining:
```python3
class XLMRobertaForTokenClassification(RobertaPreTrainedModel):
config_class = XLMRobertaConfig
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
self.roberta = RobertaModel(config, add_pooling_layer=False)
...
self.init_weights()
```
Now if the variable name of the base model is not defined like this `self.roberta` then pre-trained weights are not loaded correctly precisely due to how `base_model` works.
I did make this mistake while I was trying it out and was wondering if it would be possible to give people heads up that if you are initializing this class as a backbone, then make sure the attribute is named like this.
This may be accomplished using perhaps the [`__init_subclass__`](https://docs.python.org/3/reference/datamodel.html#object.__init_subclass__) or [`__new__`](https://stackoverflow.com/a/674369/7996306). I am not sure at this point. I can dig deeper if this is genuinely seen as a problem.
<|||||>Yes, this method does solve my problem -- thanks! |
transformers | 13,398 | closed | PreTrainedTokenizerFast to BertTokenizer | Hi! I have a question about tokenizers.
I made a unigram sentencepiece tokenizer using `tokenizers`.
I use this tokenizer as follows:
```python
import json
from transformers import PreTrainedTokenizerFast
from tokenizers import SentencePieceUnigramTokenizer
tokens = list()
test_input = 'This is test input.'
# unigram.json is the file that saved by tokenizers library
# tokenizer.train()
# tokenizer.save_model()
with open('unigram.json', encoding='utf-8-sig') as f:
json_file = json.load(f)
vocab = json_file['vocab']
for idx, v in enumerate(vocab):
vocab[idx] = tuple(v)
tokenizer = SentencePieceUnigramTokenizer(vocab)
tokenizer = PreTrainedTokenizerFast(tokenizer_object=tokenizer)
print(tokenizer.tokenize(test_input))
```
- Output
```
['▁T', 'h', 'is', '▁', 'is', '▁', 't', 'e', 'st', '▁', 'in', 'p', 'u', 't', '.']
```
I think the tokenizer is working, but I want the following (BERT-style) format:
```
['[CLS]', '▁T', 'h', 'is', '▁', 'is', '▁', 't', 'e', 'st', '▁', 'in', 'p', 'u', 't', '.', '[SEP]']
```
I want to convert it so that it is easy to use. What can I do?
Thank you. | 09-03-2021 06:57:28 | 09-03-2021 06:57:28 | I tried like this:
```
from transformers import AutoTokenizer, BertTokenizer
# Load my sentencepiece tokenizer
tokenizer = AutoTokenizer.from_pretrained('my_tokenizer_model')
tokenizer.build_inputs_with_special_tokens = BertTokenizer.build_inputs_with_special_tokens
print(tokenizer.tokenize('테스트 문장입니다.', add_special_tokens=True))
# ['▁테', '스트', '▁문', '장', '입니다']
```
I expected that if I changed the `build_inputs_with_special_tokens` method, the tokenizer would add [CLS] and [SEP].<|||||>You should be able to do so by adding a `post_processor` to your tokenizer: https://huggingface.co/docs/tokenizers/python/latest/quicktour.html#post-processing<|||||>@LysandreJik Thank you. But this does not work for me.
```
from tokenizers.processors import TemplateProcessing
tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", tokenizer.vocab["[CLS]"]),
("[SEP]", tokenizer.vocab["[SEP]"]),
],
)
tokenizer.tokenize('Hi?')
# ['▁', 'H', 'i', '?']
```<|||||>The tokenizer is instance of `PreTrainedTokenizerFast `<|||||>Can you try doing it on the `SentencePieceUnigramTokenizer` instance (before it's converted to a `transformers` tokenizer?)<|||||>It works! Thank you! @LysandreJik |
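For reference, a sketch of where the post-processor has to be attached for it to take effect (this assumes the trained tokenizer was saved as a full `tokenizer.json` via `save()`, that `[CLS]`/`[SEP]` exist in its vocab, and that the file name is a placeholder):
```python
from tokenizers import Tokenizer
from tokenizers.processors import TemplateProcessing
from transformers import PreTrainedTokenizerFast

tok = Tokenizer.from_file("unigram-full.json")  # the raw tokenizers.Tokenizer, not the transformers wrapper
tok.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[
        ("[CLS]", tok.token_to_id("[CLS]")),
        ("[SEP]", tok.token_to_id("[SEP]")),
    ],
)

fast_tokenizer = PreTrainedTokenizerFast(tokenizer_object=tok, cls_token="[CLS]", sep_token="[SEP]")
print(fast_tokenizer.tokenize("This is test input.", add_special_tokens=True))
```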
transformers | 13,397 | closed | T5: relative position embeddings | Hello,
In the paper [Exploring the limits of Transformer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) it says that they share the position embedding parameters across all layers.
<img width="967" alt="Screen Shot 2021-09-03 at 12 35 38 PM" src="https://user-images.githubusercontent.com/19749157/131946859-b23e5ff6-a1ae-4295-8511-4dc8fe273f7f.png">
But as shown below, the current implementation seems to use relative position embedding in the first layer only.
https://github.com/huggingface/transformers/blob/c1c2d68d37d4d8372729038ca06246e31859beaa/src/transformers/models/t5/modeling_t5.py#L807
I'm not sure how it changes the performance, but are there any particular reason for such implementation? | 09-03-2021 03:37:20 | 09-03-2021 03:37:20 | Hi @hyukyu ,
T5 implementation does share the relative embedddings. The `has_relative_attention_bias` specifies which layer should store the embedding, which is the first layer in T5. Then the `position_bias` is computed in the first layer and shared with all other layers.
See
https://github.com/huggingface/transformers/blob/76c4d8bf26de3e4ab23b8afeed68479c2bbd9cbd/src/transformers/models/t5/modeling_t5.py#L487-L496
https://github.com/huggingface/transformers/blob/76c4d8bf26de3e4ab23b8afeed68479c2bbd9cbd/src/transformers/models/t5/modeling_t5.py#L1019-L1022<|||||>Hello @patil-suraj,
I misunderstood the code, but now it is very clear that they do use relative embeddings in the other layers too.
I really appreciate your help. Thank you! |
transformers | 13,396 | closed | [SpeechEncoderDecoder] Fix final test | Fix failing test: tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2Speech2Text2::test_encoder_decoder_model_output_attentions | 09-02-2021 16:46:56 | 09-02-2021 16:46:56 | |
transformers | 13,395 | closed | [Tests] Fix SpeechEncoderDecoder tests | # What does this PR do?
This PR fixes the flaky tests:
- tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2Speech2Text2::test_encoder_decoder_model_output_attentions
- tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2BertModelTest::test_encoder_decoder_model_output_attentions
The problem was that layerdrop was applied in roughly 10% of test runs, which set the attention output to `None`. This PR fixes that by putting the model in `.eval()` mode.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-02-2021 16:09:51 | 09-02-2021 16:09:51 | |
transformers | 13,394 | closed | LayoutLMv2Processor padding/truncation issues | Hi there,
I ran into an issue with LayoutLMv2 during inference. I am using the LayoutLMv2 processor to encode my inputs (the full image, texts and bounding boxes) before calling the LayoutLMv2 model. The issue is that the processor doesn't seem to respect my padding/truncation strategies and fails to batch-encode the inputs because they are not of the same size.
## Environment info
- `transformers` version: 4.10.0
- Platform: Linux-4.15.0-143-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using LayoutLMv2:
The problem arises when using:
* [x] the official example scripts:
```python
from transformers import LayoutLMv2Processor, AutoModelForTokenClassification, AutoConfig, AutoTokenizer
[...]
self.processor = LayoutLMv2Processor.from_pretrained(model_path, revision="no_ocr")
[...]
encoded_inputs = self.processor(image.convert('RGB'),
ocrs,
boxes=bboxes,
word_labels=self.config["kie_classes"],
padding="max_length",
truncation=True,
return_tensors="pt",
)
```
The task I am working on is:
* [x] an official GLUE/SQUaD task: Token classification / NER
## To reproduce
Steps to reproduce the behavior:
1. Get a Pillow RGB document image.
2. Given the bounding boxes of this image (during inference), get the OCR transcripts of bboxes.
3. Load the LayoutLMv2 processor and model.
4. Try encoding the inputs (image, ocrs, bboxes) with the processor (using the script provided above.
* [x] Stack traces:
```
File "/src/tools/miniconda/envs/myenv/lib/python3.7/site-packages/transformers/models/layoutlmv2/processing_layoutlmv2.py", line 201, in __call__
**kwargs,
File "/src/tools/miniconda/envs/myenv/lib/python3.7/site-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py", line 313, in __call__
**kwargs,
File "/src/tools/miniconda/envs/myenv/lib/python3.7/site-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py", line 449, in encode_plus
**kwargs,
File "/src/tools/miniconda/envs/myenv/lib/python3.7/site-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py", line 658, in _encode_plus
**kwargs,
File "/src/tools/miniconda/envs/myenv/lib/python3.7/site-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2_fast.py", line 607, in _batch_encode_plus
return BatchEncoding(sanitized_tokens, sanitized_encodings, tensor_type=return_tensors)
File "/src/tools/miniconda/envs/myenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 210, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/src/tools/miniconda/envs/myenv/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 722, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
## Expected behavior
`LayoutLMv2Processor` is expected to produce encoded inputs that can be fed to the LayoutLMv2 model. However, even though `padding` and `truncation` are set to True (or to a suitable `PaddingStrategy` or `TruncationStrategy`), the processor does not seem to take that into account.
Many thanks in advance for your help. | 09-02-2021 16:09:35 | 09-02-2021 16:09:35 | Hi,
Can you provide a Colab notebook or code snippet that reproduces your issue? I see that you're only providing a single image to the processor, so that's actually not a batch of examples, but just a single example.
Also, you need to make sure that the words, boxes and word labels you provide are all of equal length.<|||||>Hi,
Thank you for your reply. I agree I am not performing a batch encoding since there is only one image.
Actually, I followed your guide in `https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Inference_with_LayoutLMv2ForTokenClassification.ipynb` but didn't understand why we needed word labels during inference (perhaps only for evaluating predictions against ground truth ?).
I debugged my code on Colab, and it was indeed due to the length of the word labels. Thank you very much !
Cheers. |
transformers | 13,393 | closed | Tapas tf | # What does this PR do?
TF Tapas
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@nielsrogge @sgugger @LysandreJik @Rocketknight1 | 09-02-2021 15:51:54 | 09-02-2021 15:51:54 | 
<|||||>pending
- [x] adding unit test cases
- [x] adding model to TFAutoTableQuestionAnswering pipeline
- [x] adding unit test for the pipeline
- [x] fix all the copy pasted code comments from pt->tf
- [x] update `tapas.rst` with TF sample code
- [x] add `#Copied` Comments`
- [x] push tf weights to official model hub - help needed.
- [x] update MLM model, https://github.com/huggingface/transformers/pull/13408
- [ ] ...<|||||>@LysandreJik
ready for review 🤗<|||||>Great news @kamalkraj!
@NielsRogge and @Rocketknight1, could you take a look at this?<|||||>Hi, I'd like to apologize for not getting to this sooner! It's a huge PR, but I'll try to get through it today or tomorrow and give feedback where I can.<|||||>@LysandreJik @sgugger @Rocketknight1 @NielsRogge
Thanks for the review.
I've made the changes according to the review.
The only pending part is `pushing TF weights to official model hub` -
After that, I can remove the model loading `from_pt` argument from all the tests<|||||>Hi,
While restoring model weights from `tf_model.h5` something is going wrong.
Could you please check this notebook and confirm I am using correct save and load methods?
https://colab.research.google.com/drive/1GypOH9_70xhMCZvZhE0RSCWtx4r_RIzg?usp=sharing
@NielsRogge @Rocketknight1 <|||||>I can work on the PyTorch change first and submit a PR. <|||||>I investigated the notebook and something very odd is happening - the output logits being masked (with addition of -10000) are not the same in the two models. I can't understand how a change in weights would cause that, so I'm guessing there's some deeper cause. I'll need to step through the model execution to figure out where that bug is creeping in, though.<|||||>@Rocketknight1
https://github.com/huggingface/transformers/blob/408b2d2bd08f667cf4154730cc323c4e49657eed/src/transformers/modeling_tf_utils.py#L497-L514
Only the saving model layers are getting restored.

Other parameters are not listed in the `tf_model.h5` file also
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@Rocketknight1 could you take a look at this?<|||||>Hey @kamalkraj, I took a look at the issue. From what I'm understanding, the issue comes from the `compute_token_logits` method: using your notebook, I have compared the outputs of two models, one being saved and reloaded. I have verified that all the hidden states remain the exact same across the two model instances. From what I'm perceiving, the divergence appears in that `compute_token_logits` method.
In that method, you're doing a `tf.einsum` operation, using the `self.column_output_weights` and `self.column_output_bias` weights. However, it seems that when saving the model, these two weights are not saved. I identified this as when reloading the model weights from the TensorFlow checkpoint, the bias will be set to `0` as it is initialized from the default `tf.zeros_initializer`, rather than from the layer that we have just saved.
From what I have seen, this affects only the question answering head.
Is there a reason you used the `tf.einsum` approach instead of the `tf.EinsumDense` layer? It seems to be doing the same thing, yet the latter is a layer that would be saved when saving the model.
Thank you!<|||||>Hi @LysandreJik,
Thanks for the information. I had the same observation
> @Rocketknight1
>
> https://github.com/huggingface/transformers/blob/408b2d2bd08f667cf4154730cc323c4e49657eed/src/transformers/modeling_tf_utils.py#L497-L514
>
> Only the saving model layers are getting restored. 
>
> Other parameters are not listed in the `tf_model.h5` file also
I can't use [tf.EinsumDense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/experimental/EinsumDense), because then it won't be an equivalent implementation to the PyTorch version and hence the PT_TF cross-test will fail.

<|||||>Hi @kamalkraj, sorry for the delay. I did a full investigation here and here's what I found:
- The weights that are attached to the base `Model` with `self.add_weight()` are the only weights that are getting loaded incorrectly. All weights inside layers are loaded correctly.
- Even though attaching weights to the base model works in Keras in general, it doesn't work for our models because of how we build them. I think some part of the build process overwrites the loaded weights with randomly initialized ones.
- All of our other models resolve this issue by moving the weights from the main model class to other layers, e.g. a `QuestionAnsweringHead` Layer, but this may create issues with cross-loading weights from PyTorch. Even so, this is probably the solution we need here. I'll investigate the cross-loading code and try to find a solution there. Are you okay with moving those weights to a separate Layer?<|||||>> * I'll investigate the cross-loading code and try to find a solution there. Are you okay with moving those weights to a separate Layer?
Yes @Rocketknight1
Thanks
<|||||>@kamalkraj I checked the weight crossloading function, and we should be able to move the weights to another layer and as long as we name it correctly weight porting should work.<|||||>Thanks, @Rocketknight1
Fixed the issue.
@NielsRogge
Could you please help me to upload the model weights to the official hub?
<|||||>@kamalkraj sure!
Btw I just discovered a (tiny) bug in the forward pass of TAPAS (when `config.select_one_column` is set to `False`). Let me open a PR for it first, such that you can include the fix in this PR.<|||||>@kamalkraj I'm uploading all TAPAS TF checkpoints to the hub.
Can you resolve the conflict shown above? Also, can you confirm the `test_pt_tf_model_equivalence` tests pass? They don't seem to pass on CI.<|||||>@NielsRogge

<|||||>@NielsRogge
Test in CI failed due to version conflict of `Tensorflow Probability` with `Tensorflow`, Should I pin the version ?<|||||>The TF version should not be pinned, but the TF probability version can be pinned.<|||||>@NielsRogge
In the setup.py the Tensorflow version is specified >= 2.3
https://github.com/kamalkraj/transformers/blob/fbad9bb56e8f67dca2c29fb21a5a017c823c57b7/setup.py#L155-L156
But the TensorFlow version installed in the CI is 2.6.2. Any idea why?
https://app.circleci.com/pipelines/github/huggingface/transformers/30600/workflows/a636c0a9-a0f0-4fbe-8124-276b7ec5d6c5/jobs/312697?invite=true#step-111-4303
Current TF latest version is 2.7<|||||>It seems that this is due to pip not upgrading itself correctly:
```
WARNING: You are using pip version 21.2.4; however, version 21.3.1 is available.
You should consider upgrading via the '/usr/local/bin/python -m pip install --upgrade pip' command.
```
Could you try to update the following in your PR: https://github.com/huggingface/transformers/blob/25156eb296ae88c7b810235a368c953b7a4b9af9/.circleci/config.yml#L82
to
```
/usr/local/bin/python -m pip install --upgrade pip
```
to check if it changes anything? Thanks, @kamalkraj
<|||||>Thanks for trying it out, it seems like it didn't work out. I'll try a few things and come back to you.<|||||>Found the error! TensorFlow 2.7 does not support Python 3.6 anymore (cc @sgugger, @Rocketknight1, @patrickvonplaten, @patil-suraj).
Could you update this line: https://github.com/huggingface/transformers/blob/master/.circleci/config.yml#L68
to have `circleci/python:3.7` as an image?<|||||>Thanks @LysandreJik
Should I revert this commit f18cfa9?
I have changed the python version only here ` run_tests_torch_and_tf `.
<|||||>@NielsRogge
Thank you so much for uploading all the TF models to the hub.
I have updated the tests to load the model directly from the hub, rather than using `from_pt`
Ready to merge π€
<|||||>Indeed, if you can revert the pip commit then we're ready to go! We can also merge it and revert it afterwards, do you want to take care of that @NielsRogge?<|||||>Fantastic @kamalkraj, let's merge it once it's all green |
transformers | 13,392 | closed | TorchScript warning | # 📚 Migration
## Information
<!-- Important information -->
Model I am using (**Bert**, XLNet ...):
Language I am using the model on (English, **Chinese** ...):
The problem arises when using:
* [yes] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [yes] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## Details
When I use the official script to create serializable and optimizable models from PyTorch, I get these warnings.
https://huggingface.co/transformers/torchscript.html
I set torchscript=True, but it doesn't seem to work. And when I use TorchScript with my model, the results seem different from the results I got before serializing the model.
> /opt/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py:955: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length]
/opt/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py:201: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
position_ids = self.position_ids[:, past_key_values_length : seq_length + past_key_values_length]
/opt/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/modeling_utils.py:2169: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
input_tensor.shape[chunk_dim] == tensor_shape for input_tensor in input_tensors
## Environment info
<!-- You can run the command `python transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:4.10.0
- Platform:linux
- Python version:3.7.11
- PyTorch version (GPU?):1.7.1(cpu)
- Using GPU in script?:No
- Using distributed or parallel set-up in script?:No
<!-- IMPORTANT: which version of the former library do you use? -->
* `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch):
## Checklist
- [ yes] I have read the migration guide in the readme.
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers);
[pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers))
- [ yes] I checked if a related official extension example runs on my machine.
| 09-02-2021 14:24:47 | 09-02-2021 14:24:47 | Hello! Do you have a reproducer showing different results when using torchscript?<|||||>> Hello! Do you have a reproducer showing different results when using torchscript?
Sorry, it's my mistake. This warning doesn't affect the results. Thank you for your answer.
But I've met another problem. When I load a TorchScript model and feed it the same input 100 times, the time cost varies. It costs more at the beginning.
The code looks like this:
```
import time  # for the timing below

model.to(device)
input_ids = input_ids.to(device)
attention_mask = attention_mask.to(device)
token_type_ids = token_type_ids.to(device)

# run the same forward pass 100 times and time each call
for i in range(100):
    begin = time.time()
    model(input_ids, attention_mask, token_type_ids)
    print(time.time() - begin)
```
The time costs look like this:
> 0.18257904052734375
0.1774275302886963
0.24630260467529297
1.8313817977905273
0.008061647415161133
0.007055997848510742
0.006993532180786133
0.007005929946899414
0.00698542594909668
0.006965160369873047
0.0069713592529296875
0.006967782974243164
...
Is this a feature of TorchScript? It seems weird.<|||||>I suspect this is because TorchScript is tracing your model the first few times, hence why it takes longer. Once the tracing is done, the times should stabilize.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,391 | closed | How to build a custom dataset for LayoutLMv2ForSequenceClassification? | ## Environment info
- `transformers` version: 4.10.0
- Platform: Linux
- Python version: 3.8.8
- PyTorch version (GPU?): 1.8.0+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...): LayoutLMv2ForSequenceClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I am trying to build a custom dataset to fine tune LayoutLMv2ForSequenceClassification.
For that I am building a torch.utils.data.Dataset, with the following getitem function:
```python
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img = Image.open(self.files[idx]).convert('RGB')
label = self.labels[idx]
if self.transforms is not None:
img = self.transforms(img)
encoding = self.processor(img, return_tensors="pt")
encoding['input_ids'] = encoding['input_ids'][:,:512]
encoding['token_type_ids'] = encoding['token_type_ids'][:,:512]
encoding['attention_mask'] = encoding['attention_mask'][:,:512]
encoding['bbox'] = encoding['bbox'][:,:512,:4]
return {
**encoding,
"label": label
}
```
Here is how I defined the processor:
```python
feature_extractor = LayoutLMv2FeatureExtractor()
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
processor = LayoutLMv2Processor(feature_extractor, tokenizer)
```
The training starts but when it starts loading the data batches it fails.
Output:
```bash
Traceback (most recent call last):
File "main.py", line 82, in <module>
trainer.train()
File "env_3.8/lib/python3.8/site-packages/transformers/trainer.py", line 1258, in train
for step, inputs in enumerate(epoch_iterator):
File "env_3.8/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "env_3.8/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "env_3.8/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "env_3.8/lib/python3.8/site-packages/transformers/data/data_collator.py", line 66, in default_data_collator
return torch_default_data_collator(features)
File "env_3.8/lib/python3.8/site-packages/transformers/data/data_collator.py", line 105, in torch_default_data_collator
batch[k] = torch.stack([f[k] for f in features])
RuntimeError: stack expects each tensor to be equal size, but got [1, 20] at entry 0 and [1, 266] at entry 1
```
How can I solve this? Is there any documentation on how to build a simple pytorch dataset that works with huggingface transformers' models? It would be very nice if you had something like [this clear documentation on how to build a dataset for pytorch](https://pytorch.org/tutorials/beginner/data_loading_tutorial.html#dataset-class). I know there is a doc on transformers.datasets but I found it pretty confusing... | 09-02-2021 12:55:47 | 09-02-2021 12:55:47 | Please use the [forums](https://discuss.huggingface.co/) to debug your code as we keep the issues for bugs and feature requests only.
There is no special Dataset that works better for the Trainer; a standard PyTorch dataset works perfectly well. The problem here is in your specific dataset, which doesn't have tensors of the same shapes and thus can't be collated directly into a batch. Like in pure PyTorch, you either need to change your dataset to have elements of the same shape, or pass along a `data_collator` to the `Trainer` (which is the equivalent of a `collate_fn` for PyTorch DataLoaders) to process a list of samples from your dataset into a batch.<|||||>Hi,
Sorry for that, I did not know there was a forum.
I don't see how my inputs can be of different shapes, as they are images, and I convert them to RGB before resizing them using `self.transforms(img)`. Do you have any idea?
From what I understand, it seems there is some kind of padding that I need to apply to the processor's output, but I just don't know how.<|||||>Hi,
I see you are truncating the inputs, but the processor can take care of that for you. Just specify `truncation=True`.
```
def __getitem__(self, idx):
image = Image.open(self.files[idx]).convert('RGB')
label = self.labels[idx]
# processor creates input_ids, attention_mask, token_type_ids, bbox, image
encoding = self.processor(image, padding="max_length", truncation=True, return_tensors="pt")
# remove batch dimension (which the processor automatically adds)
for k,v in encoding.items():
encoding[k] = v.squeeze()
# add label
encoding["labels"] = torch.tensor(label)
return encoding
```
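As a quick sanity check before training, you could inspect the batched shapes like this (assuming your dataset instance is called `train_dataset`, which is a made-up name):
```python
from torch.utils.data import DataLoader

dataloader = DataLoader(train_dataset, batch_size=2)
batch = next(iter(dataloader))
for k, v in batch.items():
    # input_ids, attention_mask, token_type_ids and bbox should now share a fixed length,
    # and the image tensor should be (batch_size, 3, 224, 224)
    print(k, v.shape)
```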
So what happens internally, is that `LayoutLMv2Processor` first uses `LayoutLMv2FeatureExtractor` to apply OCR (namely, Google's Tesseract) on the document image to get a list of words + corresponding boxes (coordinates). The feature extractor also resizes the document image to 224x224. Next, the list of words + boxes are provided to `LayoutLMv2TokenizerFast`, which convert them to token-level `input_ids`, `attention_mask`, `token_type_ids` and `bbox`. Together with the resized image and the label, you have everything you need to train the model.
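For reference, a rough sketch of doing those two steps manually (untested; for a single image the words and boxes come back as one list per image, hence the `[0]`):
```python
from PIL import Image
from transformers import LayoutLMv2FeatureExtractor, LayoutLMv2TokenizerFast

image = Image.open("document.png").convert("RGB")  # hypothetical file name

# step 1: OCR (Tesseract by default) + resize the image to 224x224
feature_extractor = LayoutLMv2FeatureExtractor()
features = feature_extractor(image)
words, boxes = features.words[0], features.boxes[0]

# step 2: turn words + boxes into token-level inputs
tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")
encoding = tokenizer(words, boxes=boxes, padding="max_length", truncation=True, return_tensors="pt")
```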
<|||||>Hi,
Thank you very much that is what I was looking for. Actually I looked for such parameters in the doc but for some reason I could not find it...
Thank you a lot for your help.
I have a new error, however, I don't know if you have any idea of how to solve it?
```bash
python3.8/site-packages/detectron2/modeling/backbone/resnet.py", line 443, in forward
assert x.dim() == 4, f"ResNet takes an input of shape (N, C, H, W). Got {x.shape} instead!"
AssertionError: ResNet takes an input of shape (N, C, H, W). Got torch.Size([8, 1, 3, 224, 224]) instead!
```
Note: I have got this error using the above code<|||||>That's because you need to remove the batch dimension which the processor automatically adds. I have updated my code snippet above.<|||||>It works ! Thank you very much for your help ! |
transformers | 13,390 | closed | Transformers crashes mypy | Ubuntu 21.04, tested on both Python 3.8 and Python 3.9 with mypy 0.910
I get a crash caused by transformers `4.10.0` when typechecking a private codebase which imports transformers. This is the error message:
```
src/transformers/trainer.py:1435: error: INTERNAL ERROR -- Please try using mypy master on Github:
https://mypy.readthedocs.io/en/stable/common_issues.html#using-a-development-mypy-build
Please report a bug at https://github.com/python/mypy/issues
version: 0.910
```
(I also get the crash trying mypy master. I'm going to submit a corresponding bug report directly to the mypy issue tracker.)
This is the corresponding line:
https://github.com/huggingface/transformers/blob/b91e65afe0f467e24183928bf57d92b2cef4b69f/src/transformers/trainer.py#L1435
That method is imported into the class, which presumably upsets mypy somehow.
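For context, a stripped-down illustration of the kind of pattern being described (this is not the actual Trainer code, just a minimal shape of it):
```python
# a function defined at module level (imagine it living in a separate utils module)
def log_metrics(self, split, metrics):
    print(f"{split}: {metrics}")


class Trainer:
    # the externally defined function is attached to the class as a method,
    # which is the kind of indirection that can trip up a type checker
    log_metrics = log_metrics


Trainer().log_metrics("eval", {"loss": 0.1})
```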
#### Repro
I can reproduce this locally in the following way:
1. Clone the transformers repo
2. Create a blank virtualenv and install mypy 0.910
3. Add the following `mypy.ini` file:
```
[mypy]
check_untyped_defs=True
```
4. Run `mypy src/transformers/trainer.py`
Confusingly, this local repro method also crashes mypy when using the 4.9.1 tag, but I don't experience the crash in my private codebase when using that transformers version :shrug:
cc @sgugger | 09-02-2021 10:14:40 | 09-02-2021 10:14:40 | I am unsure what you want us to do on the Transformers side? This seems to be a bug in mypi.<|||||>I agree the bug is in mypy.
On the transformers side, I would think it's possible to rearrange the code very slightly to avoid exercising the buggy mypy code path - it's easy to reproduce using the steps I provided above.
I wanted to open this issue mostly so you're aware that there are users who can't upgrade to 4.10 because of the crash. The codebase I was upgrading doesn't need any features of 4.10 so I'm afraid I'm not rushing to try to fix this myself.
If you're not worried, feel free to close this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Although I hope mypy fixes the bug on their side, I did some digging into possible workarounds:
- The `transformers` commit that causes mypy to crash is d8fb278, which makes sense. Adding a `py.typed` enabled the type checking, which exposed the bug even though no code was changed in the commit itself. Any commit before it works without issues.
- If you want to use mypy but are okay with not having `transformers` type checked, add the following code to `mypy.ini`:
```ini
[mypy-transformers.*]
follow_imports = skip
```
Hope this helps!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Just want to mention that the workaround for the newer pyproject.toml-based config files is
```toml
[[tool.mypy.overrides]]
follow_imports = "skip"
module = [
"transformers.*",
]
``` |
transformers | 13,389 | closed | How can I convert fairseq checkpoint to huggingface for `XLMProphetModel`? | Recently, I spent a lot of time reproducing the New Title Generation (NTG) experiment result in the original paper with `microsoft/xprophetnet-large-wiki100-cased-xglue-ntg`. However, it seems cannot be reproduced.
Finally, I found that the pre-trained model named `microsoft/xprophetnet-large-wiki100-cased` does not seem to be the Prophet-X, i.e. Prophet-Multi, from the original paper. Actually, it is a baseline named `Unicoder-FNP` in the original paper.
From the readme from the official repo: https://github.com/microsoft/ProphetNet/tree/master/ProphetNet_Multi
> For ProphetNet-Multi-Wiki100(Baseline Model Unicoder-FNP for XGLUE), it is pretrained with 100 languages Wikipedia data Wiki-100 described in [XGLUE](https://arxiv.org/abs/2004.01401).

**Fortunately, I ran the code in the official repo with `fairseq` and reproduced the results.**
**I would like to know if there is a chance you could offer a script to convert the `fairseq` checkpoint to the `huggingface` format for `XLMProphetModel`**, at your convenience.
Thanks a lot. @patrickvonplaten @LysandreJik @sgugger
| 09-02-2021 09:49:49 | 09-02-2021 09:49:49 | @patrickvonplaten Could you share the script to convert fairseq checkpoint to huggingface for XLMProphetModel?
Thanks very much!<|||||>Hey @ericwtlin,
yes we do have such a script here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/prophetnet/convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py<|||||>> Hey @ericwtlin,
>
> yes we do have such a script here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/prophetnet/convert_prophetnet_original_pytorch_checkpoint_to_pytorch.py
@patrickvonplaten I found this script before, but where is the "transformers_old" dependency? Moreover, from the code, I guess the prophetnet_checkpoint_path should be a huggingface-style checkpoint, rather than the official checkpoint.
`prophet_old = XLMProphetNetForConditionalGenerationOld.from_pretrained(prophetnet_checkpoint_path)`
I tried to import src/transformers from the save_old_prophetnet_model_structure branch as transformers_old and used the official dump (e.g. https://msraprophetnet.blob.core.windows.net/prophetnet/release_checkpoints/prophetnet_multi_wiki100.pt), but the code doesn't work:
<|||||>Let me clarify, I can run the code, but it says most of the weights of official dump are not loaded.<|||||>@patrickvonplaten , could you share us the script to convert fairseq official checkpoint (e.g. https://msraprophetnet.blob.core.windows.net/prophetnet/release_checkpoints/prophetnet_multi_wiki100.pt) to patrickvonplaten/xprophetnet-large-wiki100-cased-xglue-ntg_old?
Many thanks!<|||||>Hey @ericwtlin,
Exactly `transformers_old` actually corresponds to this version of transformers: https://github.com/huggingface/transformers/tree/save_old_prophetnet_model_structure
(very ugly code that I've added there, sorry!)
In general, it's very difficult to ensure that checkpoint conversion scripts stay correct as the original codebase is often very likely to change or model parameter names change, etc...
We could try to debug this together in a google colab - I sadly don't have the time to dig deep into the current checkpoint of microsoft.
It would be great if you could provide a google colab where you get the error message mentioned above<|||||>`https://github.com/huggingface/transformers/tree/save_old_prophetnet_model_structure`
not available now
|
transformers | 13,388 | closed | Hard time installing huggingface for python3.8 cuda10.1 | ## Environment info
- `transformers` version: 4.10.0
- Platform: linux (ubuntu)
- Python version: 3.8
- PyTorch version (GPU?): 1.7.1+cu101
- Tensorflow version (GPU?): 2.2.0 , tf-gpu : 2.2.0
- Using GPU in script?: not even using it for now
- Using distributed or parallel set-up in script?: no
## Assignees
- trainer: @sgugger
## Information
I am trying to fine tune the LayoutLMv2ForSequenceClassification model.
When I test my installation inside a venv with the following command it crashes:
```
python -c "import tensorflow as tf"
```
Output:
```
env_3.8/lib/python3.8/site-packages/transformers/onnx/convert.py", line 23, in <module>
from .. import PreTrainedModel, PreTrainedTokenizer, TensorType, TFPreTrainedModel, is_torch_available
ImportError: cannot import name 'TFPreTrainedModel' from 'transformers'
```
| 09-02-2021 09:18:21 | 09-02-2021 09:18:21 | Can you try updating your TensorFlow version and letting us know if it fixes your issue? For example to tensorflow v2.5.0<|||||>Hello, finally I solved my issue : it was a tensorflow-related issue. In fact I was playing with a small script named code.py in which I was importing stuff from huggingface, and tensorflow was trying to import something else, also named code.py so there was a circular import or something. So in the end I just renamed my script file and it works !<|||||>Ah interesting! Thanks for updating us. |
transformers | 13,387 | closed | [doc] fix mBART example | # What does this PR do?
This PR fixes the tokenizer call, according to the new API.
Fixes #12073 | 09-02-2021 07:10:07 | 09-02-2021 07:10:07 | Thanks! |
transformers | 13,386 | closed | [docs] Update perplexity.rst to use negative log likelihood | # What does this PR do?
`forward` returns the negative log likelihood. The document correctly defines and calculates perplexity, but the description and variable names are inconsistent, which might cause confusion. This patch fixes the description to reflect the correct behavior (i.e., use negative log-likelihood).
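For intuition, a tiny illustrative snippet (the per-token values are made up): perplexity is just the exponential of the mean negative log-likelihood.
```python
import torch

nll_per_token = torch.tensor([2.1, 1.7, 3.0, 2.4])  # hypothetical negative log-likelihoods
perplexity = torch.exp(nll_per_token.mean())
print(perplexity)  # exp(2.3) ≈ 9.97
```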
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sashavor @sgugger
| 09-02-2021 05:17:45 | 09-02-2021 05:17:45 | |
transformers | 13,385 | closed | Fix name and get_class method in AutoFeatureExtractor | # What does this PR do?
This PR fixes the name of one of the feature extractors and the logic in the feature extraction auto class to match what we did in AutoTokenizer.
Merging since all failing tests in the nightlies pass locally after this patch. Can address any comment tomorrow! | 09-02-2021 00:45:09 | 09-02-2021 00:45:09 | |
transformers | 13,384 | closed | not support Pytorch 1.8.2 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9 and 4.10
- Platform: Dockerfile with ubuntu 18.04 base
- Python version: 3.8
- PyTorch version (GPU?): 1.8.2+cpu and 1.8.2+cu102
I install transformers using pip in my Dockerfile. When I build the Dockerfile, it shows an error implying that transformers does not support PyTorch 1.8.2+cpu or 1.8.2+cu102:
Downloading transformers-4.10.0-py3-none-any.whl (2.8 MB)
ERROR: Could not find a version that satisfies the requirement torch==1.8.2+cpu (from versions: 0.1.2, 0.1.2.post1, 0.1.2.post2, 1.4.0, 1.4.0+cpu, 1.4.0+cu100, 1.4.0+cu92, 1.5.0, 1.5.0+cpu, 1.5.0+cu101, 1.5.0+cu92, 1.5.1, 1.5.1+cpu, 1.5.1+cu101, 1.5.1+cu92, 1.6.0, 1.6.0+cpu, 1.6.0+cu101, 1.6.0+cu92, 1.7.0, 1.7.0+cpu, 1.7.0+cu101, 1.7.0+cu110, 1.7.0+cu92, 1.7.1, 1.7.1+cpu, 1.7.1+cu101, 1.7.1+cu110, 1.7.1+cu92, 1.7.1+rocm3.7, 1.7.1+rocm3.8, 1.8.0, 1.8.0+cpu, 1.8.0+cu101, 1.8.0+cu111, 1.8.0+rocm3.10, 1.8.0+rocm4.0.1, 1.8.1, 1.8.1+cpu, 1.8.1+cu101, 1.8.1+cu102, 1.8.1+cu111, 1.8.1+rocm3.10, 1.8.1+rocm4.0.1, 1.9.0, 1.9.0+cpu, 1.9.0+cu102, 1.9.0+cu111, 1.9.0+rocm4.0.1, 1.9.0+rocm4.1, 1.9.0+rocm4.2)
| 09-01-2021 21:19:31 | 09-01-2021 21:19:31 | Hello,
There is actually no torch v1.8.2 on either pypi or [torch hosted repository](https://download.pytorch.org/whl/torch_stable.html). You should probably install v1.8.1 or v1.9.0+<|||||>> Hello,
>
> There is actually no torch v1.8.2 on either pypi or [torch hosted repository](https://download.pytorch.org/whl/torch_stable.html). You should probably install v1.8.1 or v1.9.0+
Hi, you can access https://pytorch.org/ and find that the LTS version is PyTorch 1.8.2<|||||>Thanks for the info @Doragd. To OP, if you want to use PyTorch v1.8.2, you'll have to follow the install instructions on the PyTorch website and then install `transformers`, because PyTorch v1.8.2 doesn't exist on the official pypi repository.<|||||>I've installed PyTorch v1.8.2, but failed at installing transformers because v1.8.2 is not in the supported list in transformers.<|||||>I'm pretty sure `transformers` supports PyTorch v1.8.2. The error message you posted does not mean **not supported**, it just means pip cannot find the specific version of PyTorch in the given pip repository.
I've tested the following code on Colab, `transformers` worked as expected.
```
!pip install torch==1.8.2+cpu torchvision==0.9.2+cpu torchaudio==0.8.2 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
!pip install transformers==4.10
!python --version
import torch
import transformers
from transformers import BertTokenizer
print('PyTorch version:', torch.__version__, ' transformers version:', transformers.__version__)
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer("Hello World!"))
```
Outputs:
```
Python 3.7.11
PyTorch version: 1.8.2+cpu transformers version: 4.10.0
{'input_ids': [101, 7592, 2088, 999, 102], 'token_type_ids': [0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1]}
```
Could you post your Dockerfile here? so we can better understand what the problem is.<|||||>@liuhoward I have installed transformers successfully under the PyTorch v1.8.2. All is well.
You can follow my steps
```shell
$ conda create -n test python=3.8
$ conda activate test
$ pip install torch==1.8.2+cu111 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
$ pip install transformers
```
The command output logs of `pip list`:
```text
Package Version
------------------ -------------------
backcall 0.2.0
certifi 2021.5.30
charset-normalizer 2.0.4
click 8.0.1
decorator 4.4.2
filelock 3.0.12
huggingface-hub 0.0.16
idna 3.2
ipykernel 5.5.0
ipython 7.21.0
ipython-genutils 0.2.0
jedi 0.18.0
joblib 1.0.1
jupyter-client 6.1.12
jupyter-core 4.7.1
numpy 1.21.2
packaging 21.0
parso 0.8.1
pexpect 4.8.0
pickleshare 0.7.5
pip 21.0.1
prompt-toolkit 3.0.18
ptyprocess 0.7.0
Pygments 2.8.1
pyparsing 2.4.7
python-dateutil 2.8.1
PyYAML 5.4.1
pyzmq 22.0.3
regex 2021.8.28
requests 2.26.0
sacremoses 0.0.45
setuptools 52.0.0.post20210125
six 1.15.0
tokenizers 0.10.3
torch 1.8.2+cu111 # torch
tornado 6.1
tqdm 4.62.2
traitlets 5.0.5
transformers 4.10.0 # transformers
typing-extensions 3.10.0.2
urllib3 1.26.6
wcwidth 0.2.5
wheel 0.37.0
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,383 | closed | [GPU Tests] Fix SpeechEncoderDecoder GPU tests | # What does this PR do?
model wasn't moved to GPU the 2nd time | 09-01-2021 21:11:24 | 09-01-2021 21:11:24 | |
transformers | 13,382 | closed | Small typo | https://github.com/huggingface/transformers/blob/4475f1dc2aa7153adb5e66b361c48aed1321fe3d/examples/pytorch/question-answering/run_qa.py#L423
On this line, the word 'agument' has a typo. I believe it should be argument. While we are at it, I think changing it to 'the argument' would be better as well. | 09-01-2021 20:50:04 | 09-01-2021 20:50:04 | If there's a typo, feel free to open a PR! Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Just spotted this issue again, creating a PR |
transformers | 13,381 | closed | Zero-shot classification pipeline truncation support | Transformers 4.10.0 brought [a change](https://github.com/huggingface/transformers/pull/13299/files#diff-c5af53af9b08fb383b49d7a07c1a56c890198b5cd48adc97aeef753fe2e7d60dR91) that modified the default truncation strategy to TruncationStrategy.DO_NOT_TRUNCATE for the ZeroShotClassificationPipeline.
That uncovered an issue in that the [ZeroShotClassificationPipeline](https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/zero_shot_classification.py#L217 ) doesn't appear to pass kwargs to the parent's call method. So even when calling the pipeline with truncation=True, it doesn't allow for truncation.
Thank you for the assistance in advance, appreciate all the work you guys do. | 09-01-2021 16:16:36 | 09-01-2021 16:16:36 | One additional related question: this warning gets printed out each zsl pipeline call now:
logger.warning("The tokenizer {self.tokenizer} does not have a pad token, we're not running it as a batch")
Would that make more sense to raise on init of the object vs each call? Otherwise, it can get pretty noisy even if you reuse the same zsl pipeline multiple times. <|||||>cc @Narsil <|||||>What model/tokenizer were you using ? Using ONLY_FIRST on tokenizer that do not have pad_token should have resulted in error before, but I could be mistaken.
For the warning, as it was supposed to be a new path (and not an old one), it is indeed a bit noisy, and should be cleaned up with the new refactor of pipelines. https://github.com/huggingface/transformers/pull/13308
What you are claiming is that this was an unexpected regression here, so I would like to test out what was exactly wrong, so we can have a real test for this case before fixing it.<|||||>Appreciate the quick response. Here is code that works in 4.9.2 but fails in 4.10.0
```python
from transformers import pipeline
nlp = pipeline("zero-shot-classification", model="roberta-large-mnli", tokenizer="roberta-large-mnli")
nlp(["Very long text" * 1000, "Happy to hear"], ["negative", "positive"], truncation=True)
nlp = pipeline("zero-shot-classification")
nlp(["Very long text" * 1000, "Happy to hear"], ["negative", "positive"], truncation=True)
```
I don't think the truncation param ever did anything but changing the default tokenization param from ONLY_FIRST to DO_NOT_TRUNCATE seems to have exposed the issue. <|||||>Ok.
I can confirm that truncation went from being on by default to not by default.
@LysandreJik that was to enable all tokenizers that don't have a `pad_token` (and can't pad anyway). However, changing the default for tokenizer that can was an oversight on my part.
We can go different routes:
1. Revert the change of default, and override to DO_NOT_TRUNCATE only in the `pad_token` missing path (I think this is what should have been done in the first place, my bad here).
2. Simply roll with it as it was released, but fix the passing around of the truncation argument (it does require a change, to pass on the kwargs to __call__ simply, but might break existing code where some kwargs where just ignored before and would wind up in the `tokenizer(...) ` call triggering new errors).<|||||>I think we can go with 1. and then release a patch (v4.10.1) to keep the previous behavior<|||||>The PR is ready. Turns out the default Truncation.ONLY_FIRST can break (on LED with small tokenizers) where the input is not large enough. (I am not really sure why it's an error in `tokenizers`).
So I made the changes that hopefully should match the old behavior more closely. |
transformers | 13,380 | closed | [Flax] Fix BigBird | # What does this PR do?
PyTorch BigBird was changed in a recent PR: https://github.com/huggingface/transformers/commit/ba1b3db70907b975b5ca52b9957c5ed7a186a0fa but the Flax version wasn't changed accordingly.
Thanks for spotting it @sgugger
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 09-01-2021 16:02:53 | 09-01-2021 16:02:53 | I see -> yeah we can do this! |
transformers | 13,379 | closed | Error using SpecAugment feature masking in Wav2Vec 2.0 | When fine-tuning Wav2Vec 2.0, turning on SpecAugment and setting a non-zero value for `mask_feature_prob` results in a size mismatch error at the line `spec_aug_mask = torch.where(attention_mask.bool(), spec_aug_mask, False)`. There are no issues when `mask_feature_prob` is set to zero.
## Environment info
- `transformers` version: 4.9.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Wav2Vec 2.0
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load the Wav2Vec 2.0 model, e.g., `facebook/wav2vec2-large-960h-lv60-self` with non-zero value for `mask_feature_prob`.
2. Train the model on a batch of data.
Sample code to replicate the error:
```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
import numpy as np
model_name = "facebook/wav2vec2-large-960h-lv60-self"
processor = Wav2Vec2Processor.from_pretrained(model_name)
model = Wav2Vec2ForCTC.from_pretrained(model_name,
mask_feature_prob=0.2)
model.train()
batch_duration_in_seconds = [1, 3, 2, 6]
input_features = [np.random.random(16_000 * s) for s in batch_duration_in_seconds]
batch = processor(input_features,
padding=True,
sampling_rate=16_000,
return_tensors="pt")
model(**batch)
```
The stacktrace is as follows:
```bash
Traceback (most recent call last):
File "spec.py", line 21, in <module>
model(**batch)
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1478, in forward
outputs = self.wav2vec2(
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1064, in forward
hidden_states = self._mask_hidden_states(
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1004, in _mask_hidden_states
mask_feature_indices = _compute_mask_indices(
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 186, in _compute_mask_indices
spec_aug_mask = torch.where(attention_mask.bool(), spec_aug_mask, False)
RuntimeError: The size of tensor a (299) must match the size of tensor b (1024) at non-singleton dimension 1
```
## Expected behavior
Successful forward and backward pass without errors.
| 09-01-2021 15:49:26 | 09-01-2021 15:49:26 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten Any idea what's causing this?<|||||>Hey @Nithin-Holla, very sorry to reply so late!
It's a bug on our side - I'm attaching a PR to fix it, should be merged to master today/tomorrow :-) |
transformers | 13,378 | closed | TRAINING CUSTOM MODEL USING LAYOUTLMv2! | If I want to extract information from a scanned document, will LayoutLMv2 work? Just wanted your suggestion before I start annotating for training. Below is an example image; the red marks are the entities I want to extract.

| 09-01-2021 15:39:10 | 09-01-2021 15:39:10 | LayoutLMv2 depends on an OCR engine of choice. If you provide this image to `LayoutLMv2FeatureExtractor`, it will by default use the Tesseract OCR engine to extract a list of words + bounding boxes from the image. You'll then need to create word-level labels for the corresponding words, that indicate which are an entity and which are not.
Next, you can use `LayoutLMv2TokenizerFast` to turn the word-level words, boxes and word_labels into token-level input_ids, bbox, attention_mask, token_type_ids.
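As a rough, untested sketch of that tokenizer call (the words, boxes and labels below are invented for illustration):
```python
from transformers import LayoutLMv2TokenizerFast

tokenizer = LayoutLMv2TokenizerFast.from_pretrained("microsoft/layoutlmv2-base-uncased")

# hypothetical word-level annotations (boxes are normalized to a 0-1000 scale)
words = ["Invoice", "number:", "12345"]
boxes = [[48, 84, 156, 100], [160, 84, 244, 100], [250, 84, 310, 100]]
word_labels = [1, 0, 2]  # integer ids from your own label set

encoding = tokenizer(
    words, boxes=boxes, word_labels=word_labels, padding="max_length", truncation=True, return_tensors="pt"
)
# encoding now holds token-level input_ids, attention_mask, token_type_ids, bbox and labels
```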
You can also use your own OCR engine of choice, as explained in the [docs](https://huggingface.co/transformers/master/model_doc/layoutlmv2.html). <|||||>> LayoutLMv2 depends on an OCR engine of choice. If you provide this image to `LayoutLMv2FeatureExtractor`, it will by default use the Tesseract OCR engine to extract a list of words + bounding boxes from the image. You'll then need to create word-level labels for the corresponding words, that indicate which are an entity and which are not.
>
> Next, you can use `LayoutLMv2TokenizerFast` to turn the word-level words, boxes and word_labels into token-level input_ids, bbox, attention_mask, token_type_ids.
>
> You can also use your own OCR engine of choice, as explained in the [docs](https://huggingface.co/transformers/master/model_doc/layoutlmv2.html).
How can I annotate my dataset? I have a dataset similar to the one shared above and don't want to extract every piece of information from the page, just the red parts shown in the image. So, do I also need to annotate the words which are not required for extraction?
And please also throw some light on how to do custom training on my dataset.
Thanks @NielsRogge <|||||>Yes so LayoutLMv2 treats information extraction as a sequence labeling (NER) problem. It will label all tokens appearing in the document. So if only the words you indicate are relevant, all other words should be labeled as "not an entity". However, this is cheap to do.
> And please also throw some light on how to do custom training on my dataset.
Check out this notebook, which illustrates how to fine-tune `LayoutLMv2ForTokenClassification` for information extraction: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD.ipynb<|||||>> Yes so LayoutLMv2 treats information extraction as a sequence labeling (NER) problem. It will label all tokens appearing in the document. So if only the words you indicate are relevant, all other words should be labeled as "not an entity". However, this is cheap to do.
>
> > And please also throw some light on how to do custom training on my dataset.
>
> Check out this notebook, which illustrates how to fine-tune `LayoutLMv2ForTokenClassification` for information extraction: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv2/FUNSD/Fine_tuning_LayoutLMv2ForTokenClassification_on_FUNSD.ipynb
Thanks for answering. Do recommend any tool or way so that I can annotate faster on my dataset. That would be a great help. Thanks. @NielsRogge <|||||>For annotation, there are several tools, including:
* TagTog: https://www.tagtog.net/
* Tagalog: https://www.nlp.town/tagalog/
However, such annotation tools are typically not free.
Perhaps this one might help (it's free): https://github.com/openvinotoolkit/cvat<|||||>> For annotation, there are several tools, including:
>
> * TagTog: https://www.tagtog.net/
> * Tagalog: https://www.nlp.town/tagalog/
>
> However, such annotation tools are typically not free.
Just wanted a quick understanding: what are the files that need to be prepared for annotation? Would the data preparation and files be similar to what the FUNSD dataset consists of?<|||||>@
> For annotation, there are several tools, including:
>
> * TagTog: https://www.tagtog.net/
> * Tagalog: https://www.nlp.town/tagalog/
>
> However, such annotation tools are typically not free.
>
> Perhaps this one might help (it's free): https://github.com/openvinotoolkit/cvat
@NielsRogge The problem with annotation is that I need to annotate every word with a bounding box + its associated text. Even though I have to label the non-relevant words as "Non-Relevant", I still need to provide bounding box information for those words as well. It would become a huge task to annotate every word in an image. <|||||>In that case, I'd use PyTesseract (or another OCR engine) to get a list of words + boxes:
```
from transformers import LayoutLMv2FeatureExtractor
from PIL import Image
feature_extractor = LayoutLMv2FeatureExtractor.from_pretrained("microsoft/layoutlmv2-base-uncased")
image = Image.open("your pdf").convert("RGB")
encoding = feature_extractor(image)
words, boxes = encoding.words, encoding.boxes
```
Next, you can initialize the `word_labels` as:
```
word_labels = ["no entity" for _ in range(len(words))]
```
Then, you can search for the entities in the list of words and label the corresponding indices as "entity".<|||||>> In that case, I'd use PyTesseract (or another OCR engine) to get a list of words + boxes:
>
> ```
> from transformers import LayoutLMv2FeatureExtractor
> from PIL import Image
>
> feature_extractor = LayoutLMv2FeatureExtractor.from_pretrained("microsoft/layoutlmv2-base-uncased")
> image = Image.open("your pdf").convert("RGB")
>
> encoding = feature_extractor(image)
> words, boxes = encoding.words, encoding.boxes
> ```
>
> Next, you can initialize the `word_labels` as:
>
> ```
> word_labels = ["no entity" for _ in range(len(words))]
> ```
>
> Then, you can search for the entities in the list of words and label the corresponding indices as "entity".
Thanks @NielsRogge, I will try to annotate with the approach you answered. <|||||>> > In that case, I'd use PyTesseract (or another OCR engine) to get a list of words + boxes:
> > ```
> > from transformers import LayoutLMv2FeatureExtractor
> > from PIL import Image
> >
> > feature_extractor = LayoutLMv2FeatureExtractor.from_pretrained("microsoft/layoutlmv2-base-uncased")
> > image = Image.open("your pdf").convert("RGB")
> >
> > encoding = feature_extractor(image)
> > words, boxes = encoding.words, encoding.boxes
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > Next, you can initialize the `word_labels` as:
> > ```
> > word_labels = ["no entity" for _ in range(len(words))]
> > ```
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> >
> > Then, you can search for the entities in the list of words and label the corresponding indices as "entity".
>
> Thanks @NielsRogge, I will try to annotate with the approach you answered.
Just one more question: how do we add the relative bounding box to every label and finally convert everything into the format required by LayoutLMv2?
<|||||>> In that case, I'd use PyTesseract (or another OCR engine) to get a list of words + boxes:
>
> ```
> from transformers import LayoutLMv2FeatureExtractor
> from PIL import Image
>
> feature_extractor = LayoutLMv2FeatureExtractor.from_pretrained("microsoft/layoutlmv2-base-uncased")
> image = Image.open("your pdf").convert("RGB")
>
> encoding = feature_extractor(image)
> words, boxes = encoding.words, encoding.boxes
> ```
>
> Next, you can initialize the `word_labels` as:
>
> ```
> word_labels = ["no entity" for _ in range(len(words))]
> ```
>
> Then, you can search for the entities in the list of words and label the corresponding indices as "entity".
I am getting this error!

<|||||>For me, that runs fine. Try restarting the runtime and running again.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge What will be the output format in this case? |
transformers | 13,377 | closed | AttributeError: '_LazyAutoMapping' object has no attribute '_mapping' | For
from transformers.models.auto.modeling_auto import MODEL_MAPPING
I get the following error:
AttributeError: '_LazyAutoMapping' object has no attribute '_mapping'
The `_LazyAutoMapping` class indeed does not have the attribute `_mapping` (see https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/auto_factory.py). However, it has a method `__iter__(self)` that is supposed to `return iter(self._mapping.keys())`, even though `_mapping` is never set in `__init__()`. Am I missing something here, or is this an error that needs to be fixed?
This is the error message:
File "/content/drive/MyDrive/SharedColabNotebooks/Code/transformersum/src/main.py", line 8, in <module>
from extractive import ExtractiveSummarizer
File "/content/drive/.shortcut-targets-by-
id/1AslFCJkKFwmDS9rtbO_CAdHbBWuL1I4A/SharedColabNotebooks/Code/transformersum/src/extractive.py", line 47, in <module>
[m.model_type for m in MODEL_MAPPING]
File "/usr/local/envs/transformersum_test/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 528, in __iter__
return iter(self._mapping.keys())
AttributeError: '_LazyAutoMapping' object has no attribute '_mapping'
Any help is greatly appreciated! | 09-01-2021 15:26:02 | 09-01-2021 15:26:02 | Hello! Could you provide the details of your environment please?<|||||>This is the one I'm using:
Python 3.8.10
# packages in environment at /usr/local/envs/transformersum:
#
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 1_llvm conda-forge
abseil-cpp 20210324.2 h9c3ff4c_0 conda-forge
absl-py 0.13.0 pyhd8ed1ab_0 conda-forge
aiohttp 3.7.4.post0 py38h497a2fe_0 conda-forge
alabaster 0.7.12 py_0 conda-forge
arrow-cpp 5.0.0 py38h4026a5f_3_cpu conda-forge
async-timeout 3.0.1 py_1000 conda-forge
attrs 21.2.0 pyhd8ed1ab_0 conda-forge
aws-c-cal 0.5.11 h95a6274_0 conda-forge
aws-c-common 0.6.2 h7f98852_0 conda-forge
aws-c-event-stream 0.2.7 h3541f99_13 conda-forge
aws-c-io 0.10.5 hfb6a706_0 conda-forge
aws-checksums 0.1.11 ha31a3da_7 conda-forge
aws-sdk-cpp 1.8.186 hb4091e7_3 conda-forge
babel 2.9.1 pyh44b312d_0 conda-forge
bcj-cffi 0.5.1 py38h709712a_0 conda-forge
blinker 1.4 py_1 conda-forge
brotli-python 1.0.9 py38h709712a_5 conda-forge
brotlicffi 1.0.9.2 py38h709712a_0 conda-forge
brotlipy 0.7.0 py38h497a2fe_1001 conda-forge
bzip2 1.0.8 h7f98852_4 conda-forge
c-ares 1.17.2 h7f98852_0 conda-forge
ca-certificates 2021.5.30 ha878542_0 conda-forge
cachetools 4.2.2 pyhd8ed1ab_0 conda-forge
catalogue 2.0.5 py38h578d9bd_0 conda-forge
certifi 2021.5.30 py38h578d9bd_0 conda-forge
cffi 1.14.6 py38ha65f79e_0 conda-forge
chardet 4.0.0 py38h578d9bd_1 conda-forge
charset-normalizer 2.0.0 pyhd8ed1ab_0 conda-forge
click 7.1.2 pyh9f0ad1d_0 conda-forge
colorama 0.4.4 pyh9f0ad1d_0 conda-forge
conllu 4.4.1 pyhd8ed1ab_0 conda-forge
cryptography 3.4.7 py38ha5dfef3_0 conda-forge
cymem 2.0.5 py38h709712a_2 conda-forge
cython-blis 0.7.4 py38h5c078b8_0 conda-forge
dataclasses 0.8 pyhc8e2a94_3 conda-forge
datasets 1.11.0 pyhd8ed1ab_0 conda-forge
dill 0.3.4 pyhd8ed1ab_0 conda-forge
docutils 0.17.1 py38h578d9bd_0 conda-forge
et_xmlfile 1.0.1 py_1001 conda-forge
filelock 3.0.12 pyh9f0ad1d_0 conda-forge
fsspec 2021.8.1 pyhd8ed1ab_0 conda-forge
future 0.18.2 py38h578d9bd_3 conda-forge
gflags 2.2.2 he1b5a44_1004 conda-forge
glog 0.5.0 h48cff8f_0 conda-forge
gmp 6.2.1 h58526e2_0 conda-forge
google-auth 1.35.0 pyh6c4a22f_0 conda-forge
google-auth-oauthlib 0.4.6 pyhd8ed1ab_0 conda-forge
grpc-cpp 1.39.1 hf1f433d_0 conda-forge
grpcio 1.38.1 py38hdd6454d_0 conda-forge
huggingface_hub 0.0.16 pyhd8ed1ab_0 conda-forge
icu 68.1 h58526e2_0 conda-forge
idna 3.1 pyhd3deb0d_0 conda-forge
imagesize 1.2.0 py_0 conda-forge
importlib-metadata 4.8.1 py38h578d9bd_0 conda-forge
importlib_metadata 4.8.1 hd8ed1ab_0 conda-forge
jdcal 1.4.1 py_0 conda-forge
jinja2 3.0.1 pyhd8ed1ab_0 conda-forge
joblib 1.0.1 pyhd8ed1ab_0 conda-forge
krb5 1.19.2 hcc1bbae_0 conda-forge
ld_impl_linux-64 2.36.1 hea4e1c9_2 conda-forge
libblas 3.9.0 11_linux64_mkl conda-forge
libbrotlicommon 1.0.9 h7f98852_5 conda-forge
libbrotlidec 1.0.9 h7f98852_5 conda-forge
libbrotlienc 1.0.9 h7f98852_5 conda-forge
libcblas 3.9.0 11_linux64_mkl conda-forge
libcurl 7.78.0 h2574ce0_0 conda-forge
libedit 3.1.20191231 he28a2e2_2 conda-forge
libev 4.33 h516909a_1 conda-forge
libevent 2.1.10 hcdb4288_3 conda-forge
libffi 3.3 h58526e2_2 conda-forge
libgcc-ng 11.1.0 hc902ee8_8 conda-forge
libgfortran-ng 11.1.0 h69a702a_8 conda-forge
libgfortran5 11.1.0 h6c583b3_8 conda-forge
libiconv 1.16 h516909a_0 conda-forge
liblapack 3.9.0 11_linux64_mkl conda-forge
libnghttp2 1.43.0 h812cca2_0 conda-forge
libprotobuf 3.16.0 h780b84a_0 conda-forge
libssh2 1.10.0 ha56f1ee_0 conda-forge
libstdcxx-ng 11.1.0 h56837e0_8 conda-forge
libthrift 0.14.2 he6d91bd_1 conda-forge
libutf8proc 2.6.1 h7f98852_0 conda-forge
libxml2 2.9.12 h72842e0_0 conda-forge
libxslt 1.1.33 h15afd5d_2 conda-forge
llvm-openmp 12.0.1 h4bd325d_1 conda-forge
lxml 4.6.3 py38hf1fe3a4_0 conda-forge
lz4-c 1.9.3 h9c3ff4c_1 conda-forge
markdown 3.3.4 pyhd8ed1ab_0 conda-forge
markupsafe 2.0.1 py38h497a2fe_0 conda-forge
mkl 2021.3.0 h726a3e6_557 conda-forge
multidict 5.1.0 py38h497a2fe_1 conda-forge
multiprocess 0.70.12.2 py38h497a2fe_0 conda-forge
multivolumefile 0.2.3 pyhd8ed1ab_0 conda-forge
murmurhash 1.0.5 py38h709712a_0 conda-forge
ncurses 6.2 h58526e2_4 conda-forge
ninja 1.10.2 h4bd325d_0 conda-forge
numpy 1.21.2 py38he2449b9_0 conda-forge
oauthlib 3.1.1 pyhd8ed1ab_0 conda-forge
openpyxl 3.0.7 pyhd8ed1ab_0 conda-forge
openssl 1.1.1k h7f98852_1 conda-forge
orc 1.6.10 h58a87f1_0 conda-forge
packaging 21.0 pyhd8ed1ab_0 conda-forge
pandas 1.3.2 py38h43a58ef_0 conda-forge
parquet-cpp 1.5.1 2 conda-forge
pathy 0.6.0 pyhd8ed1ab_0 conda-forge
pip 21.2.4 pyhd8ed1ab_0 conda-forge
preshed 3.0.5 py38h709712a_1 conda-forge
protobuf 3.16.0 py38h709712a_0 conda-forge
py7zr 0.16.1 pyhd8ed1ab_1 conda-forge
pyarrow 5.0.0 py38h1bc9799_3_cpu conda-forge
pyasn1 0.4.8 py_0 conda-forge
pyasn1-modules 0.2.7 py_0 conda-forge
pycparser 2.20 pyh9f0ad1d_2 conda-forge
pycryptodomex 3.10.1 py38h497a2fe_0 conda-forge
pydantic 1.8.2 py38h497a2fe_0 conda-forge
pydeprecate 0.3.1 pyhd8ed1ab_0 conda-forge
pygments 2.10.0 pyhd8ed1ab_0 conda-forge
pyjwt 2.1.0 pyhd8ed1ab_0 conda-forge
pyopenssl 20.0.1 pyhd8ed1ab_0 conda-forge
pyparsing 2.4.7 pyh9f0ad1d_0 conda-forge
pyppmd 0.16.1 py38h709712a_0 conda-forge
pysocks 1.7.1 py38h578d9bd_3 conda-forge
python 3.8.10 h49503c6_1_cpython conda-forge
python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge
python-xxhash 2.0.2 py38h497a2fe_0 conda-forge
python_abi 3.8 2_cp38 conda-forge
pytorch 1.9.0 cpu_py38h4bbe6ce_2 conda-forge
pytorch-lightning 1.4.5 pyhd8ed1ab_0 conda-forge
pytorch-ranger 0.1.1 pyhd8ed1ab_0 conda-forge
pytz 2021.1 pyhd8ed1ab_0 conda-forge
pyu2f 0.1.5 pyhd8ed1ab_0 conda-forge
pyyaml 5.4.1 py38h497a2fe_1 conda-forge
pyzstd 0.14.4 py38hd4831d6_2 conda-forge
re2 2021.08.01 h9c3ff4c_0 conda-forge
readline 8.1 h46c0cb4_0 conda-forge
regex 2021.8.28 py38h497a2fe_0 conda-forge
requests 2.26.0 pyhd8ed1ab_0 conda-forge
requests-oauthlib 1.3.0 pyh9f0ad1d_0 conda-forge
rsa 4.7.2 pyh44b312d_0 conda-forge
s2n 1.0.10 h9b69904_0 conda-forge
sacremoses 0.0.43 pyh9f0ad1d_0 conda-forge
scikit-learn 0.24.2 py38h1561384_1 conda-forge
scipy 1.7.1 py38h56a6a73_0 conda-forge
setuptools 57.4.0 py38h578d9bd_0 conda-forge
shellingham 1.4.0 pyh44b312d_0 conda-forge
six 1.16.0 pyh6c4a22f_0 conda-forge
sleef 3.5.1 h7f98852_1 conda-forge
smart_open 5.2.1 pyhd8ed1ab_0 conda-forge
snappy 1.1.8 he1b5a44_3 conda-forge
snowballstemmer 2.1.0 pyhd8ed1ab_0 conda-forge
spacy 3.1.2 py38h2b96118_0 conda-forge
spacy-legacy 3.0.8 pyhd8ed1ab_0 conda-forge
sphinx 4.1.2 pyh6c4a22f_1 conda-forge
sphinxcontrib-applehelp 1.0.2 py_0 conda-forge
sphinxcontrib-devhelp 1.0.2 py_0 conda-forge
sphinxcontrib-htmlhelp 2.0.0 pyhd8ed1ab_0 conda-forge
sphinxcontrib-jsmath 1.0.1 py_0 conda-forge
sphinxcontrib-qthelp 1.0.3 py_0 conda-forge
sphinxcontrib-serializinghtml 1.1.5 pyhd8ed1ab_0 conda-forge
sqlite 3.36.0 h9cd32fc_0 conda-forge
srsly 2.4.1 py38h709712a_0 conda-forge
tbb 2021.3.0 h4bd325d_0 conda-forge
tensorboard 2.6.0 pyhd8ed1ab_1 conda-forge
tensorboard-data-server 0.6.0 py38h3e25421_0 conda-forge
tensorboard-plugin-wit 1.8.0 pyh44b312d_0 conda-forge
texttable 1.6.4 pyhd8ed1ab_0 conda-forge
thinc 8.0.8 py38hfc89cab_0 conda-forge
threadpoolctl 2.2.0 pyh8a188c0_0 conda-forge
tk 8.6.11 h27826a3_1 conda-forge
tokenizers 0.10.1 py38hb63a372_0 conda-forge
torch-optimizer 0.1.0 pyhd8ed1ab_0 conda-forge
torchmetrics 0.5.1 pyhd8ed1ab_0 conda-forge
tqdm 4.49.0 pyh9f0ad1d_0 conda-forge
transformers 4.9.2 pyhd8ed1ab_0 conda-forge
typer 0.3.2 pyhd8ed1ab_0 conda-forge
typing-extensions 3.10.0.0 hd8ed1ab_0 conda-forge
typing_extensions 3.10.0.0 pyha770c72_0 conda-forge
urllib3 1.26.6 pyhd8ed1ab_0 conda-forge
wasabi 0.8.2 pyh44b312d_0 conda-forge
werkzeug 2.0.1 pyhd8ed1ab_0 conda-forge
wheel 0.37.0 pyhd8ed1ab_1 conda-forge
xxhash 0.8.0 h7f98852_3 conda-forge
xz 5.2.5 h516909a_1 conda-forge
yaml 0.2.5 h516909a_0 conda-forge
yarl 1.6.3 py38h497a2fe_2 conda-forge
zipp 3.5.0 pyhd8ed1ab_0 conda-forge
zlib 1.2.11 h516909a_1010 conda-forge
zstd 1.5.0 ha95c52a_0 conda-forge
# conda environments:
#
base /usr/local
transformersum_new_python * /usr/local/envs/transformersum<|||||>Does this still happen if you install transformers from our channel?
```
conda install -c huggingface transformers
```<|||||>it worked, thank you very much! I also had to install huggingface tokenizers=0.10.1, in case this information might help someone. |
transformers | 13,376 | closed | Enabling automatic loading of tokenizer with `pipeline` for `audio-classification`. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@LysandreJik
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-01-2021 15:07:14 | 09-01-2021 15:07:14 | Once Lysandre views this, I'll merge.
transformers | 13,375 | closed | Fix RemBERT tokenizer initialization | Fixes an issue with the tokenizer initializer for RemBERT which has a requirement on `vocab_file`, hence making initialization through `tokenizer_file` impossible.
Also adds a missing `_CHECKPOINT_FOR_DOC` for the same model. | 09-01-2021 14:54:11 | 09-01-2021 14:54:11 | |
transformers | 13,374 | closed | Add missing feature extractors | Adds some missing feature extractors to the mapping so that they may be used with `AutoFeatureExtractor` | 09-01-2021 14:53:11 | 09-01-2021 14:53:11 | |
transformers | 13,373 | closed | Add LayoutXLM tokenizer docs | # What does this PR do?
LayoutXLM uses a different tokenizer than LayoutLMv2, based on XLMRobertaTokenizer. As this is quite important, I have added a remark about it in the docs. | 09-01-2021 14:49:24 | 09-01-2021 14:49:24 | |
transformers | 13,372 | closed | Properly register missing submodules in main init | # What does this PR do?
This PR adds a few submodules that are not mentioned anywhere in the init. This results in them not being accessible when importing the transformers module without importing any object:
```
import transformers
transformers.modeling_outputs
```
fails previous to this PR. | 09-01-2021 14:32:38 | 09-01-2021 14:32:38 | |
transformers | 13,371 | closed | wav2vec2-large-xlsr-53 Tokenizer unable to load | ## Environment info
- `transformers` version: 4.10.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
I'm using the Wav2Vec2-XLSR-53 model, and it is specifically the loading part that doesn't work properly. The script below is provided on the model's Hugging Face page; the tokenizer (2nd line) does not load, while the actual model does work (3rd line).
```python
from transformers import AutoTokenizer, AutoModelForPreTraining
tokenizer = AutoTokenizer.from_pretrained("facebook/wav2vec2-large-xlsr-53")
model = AutoModelForPreTraining.from_pretrained("facebook/wav2vec2-large-xlsr-53")
```
## To reproduce
Steps to reproduce the behavior:
1. Run the script above
## Expected behavior
I would expect the tokenizer to load.
| 09-01-2021 14:11:48 | 09-01-2021 14:11:48 | https://github.com/huggingface/transformers/blob/ecd5397106e243021ef28e65ca566881bb825bcb/examples/research_projects/wav2vec2/run_common_voice.py#L343-L366
The pretrained model has no tokenizer; you need to build your own.
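A minimal sketch of building one (the vocab below is a toy placeholder; in practice you derive it from the characters in your dataset, the same way the linked script does):
```python
import json
from transformers import Wav2Vec2CTCTokenizer

# toy vocabulary; build the real one from your dataset's characters
vocab = {"[PAD]": 0, "[UNK]": 1, "|": 2, "a": 3, "b": 4, "c": 5}
with open("vocab.json", "w") as f:
    json.dump(vocab, f)

tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json",
    unk_token="[UNK]",
    pad_token="[PAD]",
    word_delimiter_token="|",
)
```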
If you want to pretrain there is an example script here: https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_pretrain.py<|||||>Ok, thanks for the quick response! |
transformers | 13,370 | closed | [Consistency] Make sure all xxxForSequenceClassification models support problem_type | A while ago (#11012), an additional attribute called `problem_type` has been added to `xxxForSequenceClassification` models, which you can set to "multi_label_classification", "single_label_classification" or "regression" to fine-tune `xxxForSequenceClassification` models for the respective problem. This makes sure the appropriate loss function is used.
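(For reference, usage looks roughly like the sketch below; the checkpoint name and label count are just examples:)
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    problem_type="multi_label_classification",
    num_labels=3,
)

inputs = tokenizer("some text", return_tensors="pt")
# multi-label targets are multi-hot floats, so BCEWithLogitsLoss is picked internally
labels = torch.tensor([[1.0, 0.0, 1.0]])
outputs = model(**inputs, labels=labels)
print(outputs.loss)
```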
This is great, however, 3 things need to be improved in my opinion:
* this is causing a bit of an inconsistency as it's not implemented in all models which have an `xxxForSequenceClassification` head model.
* it's also not included in any of the CookieCutter templates, which are used to add new models.
* this is not documented anywhere.
cc @abhishekkrthakur | 09-01-2021 13:31:23 | 09-01-2021 13:31:23 | Pretty sure it is documented :)
https://huggingface.co/transformers/main_classes/configuration.html
Agree that its not in cookie-cutter and its not implemented for all models (it cannot be implemented for all). :)

<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,369 | closed | Fix DINO | # What does this PR do?
This PR includes a very tiny fix, which enables us to make a DINO demo for video (i.e. visualizing the attention maps on a sequence of non-square frames).
cc @nateraw @osanseviero | 09-01-2021 12:12:20 | 09-01-2021 12:12:20 | |
transformers | 13,368 | closed | Fix GPT-J _CHECKPOINT_FOR_DOC typo | null | 09-01-2021 10:57:39 | 09-01-2021 10:57:39 | |
transformers | 13,367 | closed | Add BlenderBot small tokenizer to the init | This class was forgotten, adding it to the init and the documentation. | 09-01-2021 10:46:30 | 09-01-2021 10:46:30 | Actually this tokenizer seems a bit broken:
```py
tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
```
```
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/auto/tokenization_auto.py", line 469, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/tokenization_utils_base.py", line 1741, in from_pretrained
return cls._from_pretrained(
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/tokenization_utils_base.py", line 1858, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/models/blenderbot_small/tokenization_blenderbot_small_fast.py", line 76, in __init__
ByteLevelBPETokenizer(
File "/home/lysandre/transformers/.env/lib/python3.8/site-packages/tokenizers/implementations/byte_level_bpe.py", line 36, in __init__
BPE(
Exception: Error while initializing BPE: Token `_</w>` out of vocabular
```<|||||>Which only appears after fixing with 6d90d5a3213b812f3d54d6142751a275b59343b1
cc @patil-suraj if I recall correctly you implemented this tokenizer, do you remember what might have gone wrong? |
transformers | 13,366 | closed | Add `Hubert` to the `AutoFeatureExtractor` | Quick fix to allow `Hubert` models to auto-load `Wav2Vec2FeatureExtractor`.
Caught this while trying to load Hubert without an explicit feature extractor in `pipeline("audio-classification")` | 09-01-2021 10:12:18 | 09-01-2021 10:12:18 | |
transformers | 13,365 | closed | flax ner example | # What does this PR do?
flax ner example
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten | 09-01-2021 08:29:28 | 09-01-2021 08:29:28 | Hi @patil-suraj,
Thanks for the review.
Done changes according to your suggestions
<|||||>@patil-suraj
Thanks for the review.
Fixed the issues.<|||||>Thanks a lot for all your work! |
transformers | 13,364 | closed | Move Flax self-push to test machine | # What does this PR do?
Feel free to merge whenever @LysandreJik
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 09-01-2021 07:36:49 | 09-01-2021 07:36:49 | |
transformers | 13,363 | closed | Which files are essential when customizing and modifying a specific pre-trained model? | For some reason, I want to customize and modify a specific pre-trained model, e.g. BERT, to adapt it to my experiments.
However, there are so many unrelated files in the `src/transformers`, and it really confuses me a lot.
I only want to extract the essential source files to support my experiments without installing `transformers`.
**So I would like to know Which files are essential when customize and modify a specific pre-trained model.**
For example, for the BERT model, I think the files related to BERT are in the `src/transformers/models/bert`.
Besides that, are there any source files essential?
Could you help me at your convenience? @LysandreJik @patrickvonplaten @qqaatw | 09-01-2021 05:16:37 | 09-01-2021 05:16:37 | For example, I find https://github.com/microsoft/unilm/tree/master/unilm-v1/src/pytorch_pretrained_bert use the `pytorch_pretrained_bert` without installing `transformers`. The file tree is very clean and clear. I also want to do this too, without unnecessary irrelevant source files.
<|||||>Hi,
Basically, it's not straightforward to extract `BertModel` or `BertTokenizer` as an independent module, because they inherit from base classes residing in other folders that provide easy-to-use utilities such as the `from_pretrained` method.
IMO, I think you can have `transformers` as a git submodule placed in your repository and modify the features you want accordingly. This way you can still pull the latest features from upstream `transformers` and have your custom BERT (or other models).<|||||>> Hi,
>
> Basically, it's not straightforward to extract `BertModel` or `BertTokenizer` as an independent module, because they inherit from base classes residing in other folders that provide easy-to-use utilities such as the `from_pretrained` method.
>
> IMO, I think you can have `transformers` as a git submodule placed in your repository and modify the features you want accordingly. This way you can still pull the latest features from upstream `transformers` and have your custom BERT (or other models).
Thanks for your reply.
I plan to inherit from the original class and modify it. Maybe this is a compromise solution for me.
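Something like this minimal sketch is what I have in mind (class and head names are just placeholders):
```python
import torch.nn as nn
from transformers import BertModel, BertPreTrainedModel

class MyCustomBert(BertPreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)                    # reuse the original encoder
        self.my_head = nn.Linear(config.hidden_size, 2)  # my custom part lives here
        self.init_weights()

    def forward(self, input_ids, attention_mask=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask)
        return self.my_head(outputs.last_hidden_state[:, 0])

model = MyCustomBert.from_pretrained("bert-base-uncased")
```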
Do you think so? @qqaatw
<|||||>Yeah, I think this way is comparatively easy to achieve your goal.<|||||>Thanks. |
transformers | 13,362 | closed | Hyperparameter search function not working with Trainer and mlflow | ## Environment info
- `transformers` version: 4.9.2
- Platform:
- Python version:
- PyTorch version (GPU?): 1.9.0
- Tensorflow version (GPU?):
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?:Yes
### Who can help
@sgugger @noise-field @LysandreJik
Models: bert
my own task or dataset: (give details below) : I am using my own dataset
## To reproduce
Steps to reproduce the behavior:
1. Create Bert model for AutoModelForSequenceClassification and pre process with a dataset
2. Use pytorch GPU distributed fashion
3. Use trainer.hyperparameter_search backend as optuna
3. Install the mlflow
4. Use report_to = "none" in the training argument to stop the callback
The hyperparameter training stops in the middle; there is a communication break between the master and worker nodes.
If I do not set report_to = "none" (I am using mlflow in my own script), it gives me an error that a run with that mlflow uuid is already running. If I stop my mlflow session, the hyperparameter search completes successfully.
I do not want to integrate mlflow with the Trainer; I want to use mlflow for logging my params myself. But using report_to = "none" breaks my code by breaking the communication between master and worker.
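(For what it's worth, I assume the cleaner way to detach the built-in integration would be to remove the callback explicitly instead of relying on report_to; a rough, untested sketch based on my Trainer below:)
```python
from transformers.integrations import MLflowCallback

# drop the built-in MLflow integration so my own mlflow session is left untouched
trainer.remove_callback(MLflowCallback)
```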
This is my code:
```python
    def tune(self, train_df, val_df, test_df):
logging.info(" tuning started")
model_full_path = self.model_base_path + "/" + self.model_name
logging.info(" model path["+model_full_path+"]")
tokenizer = AutoTokenizer.from_pretrained(model_full_path, do_lower_case=True)
train_dataset, val_dataset, test_dataset = self.pre_process(tokenizer, train_df, val_df, test_df)
training_args = TrainingArguments(
output_dir=self.train_out_dir, # output directory
logging_dir=self.train_log_dir, # directory for storing logs
num_train_epochs=self.train_param_nb_epochs, # total # of training epochs
per_device_train_batch_size=self.train_param_per_device_train_batch_size, # batch size per device during training
per_device_eval_batch_size=self.train_param_per_device_eval_batch_size, # batch size for evaluation
warmup_steps=self.train_param_warmup_steps, # number of warmup steps for learning rate scheduler
weight_decay=self.train_param_weight_decay, # strength of weight decay
learning_rate = self.train_param_learning_rate, # args.learning_rate - default is 5e-5, our notebook had 2e-5
adam_epsilon = self.train_param_adam_epsilon,
report_to = "none" #no callbacks to mlflow
)
def model_init():
print("-----------------")
print(model_full_path)
print(self.nb_labels)
print("-----------------")
return AutoModelForSequenceClassification.from_pretrained(model_full_path,
num_labels = self.nb_labels, # The number of output labels--2 for binary classification.
output_attentions = False, # Whether the model returns attentions weights.
output_hidden_states = False, # Whether the model returns all hidden-states.
)
from ray.tune.examples.pbt_transformers import utils
trainer = Trainer(
            model_init=model_init, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=val_dataset, # evaluation dataset
compute_metrics=utils.build_compute_metrics_fn('rte')
)
def hp_space(trial):
if self.backend == 'ray':
from ray import tune
return {
"learning_rate": tune.choice(self.tune_param_learning_rate),
"num_train_epochs": tune.choice(self.tune_param_nb_epochs),
"per_device_train_batch_size": tune.choice(self.tune_param_per_device_train_batch_size),
"per_device_eval_batch_size": tune.choice(self.tune_param_per_device_eval_batch_size),
"warmup_steps": tune.choice(self.tune_param_warmup_steps),
"weight_decay": tune.choice(self.tune_param_weight_decay),
"adam_epsilon": tune.choice(self.tune_param_adam_epsilon)
}
elif self.backend == 'optuna':
return {
"learning_rate": trial.suggest_float("learning_rate", self.tune_param_learning_rate[0], self.tune_param_learning_rate[1]),
"num_train_epochs": trial.suggest_int("num_train_epochs", self.tune_param_nb_epochs[0], self.tune_param_nb_epochs[1]),
"per_device_train_batch_size": trial.suggest_discrete_uniform("per_device_train_batch_size", self.tune_param_per_device_train_batch_size[0], self.tune_param_per_device_train_batch_size[1],self.tune_param_per_device_train_batch_size[2]),
"per_device_eval_batch_size": trial.suggest_discrete_uniform("per_device_eval_batch_size", self.tune_param_per_device_eval_batch_size[0], self.tune_param_per_device_eval_batch_size[1], self.tune_param_per_device_eval_batch_size[2]),
"warmup_steps": trial.suggest_int("warmup_steps", self.tune_param_warmup_steps[0], self.tune_param_warmup_steps[1]),
"weight_decay": trial.suggest_float("weight_decay", self.tune_param_weight_decay[0], self.tune_param_weight_decay[1]),
"adam_epsilon": trial.suggest_float("adam_epsilon", self.tune_param_adam_epsilon[0], self.tune_param_adam_epsilon[1])
}
best_run = trainer.hyperparameter_search(
direction="maximize",
hp_space = hp_space,
backend=self.backend,
n_trials = self.tune_param_n_trials,
)
for n, v in best_run.hyperparameters.items():
setattr(trainer.args, n, v)
mlflow.log_param("best_"+n, v)
trainer.train()
# log the validation metrics with test data
val_eval = trainer.evaluate(eval_dataset = val_dataset)
self.log_metrics(val_eval, 'val')
self.log_artifacts(val_df, tokenizer, trainer, "validation_set")
train_eval = trainer.evaluate(eval_dataset = train_dataset)
self.log_metrics(train_eval, 'train')
self.log_artifacts(train_df, tokenizer, trainer, "train_set")
test_eval = trainer.evaluate(eval_dataset = test_dataset)
self.log_metrics(test_eval, 'test')
self.log_artifacts(test_df, tokenizer, trainer, "test_set")
now = dt.datetime.now()
model_out_dir = self.model_out_dir+'_'+str(now.year)+'_'+str(now.month)+'_'+str(now.day)+'_'+str(now.hour)+'_'+str(now.minute)
trainer.save_model(output_dir = model_out_dir)
        return trainer
```
| 09-01-2021 04:10:18 | 09-01-2021 04:10:18 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,361 | closed | Fixes for the documentation | # What does this PR do?
This PR contains a few fixes necessary for me to build the documentation with custom tooling. They are all making the current documentation better as well, so this shouldn't cause any problems to merge. | 08-31-2021 21:09:10 | 08-31-2021 21:09:10 | |
transformers | 13,360 | closed | CTRL's `config.json` on HF Hub is missing a `model_type` | The [`config.json` for CTRL](https://huggingface.co/ctrl/blob/main/config.json) on the Model Hub is missing the key `model_type`.
As a result, passing the model repo as a path to `AutoModel.from_pretrained` will fail in some cases.
## To reproduce
Clone the model repo and `cd` into it:
```
git lfs install
git clone https://huggingface.co/ctrl
cd ctrl
```
Then, in a python session, try to load it:
```python
>>> import transformers
>>> model = transformers.AutoModel.from_pretrained(".")
ValueError: Unrecognized model in .. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: layoutlmv2, beit, rembert, visual_bert, canine, roformer, clip, bigbird_pegasus, deit, luke, detr, gpt_neo, big_bird, speech_to_text, vit, wav2vec2, m2m_100, convbert, led, blenderbot-small, retribert, ibert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, megatron-bert, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta-v2, deberta, flaubert, fsmt, squeezebert, hubert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas, splinter
```
## Relation to string matching
The problem does not occur if the substring "ctrl" appears in the argument of `from_pretrained`. This happens because of the string-matching fallback on [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/configuration_auto.py#L524) of `AutoConfig`.
For example, the above will work properly if we skip the `cd ctrl` step, and then load the model with `model = transformers.AutoModel.from_pretrained("ctrl/")`. This points to the same path on the file system, so it is counter-intuitive that the two would differ.
Using the model name (rather than a path) works, for the same reason. I.e. `model = transformers.AutoModel.from_pretrained("ctrl")`
#### Why string matching?
Beyond this specific bug, I am a little confused why the string-matching logic in `AutoConfig` is necessary / useful:
- If the config.json file is well-formed, then it shouldn't be necessary.
- If the file is malformed, as in this case, the string matching works around the issue in some cases but not others.
- The difference between these cases is the result of internal library code, and is opaque to the end user, leading to confusion.
Are there other examples, besides CTRL, where the string matching provides value?
## Expected behavior
`transformers.AutoModel.from_pretrained` loads CTRL whenever it is passed a local path to the repo, no matter what the path looks like as a string.
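(In the meantime, a possible workaround sketch is to patch the local clone's `config.json` by hand:)
```python
import json
from transformers import AutoModel

# add the missing key to the cloned repo's config.json (run inside the clone)
with open("config.json") as f:
    config = json.load(f)
config["model_type"] = "ctrl"
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

model = AutoModel.from_pretrained(".")
```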
## Environment info
- `transformers` version: 4.10.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Tagging @LysandreJik since they helped out with a bug in CTRL's vocab file https://github.com/huggingface/transformers/issues/11088
## Information
Model I am using (Bert, XLNet ...): CTRL
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
N/A
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
| 08-31-2021 19:19:48 | 08-31-2021 19:19:48 | Thanks a lot for opening an issue @nostalgebraist, there is indeed an issue with the configuration file. Updating it.<|||||>It should be fixed now with [`huggingface@07e9f73d`](https://huggingface.co/ctrl/commit/07e9f73d6b0d0dda24506e8ad9ad4fb5cf87f4c9). Thanks again for your report! |
transformers | 13,359 | closed | Add FlaxVisionEncoderDecoderModel | # What does this PR do?
Add `FlaxVisionEncoderDecoderModel`.
- Unlike the test for `FlaxVisionEncoderDecoderModel` where there is `test_bert2gpt2_summarization`, there is currently no official pretrained checkpoint for `FlaxVisionEncoderDecoderModel`, and therefore there is no similar test for it. We can add this test once the PR is valid and used to train a `FlaxVisionEncoderDecoderModel`.
- Some tests involving the shapes of the `encoder_attentions` and `cross_attentions` require knowing the sequence length once `pixel_values` is processed into a sequence. I couldn't find a built-in method in `ViT` providing this computation, so currently I just check the `batch_dim` and `hidden_dim`. Maybe it would be good to think about this and add some utility functions to the vision models. (This is computed as `self.num_patches`, I think, but similar to the issue below, I don't have a clear idea how to access it for now.)
- The test
https://github.com/huggingface/transformers/blob/35a3921180fe053ad474e137c0d738dfac543a8b/tests/test_modeling_flax_vision_encoder_decoder.py#L332
will fail since `FlaxVisionEncoderDecoderModel` doesn't have encoder/decoder attributes (I tried it locally by disabling `@slow`). I couldn't find a way to access `FlaxVisionEncoderDecoderModel.module.encoder` and `FlaxVisionEncoderDecoderModel.module.decoder`. The same issue exists for
https://github.com/huggingface/transformers/blob/35a3921180fe053ad474e137c0d738dfac543a8b/tests/test_modeling_flax_encoder_decoder.py#L364
(It is a slow test, so won't show up when we push).
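A minimal usage sketch of the new class (the checkpoint names are just examples):
```python
from transformers import FlaxVisionEncoderDecoderModel, ViTFeatureExtractor, GPT2Tokenizer

# combine a pretrained vision encoder with a pretrained autoregressive text decoder
model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "google/vit-base-patch16-224-in21k", "gpt2"
)
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
```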
## Who can review?
@patil-suraj @patrickvonplaten
| 08-31-2021 17:14:17 | 08-31-2021 17:14:17 | Hi @ydshieh,
Thanks for this PR, I will draft a `VisionEncoderDecoderModel` soon, as I would like to add it alongside the TrOCR model by Microsoft, which combines a vision encoder (BEiT) with an autoregressive text decoder (initialized from RoBERTa).
However, the current design of the `EncoderDecoderModels` does have a limitation: if the `hidden_size` of the encoder and decoder don't match, then they define a single projection layer to project the `encoder_hidden_states` to the same dimension as the decoder. However, in TrOCR, this is not how it's done: there, they project the `encoder_hidden_states` to the dimension of the decoder when defining the keys and values, in each layer separately.
Hence, I will set up a draft PR that allows both options (either a single projection layer, or projecting them in the keys/values).
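To make the two options concrete, a rough sketch (the sizes below are just illustrative):
```python
import torch.nn as nn

encoder_hidden_size, decoder_hidden_size = 768, 1024  # example sizes

# Option 1: one shared projection applied once to the encoder output
enc_to_dec_proj = nn.Linear(encoder_hidden_size, decoder_hidden_size)

# Option 2 (TrOCR-style): no shared projection; each decoder layer's cross-attention
# projects the encoder states itself when building its keys and values
k_proj = nn.Linear(encoder_hidden_size, decoder_hidden_size)
v_proj = nn.Linear(encoder_hidden_size, decoder_hidden_size)
```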
<|||||>> Hence, I will set up a draft PR that allows both options (either a single projection layer, or projecting them in the keys/values).
Hi, @NielsRogge . Maybe it's good to add this option to `VisionEncoderDecoderModel` as a first attempt, and apply to `EncoderDecoderModels` later? Nothing huge, but since you have a target `TrOCR model`, I think it's good to start with `VisionEncoderDecoderModel`.<|||||>Hi, can you perhaps also add support for `cross_attention_hidden_size` similar to the PyTorch implementation?
This is to support the case where, instead of just projecting the `encoder_hidden_states` to the same size as the decoder with a single projection layer, one projects them when creating keys and values in each cross-attention layer of the decoder.
We could perhaps, for consistency, add this also to the existing, text-only encoder-decoder model.<|||||>>
>
> Hi, can you perhaps also add support for `cross_attention_hidden_size` similar to the PyTorch implementation?
>
> This is to support the case where, instead of just projecting the `encoder_hidden_states` to the same size as the decoder with a single projection layer, one projects them when creating keys and values in each cross-attention layer of the decoder.
>
> We could perhaps, for consistency, add this also to the existing, text-only encoder-decoder model.
Sure, I will add it :)
For the text-only encoder-decoder, maybe in another PR though.<|||||>@NielsRogge , I add the projection layer to `FlaxVisionEncoderDecoderModel` when the condition meets. (and the PT/Flax equivalence test will check both cases)
It would be great if you can check the few minor changes I made in `modeling_vision_encoder_decoder.py` and `configuration_vision_encoder_decoder.py`.
@patrickvonplaten
I added PT/Flax equivalence test + an image captioning pretrained model integration test: All passed.
(currently, I comment out some @slow tests to make sure all tests pass. I will clean up them later).
I also uploaded the PyTorch version of the new trained Flax's image captioning model, and added the same test for `VisionEncoderDecoderModel`.
https://github.com/huggingface/transformers/blob/05918efda1924510a4cf5b3ce0c7ca0b6bd8cf22/tests/test_modeling_flax_vision_encoder_decoder.py#L465
https://github.com/huggingface/transformers/blob/05918efda1924510a4cf5b3ce0c7ca0b6bd8cf22/tests/test_modeling_flax_vision_encoder_decoder.py#L332
I think this PR is ready for review.<|||||>@patil-suraj - feel free to merge once you're happy with the PR<|||||>Impressive work! I also saw your image captioning demo - so if I understand correctly, you first implemented a Vit2Gpt2 model yourself which you used for training (without using this new `FlaxVisionEncoderDecoderModel` class), and now you can load the weights of that model into it?<|||||>@NielsRogge I made a ViT+GPT2 during the Flax community week, and it had 2 major bugs, so gave really non-sense captions.
After that event, I decided to work on Encoder-Decoder models.
For the demo you saw, it does use the newly added `FlaxVisionEncoderDecoderModel` to load original pretrained ViT / GPT2 model, then finetuned on COCO 2017 :)<|||||>- added related lines for `AutoModelForVision2Seq`
- removed `test_configuration_tie`
- change `# @slow` to `@slow`<|||||>I removed `attention_mask` from `FlaxVisionEncoderDecoderModel.__call__`, and also from the inputs docstring of both `modeling_vision_encoder_decoder.py` and `modeling_flax_vision_encoder_decoder.py`.
cc @NielsRogge <|||||>All reviews from @patil-suraj have been addressed, except for a few `raise NotImplementedError` in the test.
I rebased on master, and needed to fix a pt/flax equivalence test.<|||||>Some final clean-ups regarding the encoder attention mask and then we're good to go IMO :-)<|||||>>
>
> Some final clean-ups regarding the encoder attention mask and then we're good to go IMO :-)
Clean-ups done. I have checked again the slow tests and all good :-)
(other failed tests seem unrelated)<|||||>Awesome - thanks a lot @ydshieh! Could you maybe rebase (or merge master into your branch) to fix the failing tests? :-) There was a problem with TF tests on master<|||||>It's green now :-)<|||||>Hi, @patrickvonplaten @NielsRogge , about the blog post for image captioning mentioned in
https://github.com/huggingface/transformers/pull/14139#issuecomment-950868338
```
Also cc @ydshieh - If you're interested we could work on a nice blog post
(to be added here: https://huggingface.co/blog) on
how to leverage VisionEncoderDecoder for image captioning if you're interested :-)
```
I am thinking to read
https://huggingface.co/blog/encoder-decoder
https://huggingface.co/blog/fine-tune-wav2vec2-english
to get some idea about the writing.
My Flax/TPU training script, however, uses a custom `datasets` script to load a local COCO dataset, and there are some things to modify to avoid the dataset processing speed issue.
I am just wondering if the blog post should include the training process, or just cover how to use a pretrained model?<|||||>Pinged you on Slack, to not spam this PR :)<|||||>@patrickvonplaten I changed it back to `one`, copying your words as a comment
https://github.com/huggingface/transformers/blob/d8aea41778e709ee6c6a9e8c84247d7cd9421c10/src/transformers/models/vision_encoder_decoder/modeling_flax_vision_encoder_decoder.py#L228-L233<|||||>The failure is unrelated to this PR, merging now. |
transformers | 13,358 | closed | Unexpected weights were not initialized from the model checkpoint error | ## Environment info
- `transformers` version: 4.9.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @sgugger
## Information
I am defining a simple multi-class BERT classification model and then training it using pytorch-lightning. The code is below under class `BertForMulticlassSequenceClassification(BertPreTrainedModel)`. The issue is that after training when I am loading the classifier model `model = ClassTaggerModel.load_from_checkpoint(checkpoint_file)` I get
```
Some weights of BertForMulticlassSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifiers.0.weight', 'classifiers.1.bias', 'classifiers.0.bias', 'classifiers.1.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
This is quite strange, as I am loading a BertForMulticlassSequenceClassification checkpoint. In fact, if I inspect the state_dict with torch I can see `model.classifiers.0.weight` etc., so they are in there (see the sketch below). Any suggestions on why I am still getting this error would be most appreciated! Thank you!
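For reference, this is roughly how I inspected the checkpoint (the path is a placeholder):
```python
import torch

ckpt = torch.load("path/to/checkpoint.ckpt", map_location="cpu")
# PyTorch Lightning stores the model weights under "state_dict"
print([k for k in ckpt["state_dict"].keys() if "classifiers" in k])
```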
## Reproduce
Extract notebook https://colab.research.google.com/drive/1os9mz7w7gmLBL_ZDvZ9K1saz9UA3rmD7?usp=sharing and minimal data files attached here and "run all".
[classes.txt](https://github.com/huggingface/transformers/files/7085157/classes.txt)
[train.csv](https://github.com/huggingface/transformers/files/7085158/train.csv)
[val.csv](https://github.com/huggingface/transformers/files/7085159/val.csv)
| 08-31-2021 12:57:21 | 08-31-2021 12:57:21 | You are not providing the code necessary to reproduce the error, so there is little we can do to help. Also note that code like `model = ClassTaggerModel.load_from_checkpoint(checkpoint_file)` does not execute anything from the Transformers library as it uses PyTorch Lightning, so it's unlikely you are getting the warning printed above with it.<|||||>@sgugger thanks for the fast reply - I have added a minimal example in my original report above - the link is to Colab. It does a simple one-epoch training on a subset of data (seconds) and saves a checkpoint. Then it loads the checkpoint, leading to the reported error. <|||||>I'm sorry but I'm really confused as to why you are opening an issue here: you are saving a checkpoint with PyTorch Lightning and reloading it with PyTorch Lightning, not Transformers.
transformers | 13,357 | closed | [GitHub Runner] Fix flax runner | # What does this PR do?
As discussed offline: enable the test fetcher for Flax self-push tests, comment out multi-GPU tests for Flax for now, and run the tests on a separate machine.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 08-31-2021 12:55:40 | 08-31-2021 12:55:40 | |
transformers | 13,356 | closed | Import of transformers package throwing value_error | I have successfully installed transformers package in my Jupyter Notebook from Anaconda administrator console using the command 'conda install -c conda-forge transformers'.
However, when I try to load the transformers package in my Jupyter notebook using the 'import transformers' command, I am getting an error: 'ValueError: got_ver is None'.
I am not sure how I can resolve this. Appreciate any inputs.
| 08-31-2021 11:15:42 | 08-31-2021 11:15:42 | Do you get the same error when installing from our conda channel?
```
conda install -c huggingface transformers
```<|||||>>
>
> Do you get the same error when installing from our conda channel?
>
> ```
> conda install -c huggingface transformers
> ```
Hi, I don't see the problem while installing. Installation goes through successfully. I have reproduced the installation screen shot below. The error message appears when loading the 'transformers' package in Jupyter NB. by the way, as you can see below, I have installed using the above suggested channel too, but the same error persists, while loading the package. Thank you.
(base) C:\WINDOWS\system32>conda install -c conda-forge transformers
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: C:\ProgramData\Anaconda3
added / updated specs:
- transformers
The following packages will be downloaded:
package | build
---------------------------|-----------------
dataclasses-0.8 | pyhc8e2a94_3 10 KB conda-forge
huggingface_hub-0.0.16 | pyhd8ed1ab_0 52 KB conda-forge
sacremoses-0.0.43 | pyh9f0ad1d_0 430 KB conda-forge
tokenizers-0.10.1 | py38h291c280_0 1.9 MB conda-forge
transformers-4.9.2 | pyhd8ed1ab_0 1.3 MB conda-forge
typing-extensions-3.10.0.0 | hd8ed1ab_0 8 KB conda-forge
typing_extensions-3.10.0.0 | pyha770c72_0 28 KB conda-forge
------------------------------------------------------------
Total: 3.7 MB
The following NEW packages will be INSTALLED:
dataclasses conda-forge/noarch::dataclasses-0.8-pyhc8e2a94_3
huggingface_hub conda-forge/noarch::huggingface_hub-0.0.16-pyhd8ed1ab_0
sacremoses conda-forge/noarch::sacremoses-0.0.43-pyh9f0ad1d_0
tokenizers conda-forge/win-64::tokenizers-0.10.1-py38h291c280_0
transformers conda-forge/noarch::transformers-4.9.2-pyhd8ed1ab_0
typing-extensions conda-forge/noarch::typing-extensions-3.10.0.0-hd8ed1ab_0
The following packages will be UPDATED:
typing_extensions pkgs/main::typing_extensions-3.7.4.3-~ --> conda-forge::typing_extensions-3.10.0.0-pyha770c72_0
Proceed ([y]/n)?
Downloading and Extracting Packages
typing_extensions-3. | 28 KB | ##################################################################################################### | 100%
huggingface_hub-0.0. | 52 KB | ##################################################################################################### | 100%
typing-extensions-3. | 8 KB | ##################################################################################################### | 100%
dataclasses-0.8 | 10 KB | ##################################################################################################### | 100%
sacremoses-0.0.43 | 430 KB | ##################################################################################################### | 100%
transformers-4.9.2 | 1.3 MB | ##################################################################################################### | 100%
tokenizers-0.10.1 | 1.9 MB | ##################################################################################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(base) C:\WINDOWS\system32>conda install -c huggingface transformers
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: C:\ProgramData\Anaconda3
added / updated specs:
- transformers
The following packages will be UPDATED:
ca-certificates conda-forge::ca-certificates-2021.5.3~ --> pkgs/main::ca-certificates-2021.7.5-haa95532_1
The following packages will be SUPERSEDED by a higher-priority channel:
certifi conda-forge::certifi-2021.5.30-py38ha~ --> pkgs/main::certifi-2021.5.30-py38haa95532_0
Proceed ([y]/n)?
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
(base) C:\WINDOWS\system32><|||||>Hmmm I have trouble reproducing. If you have it handy, do you mind pasting the full stack trace? Thank you<|||||>Hi @LysandreJik, Sorry for getting back late. I have tried many things including installation and reinstallation of `transformers `several times.
1. Whether I uninstall or remove the package using commands like `conda uninstall -c pypi transformers` or `conda uninstall -c conda-forge transformers`, or `conda uninstall transformers` or `conda remove transformers` from Anaconda prompt, or using `pip uninstall transformers` from within Jupyter, the `transformers` module continues to show up when I check using `conda list`. Please see the picture.

2. After reinstalling using `conda install -c huggingface transformers` from the Anaconda prompt, the commands below and their output clearly indicate the package is installed.
```
import transformers
help(transformers)
```
```
Help on package transformers:
NAME
transformers
PACKAGE CONTENTS
activations
activations_tf
benchmark (package)
commands (package)
configuration_utils
convert_graph_to_onnx
convert_pytorch_checkpoint_to_tf2
convert_slow_tokenizer
convert_slow_tokenizers_checkpoints_to_fast
convert_tf_hub_seq_to_seq_bert_to_pytorch
data (package)
debug_utils
deepspeed
dependency_versions_check
dependency_versions_table
feature_extraction_sequence_utils
feature_extraction_utils
file_utils
generation_beam_search
generation_flax_logits_process
generation_flax_utils
generation_logits_process
generation_stopping_criteria
generation_tf_utils
generation_utils
hf_argparser
image_utils
integrations
modelcard
modeling_flax_outputs
modeling_flax_pytorch_utils
modeling_flax_utils
modeling_outputs
modeling_tf_outputs
modeling_tf_pytorch_utils
modeling_tf_utils
modeling_utils
models (package)
onnx (package)
optimization
optimization_tf
pipelines (package)
sagemaker (package)
testing_utils
tokenization_utils
tokenization_utils_base
tokenization_utils_fast
trainer
trainer_callback
trainer_pt_utils
trainer_seq2seq
trainer_tf
trainer_utils
training_args
training_args_seq2seq
training_args_tf
utils (package)
FILE
(built-in)
```
3. However, I am in doubt whether it installed the correct package and all sub-modules, because I don't find popular sub-modules like `pipeline`, `AutoModelForTokenClassification`, `AutoTokenizer`.
4. When I try to import some sub-modules from the list shown above, some seem to get executed and some not. Below, the 1st one gets executed and not the second one (gives an error message).
5. `from transformers import activations_tf`
`from transformers import activations`
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_6928/4272544316.py in <module>
----> 1 from transformers import activations
C:\ProgramData\Anaconda3\lib\site-packages\transformers\activations.py in <module>
19 from torch import nn
20
---> 21 from .utils import logging
22
23
C:\ProgramData\Anaconda3\lib\site-packages\transformers\utils\__init__.py in <module>
17 from packaging import version
18
---> 19 from .. import __version__
20
21
ImportError: cannot import name '__version__' from 'transformers' (unknown location)
```
Other than getting frustrated, I am unable to figure out what is the problem. Hope I get the help.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I still hope this can be resolved. Hope someone can help.
<|||||>me too, same problem<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @valmetisrinivas , I meet the same issue and I found that I have two numpy package in my conda env (/home/xxx/anaconda3/env/xxx/lib/python3.x/site-package/numpy-1.xx.x.dist-info). After I remove the empty one(not have METADATA or version not correspond), everything goes fine. Hope it can help you<|||||>Thanks a lot. Appreciate.
reg,
Srinivas
<|||||>> Hi @valmetisrinivas , I meet the same issue and I found that I have two numpy package in my conda env (/home/xxx/anaconda3/env/xxx/lib/python3.x/site-package/numpy-1.xx.x.dist-info). After I remove the empty one(not have METADATA or version not correspond), everything goes fine. Hope it can help you
For others coming here: This was what helped me; I had installed `numpy` twice, once from `conda` and once via `pip`. Manually removing the folder, as per the comment above, did the trick.
Tip: Copy the folder before doing any modifications; in case anything goes wrong you'll have a backup.<|||||>>
Deleting empty dist-info folder of `numpy` really solves the problem. Thank you! |
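For readers debugging the same `cannot import name '__version__'` symptom, a quick diagnostic (a sketch only, not an official fix) is to list every copy of `numpy` and `transformers` the active interpreter can see, so an empty or duplicated `*.dist-info` folder stands out:

```python
# Diagnostic sketch: show which package copies the interpreter resolves.
import glob
import os
import site

import numpy

print("numpy", numpy.__version__, "->", os.path.dirname(numpy.__file__))

search_roots = set(site.getsitepackages() + [site.getusersitepackages()])
for root in sorted(search_roots):
    hits = glob.glob(os.path.join(root, "numpy*")) + glob.glob(os.path.join(root, "transformers*"))
    for path in sorted(hits):
        print(path)
```

If two folders for the same package show up under one root, removing the stale one (after backing it up, as suggested above) is usually what resolves the import error.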
transformers | 13,355 | open | Dependency parsing head for pretrained models | # π Feature request
Add a new classification head for pretrained models, for dependency parsing.
## Motivation
Current heads, such as `AutoModelForTokenClassification`, do not work well for fine-tuning a pretrained model for dependency parsing. [Suitable parsing heads already exist](https://openreview.net/pdf?id=Hk95PK9le), so adding one would make it much easier to fine-tune models for this task. A `PyTorch` implementation of that paper (and others) can be found [in this repo](https://github.com/yzhangcs/parser).
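For concreteness, a minimal sketch of what such a head could look like: a simplified biaffine arc scorer in the spirit of Dozat & Manning, sitting on top of any pretrained encoder. Everything here (class name, dimensions, the missing label scorer and word/sub-word alignment) is illustrative, not a proposed `transformers` API.

```python
import torch
from torch import nn
from transformers import AutoModel


class BiaffineParsingHead(nn.Module):
    """Illustrative sketch: scores head-dependent arcs with a biaffine layer."""

    def __init__(self, model_name: str = "roberta-base", arc_dim: int = 256):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.head_mlp = nn.Sequential(nn.Linear(hidden, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden, arc_dim), nn.ReLU())
        # biaffine weight; the extra row acts as a bias on the head side
        self.arc_weight = nn.Parameter(torch.randn(arc_dim + 1, arc_dim) * 0.01)

    def forward(self, input_ids, attention_mask=None):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        heads = self.head_mlp(hidden)                    # (batch, seq, arc_dim)
        deps = self.dep_mlp(hidden)                      # (batch, seq, arc_dim)
        ones = torch.ones(heads.shape[:-1] + (1,), device=heads.device)
        heads = torch.cat([heads, ones], dim=-1)         # (batch, seq, arc_dim + 1)
        # arc_scores[b, i, j] = score of token j being the syntactic head of token i
        arc_scores = deps @ self.arc_weight.t() @ heads.transpose(-1, -2)
        return arc_scores
```

A full head would add a cross-entropy loss over each token's head distribution plus a relation-label classifier, which is roughly what the linked `yzhangcs/parser` implementation provides.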
## Your contribution
I could assist with a PR, but for now I'd like to start the discussion to see if this is something that the HF team and others would be interested in. | 08-31-2021 10:04:39 | 08-31-2021 10:04:39 | Dan, did you start working on this? I would be glad to help with this feature.<|||||>Why me :| @huberemanuel <|||||>Sorry, wrong mention<|||||>@huberemanuel I have a prototype implementation currently, but it probably needs a bunch of tweaking. You can find it at https://github.com/saattrupdan/ScandEval/blob/dependency-parsing/scandeval/dependency_parsing/parser.py#L17.
Right now it's basically a hack which tweaks the `forward` method of the `AutoModelForTokenClassification`. A proper implementation would use the underlying code instead of subclassing it, of course!
If you're interested in testing it and improving the implementation then let me know and we could work it out together π <|||||>Thanks! I will take a look into it and see how can I contribute to your initiative.<|||||>Hey Dan, just to send an update about this topic. I messed around with the `transformers` code and added an `AutoModelForDependencyParsing` class which for now is just a copy of `AutoModelForTokenClassification`. With that, I ran the NER training script, just to do a sanity test on token classification training. It all worked well and now I know which parts we will need to modify to add this feature.
I will send more updates as I progress, and when I got to the point to merge your code with the `transformers` internals, we can discuss what design is better. If you advance more, please let me know :) <|||||>@huberemanuel Amazing, great stuff! With the internals in place, it's hopefully doable to tweak it in (roughly) the way I did. Thanks for all the work you're putting into it, and would love to see your code when you're ready to share it! π <|||||>@huberemanuel Any news on this? |
transformers | 13,354 | closed | Does Bart Model can fill <mask> with variable length? | Hi, is it possible for bart in huggingface to fill <mask> with variable length?
For example,
```python
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained(config["bart_path"])
TXT = "My dog is so <mask>."
model = BartForConditionalGeneration.from_pretrained(config["bart_path"])
input_ids = tokenizer.encode_plus(TXT, return_tensors='pt')['input_ids'] # batch_encode_plus([TXT]
logits = model(input_ids, output_hidden_states=True)[0]
masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, masked_index].softmax(dim=0)
values, predictions = probs.topk(5)
print(tokenizer.decode(predictions).split())
```
This code outputs `['.', 'cute', 'sweet', 'funny', 'awesome']`. Is BART able to fill `<mask>` with more than one word, like "cute and smart"? If so, what should I do? Is there an example?
Thank you. | 08-31-2021 09:17:25 | 08-31-2021 09:17:25 | There is an example in the [docs](https://huggingface.co/transformers/model_doc/bart.html) of BART:
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']
```
As stated in the docs:
> The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
In other words, it's possible as shown in the example above.
However, could you please ask such questions on the forum rather than here? We like to keep Github issues for bugs/feature requests.
Thanks!<|||||>Just FYI, this code in the documentation doesn't work. I have latest (4.10.2) transformers.
The error is
```
TypeError: __init__() got an unexpected keyword argument 'force_bos_token_to_be_generated'
```
And if I remove the keyword argument and run I get `['UNALSO SEE']` as result, not the one expected.
UPDATE: I see it was reported already https://github.com/huggingface/transformers/issues/12296. Please update the docs :)
<|||||>cc @patil-suraj @patrickvonplaten <|||||>Hi @xsway , I reproduce the result that @NielsRogge shows. My code is as follows
```
from transformers import BartForConditionalGeneration, BartTokenizer
article_en = "UN Chief Says There Is No <mask> in Syria"
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").cuda()
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
tokenizer.src_lang = "en_XX"
encoded_en = tokenizer(article_en, return_tensors="pt")
for key, value in encoded_en.items():
encoded_en[key] = value.cuda()
generated_tokens = model.generate(encoded_en['input_ids'], forced_bos_token_id=tokenizer.bos_token_id) # add forced_bos_token_id
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
assert result == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']
```
However, my transformers version is 4.9.2, I am not sure whether it works for the 4.10.2 transformers.
Hope this helps :)<|||||>https://github.com/huggingface/transformers/pull/14434 to fix the docs<|||||>Hi,
I realized that while this approach works, the model doesn't just fill the `<mask>` span; it may also go beyond that and change other parts of the given text. For example, with the code below:
```
masked_text = "Police said in the first four months of the project, they laid more than 100 charges against 10 people, in connection with the illegal towing industry. βOnce we started our investigation, we found that the people involved were not only breaking the law, but they were also <mask> said Sgt. Sean Cassidy of the Toronto Police Service. βThey were breaking the laws surrounding the storage of the vehicles, the fees that they were charging and the manner in which they were charging,β he added. "
max_length = 20
print(max_length)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").cuda()
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
tokenizer.src_lang = "en_XX"
encoded_en = tokenizer(masked_text, return_tensors="pt")
for key, value in encoded_en.items():
encoded_en[key] = value.cuda()
max_length += encoded_en.input_ids.shape[1]
generated_tokens = model.generate(encoded_en['input_ids'], forced_bos_token_id=tokenizer.bos_token_id, max_length=max_length, do_sample=True, top_p=0.96, num_return_sequences=5)
result = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
```
And below is one of the generated sequences, with **bold** showing where the model unnecessarily changed the text and _italic_ marking the correctly filled masked span:
"Police said in the first four months of the project, they laid more than 100 charges against 10 people, in connection with the illegal towing industry. βOnce we started our investigation, we found that the people involved were not only breaking the law, but they were also _breaking a lot of other laws as well,_β said Sgt. Sean Cassidy of the Toronto Police Service.**Article Continued Below**βThey were breaking the laws surrounding the storage of the vehicles, the fees that they were charging and the manner in which they are charging **for the services that theyβre providing to the public,',** "
I wonder if this is what it is and BART cannot effectively be used for this use case? or should we fine-tune a separate model to do this infilling task only?<|||||>Just curious; does infilling have a function to explicitly reconnect to the suffixβi.e., the sentence continuation following the `<mask>`? Or is it just assumed that conditioning on the context before and after the `<mask>` will _predispose_ it to generate an infill that reconnects with the suffix? Of course, this would be a fair assumption, given that the original training objective is based on this kind of infilling.
From the behaviour I've observed I'm guessing it's the latter. I certainly haven't been able to find anything in the source that would force it to reconnect. (But if I missed it, maybe somebody can point me to the line?)<|||||>@fabrahman Hi, I am also interested in the issue you mentionedβcan we force BART to infill texts just at the masked positions (spans)? Have you found any solutions to that? I'm looking into the codes and wonder if there is such a mechanism @jbmaxwell commented above. I will appreciate it if you can share some clues or ideas :) Thank you!<|||||>Hi @HenryCai11, unfortunately it is as I mentioned in my comment above; Bart _attempts_ to reconnect the original suffix, but there are no guarantees that it will. However, for an approach using GPT-2, see my post here:
https://discuss.huggingface.co/t/is-bart-guaranteed-to-not-mess-up-unmasked-tokens-during-text-infilling/21218/2<|||||>Thank you so much @jbmaxwell, the post really helps. I will read the paper shared in the post, and hopefully find a way to use BART in that way. Again, thanks a lot!!! |
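A rough post-processing sketch of the workaround implied above: sample several candidates and keep only those where the unmasked prefix and suffix survive verbatim, extracting whatever was generated for the span. This is a heuristic (whitespace-sensitive, single `<mask>` only), not a guarantee from BART itself:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

masked_text = "My dog is so <mask> that everyone loves him."


def extract_infill(masked, generated, mask_token="<mask>"):
    """Return the text generated for the masked span, or None if the
    surrounding context was not preserved."""
    prefix, suffix = masked.split(mask_token, maxsplit=1)
    prefix, suffix = prefix.strip(), suffix.strip()
    if generated.startswith(prefix) and generated.endswith(suffix):
        return generated[len(prefix): len(generated) - len(suffix)].strip()
    return None


inputs = tokenizer(masked_text, return_tensors="pt")
outputs = model.generate(
    inputs.input_ids, do_sample=True, top_p=0.95, num_return_sequences=5, max_length=64
)
for text in tokenizer.batch_decode(outputs, skip_special_tokens=True):
    print(extract_infill(masked_text, text))  # None means the context was altered
```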
transformers | 13,353 | closed | Torchscript test for Flaubert | cc @HamidShojanazeri, test for https://github.com/huggingface/transformers/pull/12292 | 08-31-2021 08:38:45 | 08-31-2021 08:38:45 | |
transformers | 13,352 | closed | Torchscript test for ConvBERT | cc @HamidShojanazeri, test for https://github.com/huggingface/transformers/pull/12287 | 08-31-2021 08:38:28 | 08-31-2021 08:38:28 | |
transformers | 13,351 | closed | Torchscript test for DistilBERT | cc @HamidShojanazeri, test for https://github.com/huggingface/transformers/pull/12290 | 08-31-2021 08:38:04 | 08-31-2021 08:38:04 | |
transformers | 13,350 | closed | Torchscript test | cc @HamidShojanazeri | 08-31-2021 08:24:22 | 08-31-2021 08:24:22 | |
transformers | 13,349 | closed | Layoutlm onnx support (Issue #13300) | # What does this PR do?
This PR extends ONNX support to LayoutLM as explained in https://huggingface.co/transformers/serialization.html?highlight=onnx#converting-an-onnx-model-using-the-transformers-onnx-package
Fixes Issue #13300
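For context, the heart of such a change is usually a small `OnnxConfig` subclass that declares the model's inputs and their dynamic axes. The sketch below is illustrative and not necessarily identical to the code merged in this PR; the notable point is that LayoutLM adds a `bbox` input alongside the usual ids and masks.

```python
# Illustrative sketch only; the merged PR may name or order things differently.
from collections import OrderedDict
from typing import Mapping

from transformers.onnx import OnnxConfig


class LayoutLMOnnxConfig(OnnxConfig):
    @property
    def inputs(self) -> Mapping[str, Mapping[int, str]]:
        return OrderedDict(
            [
                ("input_ids", {0: "batch", 1: "sequence"}),
                ("bbox", {0: "batch", 1: "sequence"}),
                ("attention_mask", {0: "batch", 1: "sequence"}),
                ("token_type_ids", {0: "batch", 1: "sequence"}),
            ]
        )
```

Once the config is registered, the export path is roughly the documented CLI, e.g. `python -m transformers.onnx --model=microsoft/layoutlm-base-uncased onnx/`.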
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@mfuntowicz
| 08-31-2021 07:49:26 | 08-31-2021 07:49:26 | |
transformers | 13,348 | closed | Cannot run grid search using Trainer API and Ray Tune | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.8.2
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@richardliaw, @amogkam
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- ray/raytune: @richardliaw, @amogkam
- trainer: @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): roBERTa
The problem arises when using:
* [x ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x ] my own task or dataset: (give details below)
## To reproduce
Hi, I am trying to do grid search on my roberta model.
Steps to reproduce the behavior:
1.
```
hyperParameters = {
"per_gpu_batch_size": [32],
"learning_rate": [2e-5],
"num_epochs": [2,3]
}
```
```
def my_hp_space_ray(trial):
from ray import tune
return {
"learning_rate": tune.choice(hyperParameters.get('learning_rate')),
"num_train_epochs": tune.choice(hyperParameters.get('num_epochs'))
}
```
2.
```
training_args = TrainingArguments("test",
per_device_train_batch_size= 32,
per_device_eval_batch_size = 32,
evaluation_strategy = "epoch", #Can be epoch or steps
weight_decay=0.01,
logging_strategy ="epoch",
metric_for_best_model="accuracy",
report_to="wandb"
)
```
3.
```
trainer = Trainer(
args=training_args,
tokenizer=tokenizer,
train_dataset=tokenized_datasets_train,
eval_dataset=tokenized_datasets_val,
model_init=model_init,
compute_metrics=compute_metrics,
)
```
4.
```
trainer.hyperparameter_search(
direction="minimize",
backend="ray",
n_trials= 2,
hp_space = my_hp_space_ray)
```
`
2021-08-30 21:07:06,743 ERROR trial_runner.py:773 -- Trial _objective_2e533_00001: Error processing event.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trial_runner.py", line 739, in _process_trial
results = self.trial_executor.fetch_result(trial)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/ray_trial_executor.py", line 746, in fetch_result
result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1621, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=1182, ip=172.28.0.2, repr=<types.ImplicitFunc object at 0x7f67395b5ed0>)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 178, in train_buffered
result = self.train()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 237, in train
result = self.step()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 379, in step
self._report_thread_runner_error(block=True)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 527, in _report_thread_runner_error
("Trial raised an exception. Traceback:\n{}".format(err_tb_str)
ray.tune.error.TuneError: Trial raised an exception. Traceback:
ray::ImplicitFunc.train_buffered() (pid=1182, ip=172.28.0.2, repr=<types.ImplicitFunc object at 0x7f67395b5ed0>)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run
self._entrypoint()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint
self._status_reporter.get_checkpoint())
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func
output = fn()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner
trainable(config, **fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1269, in train
tr_loss += self.training_step(model, inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1762, in training_step
loss = self.compute_loss(model, inputs)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1794, in compute_loss
outputs = model(**inputs)
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 1184, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 845, in forward
return_dict=return_dict,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 529, in forward
output_attentions,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 414, in forward
past_key_value=self_attn_past_key_value,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 344, in forward
output_attentions,
File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py", line 257, in forward
attention_scores = attention_scores / math.sqrt(self.attention_head_size)
RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 15.90 GiB total capacity; 13.14 GiB already allocated; 312.75 MiB free; 13.29 GiB reserved in total by PyTorch)
Result for _objective_2e533_00001:
{}`
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Hi, I would like to hyperparameter-tune my RoBERTa model using Ray Tune and the Trainer API. Is there a way to avoid running out of memory, even if it takes longer to finish? Or is there some other type of parameter tuning I should use instead?
I spent the whole day trying to figure it out, so any help would be hugely appreciated | 08-31-2021 07:35:05 | 08-31-2021 07:35:05 | Maybe try reducing the batch size of your model, do something like 8 or 4 first?<|||||>Tried with 4 and 8 i still get this error :(
```
---------------------------------------------------------------------------
TuneError Traceback (most recent call last)
<ipython-input-47-3b35fd348675> in <module>()
3 backend="ray",
4 n_trials= 1,
----> 5 hp_space = my_hp_space_ray
6 )
2 frames
/usr/local/lib/python3.7/dist-packages/ray/tune/tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, loggers, ray_auto_init, run_errored_only, global_checkpoint_period, with_server, upload_dir, sync_to_cloud, sync_to_driver, sync_on_checkpoint, _remote)
553 if incomplete_trials:
554 if raise_on_failed_trial and not state[signal.SIGINT]:
--> 555 raise TuneError("Trials did not complete", incomplete_trials)
556 else:
557 logger.error("Trials did not complete: %s", incomplete_trials)
TuneError: ('Trials did not complete', [_objective_33435_00000])
```<|||||>Can you provide the full traceback?
On Tue, Aug 31, 2021 at 1:29 AM Mosleh Mahamud ***@***.***>
wrote:
> Tried with 4 and 8 i still get this error :(
>
> ---------------------------------------------------------------------------
> TuneError Traceback (most recent call last)
> <ipython-input-47-3b35fd348675> in <module>()
> 3 backend="ray",
> 4 n_trials= 1,
> ----> 5 hp_space = my_hp_space_ray
> 6 )
>
> 2 frames
> /usr/local/lib/python3.7/dist-packages/ray/tune/tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, loggers, ray_auto_init, run_errored_only, global_checkpoint_period, with_server, upload_dir, sync_to_cloud, sync_to_driver, sync_on_checkpoint, _remote)
> 553 if incomplete_trials:
> 554 if raise_on_failed_trial and not state[signal.SIGINT]:
> --> 555 raise TuneError("Trials did not complete", incomplete_trials)
> 556 else:
> 557 logger.error("Trials did not complete: %s", incomplete_trials)
>
> TuneError: ('Trials did not complete', [_objective_33435_00000])
>
> β
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/13348#issuecomment-909018861>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/ABCRZZK4FIVYD546RHIR2SDT7SHFZANCNFSM5DDPNGOA>
> .
>
<|||||>yes here you go, seems like ray tune doesn't want to run at all :(
```
`No `resources_per_trial` arg was passed into `hyperparameter_search`. Setting it to a default value of 1 CPU and 1 GPU for each trial.
2021-08-31 08:49:52,166 WARNING callback.py:117 -- The TensorboardX logger cannot be instantiated because either TensorboardX or one of it's dependencies is not installed. Please make sure you have the latest version of TensorboardX installed: `pip install -U tensorboardx`
== Status ==
Memory usage on this node: 6.4/51.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/1 GPUs, 0.0/30.31 GiB heap, 0.0/15.15 GiB objects (0.0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/_objective_2021-08-31_08-49-52
Number of trials: 1/1 (1 PENDING)
+------------------------+----------+-------+-----------------+--------------------+
| Trial name | status | loc | learning_rate | num_train_epochs |
|------------------------+----------+-------+-----------------+--------------------|
| _objective_680b5_00000 | PENDING | | 2e-05 | 2 |
+------------------------+----------+-------+-----------------+--------------------+
(pid=1376) Some weights of the model checkpoint at roberta-base were not used when initializing RobertaForSequenceClassification: ['pooler.dense.weight', 'pooler.dense.bias']
(pid=1376) - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
(pid=1376) - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
(pid=1376) Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-base and are newly initialized: ['classifier.out_proj.weight', 'classifier.dense.weight', 'classifier.out_proj.bias', 'classifier.dense.bias']
(pid=1376) You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
(pid=1376) wandb: Currently logged in as: mosh98 (use `wandb login --relogin` to force relogin)
(pid=1376) wandb: Tracking run with wandb version 0.12.1
(pid=1376) wandb: Syncing run test
(pid=1376) wandb: View project at https://wandb.ai/mosh98/Binary_Tuned
(pid=1376) wandb: View run at https://wandb.ai/mosh98/Binary_Tuned/runs/15myily9
(pid=1376) wandb: Run data is saved locally in /root/ray_results/_objective_2021-08-31_08-49-52/_objective_680b5_00000_0_learning_rate=2e-05,num_train_epochs=2_2021-08-31_08-49-52/wandb/run-20210831_085001-15myily9
(pid=1376) wandb: Run `wandb offline` to turn off syncing.
(pid=1376)
(pid=1376) signal only works in main thread
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) <IPython.core.display.HTML object>
(pid=1376) 2021-08-31 08:50:10,695 ERROR function_runner.py:266 -- Runner Thread raised error.
(pid=1376) Traceback (most recent call last):
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run
(pid=1376) self._entrypoint()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint
(pid=1376) self._status_reporter.get_checkpoint())
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func
(pid=1376) output = fn()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner
(pid=1376) trainable(config, **fn_kwargs)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective
(pid=1376) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1331, in train
(pid=1376) self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1426, in _maybe_log_save_evaluate
(pid=1376) metrics = self.evaluate()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2031, in evaluate
(pid=1376) metric_key_prefix=metric_key_prefix,
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2260, in evaluation_loop
(pid=1376) metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
(pid=1376) File "<ipython-input-38-b8a033e8f995>", line 5, in compute_metrics
(pid=1376) NameError: name 'metric' is not defined
(pid=1376) Exception in thread Thread-2:
(pid=1376) Traceback (most recent call last):
(pid=1376) File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
(pid=1376) self.run()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 279, in run
(pid=1376) raise e
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run
(pid=1376) self._entrypoint()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint
(pid=1376) self._status_reporter.get_checkpoint())
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func
(pid=1376) output = fn()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner
(pid=1376) trainable(config, **fn_kwargs)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective
(pid=1376) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1331, in train
(pid=1376) self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1426, in _maybe_log_save_evaluate
(pid=1376) metrics = self.evaluate()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2031, in evaluate
(pid=1376) metric_key_prefix=metric_key_prefix,
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2260, in evaluation_loop
(pid=1376) metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
(pid=1376) File "<ipython-input-38-b8a033e8f995>", line 5, in compute_metrics
(pid=1376) NameError: name 'metric' is not defined
(pid=1376)
2021-08-31 08:50:10,898 ERROR trial_runner.py:773 -- Trial _objective_680b5_00000: Error processing event.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trial_runner.py", line 739, in _process_trial
results = self.trial_executor.fetch_result(trial)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/ray_trial_executor.py", line 746, in fetch_result
result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1621, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=1376, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f920a17d850>)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 178, in train_buffered
result = self.train()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 237, in train
result = self.step()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 379, in step
self._report_thread_runner_error(block=True)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 527, in _report_thread_runner_error
("Trial raised an exception. Traceback:\n{}".format(err_tb_str)
ray.tune.error.TuneError: Trial raised an exception. Traceback:
ray::ImplicitFunc.train_buffered() (pid=1376, ip=172.28.0.2, repr=<ray.tune.function_runner.ImplicitFunc object at 0x7f920a17d850>)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run
self._entrypoint()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint
self._status_reporter.get_checkpoint())
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func
output = fn()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner
trainable(config, **fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective
local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1331, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1426, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2031, in evaluate
metric_key_prefix=metric_key_prefix,
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2260, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "<ipython-input-38-b8a033e8f995>", line 5, in compute_metrics
NameError: name 'metric' is not defined
Result for _objective_680b5_00000:
{}
== Status ==
Memory usage on this node: 7.9/51.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/1 GPUs, 0.0/30.31 GiB heap, 0.0/15.15 GiB objects (0.0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/_objective_2021-08-31_08-49-52
Number of trials: 1/1 (1 ERROR)
+------------------------+----------+-------+-----------------+--------------------+
| Trial name | status | loc | learning_rate | num_train_epochs |
|------------------------+----------+-------+-----------------+--------------------|
| _objective_680b5_00000 | ERROR | | 2e-05 | 2 |
+------------------------+----------+-------+-----------------+--------------------+
Number of errored trials: 1
+------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------|
| _objective_680b5_00000 | 1 | /root/ray_results/_objective_2021-08-31_08-49-52/_objective_680b5_00000_0_learning_rate=2e-05,num_train_epochs=2_2021-08-31_08-49-52/error.txt |
+------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------+
== Status ==
Memory usage on this node: 7.9/51.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/8 CPUs, 0/1 GPUs, 0.0/30.31 GiB heap, 0.0/15.15 GiB objects (0.0/1.0 accelerator_type:P100)
Result logdir: /root/ray_results/_objective_2021-08-31_08-49-52
Number of trials: 1/1 (1 ERROR)
+------------------------+----------+-------+-----------------+--------------------+
| Trial name | status | loc | learning_rate | num_train_epochs |
|------------------------+----------+-------+-----------------+--------------------|
| _objective_680b5_00000 | ERROR | | 2e-05 | 2 |
+------------------------+----------+-------+-----------------+--------------------+
Number of errored trials: 1
+------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| Trial name | # failures | error file |
|------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------|
| _objective_680b5_00000 | 1 | /root/ray_results/_objective_2021-08-31_08-49-52/_objective_680b5_00000_0_learning_rate=2e-05,num_train_epochs=2_2021-08-31_08-49-52/error.txt |
+------------------------+--------------+------------------------------------------------------------------------------------------------------------------------------------------------+
---------------------------------------------------------------------------
TuneError Traceback (most recent call last)
<ipython-input-58-3b35fd348675> in <module>()
3 backend="ray",
4 n_trials= 1,
----> 5 hp_space = my_hp_space_ray
6 )
2 frames
/usr/local/lib/python3.7/dist-packages/transformers/trainer.py in hyperparameter_search(self, hp_space, compute_objective, n_trials, direction, backend, hp_name, **kwargs)
1687
1688 run_hp_search = run_hp_search_optuna if backend == HPSearchBackend.OPTUNA else run_hp_search_ray
-> 1689 best_run = run_hp_search(self, n_trials, direction, **kwargs)
1690
1691 self.hp_search_backend = None
/usr/local/lib/python3.7/dist-packages/transformers/integrations.py in run_hp_search_ray(trainer, n_trials, direction, **kwargs)
243 config=trainer.hp_space(None),
244 num_samples=n_trials,
--> 245 **kwargs,
246 )
247 best_trial = analysis.get_best_trial(metric="objective", mode=direction[:3])
/usr/local/lib/python3.7/dist-packages/ray/tune/tune.py in run(run_or_experiment, name, metric, mode, stop, time_budget_s, config, resources_per_trial, num_samples, local_dir, search_alg, scheduler, keep_checkpoints_num, checkpoint_score_attr, checkpoint_freq, checkpoint_at_end, verbose, progress_reporter, log_to_file, trial_name_creator, trial_dirname_creator, sync_config, export_formats, max_failures, fail_fast, restore, server_port, resume, queue_trials, reuse_actors, trial_executor, raise_on_failed_trial, callbacks, loggers, ray_auto_init, run_errored_only, global_checkpoint_period, with_server, upload_dir, sync_to_cloud, sync_to_driver, sync_on_checkpoint, _remote)
553 if incomplete_trials:
554 if raise_on_failed_trial and not state[signal.SIGINT]:
--> 555 raise TuneError("Trials did not complete", incomplete_trials)
556 else:
557 logger.error("Trials did not complete: %s", incomplete_trials)
TuneError: ('Trials did not complete', [_objective_680b5_00000])`
```<|||||>Hey @mosh98, the failures are due to this:
```
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1331, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1426, in _maybe_log_save_evaluate
metrics = self.evaluate()
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2031, in evaluate
metric_key_prefix=metric_key_prefix,
File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2260, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "<ipython-input-38-b8a033e8f995>", line 5, in compute_metrics
NameError: name 'metric' is not defined
```
which looks like it is coming from the `compute_metrics` function that you are using. What does that function look like?<|||||>Hi, my function looks like this.
```
def compute_metrics(eval_pred):
predictions, labels = eval_pred
predictions = predictions[:, 0]
return metric.compute(predictions=predictions, references=labels)
```
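For reference, a self-contained variant that avoids depending on a module-level `metric` object (and so avoids both the `NameError` and shipping extra state to the Ray workers) could look like the sketch below, assuming an ordinary classification head whose first output is the logits:

```python
import numpy as np


def compute_metrics(eval_pred):
    logits, labels = eval_pred
    if isinstance(logits, tuple):  # some models return a tuple of outputs
        logits = logits[0]
    predictions = np.argmax(logits, axis=-1)
    return {"accuracy": float((predictions == labels).mean())}
```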
<|||||>Yes just fixed the metric problem, thank you @amogkam i forgot to download the correct metric module from Hugginface.
However i am getting this error from pickle, even though i updated to pickle 5
```
2021-08-31 16:50:26,299 ERROR trial_runner.py:773 -- Trial _objective_87c7c_00000: Error processing event.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trial_runner.py", line 739, in _process_trial
results = self.trial_executor.fetch_result(trial)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/ray_trial_executor.py", line 746, in fetch_result
result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT)
File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1621, in get
raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=988, ip=172.28.0.2, repr=<types.ImplicitFunc object at 0x7f9c2d0bbed0>)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 178, in train_buffered
result = self.train()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/trainable.py", line 237, in train
result = self.step()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 379, in step
self._report_thread_runner_error(block=True)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 527, in _report_thread_runner_error
("Trial raised an exception. Traceback:\n{}".format(err_tb_str)
ray.tune.error.TuneError: Trial raised an exception. Traceback:
ray::ImplicitFunc.train_buffered() (pid=988, ip=172.28.0.2, repr=<types.ImplicitFunc object at 0x7f9c2d0bbed0>)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run
self._entrypoint()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint
self._status_reporter.get_checkpoint())
File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func
output = fn()
File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 343, in inner
fn_kwargs[k] = parameter_registry.get(prefix + k)
File "/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py", line 190, in get
return ray.get(self.references[k])
ray.exceptions.RaySystemError: System error: No module named 'datasets_modules'
traceback: Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 254, in deserialize_objects
obj = self._deserialize_object(data, metadata, object_ref)
File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 190, in _deserialize_object
return self._deserialize_msgpack_data(data, metadata_fields)
File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 168, in _deserialize_msgpack_data
python_objects = self._deserialize_pickle5_data(pickle5_data)
File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 156, in _deserialize_pickle5_data
obj = pickle.loads(in_band, buffers=buffers)
ModuleNotFoundError: No module named 'datasets_modules'
(pid=988) 2021-08-31 16:50:26,242 ERROR serialization.py:256 -- No module named 'datasets_modules'
(pid=988) Traceback (most recent call last):
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 254, in deserialize_objects
(pid=988) obj = self._deserialize_object(data, metadata, object_ref)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 190, in _deserialize_object
(pid=988) return self._deserialize_msgpack_data(data, metadata_fields)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 168, in _deserialize_msgpack_data
(pid=988) python_objects = self._deserialize_pickle5_data(pickle5_data)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 156, in _deserialize_pickle5_data
(pid=988) obj = pickle.loads(in_band, buffers=buffers)
(pid=988) ModuleNotFoundError: No module named 'datasets_modules'
(pid=988) 2021-08-31 16:50:26,242 ERROR function_runner.py:266 -- Runner Thread raised error.
(pid=988) Traceback (most recent call last):
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run
(pid=988) self._entrypoint()
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint
(pid=988) self._status_reporter.get_checkpoint())
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func
(pid=988) output = fn()
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 343, in inner
(pid=988) fn_kwargs[k] = parameter_registry.get(prefix + k)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py", line 190, in get
(pid=988) return ray.get(self.references[k])
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
(pid=988) return func(*args, **kwargs)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1623, in get
(pid=988) raise value
(pid=988) ray.exceptions.RaySystemError: System error: No module named 'datasets_modules'
(pid=988) traceback: Traceback (most recent call last):
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 254, in deserialize_objects
(pid=988) obj = self._deserialize_object(data, metadata, object_ref)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 190, in _deserialize_object
(pid=988) return self._deserialize_msgpack_data(data, metadata_fields)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 168, in _deserialize_msgpack_data
(pid=988) python_objects = self._deserialize_pickle5_data(pickle5_data)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 156, in _deserialize_pickle5_data
(pid=988) obj = pickle.loads(in_band, buffers=buffers)
(pid=988) ModuleNotFoundError: No module named 'datasets_modules'
(pid=988)
(pid=988) Exception in thread Thread-2:
(pid=988) Traceback (most recent call last):
(pid=988) File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
(pid=988) self.run()
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 279, in run
(pid=988) raise e
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 260, in run
(pid=988) self._entrypoint()
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 329, in entrypoint
(pid=988) self._status_reporter.get_checkpoint())
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/function_runner.py", line 594, in _trainable_func
(pid=988) output = fn()
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 343, in inner
(pid=988) fn_kwargs[k] = parameter_registry.get(prefix + k)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/tune/registry.py", line 190, in get
(pid=988) return ray.get(self.references[k])
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/_private/client_mode_hook.py", line 82, in wrapper
(pid=988) return func(*args, **kwargs)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/worker.py", line 1623, in get
(pid=988) raise value
(pid=988) ray.exceptions.RaySystemError: System error: No module named 'datasets_modules'
(pid=988) traceback: Traceback (most recent call last):
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 254, in deserialize_objects
(pid=988) obj = self._deserialize_object(data, metadata, object_ref)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 190, in _deserialize_object
(pid=988) return self._deserialize_msgpack_data(data, metadata_fields)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 168, in _deserialize_msgpack_data
(pid=988) python_objects = self._deserialize_pickle5_data(pickle5_data)
(pid=988) File "/usr/local/lib/python3.7/dist-packages/ray/serialization.py", line 156, in _deserialize_pickle5_data
(pid=988) obj = pickle.loads(in_band, buffers=buffers)
(pid=988) ModuleNotFoundError: No module named 'datasets_modules'
(pid=988)
(pid=988)
Result for _objective_87c7c_00000:
{}
```<|||||>Seems like the error is:
```
(pid=1376) output = fn()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/ray/tune/utils/trainable.py", line 344, in inner
(pid=1376) trainable(config, **fn_kwargs)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/integrations.py", line 162, in _objective
(pid=1376) local_trainer.train(resume_from_checkpoint=checkpoint, trial=trial)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1331, in train
(pid=1376) self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 1426, in _maybe_log_save_evaluate
(pid=1376) metrics = self.evaluate()
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2031, in evaluate
(pid=1376) metric_key_prefix=metric_key_prefix,
(pid=1376) File "/usr/local/lib/python3.7/dist-packages/transformers/trainer.py", line 2260, in evaluation_loop
(pid=1376) metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
(pid=1376) File "<ipython-input-38-b8a033e8f995>", line 5, in compute_metrics
(pid=1376) NameError: name 'metric' is not defined
```
as a tip, maybe you could consider also doing `hyperparameter_search(..., fail_fast="raise")` to help see errors better.<|||||>Hmm, @Yard1 I thought there was a patch for the datasets_module thing?<|||||>There was - let me see<|||||>@mosh98 Hey can you try updating transformers? The patch to fix this issue was introduced in version v4.9.0 and you have said that you have v4.8.2<|||||>@Yard1 @richardliaw updating transformers did help. Huge thanks!
altho i have one last question, is there a way to specify to use tpu's instead of gpu's in the api?
At the moment i have it like this,
```
trainer.hyperparameter_search(
direction="minimize",
backend="ray",
n_trials= 1,
hp_space = my_hp_space_ray,
resources_per_trial = {"cpu": 1,"gpu": 1},
fail_fast="raise"
)
```<|||||>Hmm, great question; are you running this on Colab?<|||||>yes i am<|||||>do you guys recommend using ray tune for parameter optimization on cloud services rather than running on colab? @richardliaw @amogkam .
Sorry if I'm spamming you guys with questions, but I'm super new to Ray Tune and especially to parameter optimization with Transformers. |
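For anyone hitting the same out-of-memory failures: two levers that usually help (a sketch reusing the objects defined earlier in this thread, not a definitive recipe) are trading per-device batch size for gradient accumulation, and reserving a whole GPU per Ray trial so concurrent trials do not share device memory:

```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    "test",
    per_device_train_batch_size=8,   # smaller per-step memory footprint ...
    gradient_accumulation_steps=4,   # ... same effective batch size of 32
    per_device_eval_batch_size=8,
    evaluation_strategy="epoch",
    weight_decay=0.01,
    logging_strategy="epoch",
    metric_for_best_model="accuracy",
)

trainer = Trainer(
    args=training_args,
    tokenizer=tokenizer,
    train_dataset=tokenized_datasets_train,
    eval_dataset=tokenized_datasets_val,
    model_init=model_init,
    compute_metrics=compute_metrics,
)

best_run = trainer.hyperparameter_search(
    direction="minimize",
    backend="ray",
    n_trials=2,
    hp_space=my_hp_space_ray,
    resources_per_trial={"cpu": 2, "gpu": 1},  # one full GPU per trial
)
```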
transformers | 13,347 | closed | Predicted Start_index < Predicted End_index in BertForQuestionAnswering | We want to fine-tune a QA model, which is based on BertForQuestionAnswering.
After training, we can get a span-start/end scores by input_ids/token_type_ids/attention_mask and choose the indices with maximum span-start/end scores as **predicted start_index** and **predicted end_index**.
But sometimes the **predicted end_index** is less than the **predicted start_index**, which gives an invalid span.
Is there any reasonable method to handle this situation (for example, restricting the search to index pairs with start_index <= end_index)? Thanks~
Ex:
`span-start scores = [-0.1, -2.1, 0.7, 1.3, 4.1]`
`span-end scores = [-0.7, 3, 5, -0.7, 3.3]`
`=>`
`predicted start_index = 4`
`predicted end_index = 2`
It is not reasonable. | 08-31-2021 06:33:51 | 08-31-2021 06:33:51 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
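For anyone else hitting this: the usual remedy is not to take the argmax of the start and end scores independently, but to search jointly over (start, end) pairs and keep only pairs with start <= end (roughly what the SQuAD-style post-processing in the examples does). A minimal sketch using the scores from this issue:
```python
import torch

def best_valid_span(start_logits, end_logits, max_answer_len=30):
    """Pick the (start, end) pair with the highest combined score, constrained to start <= end."""
    scores = start_logits[:, None] + end_logits[None, :]  # scores[i, j] = start_i + end_j
    valid = torch.triu(torch.ones_like(scores)).bool()  # keep only j >= i ...
    valid &= torch.tril(torch.ones_like(scores), diagonal=max_answer_len - 1).bool()  # ... and short spans
    scores = scores.masked_fill(~valid, float("-inf"))
    return divmod(torch.argmax(scores).item(), scores.size(1))

start = torch.tensor([-0.1, -2.1, 0.7, 1.3, 4.1])
end = torch.tensor([-0.7, 3.0, 5.0, -0.7, 3.3])
print(best_valid_span(start, end))  # -> (4, 4) instead of the invalid (4, 2)
```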
transformers | 13,346 | closed | Bert (sentence classification) output is non-deterministic(have checked previous issue, SET model.eval() ) | ## Environment info
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1
- Tensorflow version (GPU?): /
- Using GPU in script?: Yes for training, both GPU and CPU for testing scripts
- Using distributed or parallel set-up in script?: Yes for training
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ v] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [v ] my own task or dataset: (give details below)
I'm using Chinese BERT to match similar tags and reduce the size of the database. So I use some manually merged tags as the dataset, training a BERT model that takes two tags as input and outputs the probability that they are similar. It did well after training when called from the test() function I wrote (of course with model.eval()). But when I save the model to a .pth file and load it in another script, the output is non-deterministic.
## To reproduce
The whole test script is too long, but I have a short test snippet; it should cover the core of this issue.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-bert-wwm-ext")
model = AutoModelForSequenceClassification.from_pretrained("hfl/chinese-bert-wwm-ext")
model.state_dict(torch.load('./weights/best_bert.pth', map_location='cpu'))
# model.cuda()
for i in range(100): # use to control the time to call model.eval()
foo = 1
# model = model.eval()
model.eval()
with torch.no_grad():
srcText = '春天' # 'spring'
tgtText = '春季' # 'spring time'
predict = model(
**tokenizer(text=srcText, text_pair=tgtText,
truncation=True, return_tensors='pt', max_length=256)
)
# NON DETERMINISTIC
print(torch.softmax(predict.logits, dim=1))
```
Steps to reproduce the behavior:
1. run the script above
2. change the iterative times for foo = 1, or just do nothing
3. run again
4. get different outputs logits and probabilities
## Expected behavior
Get identical outputs in step 1 and 3
## Additional information
I have read issue #4769 and some other similar issues, but I checked again and confirmed I called the function eval()
| 08-31-2021 03:40:58 | 08-31-2021 03:40:58 | You should get the following warning when you instantiate your `AutoModelForSequenceClassification` model:
```
Some weights of BertForSequenceClassification were not initialized from the model checkpoint at hfl/chinese-bert-wwm-ext and are newly initialized: ['classifier.bias', 'classifier.weight']
```
This tells you that the sequence classifier is not in the checkpoint you're loading: it will be initialized randomly every time you re-initialize it.<|||||>OK, I see. But I have trained my model and load it with
```python
model.state_dict(torch.load('./weights/best_bert.pth', map_location='cpu'))
```
So the PyTorch function `torch.save(model.state_dict())` does not save the `model.classifier` weights and bias that I trained, right?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I just managed finally to have deterministic results. If you are still struggling, see https://discuss.huggingface.co/t/initializing-the-weights-of-the-final-layer-of-e-g-bertfortokenclassification-with-a-manual-seed/1377/3
```python
import os
import random
from typing import Optional

import numpy as np
import torch


def set_seed(seed: Optional[int] = None):
"""Set all seeds to make results reproducible (deterministic mode).
When seed is None, disables deterministic mode.
:param seed: an integer to your choosing
"""
if seed is not None:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
np.random.seed(seed)
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
```<|||||>> You should get the following warning when you instantiate your `AutoModelForSequenceClassification` model:
>
> ```
> Some weights of BertForSequenceClassification were not initialized from the model checkpoint at hfl/chinese-bert-wwm-ext and are newly initialized: ['classifier.bias', 'classifier.weight']
> ```
>
> This tells you that the sequence classifier is not in the checkpoint you're loading: it will be initialized randomly everytime you re-initialize it.
Thank you, I was struggling with trying to figure out why this was happening. I assumed that "random initialization" just meant it was randomly initialized once when the model was instantiated, not every time it's called. Do you know why it has that behavior? Why wouldn't it just be initialized randomly once? What tells it to stop being random? (A round of training? A flag?) |
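For anyone landing here later: `torch.save(model.state_dict())` does include the classifier head, but `model.state_dict(torch.load(...))` (as in the snippets above) does not load anything back into the model; you need `model.load_state_dict(...)` (or `save_pretrained()`/`from_pretrained()`). A minimal sketch, with the path and `num_labels=2` as placeholders:
```python
import torch
from transformers import AutoModelForSequenceClassification

# after fine-tuning, save everything, including the classifier head
torch.save(model.state_dict(), "./weights/best_bert.pth")

# later, in another script: rebuild the architecture and load the trained weights back
model = AutoModelForSequenceClassification.from_pretrained("hfl/chinese-bert-wwm-ext", num_labels=2)
state_dict = torch.load("./weights/best_bert.pth", map_location="cpu")
model.load_state_dict(state_dict)  # note: load_state_dict(), not state_dict()
model.eval()
```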
transformers | 13,345 | closed | Doc mismatch fixed | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #13323
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger | 08-31-2021 03:19:42 | 08-31-2021 03:19:42 | |
transformers | 13,344 | closed | How to use BertForSequenceClassification for the Apect Based Sentiment Analysis | In Aspect based sentiment analysis the sentence is classified in two way one is aspect and one is sentiment.
e.g.
"The food is good but service is poor"
The output must be
aspect: food, sentiment: positive
aspect: service, sentiment: negative
So how can BertForSequenceClassification be configured so that two outputs can be generated: one for the aspect classification and one for the sentiment classification?
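This is not something `BertForSequenceClassification` supports out of the box. One common approach is to put two classification heads on a shared encoder; the sketch below is hypothetical (model name, label counts and head design are placeholders), and in practice aspect extraction is often framed instead as token classification or as sentence-pair classification per candidate aspect:
```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class BertForAspectAndSentiment(nn.Module):
    """Hypothetical dual-head model: one head for aspect labels, one for sentiment labels."""
    def __init__(self, model_name="bert-base-uncased", num_aspects=5, num_sentiments=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.aspect_head = nn.Linear(hidden, num_aspects)
        self.sentiment_head = nn.Linear(hidden, num_sentiments)

    def forward(self, input_ids, attention_mask=None):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        return self.aspect_head(pooled), self.sentiment_head(pooled)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForAspectAndSentiment()
inputs = tokenizer("The food is good but service is poor", return_tensors="pt")
aspect_logits, sentiment_logits = model(inputs["input_ids"], inputs["attention_mask"])
```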
| 08-31-2021 02:49:39 | 08-31-2021 02:49:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,343 | closed | OverflowError: out of range integral type conversion attempted for run_summarization.py script using t5-small | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform: Mac OS
- Python version: 3.8.5
- PyTorch version (GPU?): No GPU , >- 1.3
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- t5: @patrickvonplaten, @patil-suraj
Library:
- tokenizers: @LysandreJik
Looks like this is possibly an issue with the T5Tokenizer? Seems related to this old GitHub PR as well: https://github.com/huggingface/transformers/pull/10046.
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [X ] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the run_summarization.py file on the multi_news dataset using t5-small with do_predict and predict_with_generate options set to true
2. In the prediction step, the decode step highlighted in the error stack below raises an OverflowError and the prediction stops
```
File "run_summarization.py", line 674, in <module>
main()
File "run_summarization.py", line 628, in main
predict_results = trainer.predict(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer_seq2seq.py", line 125, in predict
return super().predict(test_dataset, ignore_keys=ignore_keys, metric_key_prefix=metric_key_prefix)
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer.py", line 2133, in predict
output = eval_loop(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer.py", line 2235, in evaluation_loop
loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/trainer_seq2seq.py", line 180, in prediction_step
print(self.tokenizer_t5.batch_decode(inputs["labels"], skip_special_tokens=True, clean_up_tokenization_spaces=True))
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py", line 3047, in batch_decode
return [
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py", line 3048, in <listcomp>
self.decode(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_base.py", line 3086, in decode
return self._decode(
File "/Users/aiswarya.s/Desktop/transformers_fudge/src/transformers/tokenization_utils_fast.py", line 507, in _decode
text = self._tokenizer.decode(token_ids, skip_special_tokens=skip_special_tokens)
OverflowError: out of range integral type conversion attempted
```
## Expected behavior
Should generate the tokenizer.decode outputs
| 08-30-2021 23:23:11 | 08-30-2021 23:23:11 | @aiswaryasankar - could you attach a **minimal** reproducible code snippet (also in google colab form) that allows us to quickly spot the error? Thank you :-)<|||||>from the stack-trace it looks like you are decoding `labels` in the `prediction_step` method. When
`--ignore_pad_token_for_loss` argument is set, the `labels` will still have -100 in `prediction_step`, so -100 should be replaced by pad token before decoding. The `run_summarization.py` script does that in the `compute_metrics` function which is called after the `prediction_step` method.
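Concretely, the replacement before decoding looks roughly like this (a sketch mirroring the example script; `labels` and `tokenizer` are whatever your prediction/metrics step already has):
```python
import numpy as np

# labels still contain -100 where padding was ignored for the loss;
# put the real pad token id back before calling batch_decode
labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
```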
https://github.com/huggingface/transformers/blob/c02cd95c56249e9bd38ecb3e4ebcce6d9eebd4a4/examples/pytorch/summarization/run_summarization.py#L509<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,342 | closed | Add the `AudioClassificationPipeline` | # What does this PR do?
This adds the audio classification pipeline needed for `Wav2Vec2ForSequenceClassification` and others (see #13153).
The implementation is mostly based on `ImageClassificationPipeline` with `ffmpeg` audio file loading borrowed from `AutomaticSpeechRecognitionPipeline` by @Narsil
Once merged, model cards like https://hf.co/superb/hubert-base-superb-ks should be able to have an `audio-classification` inference widget.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @Narsil | 08-30-2021 21:01:24 | 08-30-2021 21:01:24 | |
transformers | 13,341 | closed | Padding labels is wrong when using `pad_to_multiple_of` | I haven't tried it myself, but it looks like this line is wrong:
https://github.com/huggingface/transformers/blob/42f359d015aee3835490bdcfa20df657a4d97049/src/transformers/data/data_collator.py#L285
If `self.pad_to_multiple_of` is set to anything but 1, then the length of the labels and the length of the input won't match anymore. | 08-30-2021 20:38:20 | 08-30-2021 20:38:20 | cc @sgugger @Rocketknight1 <|||||>The length of the label and the length of the input rarely match for seq2seq problem, so this is not an issue.<|||||>π€¦ββοΈYou're right. I was looking for MLM and grabbed the wrong class. |
transformers | 13,340 | closed | Tests fetcher tests | # What does this PR do?
If you received a notification for this PR and are not a reviewer, I apologize: I clicked the button to create the PR too early :grimacing:
This PR fixes the tests_fetcher utils for the test files dependencies: currently modifying `tests_modeling_common.py` won't trigger any other tests than `tests_modeling_common.py` when we would like it to run all the modeling tests. To fix this, the same logic as in the modules is applied to the tests files: they are screened for dependencies to other tests files and this is all added before we compute the reverse dependency map.
As an example this is properly working, this PR has a diff in `tests_modeling_common.py` and you can check the triggered tests [here](https://circle-production-customer-artifacts.s3.amazonaws.com/picard/5bdabdd888af1f000130874a/612d399a732e644e043dbe7e-0-build/artifacts/~/transformers/test_preparation.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20210830T200953Z&X-Amz-SignedHeaders=host&X-Amz-Expires=59&X-Amz-Credential=AKIAJR3Q6CR467H7Z55A%2F20210830%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=7b5b35edd60b676c46bc3bd0715fadd1089ed4d44f68c2bf987bbcd905faef3d). | 08-30-2021 20:09:57 | 08-30-2021 20:09:57 | Will be working on tests in a few hours and would like to test this out - will merge this into `master` and check everything runs smoothly. |
transformers | 13,339 | closed | Add generate kwargs to Seq2SeqTrainingArguments | # What does this PR do?
This PR adds two new `Seq2SeqTrainingArguments` to control which `max_length` and `num_beams` are used during the intermediate evaluations of the `Seq2SeqTrainer`. This feature has been requested multiple times, the last in date being #13252. | 08-30-2021 19:11:58 | 08-30-2021 19:11:58 | |
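A usage sketch, assuming the two new options are exposed as `generation_max_length` and `generation_num_beams` (treat the exact values as placeholders):
```python
from transformers import Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="out",
    predict_with_generate=True,
    evaluation_strategy="steps",
    generation_max_length=128,  # used by Seq2SeqTrainer.evaluate()/predict()
    generation_num_beams=4,
)
```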
transformers | 13,338 | closed | Handle nested dict/lists of tensors as inputs in the Trainer | # What does this PR do?
This PR refactors the `_prepare_inputs` method of the Trainer to make it recursively handle any nested list/dict of tensors.
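The idea is essentially a recursive walk over the inputs; a simplified standalone sketch (not the exact code of the PR):
```python
import torch

def prepare_inputs(data, device):
    """Recursively move tensors inside (possibly nested) dicts/lists/tuples to `device`."""
    if isinstance(data, dict):
        return type(data)({k: prepare_inputs(v, device) for k, v in data.items()})
    if isinstance(data, (list, tuple)):
        return type(data)(prepare_inputs(v, device) for v in data)
    if isinstance(data, torch.Tensor):
        return data.to(device)
    return data
```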
Fixes #13146 | 08-30-2021 18:46:51 | 08-30-2021 18:46:51 | |
transformers | 13,337 | closed | Fix release utils | # What does this PR do?
The Regex pattern in the release util was wrong for the conf.py file. This PR fixes that. | 08-30-2021 15:51:26 | 08-30-2021 15:51:26 | |
transformers | 13,336 | closed | Fix AutoTokenizer when no fast tokenizer is available | # What does this PR do?
Currently, the `AutoTokenizer` API will not work when a user tries to instantiate a model that does not have a fast tokenizer, due to some wrong logic in the function `tokenizer_class_from_name`. This PR fixes that.
Fixes #13161 | 08-30-2021 15:43:16 | 08-30-2021 15:43:16 | |
transformers | 13,335 | closed | T5 - Flax - Decreasing performance on pretraining | ## Environment info
- `transformers` version: 4.9.0.dev0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Flax version (CPU?/GPU?/TPU?): 0.3.4 (tpu)
- Jax version: 0.2.17
- JaxLib version: 0.68
### Who can help
@patrickvonplaten
### Script
Using a slightly modified version of run_t5_mlm_flax.py that also supports streaming large datasets.
## Information
I am posting this as a bug report, since the behaviour is counterintuitive. I am not sure if this is a bug with JAX/T5, or if it is actually behaviour that should be expected from T5.
We are training T5-base (v1.1) on a large, cleaned 250GB Norwegian dataset. We are training for 1M steps, which should equal roughly two complete epochs. With lr=8e-3, bs=32, seq_length=512 and Adafactor, we are seeing a steady decline in loss:

The image above shows the first 250k steps. We needed to restart here, so I have not patched the event-files together. But the final loss after 1M steps ends on 1.349. Eval accuracy is also increasing.
The weird thing is that the final checkpoint has really terrible performance on our downstream task!
Looking into this issue, we evaluated multiple pre-training steps, by finetuning each of them 60k steps on a task of translating between two Norwegian dialects.

The red and blue dots are two models done before and after the t5 optimisation submitted by @patrickvonplaten.
The tendency here is very clear. After roughly 200k steps the model suddenly starts to perform worse on the downstream task, even though the loss is decreasing and the eval accuracy of the pretrained model is improving. The deterioration happens before 1 epoch of the pretraining dataset, and though it looks like over-fitting, we find this extremely unlikely.
We have more experience with BERT-like models, and here performance on downstream tasks always improves as long as MLM-accuracy is improving. Is this expected behaviour of T5?
| 08-30-2021 15:36:47 | 08-30-2021 15:36:47 | To me this looks like the model is somehow overfitting on the pretraining data.
I think it's very hard to know exactly what is happening here, though, as the model is trained on specific Norwegian data... One thing I would try is to take the fully pretrained model and check whether it has a tendency to generate the same output tokens. Maybe some words are overly represented in the training data and the model overfits to those words?!
Maybe using dropout and/or weight_decay (the classic methods for regularization) could help here?
Gently pinging @craffel (hope that's fine) here in case he has seen something similar, has good ideas for analyzing the fully pretrained model and/or other ideas of what could be going on :-)<|||||>@peregilk - also could you link the model repo on the hub here so that I can take a look into the config? <|||||>Thanks. Here is the link to the repo: https://huggingface.co/pere/norwegian-t5-base-NCC-fast
I absolutely agree that this really looks a lot like overfitting. However, I really can not see why this could be happening. The corpus is described [here](https://github.com/NBAiLab/notram/tree/master/corpus) and [here](https://github.com/NBAiLab/notram/blob/master/corpus/official_NCC2.md). It is huge (250GB), and heavily deduplicated and cleaned. It is also a collection from multiple sources. There should really be no repetitive parts here to overfit on.
Please note that English words or phrases are often used in Norwegian today, and that most Norwegians speak/understand English. We have therefore added roughly 15GB of English text to the corpus. At least in theory, the model should have some basic understanding of English as well.
I'll do some more tests and report the results.
<|||||>Interesting. I will just confirm that the only time we have seen this behavior (train loss going down, MLM accuracy going up, but downstream task performance going down) is when the model is overfitting to the training dataset. When you say "Eval accuracy is also increasing", do you mean you are computing MLM accuracy on a held-out validation set? Or is it on the train set?
Other random things to think about -
1. How many steps are you training on for the downstream task? How are you doing checkpoint selection?
2. What vocabulary are you using?
3. Are you regularizing on the downstream task?<|||||>Thanks both @patrickvonplaten and @craffel for your insightful comments. Highly appreciated.
I have dived into this, and I am now even more confused about what is going on. As Patrick suggested, I looked at how the model does on MLM-like tasks at various pretraining steps. I am not detecting repeated tokens or other oddities. I was only able to figure out how to get the first prediction, so it is a bit hard to compare it directly to BERT, but it looks to be on the same level. Most predictions make grammatical sense, also for English (<10% of the total training corpus).
```
from transformers import AutoTokenizer, T5ForConditionalGeneration, FlaxT5ForConditionalGeneration
model = T5ForConditionalGeneration.from_pretrained('pere/norwegian-t5-base-NCC-fast', from_flax=True)
# Load 250k checkpoint instead
#model_250k = T5ForConditionalGeneration.from_pretrained('pere/norwegian-t5-base-NCC-fast', from_flax=True, revision='49d7631d423fc64770a8c8e0d55216792031c97d')
tokenizer = AutoTokenizer.from_pretrained('pere/norwegian-t5-base-NCC-fast', use_fast=True)
input_ids = tokenizer.encode('This is a small <extra_id_0> explaining how to <extra_id_1> a language model.', return_tensors='pt')
print(tokenizer.batch_decode(model.generate(input_ids)))
#output -> "section" and "write"
```
Since it seems to perform reasonably well on English, I tried running the run_summarization_flax.py example script on the xsum dataset. I modified the script to support T5, and am using the default settings (lr 5e-5, 6 epochs). I adjust the batch size to the maximum based on vocab size (without adjusting the lr), and am getting these ROUGE-2 results:
BART (default example): 16.99
t5-base: 13.66
mt5-base: 10.36
Norwegian-t5: 9.16 (after 100k) - 7.10 (after 1.000k). See graph below:

I think this points in the direction of the error being on the pretrained model (even if the qualitative mlm-test does not detect this).
@craffel: We are using a community script for the streaming of the dataset here. I read through it and spotted a few inaccuracies. For instance, it seems like they are drawing a new eval set at the start of each epoch. This is of course not ideal, but the model is not running more than 2-3 epochs, so I doubt that this is the reason for the eval accuracy to be improving.
1) The first figure is from running 1 epoch on a 100,000-example parallel corpus. Max performance here is after nearly 10 epochs; I did not have time to do this on all checkpoints. However, I made a few tests, and checkpoints that are doing badly after 1 epoch seem to be bad at 10 epochs as well.
2) A custom built 50k cased vocab file - Norwegian. I have verified that the tokenisation is reasonable.
3) We are using the default dropout=0.1. I do not think weight decay/L2 is set here.
My main concern here is the added streaming code in the [training script](https://huggingface.co/pere/norwegian-t5-base-NCC-fast/blob/main/run_t5_mlm_flax_streaming.py). If it for some strange reason kept feeding the same examples to the trainer, this is maybe the result that should be expected (?). However, I have been reading through the code carefully, and this does not seem to be happening.
Any ideas? Are there verified T5 pretraining scripts with streaming support that I can test?
<|||||>Is it maybe possible that you can try the official (non-streaming) t5 MLM script: https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py? This one is more tested. <|||||>@patrickvonplaten Yes. I have tried that - on a smaller dataset. I am not seeing issues like this.
However, I was unable to get it to run on the 250GB set. I created a TPU VM with an external disk. Unfortunately, I am getting memory issues on the VM. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>There is one small issue with this script. But not sure if that is the reason for the problem.
1. [Here](https://huggingface.co/pere/norwegian-t5-base-NCC-fast/blob/main/run_t5_mlm_flax_streaming.py#L514): the text stream has been tokenized with an additional `</s>`. I believe we should add the argument `add_special_tokens=False`. Here the added `eos` token refers to the document separator, not the sample separator.
2. In the [data collator](https://huggingface.co/pere/norwegian-t5-base-NCC-fast/blob/main/run_t5_mlm_flax_streaming.py#L234), at the end of each `src_input`, and `target_input` we should manually add an `eos_token`
@patrickvonplaten @peregilk
<|||||>Just to update this issue as well. We have looked closer at this, and have been running a lot of experiments. I will post the results from this when they are done. The short summary is however that we need to increase regularisation (ie weight_decay) when finetuning our T5s that have been pretrained a long time. I have tested both with a large streaming dataset and smaller non-streaming datasets. It seems like we have the same issue in both.
In the examples reported in the graph above, we are able to get decent results when finetuning a T5 that has been pretrained for 200k steps, right "out of the box". There are optimal learning rates, but all in all it is not very sensitive to this. The T5 that has been pretrained for 1M steps is only possible to finetune if you add a significant amount of regularisation. Adding "weight_decay" seem to be the trick. The optimal dropout seem to be close to 0.1 for all models.
I will post our results when they are completed in a few days. So far, we have no explanation/theory as to why this is happening. Might be related to what @sbmaruf is reporting. I simply do not know.<|||||>I have never used any kind of regularization on T5 except for dropout; dropout=0.1 often helps during fine-tuning (have never tried any other value).<|||||>Thanks @sbmaruf, @craffel and @patrickvonplaten for your comments. Before continuing this thread, I wanted to run more experiments and making sure that we get consistent results. We have now completed these tests.
First let me sum up: We are training a T5 on a large Norwegian corpus from scratch. We are using the Flax example code with the hyperparameters above, and using the T5 v1.1 training regime. We are seeing a good decline in loss and increase in accuracy. This seem to be the case also for the evaluation-set! We have confirmed "the error" on two runs, both on a huge streaming 150GB corpus, and on a smaller 30GB corpus. We are pretty certain that this is a high quality corpus, and it is intensely deduplicated (roughly 20% of the corpus is Norwegian MC4. the major part is born digital public reports). We are pretty sure we not overfitting. We are mainly testing on a simple translation task (BokmΓ₯l vs Nynorsk), but have seen the same issue on other tasks, even if we have not run intensively tests here since the datasets are smaller.
Here is what we are observing: It gets incrementally harder to finetune various checkpoints from the pretraining! We are able to finetune late checkpoints but only after adding weight decay. We have experimented with a lot of different learning rates, and the tendency is the same all over.
Here you can see an example from the 100k pretrain checkpoint:

You can see that it finetunes easily. Adding weight decay just makes the training a bit slower.
Here is from the 500k pretrain checkpoint:

Here we need weight decay to get stable finetuning.
At the 1M pretrain checkpoint, finetuning is also notably slower. Only with weight decay are we able to complete it:

As you see all the yellow lines (no weight decay) are off the chart here. It simply does not converge at all.
Here are the complete results from [W&B](https://wandb.ai/nbailab/norwegian-t5-base-checkpoints/reports/T5-Norwegian-Evaluation--VmlldzoxMTY4NzI4?accessToken=7jsxdmw82kf29vnqzxevkii6y9reuiak8wv591e256oe5ryf904h125g1h202d0x)
For some really strange reason we are able to end up with a decent result also on the late checkpoints, but only after very careful tuning of the hyperparameters. The model seem to be very unstable and a nightmare to finetune.
Do any of you think the bug reported by @sbmaruf above could cause any of this?
<|||||>I don't think that adding or not adding the EOS token makes a difference - so I highly doubt that is the reason for your observations...BTW here is the original preprocessing code for pretraining: https://github.com/google-research/text-to-text-transfer-transformer/blob/main/t5/data/preprocessors.py#L1864<|||||>Thanks @patrickvonplaten for the response.
We have a new version of the training corpus available soon. From the figure above it seems like we are able to detect instability after less than a week on a v3-8, maybe even before we reach one epoch. We will see if we are able to get a PyTorch version of the T5 pretraining running. Maybe we can train PyTorch and Flax in parallel and see if they differ.
If you have any idea about what is causing this and/or other tests we can do, please let us know.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 13,334 | closed | Update label2id in the model config for run_glue | # What does this PR do?
This PR fixes a bug in all the `run_glue` examples, where the id-to-label correspondence was not properly saved in the model config.
Fixes #13298 | 08-30-2021 14:29:07 | 08-30-2021 14:29:07 | |
transformers | 13,333 | closed | Use existing functionality for #13251 | # What does this PR do?
#13251 fixes some issue while re-implementing existing functionality. This PR refactors the fix to re-use the `model_type_to_module_name` implemented. | 08-30-2021 13:24:44 | 08-30-2021 13:24:44 | |
transformers | 13,332 | closed | bug in gpt2 notebook (in tensorflow) | Hello there!
I tried to use the language-modeling-from-scratch notebook https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling_from_scratch.ipynb#scrollTo=JEA1ju653l-p
More specifically, I need to run it using `tensorflow`. With the simple strategy of using the `TF` versions of the `huggingface` functions, everything seems to work correctly until I reach the `trainer` step, and then I get a mysterious cardinality issue.
This looks like a bug... Can you please have a look at the code below?
```
model_checkpoint = "gpt2"
tokenizer_checkpoint = "sgugger/gpt2-like-tokenizer"
from datasets import load_dataset
datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(tokenizer_checkpoint)  # missing from the pasted snippet; tokenize_function below needs it
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = datasets.map(tokenize_function, batched=True, remove_columns = ['text'])
block_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the small remainder, we could add padding if the model supported it instead of this drop, you can
# customize this part to your needs.
total_length = (total_length // block_size) * block_size
# Split by chunks of max_len.
result = {
k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
lm_datasets = tokenized_datasets.map(
group_texts,
batched=True,
batch_size=1000
)
print(tokenizer.decode(lm_datasets['train'][2]["input_ids"]))
from transformers import AutoConfig, TFAutoModelForCausalLM
config = AutoConfig.from_pretrained(model_checkpoint)
model = TFAutoModelForCausalLM.from_config(config)
from transformers import TFTrainer, TFTrainingArguments
training_args = TFTrainingArguments(
"test-clm",
evaluation_strategy = "epoch",
learning_rate=2e-5)
trainer = TFTrainer(
model=model,
args = training_args,
train_dataset=lm_datasets)
trainer.train()
Traceback (most recent call last):
File "<ipython-input-82-01e49a077e43>", line 11, in <module>
trainer.train()
File "C:\Users\john\anaconda3\envs\keras\lib\site-packages\transformers\trainer_tf.py", line 472, in train
train_ds = self.get_train_tfdataset()
File "C:\Users\john\anaconda3\envs\keras\lib\site-packages\transformers\trainer_tf.py", line 150, in get_train_tfdataset
self.num_train_examples = self.train_dataset.cardinality().numpy()
AttributeError: 'DatasetDict' object has no attribute 'cardinality'
```
What do you think?
Thanks! | 08-30-2021 13:03:38 | 08-30-2021 13:03:38 | summoning the masters @LysandreJik @sgugger @Rocketknight1 π― <|||||>Hey! There are a couple of issues here. The first is that we're trying to move away from TFTrainer towards Keras - there'll be a new version of that notebook coming soon, like I promised!
In the meantime, your approach should work, though. The error you're getting is because `lm_datasets` is actually a `DatasetDict` containing both the train and validation set, so everything downstream gets confused. You probably want to swap out `lm_datasets` for `lm_datasets['train']` in that call to `TFTrainer`. However, like I said, we're trying to deprecate TFTrainer, so I'm trying to avoid doing any more bugfixing for it. I'm working on getting the new examples in ASAP!<|||||>Thanks @Rocketknight1 ! Actually I was getting the same error even when I was using a `dataset` that only contains one set of data. But you are absolutely right: there is no need to fix something that is going to be deprecated soon. Happy to help if you need anything! Thanks!<|||||>The good news is I'm moving to working on those TF notebooks right now, so hopefully I'll have a proper example to show you soon. However, the official launch of the new notebooks might depend on the PR at https://github.com/huggingface/datasets/pull/2731 being accepted and making it to release, since I'm planning to use that new method in a lot of them.
Still, I'll make sure to ping you as soon as I have a LM example ready - just be aware that you might have to install a pre-release version of `datasets` to get it to work!<|||||>got it. happy to try out the beta version of them at my risk and peril ;-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Same question at year 2023 for
https://github.com/huggingface/notebooks/blob/main/examples/language_modeling_from_scratch.ipynb<|||||>Solution:
https://github.com/huggingface/transformers/blob/main/examples/tensorflow/language-modeling/run_clm.py |
transformers | 13,331 | closed | bert:What is the tf version corresponding to tensformers? | I use python3.7, tf2.4.0, cuda11.1 and cudnn 8.0.4 to run bert-base-un and report an error
- albert, bert, xlm: @LysandreJik
- tensorflow: @Rocketkn
| 08-30-2021 11:42:36 | 08-30-2021 11:42:36 | @Rocketknight1 <|||||>Hello! Do you mind providing the error you're seeing? Thank you!<|||||>'''
I:?[35mVENTILATOR?[0m:freeze, optimize and export graph, could take a while...
2021-08-30 20:07:49.360788: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
d:\users\...\pycharmprojects\machine learning i don't study\venv37\lib\site-packages\bert_serving\server\helper.py:176: UserWarning: Tensorflow 2.4.0 is not tested!
It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/
'Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/' % tf.__version__)
E:?[36mGRAPHOPT?[0m:fail to optimize the graph!
Traceback (most recent call last):
File "D:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "D:\Program Files (x86)\Microsoft Visual Studio\Shared\Python37_64\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Users\...\PycharmProjects\machine learning I don't study\venv37\Scripts\bert-serving-start.exe\__main__.py", line 7, in <module>
File "d:\users\...\pycharmprojects\machine learning i don't study\venv37\lib\site-packages\bert_serving\server\cli\__init__.py", line 4, in main
with BertServer(get_run_args()) as server:
File "d:\users\...\pycharmprojects\machine learning i don't study\venv37\lib\site-packages\bert_serving\server\__init__.py", line 71, in __init__
self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))
TypeError: cannot unpack non-iterable NoneType object
'''<|||||>Hi, it seems like you're using [bert-as-service](https://github.com/hanxiao/bert-as-service) with an unsupported version of Tensorflow. That isn't a Huggingface project, so we can't really support it here, unfortunately! Try filing an issue at that repo instead!
If you're interested in learning to use the Transformers library, you can check out our [documentation](https://huggingface.co/transformers/quicktour.html), our [course](https://huggingface.co/course/chapter1) or our [example code](https://github.com/huggingface/transformers/tree/master/examples), but we can't really answer questions on any of that in GitHub issues - try the [forums](https://discuss.huggingface.co/) instead!
I'm going to close this issue, but if you believe there's an actual problem in the Transformers library, separate from `bert-as-service`, please feel free to add more info and re-open it. |
transformers | 13,330 | closed | model(**batch) returns loss dictionary that cant be divided by gradient_accu |
Running summarization example with T5 small with the following command produces a dict divided by int error.
to reproduce run:
```
export TASK_NAME=mrpc
accelerate launch run_summarization_no_trainer.py \
--model_name_or_path t5-small \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir ~/tmp/tst-summarization
```
https://github.com/huggingface/transformers/blob/8be921f9de012d6f82f1cf4b2dcd4bdf2262071b/examples/pytorch/summarization/run_summarization_no_trainer.py#L521
```
----> 1 loss = loss / args.gradient_accumulation_steps
TypeError: unsupported operand type(s) for /: 'dict' and 'int'
```
**transformer version==4.9.2**
**GPU: Tesla V100 x 2**
**accelerate config:**
```
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
distributed_type: MULTI_GPU
fp16: true
machine_rank: 0
main_process_ip: null
main_process_port: null
main_training_function: main
num_machines: 1
num_processes: 1
```
should it be?:
```
outputs = model(**batch)
loss = outputs.loss['loss']
```
happy to do the PR, if confirmed. | 08-30-2021 10:14:56 | 08-30-2021 10:14:56 | when using a single GPU the script works.
ie, outputs.loss returns a shape (1,) that can be divided by int.
what is the best way to use `outputs.loss` for multiple GPU settings? <|||||> This PR https://github.com/huggingface/accelerate/pull/149 will help you. |
transformers | 13,329 | closed | GPT-J-6B in run_clm.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10.0.dev0
- Platform: Linux-4.19.0-10-cloud-amd64-x86_64-with-debian-10.5
- Python version: 3.7.8
- PyTorch version (GPU?): 1.7.1+cu110 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
- text generation: @patrickvonplaten
- trainer: @sgugger
- pipelines: @LysandreJik
-->
## Information
The model I am using is GPT-J from the Hugging Face Hub. There is a KeyError with this model; the error is listed below:
```
Traceback (most recent call last):
File "run_clm.py", line 522, in <module>
main()
File "run_clm.py", line 320, in main
config = AutoConfig.from_pretrained(model_args.model_name_or_path, **config_kwargs)
File "/opt/conda/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 514, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "/opt/conda/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py", line 263, in __getitem__
raise KeyError(key)
KeyError: 'gptj'
```
| 08-30-2021 10:03:43 | 08-30-2021 10:03:43 | Hello @MantasLukauskas, GPT-J is not yet merged into `transformers`, see https://github.com/huggingface/transformers/pull/13022<|||||>@LysandreJik Is there any way to do a workaround for fine-tuning it because as I see merge could take some time<|||||>You could checkout the PR directly and try fine-tuning it with the GPT-J code!
```
git remote add StellaAthena https://github.com/StellaAthena/transformers
git fetch StellaAthena
git checkout -b gptj StellaAthena/master
```<|||||>You can also install directly from my fork with
`pip install -e git+https://github.com/StellaAthena/transformers#egg=transformers`<|||||>@StellaAthena I am trying to fine-tune GPT-J from your branch. But neither Tesla A100 with 40GB GPU RAM (Google Cloud) nor TPU v3-8 allow for this. OOM error in both cases.
I am setting batch_size = 1, gradient_checkpointing, trying different block_sizes `1024`, `512`, `256`. There is OOM error in all cases.
Is it possible to fine-tune it on such devices?<|||||>@dimaischenko Do you use run_clm.py for fine-tune or do that in another way?<|||||>@MantasLukauskas Yes, by run_clm.py<|||||>@dimaischenko I got error "RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 4027105280 bytes. Error code 12 (Cannot allocate memory)" do you had the same?
100 GB RAM + DeepSpeed Zero 3 + T4 15 GB<|||||>> @dimaischenko I got error "RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate [4027105280](tel:4027105280) bytes. Error code 12 (Cannot allocate memory)" do you had the same?
>
> 100 GB RAM + DeepSpeed Zero 3 + T4 15 GB
4,027,105,280 <<< 100 GB, so itβs hard to see how thatβs the issue, unless you have something else running. Can you print out the amount of free memory during the loading process?<|||||>Thanks @StellaAthena and @EricHallahan for all your work on the #13022 GPT-J fork!!
Over the past few days I've been playing around with the current state of the fork and I am running into the same OOM issues that are referenced here by @dimaischenko.
Here is some information from my end in case it is helpful debugging what is happening (I'd be happy to put this in a separate issue if that is desired).
_System:_ I am running everything on a compute cluster (i.e., not g-colab) with ~384GB of ram and 8x RTX 6000 GPUs with 24gb vram each. I am using the fork by @StellaAthena and a fresh conda environment with Python 3.9.
My observations:
1. I can't load the float32 model onto my RTX 6000 without running into an OOM error. With `model.half().cuda() ` and/or `torch_dtype=torch.float16` when loading the model it does work. As far as I understand, I should be able to load the float32 model with an RTX 6000 24GB? Given that I can't load the float32 model it might be that my OOM errors are caused by the issue brought up by @oborchers even why trying to use fp16.
2. Irrespective of my training parameters (e.g., everything set to minimum) my training always triggers an OOM error when using `trainer` or the `run_clm.py`script.
For example, these are my parameters:
```batch
python run_clm.py \
--model_name_or_path EleutherAI/gpt-j-6B \
--model_revision float32 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /mmfs1/gscratch/tdekok/test-clm-j \
--overwrite_output_dir true \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 1 \
--fp16 true \
--fp16_opt_level O1
```
This results in about ~55G RAM usage and I can see in `nvidia-smi` that it fills up my GPU vram beyond the available 23761MiB.
3. I noticed when using `gpt2` instead of `gpt-j-6b` that the memory usage on gpu:0 is substantially higher relative to the rest. I wonder whether this might be part of the issue:

<|||||>> This results in about ~55G RAM usage and I can see in `nvidia-smi` that it fills up my GPU vram beyond the available 23761MiB.
I think it doesn't matter if you are using 8 GPU or 1 GPU cause `batch_size=1`. So at least one sample fits on one video card. I am trying the same params on A100 card with `40Gb` gpu vram and OOM still exists. So I think that your RTX6000 card with 24 gb vram is definitely not enough for fine-tuning.
But let's wait for the answer from the creators of the model<|||||>@dimaischenko yes I have the same problem, I tried to use DeepSpeed Zero3 optimizer for this one but even with batch_size=1 and model_revision = float16 I am out of memory. Interesting that with gpt2-xl I have the same problem but I saw a lot of people fine-tuning this model with T4 + Deepspeed :( <|||||>@dimaischenko I tested a lot of parameters and found that with --block_size 512 I can fine-tune GPT-J model. RAM Usage 100 GB, GPU usage 12 GB (Nvidia T4 total 16 GB), DeepSpeed Zero3 optimizer <|||||>@MantasLukauskas Sounds interesting. Maybe it's the optimizer. And what option is it enabled by, or do you need to modify the run_clm.py code?<|||||>@dimaischenko DeepSpeed in library implemented into huggingface (https://github.com/microsoft/DeepSpeed) and you do not need to modify run_clm.py code you fine-tune model like that:
deepspeed --num_gpus 1 run_clm.py --model_name_or_path EleutherAI/gpt-j-6B --num_train_epochs 10 --model_revision float16 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --train_file train.txt --validation_file test.txt --do_train --do_eval --output_dir GPTJ --save_steps 1000 --logging_steps 100 --logging_dir GPTJ/runs --fp16 --deepspeed zero3ws.json --overwrite_output_dir
My deepspeed config file can be found here: https://github.com/MantasLukauskas/DeepSpeed-Test/blob/main/zero3.json
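For readers who just want the gist: the parts of a ZeRO-3 config that enable this on a small GPU are roughly the following (a minimal sketch with CPU offload, not the exact file linked above):
```json
{
  "fp16": { "enabled": true },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": { "device": "cpu" },
    "offload_param": { "device": "cpu" }
  },
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "gradient_accumulation_steps": "auto"
}
```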
<|||||>@MantasLukauskas thanks! I'll try today and write about the results.<|||||>@dimaischenko
> @StellaAthena I am trying to fine-tune GPT-J from your branch. But neither Tesla A100 with 40GB GPU RAM (Google Cloud) nor TPU v3-8 allow for this. OOM error in both cases.
If you are working on TPUs, I strongly recommend using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) library which was written for the purpose of producing GPT-J models. The version on HuggingFace is a PyTorch port of the original Jax code which has been used by many people on TPUs.
I'm not sure why you're having trouble with A100s though, as I have run the code on A100s before. Can you provide further details about how you're running the model? Is it loading the model or the act of fine-tuning that OOMs?<|||||>@StellaAthena Thanks! I'll try `mesh-transformer-jax`. It's just that I already have a reliable fine-tuning pipeline for HuggingFace.
About OOM. I'll repeat my attempts today and will write logs. But the exact loading of the model was successful. And even performed validation with perplexity calculation on validation samples. But when it tried Β«to eatΒ» the first sample in training, OOM would crash.<|||||>> @StellaAthena Thanks! I'll try `mesh-transformer-jax`. It's just that I already have a reliable fine-tuning pipeline for HuggingFace.
>
> About OOM. I'll repeat my attempts today and will write logs. But the exact loading of the model was successful. And even performed validation with perplexity calculation on validation samples. But when it tried Β«to eatΒ» the first sample in training, OOM would crash.
This is really weird, given that you've said the batch size is set to 1. How much memory is allocated before you feed the first datum into the model? Does a different architecture that takes up the same amount of memory also fail?<|||||>@StellaAthena I tried again running run_clm.py from the latest branch on single GPU A100 (40Gb)
```
python run_clm_orig.py \
--model_type gptj \
--model_name_or_path EleutherAI/gpt-j-6B \
--model_revision float16 \
--do_train \
--do_eval \
--train_file ./data/train.txt \
--validation_file ./data/val.txt \
--evaluation_strategy steps \
--logging_step 300 \
--learning_rate 0.00002 \
--save_steps 1500 \
--fp16 \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 1 \
--num_train_epochs 1 \
--block_size 1024 \
--save_total_limit 1 \
--overwrite_output_dir \
--output_dir ./out/test_gptj_orig
```
and got OOM error
```
[INFO|trainer.py:414] 2021-09-01 11:39:10,987 >> Using amp fp16 backend
[INFO|trainer.py:1168] 2021-09-01 11:39:10,997 >> ***** Running training *****
[INFO|trainer.py:1169] 2021-09-01 11:39:10,997 >> Num examples = 6011
[INFO|trainer.py:1170] 2021-09-01 11:39:10,997 >> Num Epochs = 1
[INFO|trainer.py:1171] 2021-09-01 11:39:10,997 >> Instantaneous batch size per device = 1
[INFO|trainer.py:1172] 2021-09-01 11:39:10,997 >> Total train batch size (w. parallel, distributed & accumulation) = 1
[INFO|trainer.py:1173] 2021-09-01 11:39:10,997 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1174] 2021-09-01 11:39:10,997 >> Total optimization steps = 6011
0%| | 0/6011 [00:00<?, ?it/s$
Traceback (most recent call last):
File "run_clm_orig.py", line 522, in <module>
main()
File "run_clm_orig.py", line 472, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1284, in train
tr_loss += self.training_step(model, inputs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1787, in training_step
loss = self.compute_loss(model, inputs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/trainer.py", line 1821, in compute_loss
outputs = model(**inputs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py", line 780, in forward
return_dict=return_dict,
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py", line 631, in forward
output_attentions=output_attentions,
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py", line 286, in forward
feed_forward_hidden_states = self.mlp(hidden_states)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/transformers/models/gptj/modeling_gptj.py", line 249, in forward
hidden_states = self.fc_in(hidden_states)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/modules/linear.py", line 96, in forward
return F.linear(input, self.weight, self.bias)
File "/mnt/disk/projects/gpt/venv/lib/python3.6/site-packages/torch/nn/functional.py", line 1847, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 32.00 MiB (GPU 0; 39.59 GiB total capacity; 37.49 GiB already allocated; 19.19 MiB free; 37.73 GiB reserved in total by PyTorch)
0%| | 0/6011 [00:00<?, ?it/s]
```
Today will switch to `mesh-transformer-jax` and try to fine-tune on TPU v3-8 and then convert checkpoint to HuggingFace format.<|||||>You are trying to use the Adam optimizer with a model of 24Gb. With Adam, you have four copies of your model: model, gradients, and in the optimizer state the gradients averaged and square averaged. Even with fp16, all of that is still stored in FP32 because of **mixed** precision training (the optimzier update is in full precision). So unless you use DeepSpeed to offload the optimizer state and the gradient copy in FP32, you won't be able to fit this 4 x 24GB on your 80GB card.<|||||>@sgugger Thanks for clarification! I configured DeepSpeed and everything started up on the A100 GPU. However, now I need 80Gb cpu RAM, but this is solvable π <|||||>There is also the NVME offload if CPU RAM becomes a problem :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@sgugger - is there any docs on how to do this - you can point to ? I got rtx 3090 - and hitting the KeyError: 'gptj'
(The error is really obscure. it should really have some thing easier to understand.)
I've got 32gb of RAM - @dimaischenko - did bumping to 80gb fix things? <|||||>@johndpope what is your `transformers` version? It looks like it is outdated and does not have the GPT-J model available.<|||||>@LysandreJik I agree with you. I think that's the problem. @johndpope Yes 80gb ram was enough. To be honest, I don't remember the details anymore, but it seems that it took even less with `DeepSpeed`.<|||||>had trouble with ram - but found this / installing now / supposedly fits 17 / 15gb in VRAM + uses fastapi - https://news.ycombinator.com/item?id=27731266
(it uses tensorflow / but keeps memory footprint lower )
https://gist.githubusercontent.com/kinoc/f3225092092e07b843e3a2798f7b3986/raw/fc0dbe522d09d3797dd2a64e7182003f7d9a7fa8/jserv.py |