repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 6,402 | closed | remove lr_scheduler redundancy | This PR solves https://github.com/huggingface/transformers/issues/6374
by removing a hardcoded `lr_scheduler` and switching to the new method. | 08-10-2020 21:59:38 | 08-10-2020 21:59:38 | @sshleifer has a better version in the works, closing. |
transformers | 6,401 | closed | [TF Longformer] Add Multiple Choice, Seq Classification Model | # 🚀 Feature request
`modeling_longformer.py` has the classes `LongformerForSequenceClassification`, `LongformerForMultipleChoice` and `LongformerForTokenClassification` which are not present in `modeling_tf_longformer.py` at the moment.
Those classes should be equally added to `modeling_tf_longformer.py`.
## Motivation
The pretrained weights for TFLongformer are available so that these classes could be used for finetuning.
## Your contribution
This issue is a good first issue because it is not too complicated to add these models. One should take a look at `modeling_tf_roberta.py` to see how these models are implemented for `TFRoberta` and implement them analogously for `TFLongformer`. Please make sure that the docstrings are correct and that tests are added for each class (again, Roberta can serve as an example here; check out `test_modeling_tf_roberta.py`).
I am happy to guide interested community contributors through the PR and help them get it merged.
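To give a rough idea, a sequence classification head could look something like the sketch below. This is only a sketch: the layer and base-class names are assumptions based on how `TFRobertaForSequenceClassification` is structured, not a final API.
```python
import tensorflow as tf
from transformers.modeling_tf_longformer import TFLongformerMainLayer  # assumed name
from transformers.modeling_tf_utils import TFPreTrainedModel

class TFLongformerForSequenceClassification(TFPreTrainedModel):
    def __init__(self, config, *inputs, **kwargs):
        super().__init__(config, *inputs, **kwargs)
        self.longformer = TFLongformerMainLayer(config, name="longformer")
        self.classifier = tf.keras.layers.Dense(config.num_labels, name="classifier")

    def call(self, inputs, **kwargs):
        sequence_output = self.longformer(inputs, **kwargs)[0]
        # Roberta-style heads classify from the first (<s>) token representation
        return self.classifier(sequence_output[:, 0])
```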
| 08-10-2020 21:24:24 | 08-10-2020 21:24:24 | Hi !
I'd like to help and work on this if that's ok.<|||||>Awesome, feel free to open a PR :-) <|||||>Hello !
I'm a bit lost here. I've looked at `modeling_tf_roberta.py` and `modeling_longformer.py` to create the class `TFLongformerForSequenceClassification`. I'm not sure if I am going in the right direction here and same goes for the tests.
I used `python -m pytest -n auto --dist=loadfile -s -v ./tests/test_modeling_tf_roberta.py` to get an idea of what I should do for testing, but it seems the test for `TFRobertaForSequenceClassification` is skipped, and my test on the class I created (which is basically just a copy/paste of the Roberta test) is skipped too.
Here is a link to what I've done so far: https://github.com/Groskilled/transformers/commit/461ee6279433f94868332b1abbfe7875e19f243a
Am I on the right track ? And what am I missing on the tests ?
Sorry to ask such simple questions, it's my first time participating in an open source project.<|||||>No worries ;-). This looks alright! Could you open a PR so that we can see your changes directly on the PR? You can check out this doc to understand how to do PRs: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md. It would be great if you can ping me on the PR and then we can look at it together!<|||||>Hi @Groskilled and @patrickvonplaten, I have been playing around a bit with this issue, as I have some familiarity with Keras/TF2 but no previous experience with transformers, and I was figuring out a way to start familiarising myself with them. As I am interested in classifying long documents, Longformer is of interest to me.
I have a draft of my current changes [here](https://github.com/huggingface/transformers/compare/master...Zigur:tf-lonformer-good-first-release). The test suite seems to pass (using Python 3.7.5; the tests did not pass on Python 3.8.2 on my Mac machine), but I would need extensive feedback as I have mostly lifted code from `modeling_tf_roberta.py` and its testing counterpart.
If it is of interest, I can open a pull request with all the details, or @Groskilled you can feel free to cherry-pick part of it if it's useful for your own pull request (as you were working on this earlier on, apologies for the intrusion)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Issue is still open! If stuck, feel free to take a look at the unfinished PR. |
transformers | 6,400 | closed | ZeroDivisionError with Reformer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-5.4.0-42-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Information
Model I am using (Bert, XLNet ...): **Reformer**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below) (well, actually, not my own, but @patrickvonplaten 's)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Execute @patrickvonplaten 's notebook available at https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Reformer_For_Masked_LM.ipynb
2. I've tried to run it on Google Colab and it works fine. The problem appears when I try to run it on my machine.
3. I've tried it with two different clean virtual environments (python 3.6 and 3.7), but they've both failed.
4. I haven't changed the dataset, nor any model config/training args.
5. After calling `trainer.train()` I get the following error
```
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-13-02431faf649a> in <module>
8
9 # train
---> 10 trainer.train()
/data/venv36/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path)
394 t_total = self.args.max_steps
395 num_train_epochs = (
--> 396 self.args.max_steps // (len(train_dataloader) // self.args.gradient_accumulation_steps) + 1
397 )
398 else:
ZeroDivisionError: integer division or modulo by zero
```
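For reference, the failing line boils down to the arithmetic below (a minimal sketch with hypothetical values, showing how the inner division can reach zero):
```python
# Minimal sketch of trainer.py line 396, with hypothetical values:
# a dataloader with fewer batches than gradient_accumulation_steps
max_steps = 500
len_train_dataloader = 4           # e.g. small dataset / large batch size
gradient_accumulation_steps = 8    # larger than the number of batches

steps_per_epoch = len_train_dataloader // gradient_accumulation_steps  # == 0
num_train_epochs = max_steps // steps_per_epoch + 1  # ZeroDivisionError
```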
## Expected behavior
The model should begin to train
Thanks! | 08-10-2020 20:47:48 | 08-10-2020 20:47:48 | Hey @eliasjacob, the problem is probably that `self.args.gradient_accumulation_steps` is set to a value greater than `len(train_dataloader)`<|||||>You were right (although I am unsure why the same notebook yielded different results in Colab). Thank you very much! |
transformers | 6,399 | closed | DPR retriever module | I see that https://github.com/huggingface/transformers/pull/5279 describes the DPR flow.
Just checking to see when the retriever module will be available.
Many thanks for making DPR available ! | 08-10-2020 20:12:24 | 08-10-2020 20:12:24 | Pinging @lhoestq!<|||||>Hi @mchari
The retriever is now part of the `nlp` library.
You can install it with
```
pip install nlp
```
and load the retriever:
```python
from nlp import load_dataset
wiki = load_dataset("wiki_dpr", with_embeddings=False, with_index=True, split="train")
```
The retriever is basically a dense index over wikipedia passages.
To query it using the DPR question encoder you can do:
```python
from transformers import DPRQuestionEncoderTokenizer, DPRQuestionEncoder
question_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
question_encoder = DPRQuestionEncoder.from_pretrained('facebook/dpr-question_encoder-single-nq-base')
question = "What is love ?"
question_emb = question_encoder(**question_tokenizer(question, return_tensors="pt"))[0].detach().numpy()
passages_scores, passages = wiki.get_nearest_examples("embeddings", question_emb, k=20) # get k nearest neighbors
```
Shall we make a blog post or something to show how to use it with `transformers` @thomwolf ?
EDIT: `nlp` is now renamed to `datasets`<|||||>Hi @lhoestq ,
Which metric does the `FaissIndex` use to compute vector similarity? (i.e. how `passages_scores` values are computed?)
It uses the _inner product_ (as described in DPR paper) or something else?
Thank you<|||||>It uses inner product.
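Concretely, each passage score is just the dot product between the question embedding and that passage's embedding. A minimal numpy sketch (random vectors stand in for real DPR outputs):
```python
import numpy as np

question_emb = np.random.randn(1, 768)   # (1, hidden_size), e.g. from the question encoder
passage_embs = np.random.randn(5, 768)   # (num_passages, hidden_size)

# Inner-product similarity: a higher score means a more relevant passage
scores = question_emb @ passage_embs.T   # shape (1, num_passages)
top_3 = np.argsort(-scores[0])[:3]       # indices of the 3 highest-scoring passages
```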
You can see the code that creates the index here https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/datasets/wiki_dpr/wiki_dpr.py#L171<|||||>Thanks for the retriever functionality ! Not sure how it works if I want to use it on my own documents.
<|||||>@lhoestq , any guidance for fine tuning the retriever module on another set of documents ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,398 | closed | Data collator with padding | Add a data collator to dynamically pad samples during batching. This is necessary for the training set, since padding can't be applied beforehand if we use shuffling (unless we pad to a fixed `max_length`).
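Roughly, the collator boils down to the sketch below (a simplified illustration relying on the tokenizer's `pad` method, not the exact final implementation):
```python
from dataclasses import dataclass
from transformers import PreTrainedTokenizerBase

@dataclass
class DataCollatorWithPadding:
    """Simplified sketch: pad each batch to its longest sample at batching time."""
    tokenizer: PreTrainedTokenizerBase

    def __call__(self, features):
        # `features` is a list of dicts with variable-length "input_ids";
        # padding=True pads only up to the longest sequence in this batch
        return self.tokenizer.pad(features, padding=True, return_tensors="pt")
```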
This should make it more straightforward to plug nlp into the Trainer. | 08-10-2020 19:47:54 | 08-10-2020 19:47:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=h1) Report
> Merging [#6398](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3425936643b157bda181af169b371dcf0a3ad3eb&el=desc) will **increase** coverage by `0.18%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6398 +/- ##
==========================================
+ Coverage 79.55% 79.73% +0.18%
==========================================
Files 148 148
Lines 27206 27226 +20
==========================================
+ Hits 21644 21710 +66
+ Misses 5562 5516 -46
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.24% <ø> (ø)` | |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `89.05% <50.00%> (-7.54%)` | :arrow_down: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.13% <0.00%> (-5.51%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+1.39%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `97.22% <0.00%> (+9.72%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6398/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=footer). Last update [3425936...4bed573](https://codecov.io/gh/huggingface/transformers/pull/6398?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>It's hard to see how to combine it with another data collator, since a data collator's function is to create batches, and you can't create batches if your tensors are not padded to the same size.<|||||>Things I tried to fix here were actually addressed by @thomwolf in #6423, so waiting for this PR to be merged before merging this one.<|||||>Rebase made the PR unreadable. Opening a new clean one. |
transformers | 6,397 | closed | Create README.md | For GPT-2 Arabic Poetry - https://huggingface.co/akhooli/gpt2-small-arabic-poetry | 08-10-2020 19:20:50 | 08-10-2020 19:20:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=h1) Report
> Merging [#6397](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7ea9b2db3732904014b9121fb8a5c896ae00d4cf&el=desc) will **increase** coverage by `0.96%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6397 +/- ##
==========================================
+ Coverage 77.31% 78.27% +0.96%
==========================================
Files 146 146
Lines 26597 26597
==========================================
+ Hits 20563 20820 +257
+ Misses 6034 5777 -257
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6397/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.40% <0.00%> (-34.39%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6397/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6397/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.75% <0.00%> (+73.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=footer). Last update [7ea9b2d...141f941](https://codecov.io/gh/huggingface/transformers/pull/6397?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>[Thanks for sharing](https://huggingface.co/akhooli/gpt2-small-arabic-poetry)
If you'd like, you could submit some simple inputs for Arabic to https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts – let me know if you need any help |
transformers | 6,396 | closed | switch Hindi-BERT to S3 README | The Markdown parser currently cuts off the Colab URL (the last char is an underscore) on https://huggingface.co/monsoon-nlp/hindi-bert
There are some other necessary updates, and I'd rather update this model card by pushing to S3 in the future | 08-10-2020 17:51:39 | 08-10-2020 17:51:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=h1) Report
> Merging [#6396](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0fe3cf5c1059c04535de8f04f4efed7251adbe&el=desc) will **increase** coverage by `0.11%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6396 +/- ##
==========================================
+ Coverage 79.40% 79.51% +0.11%
==========================================
Files 148 148
Lines 27200 27200
==========================================
+ Hits 21598 21628 +30
+ Misses 5602 5572 -30
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+7.51%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=footer). Last update [06bc347...106d0f3](https://codecov.io/gh/huggingface/transformers/pull/6396?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@mapmeld we have a better version control/preview system coming in the future. In the meantime, merging this |
transformers | 6,395 | closed | Bug in the question answering pipeline | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## Information
The bug has appeared since transformers 3.0.1, but not before.
Model I am using: distilbert-base-cased-distilled-squad
The problem arises when using:
* [ ] my own modified scripts:
```
from transformers import pipeline
model = "distilbert-base-cased-distilled-squad"
qa_pipeline = pipeline(
"question-answering",
model=model,
tokenizer=model,
)
instance = {
"question": "what is your product?",
"context": " is an amazing new platform that help businesses of students from BarIlan University that are enthusiastic about conversational AI. The difference between our Sprybot platform and other chat bots is that constructing chat bot is a long and hard process and with Sprybot you can do it quickly and eaily. You can construct chatbot using our platform just by feeding textual description of you business that contain any details important for costumers. The time it takes to create a bot using our platform is the time takes you to describe your business. In order to create Sprybot we used natural language processing and state of the art deep learning artificial intelligence. At the moment you cant buy our product because its still under construction. Sprybot can answer questions about your business but it can not talk about anything else other than the information was fed to it."
}
qa_pipeline(instance)
```
Note: small changes in the context text can make the bug not show up.
## To reproduce
Steps to reproduce the behavior:
1. [fully reproduced on google colab](https://colab.research.google.com/drive/1YqamXA6qq8xxWXhq6VqEA9clHsEVW7sh?usp=sharing)
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-5-a5f26c48556d> in <module>()
4 }
5
----> 6 qa_pipeline(instance)
1 frames
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in __call__(self, *args, **kwargs)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
/usr/local/lib/python3.6/dist-packages/transformers/pipelines.py in <listcomp>(.0)
1314 ),
1315 }
-> 1316 for s, e, score in zip(starts, ends, scores)
1317 ]
1318
KeyError: 0
```
## Expected behavior
Get the QA pipeline output with no errors
| 08-10-2020 16:45:57 | 08-10-2020 16:45:57 | Hi! This bug was patched on `master`. Can you install from source and let me know if this fixes your issue?
`pip install git+https://github.com/huggingface/transformers`<|||||>This fixed it! Thank you! |
transformers | 6,394 | closed | Error while loading albert for token classification | ## Environment info
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
@jplu
## Information
Model I am using albert-base-v2 or albert-base-v1:
The task I am working on is:
token classification using albert-base-v2 or v1
## To reproduce
```
>>> from transformers import AlbertTokenizer, TFAlbertForTokenClassification
>>> import tensorflow as tf
>>> tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2', cache_dir = 'cache')
>>> model = TFAlbertForTokenClassification.from_pretrained('albert-base-v2', cache_dir = 'cache')
```
When I run the above script, I get the following error:
```
Traceback (most recent call last):
File "C:\Users\703235761\AppData\Local\Continuum\anaconda3\envs\slot\lib\site-packages\transformers\modeling_tf_utils.py", line 581, in from_pretrained
raise EnvironmentError
OSError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "model_utils.py", line 7, in <module>
model = TFAlbertForTokenClassification.from_pretrained('albert-base-v1', cache_dir = 'cache')
File "C:\Users\703235761\AppData\Local\Continuum\anaconda3\envs\slot\lib\site-packages\transformers\modeling_tf_utils.py", line 588, in from_pretrained
raise EnvironmentError(msg)
OSError: Can't load weights for 'albert-base-v1'. Make sure that:
- 'albert-base-v1' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'albert-base-v1' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```
## Expected behavior
I think this model was supposed to work with TFAlbertModel as well.
Thanks in advance! :-)
| 08-10-2020 16:17:12 | 08-10-2020 16:17:12 | Quick update:
The above code works just fine in an Ubuntu environment with the specs below:
- `transformers` version: 3.0.2
- Platform: Linux-5.3.0-1034-azure-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: yes (Tesla P40)
- Using distributed or parallel set-up in script?: No
I think this issue only occurs in Windows.
<|||||>Hi! This may be related to a network error, can you download other models on your Windows machine?<|||||>set REQUESTS_CA_BUNDLE env var to ca-certificates.crt
In my case I am using Ubuntu, so running the following command in the terminal solves the issue:
` export REQUESTS_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,393 | closed | Add missing docker arg for TPU CI. | Fixes `"docker build" requires exactly 1 argument.` for the path where `$CIRCLE_PR_NUMBER` is unset. | 08-10-2020 16:12:52 | 08-10-2020 16:12:52 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=h1) Report
> Merging [#6393](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3e0fe3cf5c1059c04535de8f04f4efed7251adbe&el=desc) will **decrease** coverage by `0.23%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6393 +/- ##
==========================================
- Coverage 79.40% 79.16% -0.24%
==========================================
Files 148 148
Lines 27200 27200
==========================================
- Hits 21598 21533 -65
- Misses 5602 5667 +65
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-11.37%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-5.17%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+7.26%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6393/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=footer). Last update [06bc347...3d1b87b](https://codecov.io/gh/huggingface/transformers/pull/6393?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,392 | closed | seq2seq examples require pytest | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.29
- Python version: 3.8.2
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: n/a
- Using distributed or parallel set-up in script?: n/a
### Who can help
examples/seq2seq: @sshleifer
documentation: @sgugger
## To reproduce
Steps to reproduce the behavior:
1. Create a new virtual environment and set it up to run the examples tests. Do _not_ install `pytest` and `pytest-xdist`.
2. Run the tests with `unittest` as [described in the docs](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#tests)
## Expected behavior
The examples tests pass. Actual behavior:
```sh
======================================================================
ERROR: seq2seq.test_bash_script (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: seq2seq.test_bash_script
Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/home/dmlap/projects/transformers/examples/seq2seq/test_bash_script.py", line 8, in <module>
import pytest
ModuleNotFoundError: No module named 'pytest'
======================================================================
ERROR: seq2seq.test_seq2seq_examples (unittest.loader._FailedTest)
----------------------------------------------------------------------
ImportError: Failed to import test module: seq2seq.test_seq2seq_examples
Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 436, in _find_test_path
module = self._get_module_from_name(name)
File "/usr/lib/python3.8/unittest/loader.py", line 377, in _get_module_from_name
__import__(name)
File "/home/dmlap/projects/transformers/examples/seq2seq/test_seq2seq_examples.py", line 10, in <module>
import pytest
ModuleNotFoundError: No module named 'pytest'
----------------------------------------------------------------------
Ran 16 tests in 179.454s
FAILED (errors=2)
```
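In the meantime, installing the missing packages directly works around the failures (package names inferred from the import errors): `pip install pytest pytest-xdist`.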
Perhaps the documentation should be updated to require `pytest`? | 08-10-2020 15:32:16 | 08-10-2020 15:32:16 | yes, great catch. Will update! |
transformers | 6,391 | closed | Fix links for open in colab | This commit was supposed to be in #6389 but I didn't push hard enough. | 08-10-2020 15:15:18 | 08-10-2020 15:15:18 | |
transformers | 6,390 | closed | Warn if debug requested without TPU fixes (#6308) | Check whether a PyTorch compatible TPU is available before attempting to print TPU metrics after training has completed. This way, users who apply `--debug` without reading the documentation aren't surprised by a stacktrace. | 08-10-2020 15:07:42 | 08-10-2020 15:07:42 | The CircleCI failures look like a pre-existing line length violation in `trainer.py` and a checksum mismatch downloading transformers itself for `run_tests_torch`. I don't believe either is related to my change – I was able to run the examples test suite locally and everything passed. I'd be happy to fix the line length issue, if that helps. I think it would take me a while to figure out what's going on with the checksum mismatch.<|||||>Hi! There was an issue with the style in your PR, I pushed the fix. Will merge once all the tests are green!<|||||>Thanks for your contribution :)<|||||>No problem! Thanks for the style-fixup, @LysandreJik. |
transformers | 6,389 | closed | Colab button | This PR adds an "open in colab" button on the tutorials of our documentation. For each of those tutorials, three notebooks are available: a mixed version (with both PyTorch and TensorFlow cells), PyTorch-only and TensorFlow-only, so hovering on the button makes a dropdown appear with the three different links.
Those notebooks are generated automatically from the docs rst files and the script in the [notebooks repo](https://github.com/huggingface/notebooks/blob/master/utils/convert_doc_to_notebooks.py). | 08-10-2020 15:00:11 | 08-10-2020 15:00:11 | |
transformers | 6,388 | closed | [T5 3B Covid 19] Adapt T5 TF conversion script to handle covid-19 3b t5 | This PR shows which changes were necessary to convert the 3B (and 11B) T5 model from this branch: https://github.com/huggingface/transformers/tree/adapt_t5_for_covid_19_3b to PyTorch.
It might be possible that the official T5 library has changed, in which case this code might be useful again.
For now, this PR stays a draft though, but can be cleaned and merged if more T5 Conversion issues arise.
Pinging @sshleifer @thomwolf for notification. | 08-10-2020 14:22:38 | 08-10-2020 14:22:38 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,387 | closed | Fix docs and bad word tokens generation_utils.py | This PR fixes two issues:
1.
The code at
https://github.com/huggingface/transformers/blob/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb/src/transformers/generation_utils.py#L224-L228
throws an exception
`AssertionError: Greedy decoding will always produce the same output for num_beams == 1 and num_return_sequences > 1. Please set num_return_sequences = 1`
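A minimal repro sketch of this first issue (the model choice here is arbitrary):
```python
# Sketch: greedy decoding (do_sample=False) with num_return_sequences > 1
# trips the assertion quoted above
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
input_ids = tokenizer("Hello", return_tensors="pt").input_ids
model.generate(input_ids, num_beams=1, num_return_sequences=2)  # AssertionError
```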
2.
The code
```python
from transformers.generation_utils import calc_banned_bad_words_ids
prev_input_ids = torch.tensor([[1, 2, 3, 4, 5]])
bad_words_ids = [[4, 5, 9]]
banned_tokens = calc_banned_bad_words_ids(prev_input_ids, bad_words_ids)
print(banned_tokens)
```
outputs `[[]]`, but we expect it to output `[[9]]`.
| 08-10-2020 14:04:04 | 08-10-2020 14:04:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=h1) Report
> Merging [#6387](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/155288f04ba9a5d0a0e4d5be4f6d4e808ad8cfff&el=desc) will **increase** coverage by `0.12%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6387 +/- ##
==========================================
+ Coverage 79.94% 80.07% +0.12%
==========================================
Files 153 153
Lines 27902 27902
==========================================
+ Hits 22307 22343 +36
+ Misses 5595 5559 -36
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <100.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6387/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `97.41% <0.00%> (+32.94%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=footer). Last update [155288f...f9ff044](https://codecov.io/gh/huggingface/transformers/pull/6387?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sshleifer
1. As suggested, I fixed docstrings.
2. `RUN_SLOW=1 pytest tests/test_modeling_bart.py` outputs `collected 47 items: 45 passed, 2 skipped, 15 warnings`
`RUN_SLOW=1 pytest tests/test_modeling_marian.py` outputs `collected 15 items: 15 passed, 111 warnings`
`RUN_SLOW=1 pytest tests/test_modeling_t5.py` outputs `collected 35 items: 33 passed, 2 skipped, 198 warnings`
`RUN_SLOW=1 pytest tests/test_modeling_mbart.py` outputs `collected 6 items: 1 failed, 3 passed, 2 skipped, 105 warnings`
For the failed test, the detailed output is as follows:
```
___________________________________ MBartEnroIntegrationTest.test_enro_generate ___________________________________
self = <tests.test_modeling_mbart.MBartEnroIntegrationTest testMethod=test_enro_generate>
@slow
def test_enro_generate(self):
batch: BatchEncoding = self.tokenizer.prepare_seq2seq_batch(self.src_text).to(torch_device)
translated_tokens = self.model.generate(**batch)
decoded = self.tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
self.assertEqual(self.tgt_text[0], decoded[0])
> self.assertEqual(self.tgt_text[1], decoded[1])
E AssertionError: 'Secr[223 chars]înrăutăţească violenţa şi mizeria pentru milioane de oameni.' != 'Secr[223 chars]înrăutăţească violenţele şi mizeria pentru milioane de oameni.'
E Diff is 1089 characters long. Set self.maxDiff to None to see it.
tests\test_modeling_mbart.py:89: AssertionError
```
Even if I restore the modified code, the test still fails. So the failed test has nothing to do with the code I modified.<|||||>Does the failure also happen on your machine on master? Otherwise it does seem like your code causes the failure. That translation is created by the generate function.<|||||>@sshleifer
The failure also happens on the master branch, and the master branch has been updated to the latest.
My machine is Win10 64-bit and the test environment is Python 3.8.3, pytest-6.0.1, py-1.9.0, pluggy-0.13.1, pytorch-1.6.0.
On the other hand, I debugged the failed test code, as shown below:
```python
from transformers import (
AutoModelForSeq2SeqLM,
BartConfig,
BartForConditionalGeneration,
BatchEncoding,
AutoTokenizer,
)
src_text = [
" UN Chief Says There Is No Military Solution in Syria",
""" Secretary-General Ban Ki-moon says his response to Russia's steppedupmilitary support for Syria is that "there is no military solution"to thenearly five-year conflict and more weapons will only worsen theviolenceand misery for millions of people.""",
]
tgt_text = [
"Şeful ONU declară că nu există o soluţie militară în Siria",
'Secretarul General Ban Ki-moon declară că răspunsul său la intensificarea sprijinului militar al Rusiei pentru Siria este că "nu există o soluţie militară" la conflictul de aproape cinci ani şi că noi arme nu vor face decât să înrăutăţească violenţa şi mizeria pentru milioane de oameni.',
]
tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mbart-large-en-ro")
batch: BatchEncoding = tokenizer.prepare_seq2seq_batch(src_text)
translated_tokens = model.generate(**batch)
decoded = tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)
assert(tgt_text[0] == decoded[0])
assert(tgt_text[1] == decoded[1])
```
By debugging, I find that `bad_words_ids` is `None` in the `generate` function. So the code I modified will not run during this test and will not affect the results of the `generate` function.<|||||>Great @ZhuBaohe, thanks for running the tests and fixing the bad word tokens. |
transformers | 6,386 | closed | Create README.md | 08-10-2020 13:53:19 | 08-10-2020 13:53:19 | model card<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=h1) Report
> Merging [#6386](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.69%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6386 +/- ##
==========================================
- Coverage 79.05% 78.36% -0.70%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21501 21312 -189
- Misses 5695 5884 +189
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.01%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6386/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=footer). Last update [6028ed9...08b609e](https://codecov.io/gh/huggingface/transformers/pull/6386?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>merging this, but can you add a few words about which separator tokens you used + maybe a few lines of sample code showing how to interact with the model |
|
transformers | 6,385 | closed | [POC] Notebooks cron | Setting up a cron job to create the notebooks for the documentation | 08-10-2020 13:35:28 | 08-10-2020 13:35:28 | |
transformers | 6,384 | closed | AttributeError: type object "BartTokenizer" has no attribute 'name' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Google Colab
- Python version: 3.7
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: don't know
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Using the script provided in the Hugging Face library: https://github.com/ohmeow/ohmeow_website/blob/master/_notebooks/2020-05-23-text-generation-with-blurr.ipynb
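2. The failing access seems to boil down to the sketch below (my guess at what the notebook's helper library does internally):
```python
from transformers import BartTokenizer

# AttributeError: type object 'BartTokenizer' has no attribute 'name'
BartTokenizer.name
```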
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 08-10-2020 12:36:51 | 08-10-2020 12:36:51 | I think the notebook you are linking is trying to access the `name` attribute of `BartTokenizer`, which indeed does not exist. It looks like the failure should be reported to the author of that notebook; it's not a bug in transformers.<|||||>Pinging @ohmeow <|||||>Yah, I'm here :)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,383 | closed | hi | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
tensorflow: @jplu
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The task I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 08-10-2020 12:23:10 | 08-10-2020 12:23:10 | |
transformers | 6,382 | closed | CI GitHub caching | Same as https://github.com/huggingface/transformers/pull/6287 but for GitHub Actions | 08-10-2020 11:56:31 | 08-10-2020 11:56:31 | |
transformers | 6,381 | closed | Create README.md | 08-10-2020 11:51:19 | 08-10-2020 11:51:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=h1) Report
> Merging [#6381](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.68%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6381 +/- ##
==========================================
- Coverage 79.05% 78.37% -0.69%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21501 21316 -185
- Misses 5695 5880 +185
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6381/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=footer). Last update [6028ed9...6b1cce3](https://codecov.io/gh/huggingface/transformers/pull/6381?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,380 | closed | Add metadata to be indexed properly | 08-10-2020 11:11:38 | 08-10-2020 11:11:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=h1) Report
> Merging [#6380](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.67%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6380 +/- ##
==========================================
- Coverage 79.05% 78.38% -0.68%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21501 21317 -184
- Misses 5695 5879 +184
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6380/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=footer). Last update [6028ed9...33142a4](https://codecov.io/gh/huggingface/transformers/pull/6380?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,379 | closed | Change metadata to be indexed correctly | 08-10-2020 11:10:08 | 08-10-2020 11:10:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=h1) Report
> Merging [#6379](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **increase** coverage by `0.47%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6379 +/- ##
==========================================
+ Coverage 79.05% 79.53% +0.47%
==========================================
Files 148 148
Lines 27196 27196
==========================================
+ Hits 21501 21631 +130
+ Misses 5695 5565 -130
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.31% <0.00%> (-26.18%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6379/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=footer). Last update [6028ed9...5e6947b](https://codecov.io/gh/huggingface/transformers/pull/6379?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,378 | closed | Create README.md | 08-10-2020 11:07:19 | 08-10-2020 11:07:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=h1) Report
> Merging [#6378](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6028ed92bd9e5471e6f8a1d23cfd95a3a63018fb&el=desc) will **decrease** coverage by `0.68%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6378 +/- ##
==========================================
- Coverage 79.05% 78.37% -0.69%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21501 21316 -185
- Misses 5695 5880 +185
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `96.11% <0.00%> (+17.47%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `94.24% <0.00%> (+69.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=footer). Last update [6028ed9...65e6341](https://codecov.io/gh/huggingface/transformers/pull/6378?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,377 | closed | [EncoderDecoderModel] add a `add_cross_attention` boolean to config | The `EncoderDecoderModel` uses models from `AUTO_MODEL_FOR_CAUSAL_LM` as their decoder models. The problem is that these models can be used in two ways:
1) As a stand-alone, GPT2-like decoder model **without** cross-attention layers
2) As part of an `EncoderDecoderModel` **with** cross-attention layers.
Currently it is decided via the parameter `config.is_decoder` whether cross-attention layers should be added. The problem is that `config.is_decoder` is `True` for both 1) and 2), which is correct since both 1) and 2) should use a causal mask, but means that for 1) cross-attention layers are added without ever being used.
This PR solves this problem by introducing a new config param called `add_cross_attention` which is only relevant for models in `AUTO_MODEL_FOR_CAUSAL_LM`.
I also played around with the idea of not having the flag in the config, but just passing it along to the `init` function, such as:
```python
super().__init__(config, add_cross_attention=False)
```
in the model's `__init__`, and then setting this param to `True` for all encoder-decoder models. I decided to put the param in the config instead because:
a) The init signature does not have to change and
b) EncoderDecoderModels make extensive use of `AutoModelForCausalLM.from_pretrained(...)` which would have meant that all models that are part of `MODEL_FOR_CAUSAL_LM_MAPPING` have to have this signature.
Taking all this into account I think the first solution (putting `add_cross_attention` into the config) is the better way to go here.
# IMPORTANT: This PR introduces a breaking change. All `EncoderDecoderModel` models have to be updated with `add_cross_attention=True`.
=> All "bert2bert" models were updated: https://huggingface.co/models?search=bert2bert
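A minimal sketch of the intended usage after this change (checkpoint names are illustrative):
```python
from transformers import EncoderDecoderModel

# from_encoder_decoder_pretrained sets decoder.config.is_decoder=True and,
# with this PR, decoder.config.add_cross_attention=True under the hood
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
assert model.config.decoder.add_cross_attention
model.save_pretrained("bert2bert")  # the saved config now carries the new flag
```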
## TODO:
After this, I think the framework is flexible enough to handle all other models and I can extend `EncoderDecoderModel` to GPT2, Roberta, Longformer and maybe Reformer as well.
EncoderDecoder is not yet officially released, I think, so this slightly backwards compatibility breaking change is OK. I will update all Bert2Bert models on the model hub with `add_cross_attention=True` and add a bigger message in this PR when merged. | 08-10-2020 10:27:44 | 08-10-2020 10:27:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=h1) Report
> Merging [#6377](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1429b920d44d610eaa0a6f48de43853da52e9c03&el=desc) will **decrease** coverage by `0.02%`.
> The diff coverage is `90.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6377 +/- ##
==========================================
- Coverage 78.38% 78.36% -0.03%
==========================================
Files 148 148
Lines 27196 27202 +6
==========================================
- Hits 21317 21316 -1
- Misses 5879 5886 +7
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `91.02% <66.66%> (-1.19%)` | :arrow_down: |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.57% <100.00%> (+0.02%)` | :arrow_up: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.45% <100.00%> (+0.05%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6377/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=footer). Last update [1429b92...e2fcc0d](https://codecov.io/gh/huggingface/transformers/pull/6377?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>`All EncoderDecoderModel models have to be updated with add_cross_attention=True.`
How do I _exactly_ do this? I got hit by `AttributeError: 'GPT2Config' object has no attribute 'add_cross_attention'` after updating to the newest release.<|||||>Hey @xxbidiao,
You have to set `gpt2.config.add_cross_attention = True` and then save this config, or you can directly add the parameter `add_cross_attention=True` to the GPT2 `config.json` file.
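A minimal sketch of the first option (the save path is a placeholder):
```python
from transformers import GPT2Config

config = GPT2Config.from_pretrained("gpt2")
config.add_cross_attention = True
config.save_pretrained("./gpt2-decoder")  # writes the updated config.json
```
|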
transformers | 6,376 | closed | Introduce dataset and data collator for Bert pretrain NSP | Follow up from discussion in https://github.com/huggingface/transformers/issues/6330
This PR introduces changes to allow both the MLM and NSP objectives to be run using ```BertForPretraining```. | 08-10-2020 07:26:24 | 08-10-2020 07:26:24 | Superseded by #6644, thanks a lot for your contribution!
transformers | 6,375 | closed | CUDA Out of Memory | # ❓ Questions & Help
## Details
I was trying to finetune BART on Google Colab using the xsum dataset and the finetuning script, and I got this:
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 14.73 GiB total capacity; 13.67 GiB already allocated; 15.88 MiB free; 13.72 GiB reserved in total by PyTorch)
Does this mean I have to use a smaller model?
**A link to original question on the forum/Stack Overflow**:
https://stackoverflow.com/questions/63335442/how-do-i-deal-with-cuda-out-of-memory-while-finetuning-bart | 08-10-2020 06:28:09 | 08-10-2020 06:28:09 | Yes it seems like the GPU that was allocated to you does not provide enough GPU memory for the model |
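A generic mitigation sketch (plain PyTorch with a toy model, not from this thread): shrink the micro-batch so it fits in memory and use gradient accumulation to keep the effective batch size:
```python
import torch

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data = [(torch.randn(4, 10), torch.randn(4, 1)) for _ in range(16)]  # micro-batches of 4

accum_steps = 8  # effective batch size = 4 * 8 = 32
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = torch.nn.functional.mse_loss(model(x), y) / accum_steps
    loss.backward()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```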
transformers | 6,374 | closed | [s2s] remove lr_scheduler redundancy | in `get_train_dataloader` | 08-10-2020 03:02:03 | 08-10-2020 03:02:03 | Proposed solution here: https://github.com/huggingface/transformers/pull/6402
|
transformers | 6,373 | closed | Pegasus finetuning diary | Best score so far .2065 Rouge, much worse than paper. Generations appear to start lower case/be missing words at the beginning.
Clues:
- adding `<pad>` as prefix (like for generation) makes loss nan for at least 1000 steps (I killed it).
- Without prefix, loss is nan for 5 steps, then improves.
- distillation with teacher produces huge hidden state MSE losses. This is probably unrelated and caused by the same large activations that break fp16.
Suspects:
- different causal mask than tf?
- tf doesn't shift labels or add a decoder prefix token. We shift labels but don't add a prefix token. there is a suraj issue where this appears to be suboptimal for t5 (which also has no bos token).
- bug in label smoothing
Best run:


| 08-10-2020 02:59:52 | 08-10-2020 02:59:52 | freeze_embeds didn't matter (before mask fix)
mask fix explanation:
we need `decoder_start_token_id=pad_token_id` to avoid the first word issue, but `decoder_padding_mask` should NOT tell the model to ignore that decoder_start_token_id, else you get nan.
This fix makes 1 example per batch not have eos in decoder_input_ids (t5,bart=same problem).
But maybe that can explain batch_size=1 truncation.
Original repo uses adafactor.<|||||>pegasus finetuning Running on fork branch has rouge2 23 with full beam search after 1.5 epochs
https://app.wandb.ai/sshleifer/transformers_fork-examples_seq2seq/runs/3cz2fe87?workspace=user-sshleifer
XSUM Metrics from today:
Models train on hack-pegasus-batches branch.
```
finetune: {'rouge1': 45.6515, 'rouge2': 22.9858, 'rougeL': 37.7569, 'n_obs': 11333, 'runtime': 4175.217807531357, 'seconds_per_sample': 0.3684}
dpx8 {'rouge1': 45.9739, 'rouge2': 23.1417, 'rougeL': 38.1625, 'n_obs': 11333, 'runtime': 2207.9071719646454, 'seconds_per_sample': 0.1948}
dpx4 {'rouge1': 43.0961, 'rouge2': 20.1954, 'rougeL': 35.5679, 'n_obs': 11333, 'runtime': 1813.8934507369995, 'seconds_per_sample': 0.1601}
```
(10% chance 1st two rows are flipped)<|||||>Adafactor saves a lot of memory. All of those runs use adafactor. |
transformers | 6,372 | closed | Update modeling_tf_utils.py | fix typo: ckeckpoint->checkpoint | 08-09-2020 19:38:21 | 08-09-2020 19:38:21 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=h1) Report
> Merging [#6372](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **decrease** coverage by `0.96%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6372 +/- ##
==========================================
- Coverage 79.34% 78.37% -0.97%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21579 21316 -263
- Misses 5617 5880 +263
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6372/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=footer). Last update [6e8a385...1dc65e3](https://codecov.io/gh/huggingface/transformers/pull/6372?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,371 | closed | the test now works again | `test_finetune_lr_shedulers` can now run after https://github.com/huggingface/transformers/pull/6358 was merged | 08-09-2020 18:28:11 | 08-09-2020 18:28:11 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=h1) Report
> Merging [#6371](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **increase** coverage by `0.37%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6371 +/- ##
==========================================
+ Coverage 79.34% 79.71% +0.37%
==========================================
Files 148 148
Lines 27196 27196
==========================================
+ Hits 21579 21680 +101
+ Misses 5617 5516 -101
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-11.37%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6371/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=footer). Last update [6e8a385...4d9d35c](https://codecov.io/gh/huggingface/transformers/pull/6371?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,370 | closed | FastTokenizer not returning batch_size for offset_mapping for short texts | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.10
- Python version: 3.8.2
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
# Who can help
tokenizers: @mfuntowicz
## Information
When working with `padding` and `truncation` on short texts (smaller than `max_len`),
the FastTokenizer will return the batch_size dimension if `return_tensors=None`.
However, when `return_tensors="pt"` or `return_tensors="np"` are enabled (I haven't tested it on Tensorflow), they **won't return the batch dimension**.
## To reproduce
Loading fast tokenizer:
```python3
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
```
Behavior on "short" texts without `return_tensors`:
```python3
import torch

out = tokenizer("test text",
    padding='max_length',
    truncation=True,
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
)
# Convert to tensor outside the tokenizer
print(torch.tensor(out["offset_mapping"]).shape)
>>> torch.Size([1, 512, 2])
```
Behavior with `return_tensors`:
```python3
out = tokenizer("test text",
    padding='max_length',
    truncation=True,
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    return_tensors="pt"  # Similarly with "np"
)
print(out["offset_mapping"].shape)
>>> torch.Size([512, 2])
```
## Expected behavior
```python3
out = tokenizer("test text",
    padding='max_length',
    truncation=True,
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    return_tensors="pt"
)
print(out["offset_mapping"].shape)
>>> torch.Size([1, 512, 2])
```
| 08-09-2020 17:47:39 | 08-09-2020 17:47:39 | After inspecting the code, it looks like the cause can be found inside `tokenization_utils_base.py`.
Then, in the method `convert_to_tensors` from `BatchEncoding`, there are the following lines:
```python3
# Do the tensor conversion in batch
for key, value in self.items():
    try:
        if prepend_batch_axis:
            value = [value]
        tensor = as_tensor(value)
        # at-least2d
        if tensor.ndim > 2:
            tensor = tensor.squeeze(0)
        elif tensor.ndim < 2:
            tensor = tensor[None, :]
```
In this case, right before the squeeze, the `offset_mapping` tensor is of shape [1, 512, 2], which becomes [512, 2] after being squeezed.
This would explain why it doesn't fail with longer sequences, since squeezing a tensor of shape [n, 512, 2] (n>1) leaves the tensor unaltered.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
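Until this is fixed, a possible user-side workaround (a sketch, not an official fix) is to restore the batch axis manually:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
out = tokenizer("test text", padding="max_length", truncation=True,
                return_offsets_mapping=True, return_tensors="pt")
offsets = out["offset_mapping"]
if offsets.ndim == 2:  # the batch axis was squeezed away for a single short text
    offsets = offsets.unsqueeze(0)
print(offsets.shape)  # torch.Size([1, 512, 2])
```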
|
transformers | 6,369 | closed | trainer/lightning_base: Arbitrary config updates through command line | this issue [https://github.com/huggingface/transformers/issues/6367], and a recent one to add dropout to the command line, as well as the usage of task_specific_params during finetuning, are all one-off solutions to address a larger problem. During finetuning/training, it is very difficult to arbitrarily set config attributes. For `examples/lightning_base.py`, you need to save a whole new config to json and put it in a directory, which is a fairly annoying method for changing hyperparameters, so we add lots of them, like `--dropout --attention_dropout --encoder_layerdrop --decoder_layerdrop` through `argparse.add_argument`.
It would be a better user experience if I could just pass any kwarg without editing the code.
This seems possible with the `fire` package. But I would prefer an `argparse` solution as there is another issue open to delete the `fire` dependency, I also asked a similar question on
[stackoverflow](https://stackoverflow.com/questions/63329044/python-argparse-allow-unregistered-arguments) | 08-09-2020 17:05:24 | 08-09-2020 17:05:24 | see `argparse.REMAINDER` https://stackoverflow.com/questions/22850332/getting-the-remaining-arguments-in-argparse/46250042#46250042
h/t @stas00
<|||||>You could probably extract all the known args via `argparse`'s:
```
args, unknown = parser.parse_known_args()
```
and then use another tool to parse `unknown` (which is just `argv` minus the known args), e.g. could even use `fire` or write a custom function to do that.
surely a cheat, but it would make your query work, while having `argparse` as the main solution still.<|||||>You could also take a look at https://github.com/huggingface/transformers/blob/155288f04ba9a5d0a0e4d5be4f6d4e808ad8cfff/src/transformers/hf_argparser.py#L128-L146<|||||>If I read the code correctly, it doesn't do anything with `remaining_args`. It either just returns them as is (`argv` list) or throws an error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
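For illustration, a sketch of how the `parse_known_args` route could turn the leftover argv into arbitrary config overrides (argument names are made up):
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model_name_or_path")  # a registered example arg
args, unknown = parser.parse_known_args(
    ["--model_name_or_path", "bart-large", "--dropout", "0.1"]
)

overrides = {}
for key, value in zip(unknown[::2], unknown[1::2]):
    assert key.startswith("--"), f"unexpected argument: {key}"
    overrides[key[2:]] = value  # e.g. later applied via setattr(config, key, value)

print(overrides)  # {'dropout': '0.1'}
```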
<|||||>Fixed by @stas00 for both trainers, thanks! |
transformers | 6,368 | closed | Can't load a saved tokenizer with AutoTokenizer.from_pretrained without saving Config as well | ### Environment info
- `transformers` version: master (https://github.com/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7)
### Who can help
tokenizers: @mfuntowicz
### Information
When saving a tokenizer with .save_pretrained, it can be loaded with the class it was saved with but not with AutoTokenizer:
```
from transformers import BertTokenizer, AutoTokenizer
BertTokenizer.from_pretrained("bert-base-cased").save_pretrained(".")
BertTokenizer.from_pretrained(".") # works
AutoTokenizer.from_pretrained(".") # throws exception
```
The error is:
```
Traceback (most recent call last):
  File "/home/transformers/src/transformers/configuration_utils.py", line 333, in get_config_dict
    local_files_only=local_files_only,
  File "/home/transformers/src/transformers/file_utils.py", line 684, in cached_path
    raise EnvironmentError("file {} not found".format(url_or_filename))
OSError: file ./config.json not found

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/transformers/src/transformers/tokenization_auto.py", line 205, in from_pretrained
    config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/home/transformers/src/transformers/configuration_auto.py", line 203, in from_pretrained
    config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/transformers/src/transformers/configuration_utils.py", line 346, in get_config_dict
    raise EnvironmentError(msg)
OSError: Can't load config for '.'. Make sure that:
- '.' is a correct model identifier listed on 'https://huggingface.co/models'
- or '.' is the correct path to a directory containing a config.json file
```
If a configuration is saved as well, then loading with AutoTokenizer does work:
```
from transformers import BertTokenizer, BertConfig, AutoTokenizer
BertConfig.from_pretrained("bert-base-cased").save_pretrained(".")
BertTokenizer.from_pretrained("bert-base-cased").save_pretrained(".")
AutoTokenizer.from_pretrained(".") # works
```
### Expected behavior
I'd expect that loading a tokenizer with AutoTokenizer would require the same files as a dedicated tokenizer class (e.g. BertTokenizer) requires.
| 08-09-2020 17:05:20 | 08-09-2020 17:05:20 | @eladsegal I think that the `AutoTokenizer` requires the config file to determine what model to use. In https://huggingface.co/transformers/model_doc/auto.html it states that:
> The from_pretrained() method takes care of returning the correct tokenizer class instance based on the model_type property of the config object, or when it’s missing, falling back to using pattern matching on the pretrained_model_name_or_path string.
So I think that if your model path variable includes the name of the model that it was using, it should be able to load the right tokenizer. If it doesn't it expects to have a config file.<|||||>@TarasPriadka When providing a path, a config file is required even if the model name is in the path (https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/tokenization_auto.py#L205).
The model name in the path is used only when an existing config file is missing the model_type property (https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/configuration_auto.py#L203-L212).
<|||||>I bumped on that as well.
~~I believe the issue is purely due to mismatch in filename convention AutoTokenizer throws an exception of './config.json' missing, while the file saved is called 'tokenizer_config.json'~~
Maybe it is a different case - looks like when you want to instantiate BertTokenizer it just needs tokenizer_config.json, but when you want to instantiate AutoTokenizer it requires config.json - the config of the whole model.
So the simplest steps to reproduce are just:
```
from transformers import AutoTokenizer
AutoTokenizer.from_pretrained("bert-base-cased").save_pretrained(".")
AutoTokenizer.from_pretrained(".") # throws exception
```
looking at the source code - a workaround is to call
```
AutoTokenizer.from_pretrained(tokenizer_path, config=AutoConfig.from_pretrained(model_path))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Here is another workaround: directly use the corresponding tokenizer class, such as BertTokenizer.from_pretrained, instead of AutoTokenizer.
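A minimal sketch of that workaround (the path is a placeholder for a directory created by `save_pretrained`):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("./saved_tokenizer")  # no config.json required
```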
https://stackoverflow.com/questions/62472238/autotokenizer-from-pretrained-fails-to-load-locally-saved-pretrained-tokenizer/62664374#62664374<|||||>I had the same issue and I realized some weird things were going on. I'm running IDLEX and Jupyter Notebook, both on Windows 10. I installed my Python system in "D:\WPy64-3740". IDLEX can successfully load the pretrained model but Jupyter Notebook cannot. But for some reason, it does load the pretrained model when I load a .py file with an import directive.
# Issue
Usually, I directly launch IDLEX.exe from the path above. In that case, it doesn't cause any problem. For example, some code like:
```python
# On IDLEX.exe
>>> tokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-tiny')
```
works fine. But when I use Jupyter Notebook, which I usually launch from the same directory, it causes an error. This is part of the error message:
```python
# On Jupyter Notebook
tokenizer = AutoTokenizer.from_pretrained('prajjwal1/bert-tiny')
# Output of the cell
Could not locate the tokenizer configuration file, will try to use the model config instead.
loading configuration file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/config.json from cache at D:\WPy64-3740\settings/.cache\huggingface\transformers\3cf34679007e9fe5d0acd644dcc1f4b26bec5cbc9612364f6da7262aed4ef7a4.a5a11219cf90aae61ff30e1658ccf2cb4aa84d6b6e947336556f887c9828dc6d
Model config BertConfig {
"_name_or_path": "prajjwal1/bert-tiny",
...
"transformers_version": "4.20.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
loading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/vocab.txt from cache at D:\WPy64-3740\settings/.cache\huggingface\transformers\585ac1c3dedc6b808dd35e8770afafe10905d3e723a02617af749d39db780e09.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
loading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/tokenizer.json from cache at None
loading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/special_tokens_map.json from cache at None
loading file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/tokenizer_config.json from cache at None
loading configuration file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/config.json from cache at D:\WPy64-3740\settings/.cache\huggingface\transformers\3cf34679007e9fe5d0acd644dcc1f4b26bec5cbc9612364f6da7262aed4ef7a4.a5a11219cf90aae61ff30e1658ccf2cb4aa84d6b6e947336556f887c9828dc6d
Model config BertConfig {
"_name_or_path": "prajjwal1/bert-tiny"
...
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
loading configuration file https://huggingface.co/prajjwal1/bert-tiny/resolve/main/config.json from cache at D:\WPy64-3740\settings/.cache\huggingface\transformers\3cf34679007e9fe5d0acd644dcc1f4b26bec5cbc9612364f6da7262aed4ef7a4.a5a11219cf90aae61ff30e1658ccf2cb4aa84d6b6e947336556f887c9828dc6d
Model config BertConfig {
"_name_or_path": "prajjwal1/bert-tiny",
"attention_probs_dropout_prob": 0.1,
...
```
I thought maybe Notebook causes an error because it is working in some different directory. So I checked the current working directory in both environments. Here is the code I used for it.
```python
import os
os.getcwd()
```
As a result, I confirmed both programs were working in the same directory (or folder, whatever). I also confirmed the Python version in the shell/Notebook, and it was the same. By the way, the location of python.exe is "D:\WPy64-3740\python-3.7.4.amd64". Both IDLEX and Notebook use the same python.exe... I suppose.
# Weird behaviour
The funny thing about the issue is that when I load a .py file from Jupyter Notebook, it can load the pretrained model. For example,
```python
# On Jupyter Notebook
# Load a module that loads the pretrained model.
# This code imports the symbol "tokenizer", an instance of AutoTokenizer initialized with the from_pretrained method.
from transformer_fine_tune_2 import *
tokenizer
# Output of the console
PreTrainedTokenizerFast(name_or_path='prajjwal1/bert-tiny', vocab_size=30522, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'})
```
works fine. I even trained my model this way. So I suspect this could be a bug, somehow path-related, or maybe one of those "Windows things", or something else.
Hope this information helps.
|
transformers | 6,367 | closed | [s2s] pass max_length to config through command line | Problem:
In summarization, ideal beam search params vary between finetuning datasets. If you are finetuning pegasus-large on xsum, you want config.max_length=56; if you are finetuning pegasus-large on cnn-dailymail, you want config.max_length=128.
### Solutions
- the command line arg should be called `max_generate_length`
- This could also be addressed through adding `task_specific_params` for every dataset. Then you could pass `--task summarize_xsum` to finetune.py and things would work. Kinda lame though. (A sketch of such an entry follows below.)
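A sketch of a hypothetical `task_specific_params` entry (dataset keys and values are made up):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/pegasus-large")
config.task_specific_params = {
    "summarize_xsum": {"max_length": 56, "num_beams": 8},
    "summarize_cnn_dailymail": {"max_length": 128, "num_beams": 8},
}
```
| 08-09-2020 16:35:17 | 08-09-2020 16:35:17 | |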
transformers | 6,366 | closed | [WIP] Lm loss feed forward chunking | The final word-embedding projection in the LM loss calculation presents a bottleneck, as it is applied over the entire time dim - it can be chunked similarly to the feed-forward layers. | 08-09-2020 15:16:38 | 08-09-2020 15:16:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=h1) Report
> Merging [#6366](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **decrease** coverage by `0.96%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6366 +/- ##
==========================================
- Coverage 79.34% 78.38% -0.97%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21579 21317 -262
- Misses 5617 5879 +262
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6366/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=footer). Last update [6e8a385...a258372](https://codecov.io/gh/huggingface/transformers/pull/6366?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@patrickvonplaten quick question here: Did you mean to chunk the projection onto vocab_size as in:
https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L527-L530
or the transformation that happens before that:
https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L506-L510
<|||||>This one is a bit harder actually. I meant that both the calculation in https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L795 and https://github.com/huggingface/transformers/blob/6e8a38568eb874f31eb49c42285c3a634fca12e7/src/transformers/modeling_bert.py#L1015 should be put into one chunk function. Here a lot of memory can be saved because currently we go from the `last_hidden_state` tensor of size `[batch_size, seq_len, hidden_size]` to a `[batch_size, seq_len, vocab_size]` logit tensor and then reduce it to a `[1]` loss scalar. Note that `vocab_size` is much larger than `hidden_size` and often is the bottleneck of a model. We don't need to compute `[batch_size, seq_len, vocab_size]` though if we apply chunked "loss" calculation from `last_hidden_states` to `loss` directly. Here we could greatly reduce memory consumption. But definitely leave this PR for later, we need to more carefully think about possible design changes as the two lines in question (linked above) are in different spots in the code. When we have the other `feed_forward_chunking` implemented we can take a look at this again :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
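For intuition, a minimal sketch of what such a chunked loss computation could look like (helper name and shapes are illustrative, not the library API):
```python
import torch
import torch.nn.functional as F

def chunked_lm_loss(last_hidden_state, labels, lm_head, chunk_size=128):
    # last_hidden_state: [batch, seq_len, hidden]; labels: [batch, seq_len]
    losses = []
    for start in range(0, last_hidden_state.size(1), chunk_size):
        hidden_chunk = last_hidden_state[:, start : start + chunk_size]
        logits = lm_head(hidden_chunk)  # [batch, chunk, vocab], never the full seq_len
        losses.append(
            F.cross_entropy(
                logits.reshape(-1, logits.size(-1)),
                labels[:, start : start + chunk_size].reshape(-1),
                reduction="sum",
            )
        )
    return torch.stack(losses).sum() / labels.numel()
```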
|
transformers | 6,365 | closed | Feed forward chunking others | Adding feed forward chunking to other models. Based on #6024 | 08-09-2020 15:13:46 | 08-09-2020 15:13:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=h1) Report
> Merging [#6365](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fb7330b30ebfbb3f07b87203f0405ee09905eeda&el=desc) will **increase** coverage by `2.04%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6365 +/- ##
==========================================
+ Coverage 78.42% 80.47% +2.04%
==========================================
Files 156 156
Lines 28129 28152 +23
==========================================
+ Hits 22061 22655 +594
+ Misses 6068 5497 -571
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <ø> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <ø> (+0.19%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.62% <100.00%> (+1.38%)` | :arrow_up: |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `83.50% <100.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <100.00%> (+1.65%)` | :arrow_up: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `92.02% <100.00%> (+0.07%)` | :arrow_up: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `96.09% <100.00%> (ø)` | |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `91.31% <100.00%> (+0.07%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `83.42% <100.00%> (+0.11%)` | :arrow_up: |
| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6365/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=footer). Last update [fb7330b...80c6b27](https://codecov.io/gh/huggingface/transformers/pull/6365?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>https://github.com/huggingface/transformers/pull/6024 is merged :-) Great work @Pradhy729! It would be a good idea to rebase this PR to current master so that you can easily leverage the tests that were added in https://github.com/huggingface/transformers/pull/6024 just by setting the flag `test_chunking=True` for all models you want to add here.<|||||>Yes - definitely will do. Was just waiting for the merge. Thanks for adding the tests.<|||||>@patrickvonplaten Feed forward chunking has been added for the following:
1. Albert
2. Distillbert
3. Longformer
4. XLNet
5. XLM
Also, changed model signature to have callable as first positional argument.<|||||>Hi @patrickvonplaten --> Can you review and approve if this looks good?
<|||||>Hey @Pradhy729 - this looks great!
1) Can you add the docstrings for `chunk_size_feed_forward` as explained in the comment above and delete the corresponding config param in Reformer and the Reformer docstring (You can just cut & paste the Reformer docstring here)
2) Can you please remove the `test_chunking=True` statements in the model specific test files -> I think it's only in test_modeling_bert.py actually.
3) It would be awesome if you try to rebase the branch to master (`git fetch upstream master`, `git rebase upstream/master`).
If you have too many merge conflicts - then I'll do it :-) <|||||>@patrickvonplaten
Done. Please review and let me know if there's anything else.<|||||>LGTM! @Pradhy729 - great work!<|||||>Merging! Good job @Pradhy729 |
transformers | 6,364 | closed | correct pl link in readme | 08-09-2020 13:11:28 | 08-09-2020 13:11:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=h1) Report
> Merging [#6364](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6e8a38568eb874f31eb49c42285c3a634fca12e7&el=desc) will **decrease** coverage by `0.98%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6364 +/- ##
==========================================
- Coverage 79.34% 78.36% -0.99%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21579 21312 -267
- Misses 5617 5884 +267
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-70.09%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.76%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6364/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=footer). Last update [6e8a385...86f91f3](https://codecov.io/gh/huggingface/transformers/pull/6364?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,363 | closed | [s2s] add BartTranslationDistiller for distilling mBART | New class `BartTranslationDistiller` does the same distillation method as `SummarizationDistiller`, but computes BLEU scores instead of ROUGE scores. It also accepts `--src_lang` and `--tgt_lang` arguments from the command line.
There is one strong checkpoint already posted at `sshleifer/distillmbart-12-6/`. I will post more in the coming days. | 08-09-2020 06:29:57 | 08-09-2020 06:29:57 | Codecov report: merging #6363 into master would decrease coverage by 1.55%; diff coverage n/a. |
transformers | 6,362 | closed | [TFTrainer] Error "iterating over `tf.Tensor` is not allowed" | ## Environment info
- `transformers` version: 3.0.2 (from pip)
- Platform: Linux-4.15.0-91-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.6
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.3.0 (True) (Same error on TF2.2 and TF2.1)
- Using GPU in script?: Yes - GeForce GTX 1080 Ti
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger tensorflow: @jplu
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install Tensorflow 2.3.0, Transformers 3.0.2
2. Run the following code:
```python3
from transformers import TFGPT2LMHeadModel, TFTrainer, TFTrainingArguments
import tensorflow as tf
tfds_train_dataset = tf.data.Dataset.from_tensor_slices(
tf.random.uniform([4000, 1024], minval=1, maxval=10, dtype=tf.int32))
model = TFGPT2LMHeadModel.from_pretrained("gpt2")
training_args = TFTrainingArguments(
output_dir='./results',
num_train_epochs=3,
per_device_train_batch_size=16,
per_device_eval_batch_size=64,
warmup_steps=500,
weight_decay=0.01,
logging_dir='./logs',
)
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=tfds_train_dataset,
)
trainer.train()
```
3. Results in the following output + error:
```
2020-08-09 01:41:28.331697: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:30.461375: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-08-09 01:41:30.466239: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-08-09 01:41:30.466271: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:30.468575: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-08-09 01:41:30.470629: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-08-09 01:41:30.471013: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-08-09 01:41:30.473522: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-08-09 01:41:30.474947: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-08-09 01:41:30.481193: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-08-09 01:41:30.482710: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-08-09 01:41:30.483080: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-08-09 01:41:30.512602: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3210790000 Hz
2020-08-09 01:41:30.514335: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4c678f0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-08-09 01:41:30.514408: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-08-09 01:41:30.648534: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4c92000 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-08-09 01:41:30.648597: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
2020-08-09 01:41:30.650365: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 10.92GiB deviceMemoryBandwidth: 451.17GiB/s
2020-08-09 01:41:30.650446: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:30.650523: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-08-09 01:41:30.650586: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-08-09 01:41:30.650646: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-08-09 01:41:30.650708: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-08-09 01:41:30.650767: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-08-09 01:41:30.650829: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-08-09 01:41:30.653179: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2020-08-09 01:41:30.653232: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-08-09 01:41:31.392168: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-08-09 01:41:31.392212: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2020-08-09 01:41:31.392225: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2020-08-09 01:41:31.393566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7389 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
2020-08-09 01:41:34.003855: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2020-08-09 01:41:34.145974: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
All model checkpoint weights were used when initializing TFGPT2LMHeadModel.
All the weights of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the ckeckpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
Traceback (most recent call last):
File "gpt2-training_bug.py", line 26, in <module>
trainer.train()
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py", line 412, in train
for step, training_loss in enumerate(self._training_steps(train_ds, optimizer)):
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py", line 459, in _training_steps
for i, loss in enumerate(self._accumulate_next_gradients(ds)):
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py", line 492, in _accumulate_next_gradients
yield _accumulate_next()
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 780, in __call__
result = self._call(*args, **kwds)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 823, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 697, in _initialize
*args, **kwds))
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 2855, in _get_concrete_function_internal_garbage_collected
graph_function, _, _ = self._maybe_define_function(args, kwargs)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3213, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/function.py", line 3075, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 986, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py", line 600, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py", line 973, in wrapper
raise e.ag_error_metadata.to_exception(e)
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: in user code:
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/transformers/trainer_tf.py:486 _accumulate_next *
per_replica_features, per_replica_labels = next(iterator)
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:503 __iter__
self._disallow_iteration()
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:496 _disallow_iteration
self._disallow_when_autograph_enabled("iterating over `tf.Tensor`")
/home/gabriel/venv/GPT-Hug/lib/python3.7/site-packages/tensorflow/python/framework/ops.py:474 _disallow_when_autograph_enabled
" indicate you are trying to use an unsupported feature.".format(task))
OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature.
```
## Expected behavior
Start Training
| 08-09-2020 04:43:46 | 08-09-2020 04:43:46 | The following bug on Tensorflow could be related: https://github.com/tensorflow/tensorflow/issues/42119<|||||>This was just a Dataset setup issue. The correct setup for the Dataset can be seen here: https://github.com/huggingface/transformers/issues/6551
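A minimal sketch of such a setup (reusing the shapes from the repro above; `TFTrainer` unpacks `(features, labels)` pairs from the dataset iterator, so bare tensors fail):
```python
ids = tf.random.uniform([4000, 1024], minval=1, maxval=10, dtype=tf.int32)
tfds_train_dataset = tf.data.Dataset.from_tensor_slices(
    ({"input_ids": ids}, ids)  # a (features dict, labels) pair, not a bare tensor
)
```
 |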
transformers | 6,361 | closed | lr_schedulers: add get_polynomial_decay_schedule_with_warmup | This PR adds a new scheduler plus a test; the code is based on an amalgamation of a few different implementations.
I'm not sure it's 100% correct - needs more experimenting - but feedback is welcome.
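For orientation, here is a rough sketch of the shape the schedule takes (linear warmup, then fairseq-style polynomial decay down to `lr_end`; the final signature in this PR may differ):
```python
from torch.optim.lr_scheduler import LambdaLR

def get_polynomial_decay_schedule_with_warmup(
    optimizer, num_warmup_steps, num_training_steps, lr_end=1e-7, power=1.0
):
    lr_init = optimizer.defaults["lr"]

    def lr_lambda(current_step):
        if current_step < num_warmup_steps:
            return current_step / max(1, num_warmup_steps)  # linear warmup
        if current_step > num_training_steps:
            return lr_end / lr_init  # hold the final lr after training ends
        pct_remaining = 1 - (current_step - num_warmup_steps) / (
            num_training_steps - num_warmup_steps
        )
        decay = (lr_init - lr_end) * pct_remaining ** power + lr_end
        return decay / lr_init  # LambdaLR multiplies lr_init by this factor

    return LambdaLR(optimizer, lr_lambda)
```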
For reference here are 3 different implementations of this scheduler:
1. https://github.com/pyprob/pyprob/blob/master/pyprob/nn/inference_network.py#L357
2. https://github.com/cmpark0126/pytorch-polynomial-lr-decay/blob/master/torch_poly_lr_decay/torch_poly_lr_decay.py#L5
3. https://github.com/pytorch/fairseq/blob/master/fairseq/optim/lr_scheduler/polynomial_decay_schedule.py - this one has an extra feature `--force-anneal`
@sshleifer | 08-09-2020 04:07:09 | 08-09-2020 04:07:09 | Codecov report: merging #6361 into master would increase coverage by 0.48%; diff coverage 81.05%.
<|||||>All done, just need to decide whether to use the default of 1.0 for `power` as in fairseq, or 2.0 (or another value), which actually does something polynomial.<|||||>I ran fairseq as recommended [here](https://github.com/pytorch/fairseq/blob/master/examples/mbart/README.md#finetune-on-en-ro) and no, it is using the default power=1.0 at runtime.
I double-checked their code; it doesn't get overridden anywhere:
```
fairseq/optim/lr_scheduler/polynomial_decay_schedule.py: self.power = args.power
fairseq/optim/lr_scheduler/polynomial_decay_schedule.py: parser.add_argument('--power', default=1.0, type=float)
fairseq/optim/lr_scheduler/polynomial_decay_schedule.py: print("POWER:", self.power)
fairseq/optim/lr_scheduler/polynomial_decay_schedule.py: lr = lr_range * pct_remaining ** (self.power) + self.end_learning_rate
```
I will open an issue there and report back. https://github.com/pytorch/fairseq/issues/2466
<|||||>👍 |
transformers | 6,360 | closed | Bug in squad example with XLNet | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-957.21.2.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (False)
- Tensorflow version (GPU?): 2.0.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
## Information
Model I am using (Bert, XLNet ...):
XLNet
The problem arises when using:
the official example scripts: (give details below)
The tasks I am working on is:
an official GLUE/SQUaD task: (give the name)
## To reproduce
Steps to reproduce the behavior:
1. run_squad.py with xlnet as the mode type
2. I think this is because `AutoModelForQuestionAnswering` maps xlnet to `XLNetForQuestionAnsweringSimple`, so the inputs don't match: `XLNetForQuestionAnsweringSimple` does not accept `cls_index`, so the call throws an error (see the sketch after this list)
3. https://github.com/huggingface/transformers/blob/d9149f00d1a4650bafa7e1cd73e10398193c852c/examples/question-answering/run_squad.py#L194
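A rough sketch of the mismatch from step 2 (model name used only for illustration; behavior as of transformers 3.0.2):
```python
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("xlnet-base-cased")
print(type(model).__name__)  # XLNetForQuestionAnsweringSimple

# run_squad.py passes xlnet-specific inputs such as cls_index / p_mask,
# which XLNetForQuestionAnsweringSimple.forward() does not accept:
model(input_ids=torch.tensor([[0, 1, 2, 3]]), cls_index=torch.tensor([3]))
# -> TypeError: forward() got an unexpected keyword argument 'cls_index'
```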
## Expected behavior
| 08-09-2020 02:51:35 | 08-09-2020 02:51:35 | see also #3535
|
transformers | 6,359 | closed | Mult rouge by 100: standard units | 08-09-2020 02:17:10 | 08-09-2020 02:17:10 | I have no knowledge of ROUGE and why this would be necessary, so probably not the best person to review :-) |
|
transformers | 6,358 | closed | [s2s] fix --gpus clarg collision | ### Problem
finetune.py adds all the default pl args with this line
```
parser = pl.Trainer.add_argparse_args(parser)
```
and all the generic args from `add_generic_args`.
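For context, a minimal sketch of the resulting clash (the second `add_argument` call stands in for the one `pl.Trainer.add_argparse_args` performs):
```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--gpus", type=int, default=1)  # registered by add_generic_args
parser.add_argument("--gpus", type=int, default=1)  # registered again via pl.Trainer
# -> argparse.ArgumentError: argument --gpus: conflicting option string: --gpus
```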
### Solution
This moves the overlapping arg from lightning_base.py to the 2 pl examples that need it.
CC @stas00 @patil-suraj | 08-08-2020 23:54:40 | 08-08-2020 23:54:40 | Codecov report: merging #6358 into master would increase coverage by 0.17%; diff coverage n/a.
<|||||>This is the issue I opened about it: https://github.com/huggingface/transformers/issues/6310
it's more than just `--gpus`<|||||>which other ones besides `--gpus`?<|||||>Anything else that is defined both in `lightning_base.add_generic_args` and PL's `pl.Trainer.add_argparse_args(parser)`, if both get called.
With your PR nothing collides at the moment.
If we go in the direction of each module (and the base) defining its own args, most likely `finetune.py` needs to do the same and not use `pl.Trainer.add_argparse_args(parser)`.
On the other hand, copying the same common args to every module is less than optimal. If transformers supports `--gpus`, it shouldn't be too difficult to make all examples support it - or fail if it's passed and the example can't support it. Then these common args can go into `lightning_base` and not be redefined by each module.
Additionally, we can make any of these args optional like it was done recently with https://github.com/huggingface/transformers/pull/6149, so if the arg is not there, it will not fail if the example doesn't support it.<|||||>I don't understand exactly what you're proposing I don't think. This is just meant to fix a bug.
I agree that the current setup where only finetune.py uses `Trainer.from_argparse_args` is suboptimal, but I don't really want to mess with it since it's working and our test coverage isn't good enough to know if we've broken things.<|||||>I'm trying to communicate that currently adding new args is difficult because they are scattered in various places. It's not easy to tell when to put them in `lightning_base`, and when inside an example class and the issue https://github.com/huggingface/transformers/issues/6310 points to further collision with `pl.Trainer.add_argparse_args(parser)` use in `finetune.py`.
This PR duplicated a cl arg `--gpus` that ideally should be registered only once in `lightning_base`, and not repeated in every example, IMHO. You had to do it because `finetune.py` does things differently from the rest of the examples and so it can't use `lightning_base` normally. And it's not over, since other examples will want `--gpus` too.
Replacing `pl.Trainer.add_argparse_args(parser)` in `finetune.py` with the approach all other examples use will quickly uncover any missing cl args that it needs to register, and a quick grep will show them all:
```
perl -lne 'm|hparams\.(\w+)| && print $1' finetune.py | sort | uniq
accumulate_grad_batches
data_dir
eval_batch_size
freeze_embeds
freeze_encoder
git_sha
gpus
label_smoothing
max_epochs
max_source_length
max_target_length
n_test
n_train
num_workers
n_val
output_dir
pkl
sortish_sampler
src_lang
test_checkpoint
test_max_target_length
tgt_lang
train_batch_size
val_max_target_length
warmup_steps
```
|
transformers | 6,357 | closed | Create Model Card File | 08-08-2020 23:08:35 | 08-08-2020 23:08:35 | Codecov report: merging #6357 into master would increase coverage by 1.38%; diff coverage n/a. |
transformers | 6,356 | closed | Create Model Card | 08-08-2020 22:33:55 | 08-08-2020 22:33:55 | ||
transformers | 6,355 | closed | Create Model Card File | 08-08-2020 22:26:13 | 08-08-2020 22:26:13 | ||
transformers | 6,354 | closed | GPU memory consumption increases while training | ## Environment info
- `transformers` version: 3.0.2
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik @sgugger
Model I am using (Bert, XLNet ...): XLM Multi-lingual
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
Please see below steps to reproduce
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Importing Python Libraries and preparing the environment
```python
!pip install git+https://github.com/huggingface/transformers
from transformers import (
AutoTokenizer,
AutoConfig,
AutoModelForSequenceClassification
)
from torch import cuda
device = 'cuda' if cuda.is_available() else 'cpu'
```
2. Loading a pretrained model "xlm-mlm-tlm-xnli15-1024"
```python
MODEL_NAME_OR_PATH = 'xlm-mlm-tlm-xnli15-1024'
CACHE_DIR='cache'
config = AutoConfig.from_pretrained(
MODEL_NAME_OR_PATH,
num_labels=7,
cache_dir=CACHE_DIR,
)
tokenizer = AutoTokenizer.from_pretrained(
MODEL_NAME_OR_PATH,
cache_dir=CACHE_DIR,
)
model = AutoModelForSequenceClassification.from_pretrained(
MODEL_NAME_OR_PATH,
from_tf=bool(".ckpt" in MODEL_NAME_OR_PATH),
config=config,
cache_dir=CACHE_DIR
)
```
3. Check GPU usage
```
!nvidia-smi
```
4. Moving the model to CUDA
```
model.to(device)
```
then check GPU usage again
```
!nvidia-smi
```
5. Creating test inputs
```python
texts = [
"aloe vera , wassernabelkrautextrakt , ackerschachtelhalm extrakt , geranium extract , dandelion extract , natriummethyl two sulfolaurate dinatrium two sulfolaurate , sodium cocoyl isethionat , cocamidopropylbetain , cocamidopropylhydroxysultain , kokosglucoside , natrium chlorid , glyceryl oleat , natriumenzoat , guar hydroxypropyltrimonium chloride , tetrasodium glutamat diacetat , decyl glucoside , sodium levulinate , hydroxamsäure , sodium pca , caprylyl glycol , zitronensäure , als koscher zertifizierte pflanzliches glycerin , eukalyptusöl , pfefferminzöl , zitronengrassöl . zertifiziert als organisch . wir verwenden nur die besten natürlichen zutaten . wenn möglich , verwenden wir onezerozero percent zertifizierte organische zutaten und niemals : petrochemikalien , sulfate , parabene , phthalate oder synthetische duftstoffe oder farben , tea , dea , glycol , silikon oder pegs . nur an menschen getested . in anderen worten : wir stellen nur absolut reine produkte her und garantieren mit onezerozero percent sicherheit , dass sie ihrem körper keine chemikalien zuführen .",
"was es bewirkt das waschgel auf kokosnussbasis entfernt überschüssiges hautfett , während das darin enthaltene aloe vera gel die haut erneuert . das gesichtspflege gel für eine tiefenwirksame porenreinigung . "
"stimmungsaufhellendes orangenöl für die massage ( kein ätherisches öl für duftlampen ) . ohne paraffin ohne mineralöl , ohne parabene , ohne konservierungsmittel , selbstverständlich ohne tierversuche , vegan",
"onezerozero percent natives kaltgepresstes biomandelöl aus one . kaltpressung . sanfte und schonende mechanische verarbeitung in deutschland . ",
"aloe vera , wassernabelkrautextrakt , ackerschachtelhalm extrakt , geranium extract , dandelion extract , natriummethyl two sulfolaurate dinatrium two sulfolaurate , sodium cocoyl isethionat , cocamidopropylbetain , cocamidopropylhydroxysultain , kokosglucoside , natrium chlorid , glyceryl oleat , natriumenzoat , guar hydroxypropyltrimonium chloride , tetrasodium glutamat diacetat , decyl glucoside , sodium levulinate , hydroxamsäure , sodium pca , caprylyl glycol , zitronensäure , als koscher zertifizierte pflanzliches glycerin , eukalyptusöl , pfefferminzöl , zitronengrassöl . zertifiziert als organisch . wir verwenden nur die besten natürlichen zutaten . wenn möglich , verwenden wir onezerozero percent zertifizierte organische zutaten und niemals : petrochemikalien , sulfate , parabene , phthalate oder synthetische duftstoffe oder farben , tea , dea , glycol , silikon oder pegs . nur an menschen getested . in anderen worten : wir stellen nur absolut reine produkte her und garantieren mit onezerozero percent sicherheit , dass sie ihrem körper keine chemikalien zuführen .",
"was es bewirkt das waschgel auf kokosnussbasis entfernt überschüssiges hautfett , während das darin enthaltene aloe vera gel die haut erneuert . das gesichtspflege gel für eine tiefenwirksame porenreinigung . "
"stimmungsaufhellendes orangenöl für die massage ( kein ätherisches öl für duftlampen ) . ohne paraffin ohne mineralöl , ohne parabene , ohne konservierungsmittel , selbstverständlich ohne tierversuche , vegan",
"onezerozero percent natives kaltgepresstes biomandelöl aus one . kaltpressung . sanfte und schonende mechanische verarbeitung in deutschland . ",
]
encode = tokenizer(texts, padding='max_length', max_length=200, truncation=True, return_tensors='pt')
for k in encode:
encode[k] = encode[k].to(device)
```
6. Re-run the steps below to see that the GPU usage increases every time they are run
```python
model.train()
model(**encode)
!nvidia-smi
```
Eventually this produces the error below:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-242-decae3a1d2bf> in <module>()
1 model.train()
----> 2 model(**encode)
8 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
1674 ret = torch.addmm(bias, input, weight.t())
1675 else:
-> 1676 output = input.matmul(weight.t())
1677 if bias is not None:
1678 output += bias
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 11.17 GiB total capacity; 10.10 GiB already allocated; 13.81 MiB free; 10.74 GiB reserved in total by PyTorch)
```
However, if you modify the code in step 6 as follows:
```
model.train()
output = model(**encode)
print(output)
del output
!nvidia-smi
```
The GPU usage will then be stable and the same on every run.
I have been facing this issue when using batch_size >= 16 with the Trainer class.
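For what it's worth, an alternative sketch that should also keep memory flat across repeated runs (reusing `model` and `encode` from the steps above, and assuming the outputs are only needed for inspection, not for a backward pass):
```python
import torch

model.train()
with torch.no_grad():  # no autograd graph is built, so activations are freed right away
    output = model(**encode)
```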
## Expected behavior
The GPU usage should stay the same on every run, so that we can use a much bigger batch size.
Right now, I can only use per_device_batch_size <= 12 with the Trainer class.
Looking forward to learning from you and thank you so much!
| 08-08-2020 19:37:19 | 08-08-2020 19:37:19 | Hey @sangnguyen7,
Is there a reason to rerun these steps:
```python
model.train()
model(**encode)
!nvidia-smi
```
over and over again? I think what might happen here is that pytorch is saving more and more activations of the forward pass in each node of the model and thus will run out of memory eventually. Not sure why you would have to re-run the above steps again and again though.<|||||>Hey @patrickvonplaten, thanks for your response and sorry for the late reply.
Your point might be the case. However, if that is the case, then it should not be affected by the batch size, right? Because if I understood correctly, activations are only saved on the parameters/weights of the model, and those are fixed for each model.
```
Not sure why you would have to re-run the above steps again and again though.
```
The reason why I'm doing this is that I want to mimic the training step of the Trainer class, to debug what is causing the out-of-memory error... not sure if I'm missing anything...
|
transformers | 6,353 | closed | BartModel decodes sequence of incorrect length when decoder_input_ids is specified / Output shape mismatch depending on whether `use_cache` is True/False | From the [Bart docs](https://huggingface.co/transformers/model_doc/bart.html#bartmodel), the `decoder_input_ids` attribute should be a tensor of shape `(batch_size, target_sequence_length)`. If we call a `BartModel` without specifying `decoder_input_ids`, the decoded sequence length correctly matches that of `input_ids`. When it is specified, the output sequence is not of shape `target_sequence_length`.
## Environment
Name: torch
Version: 1.6.0+cu101
Name: transformers
Version: 3.0.2
Name: tokenizers
Version: 0.8.1rc1
The error can be reproduced in Colab or Kaggle. See [this notebook](https://colab.research.google.com/gist/xhlulu/dd989fc7f96b777c01c083762375dfbe/bart-sequence-problems.ipynb) for example.
## Example
```python
import transformers as tfm
model = tfm.BartModel.from_pretrained('facebook/bart-base')
tokenizer = tfm.BartTokenizer.from_pretrained('facebook/bart-base')
input_seq = [
"What's the capital of Canada?",
"What's the capital of USA?"
]
output_seq = [
"It's Ottawa",
"It's Washington"
]
input_tokens = tokenizer.batch_encode_plus(input_seq, return_tensors='pt', padding=True)
input_ids = input_tokens['input_ids']
output_tokens = tokenizer.batch_encode_plus(output_seq, return_tensors='pt', padding=True)
output_ids = output_tokens['input_ids']
print(input_ids.size(), output_ids.size()) # Returns torch.Size([2, 9]) torch.Size([2, 5])
# Okay
outputs = model.forward(input_ids)
outputs[0].size() # Returns `torch.Size([2, 9, 768])`
# Incorrect
outputs = model.forward(input_ids, decoder_input_ids=output_ids)
outputs[0].size() # Returns torch.Size([2, 1, 768])
``` | 08-08-2020 18:36:02 | 08-08-2020 18:36:02 | @patrickvonplaten Actually, going over [the source code](https://huggingface.co/transformers/_modules/transformers/modeling_bart.html#BartModel), I found that the exact line in the definition of `class BartModel(PretrainedBartModel)` that was causing this problem is:
```python
use_cache = use_cache if use_cache is not None else self.config.use_cache
```
In the `forward` method. Since use_cache is set to `False` when `decoder_input_ids` is `None`, this line forces `use_cache` value to always be True if `decoder_input_ids` is a tensor.
I posted my [experiments here](https://www.kaggle.com/xhlulu/bart-experiments) in case they are useful.<|||||>I realized that this is actually wrong:
> In the forward method. Since use_cache is set to False when decoder_input_ids is None, this line forces use_cache value to always be True if decoder_input_ids is a tensor.
Re-reading the code made me realize that my problem could be solved by explicitly specifying `use_cache=False` when calling `model.forward`. This is likely because when the `use_cache` attribute in `model.forward` is `None`, it falls back to `model.config.use_cache`, which is set to True by default.
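A minimal sketch of that workaround, reusing the model and tensors from the issue above:
```python
outputs = model.forward(input_ids, decoder_input_ids=output_ids, use_cache=False)
outputs[0].size()  # torch.Size([2, 5, 768]), i.e. it now matches target_sequence_length
```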
I'm not sure whether what we have here is the intended behavior for BART, so I'll let @sshleifer @patrickvonplaten make the decision to close this :)<|||||>This seems to be related to https://github.com/huggingface/transformers/issues/6348. @sshleifer do you want to take a look at this?<|||||>@sshleifer I think this is a problem because in the first pass, when the `cache` is still empty and `use_cache=True`, if `decoder_input_ids` is of length 9 then the `last_hidden_state` should also be of size 9 **and** the cache should be returned. I can take a look this week if you are very busy - let me know!<|||||>Yes that would be helpful @patrickvonplaten !
<|||||>@patrickvonplaten Make sure to remove `use_cache` in this PR to solve the problem. |
transformers | 6,352 | closed | [GPT2] Correct typo in docs | 08-08-2020 18:30:28 | 08-08-2020 18:30:28 | Codecov report: merging #6352 into master would decrease coverage by 0.31%; diff coverage n/a. |
transformers | 6,351 | closed | Why is distillbart-cnn done with no teacher and distilbart-xsum has a teacher? | @sshleifer Can you expand on why distillbart-xsum is done with a teacher and distillbart-cnn is not?
| 08-08-2020 18:06:53 | 08-08-2020 18:06:53 | It's purely empirical. `distilbart-xsum` variants perform about 1-2 ROUGE pts worse without a teacher, while the gap is basically 0 for the `distilbart-cnn` variants. For translation, it seems like a teacher also helps a bit.
<|||||>In the future, you can tag me on discussion questions on discuss.huggingface.co!
|
transformers | 6,350 | closed | Add model card for electra-base-turkish-cased-ner | 08-08-2020 16:54:33 | 08-08-2020 16:54:33 | Why is the test for `build_doc` failing?<|||||>The CI failure is unrelated. Thanks for sharing! |
|
transformers | 6,349 | closed | [testing] USE_CUDA default and intuitive skip decorators | This library's primary use is for gpu work, and currently many tests won't run even if a gpu is available, since the current setup wants the env var `USE_CUDA` to be true for anything to happen. It's easy to forget to manually add this env var to the pytest command line.
To maximize the testing potential, I propose that a test is skipped only if `USE_CUDA=False` (for CI jobs that need to test the library's work on cpu); otherwise, if `torch.cuda.is_available()`, cuda tests can be run.
In a brief discussion @julien-c suggested that:
> The original thinking was that we wanted to make sure that when we wanted to run on GPU it actually ran on GPU. i.e. it should even fail if you do `USE_CUDA` and there's no GPU, to prevent silent failures on GPU
and the discussion stopped there. This ticket was opened to complete this discussion.
@julien-c, could you please share a specific scenario based on the design intention you shared?
also `CUDA_VISIBLE_DEVICES=""` could be used to easily emulate a non-gpu environment if need be, w/o introducing new env vars. i.e. it'd be a built-in equivalent of `USE_CUDA=False`.
Further, `USE_CUDA` is currently only used for skip decorators. This setting cannot currently be respected from within a test. e.g. in a test I'm currently working on I have:
```
if torch.cuda.is_available():
testargs += ['--fp16', '--gpus=1']
```
so it'll ignore `USE_CUDA`, as the test must always run whether there is a gpu or not, so no skip decorator was used. This ignoring conceptually won't do the right thing then, as it'll run the test on gpu even if `USE_CUDA==False` (or unset). So if the `USE_CUDA` functionality remains, there is a need for an accessor that is not a [skip decorator](https://github.com/huggingface/transformers/blob/master/src/transformers/testing_utils.py#L126).
`require_multigpu` also currently ignores `USE_CUDA`
| 08-08-2020 16:29:59 | 08-08-2020 16:29:59 | Perhaps, while we are at it, it'd be good to discuss the related skip decorators. **Please let me know whether this is different enough and should be in its own issue.**
These are the torch-related skip decorators we [currently have](https://github.com/huggingface/transformers/blob/master/src/transformers/testing_utils.py#L73):
* `require_torch`
* `require_multigpu`
* `require_torch_and_cuda`
* `require_torch_tpu`
Currently there is `require_multigpu`, but no `require_gpu` - I tried using `require_torch_and_cuda` but it only works if USE_CUDA is set. And `require_torch_and_cuda` name is non-intuitive/inconsistent next to `require_multigpu`.
And `require_multigpu` should behave like `require_torch_and_cuda`, except require `gpus>1` - i.e. whatever the USE_CUDA discussion outcome will be - it should behave the same. Currently it **does not** respect the `USE_CUDA` setting!
The `require_torch` decorator name is somewhat ambiguous - is it asking for just having `torch` installed or choosing torch vs tf?
Finally `require_torch_tpu` is again weirdly mismatching other decorator naming - should all of them have `_torch_` in the name or some of them? `require_multigpu` is torch-specific.
My thinking is perhaps we need:
1. `require_torch` - this test will run only under torch
2. `require_torch_gpu` - as `require_torch` plus at least 1 gpu
3. `require_torch_multigpu` - as `require_torch` plus at least 2 gpus
4. `require_torch_tpu` - as `require_torch` plus at least 1 tpu
that's if we sort out `USE_CUDA` to not need `require_torch_and_cuda`.
And perhaps there might be a situation where we want: `gpu|tpu` - that is skip this test unless at least 1 gpu or 1 tpu is available, as perhaps it'd be too slow on cpu. `require_torch_gpu_or_tpu` or `require_torch_non_cpu`? Is there a common name/term for an environment that has either gpu or tpu?
And then `require_torch_cpu_only` - skip this test if either gpu or tpu is available? i.e. this test needs to be run under cpu.
So 2 more:
5. `require_torch_non_cpu` - as `require_torch` plus at least 1 gpu or 1 tpu
6. `require_torch_cpu_only` - as `require_torch` plus must have neither gpus nor tpus
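A minimal sketch of what 2. and 3. could look like (assuming the unittest-skip style already used in `testing_utils.py`):
```python
import unittest

import torch

def require_torch_gpu(test_case):
    """Skip the decorated test unless at least 1 cuda gpu is available."""
    if torch.cuda.device_count() < 1:
        return unittest.skip("test requires a gpu")(test_case)
    return test_case

def require_torch_multigpu(test_case):
    """Skip the decorated test unless at least 2 cuda gpus are available."""
    if torch.cuda.device_count() < 2:
        return unittest.skip("test requires multiple gpus")(test_case)
    return test_case
```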
And as discussed at the end of the comment above, in addition to the skip decorators we will find a good use for `has_` accessors with the same names (e.g. `has_torch_gpu`), so that a test could potentially behave differently depending on the environment, which could be changed globally by `USE_CUDA` or `CUDA_VISIBLE_DEVICES`.<|||||>I think life would be marginally better if we used `CUDA_VISIBLE_DEVICES` and your first 4 `@require` decorators. Basically just delete `USE_CUDA`. But @julien-c has more context.
<|||||>@julien-c suggested we check in with @LysandreJik, @patrickvonplaten, @sgugger and @thomwolf so there is a full agreement, before we make the change.
<|||||>I agree with this change. Not sure we need decorators 5 and 6 though. I'd wait for an occasion to see if they are needed.<|||||>Ok for clarifying this and making it more robust. I'm also not opposed to changing the `USE_CUDA` flag to `True` by default either.<|||||>I agree with @sgugger here<|||||>@thomwolf, @julien-c asked to confirm that you're in agreement with this proposal. Thank you! <|||||>I think you have waited long enough to PR this @stas00 .
Apologies in advance if there is already a PR that I have not seen.<|||||>Thank you for affirming that, @sshleifer. <|||||>I agree with @sshleifer, feel free to open a PR @stas00!<|||||>Thank you, @LysandreJik. I will work on that once I finish sorting out the fsmt nuances. |
transformers | 6,348 | closed | [Bart] Cannot use Bart decoder cache with torchscript | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-111-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0+cpu (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Bart: @sshleifer
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
When trying to use torchscript for Bart while passing `decoder_input_ids`:
```python
from transformers import BartModel
import torch
model = BartModel.from_pretrained("sshleifer/bart-tiny-random")
input_ids = decoder_input_ids = torch.tensor([19 * [1] + [model.config.eos_token_id]])
traced_model = torch.jit.trace(model, (input_ids, decoder_input_ids))
```
the following error occurs:
```
RuntimeError: Only tensors, lists, tuples of tensors, or dictionary of tensors can be output from traced functions
```
On the other hand, if one disables the past via `model.config.use_cache = False`, then no error occurs. This could mean that the cache data structure should be updated to correctly work with Torchscript.
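For example, the following traces cleanly (a sketch continuing the snippet above):
```python
model.config.use_cache = False  # the decoder then returns only tensors, no cache tuple
traced_model = torch.jit.trace(model, (input_ids, decoder_input_ids))
```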
## Expected behavior
No error should occur when using Bart + Torchscript in the way explained above.
| 08-08-2020 15:24:15 | 08-08-2020 15:24:15 | After having fixed the bug it would be great if this line: https://github.com/huggingface/transformers/blob/ac001c48b8df1f5aadcc8cf2c71d7c1116c05250/tests/test_modeling_common.py#L252 can be removed so that a test for Bart + `past_key_value` is enabled.<|||||>thanks for writing such a good issue, I'll take a look tomorrow.<|||||>This should be looked at again after https://github.com/huggingface/transformers/pull/7474 is merged<|||||>Refactor resolves the problem -> should be fine after merge |
transformers | 6,347 | closed | ModuleNotFoundError: No module named 'transformers' on Google Colab | I installed **transformers** using the command `!pip install transformers` on **Google Colab Notebook**
But then I try to `import transformers` it throws an error.
This is the output of the pip install command:
Requirement already satisfied: transformers in /usr/local/lib/python3.6/dist-packages/transformers-3.0.2-py3.6.egg (3.0.2)
Requirement already satisfied: dataclasses in /usr/local/lib/python3.6/dist-packages (from transformers) (0.7)
Requirement already satisfied: filelock in /usr/local/lib/python3.6/dist-packages (from transformers) (3.0.12)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from transformers) (1.18.5)
Requirement already satisfied: packaging in /usr/local/lib/python3.6/dist-packages (from transformers) (20.4)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.6/dist-packages (from transformers) (2019.12.20)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from transformers) (2.23.0)
Requirement already satisfied: sacremoses in /usr/local/lib/python3.6/dist-packages/sacremoses-0.0.43-py3.6.egg (from transformers) (0.0.43)
Requirement already satisfied: sentencepiece!=0.1.92 in /usr/local/lib/python3.6/dist-packages/sentencepiece-0.1.91-py3.6-linux-x86_64.egg (from transformers) (0.1.91)
Requirement already satisfied: tokenizers==0.8.1.rc1 in /usr/local/lib/python3.6/dist-packages/tokenizers-0.8.1rc1-py3.6-linux-x86_64.egg (from transformers) (0.8.1rc1)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.6/dist-packages (from transformers) (4.41.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (1.15.0)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.6/dist-packages (from packaging->transformers) (2.4.7)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2.10)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (1.24.3)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->transformers) (2020.6.20)
Requirement already satisfied: click in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from sacremoses->transformers) (0.16.0) | 08-07-2020 14:45:03 | 08-07-2020 14:45:03 | @Mohd-Misran seems to be working for me.
Maybe try to open a new colab notebook?
<|||||>You need to restart your Colab runtime after installing new dependencies |
transformers | 6,346 | closed | Create README.md | 08-08-2020 14:43:55 | 08-08-2020 14:43:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=h1) Report
> Merging [#6346](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9f57e39f7165fa8bd6ac911852221a76d4b79ebe&el=desc) will **decrease** coverage by `0.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6346 +/- ##
==========================================
- Coverage 79.79% 79.61% -0.19%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21701 21652 -49
- Misses 5495 5544 +49
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.95% <0.00%> (-25.22%)` | :arrow_down: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6346/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=footer). Last update [9f57e39...9591bf9](https://codecov.io/gh/huggingface/transformers/pull/6346?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,345 | closed | Is it necessary to provide attention_mask, or will the model calculate it itself? | Is it necessary to provide attention_mask, or will the model calculate it itself? | 08-08-2020 11:15:25 | 08-08-2020 11:15:25 | If you do not pass an `attention_mask`, then the `attention_mask` will automatically be set to all ones (`[1, 1, 1, ...]`). This means that every token is attended to (no token is masked). Also check out the docs here: https://huggingface.co/transformers/glossary.html#attention-mask.
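For example, a quick sketch (the checkpoint and sentences are arbitrary): with padding enabled, the tokenizer returns the mask alongside the ids:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer(["A short sentence.", "A noticeably longer example sentence."], padding=True)
print(batch["input_ids"])       # the second sequence forces padding onto the first
print(batch["attention_mask"])  # 1 for real tokens, 0 at the padded positions
```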
If your `input_ids` contain `<PAD>` tokens, the correct `attention_mask` will not be calculated automatically. You can, however, leverage the tokenizers (as in the sketch above) to retrieve the correct `attention_mask`. |
transformers | 6,344 | closed | [s2s] fix label_smoothed_nll_loss | Regarding issue #4576
Regarding reduction, fairseq does reduce using 'sum' for both [cross_entropy](https://fairseq.readthedocs.io/en/latest/_modules/fairseq/criterions/cross_entropy.html#CrossEntropyCriterion) and [label_smoothed_cross_entropy](https://fairseq.readthedocs.io/en/latest/_modules/fairseq/criterions/label_smoothed_cross_entropy.html) . In transformers, `CrossEntropy` does the default `mean` reduction. Should we do `mean` or `sum` here ?
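For reference, a sketch of the fairseq-style computation under discussion, with `sum` reduction and padding masked out (the names and the exact pad handling are illustrative, not necessarily identical to this PR's final code):
```python
import torch

def label_smoothed_nll_loss(lprobs, target, epsilon, pad_token_id):
    # lprobs: (batch, seq_len, vocab_size) log-probabilities; target: (batch, seq_len) token ids
    nll_loss = -lprobs.gather(dim=-1, index=target.unsqueeze(-1))
    smooth_loss = -lprobs.sum(dim=-1, keepdim=True)
    pad_mask = target.unsqueeze(-1).eq(pad_token_id)          # True at padding positions
    nll_loss = nll_loss.masked_fill(pad_mask, 0.0).sum()      # 'sum' reduction, as in fairseq
    smooth_loss = smooth_loss.masked_fill(pad_mask, 0.0).sum()
    eps_i = epsilon / lprobs.size(-1)
    return (1.0 - epsilon) * nll_loss + eps_i * smooth_loss
```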
@sshleifer | 08-08-2020 08:02:27 | 08-08-2020 08:02:27 | I think `sum` is good. Ideally, we should divide by the number of non pad tokens, but I'm gunna merge this and then we can experiment with more complicated transformations. Thanks for the fix!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=h1) Report
> Merging [#6344](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/99f73bcc71e73d747124c476f9028db752fb05f3&el=desc) will **increase** coverage by `0.11%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6344 +/- ##
==========================================
+ Coverage 79.47% 79.59% +0.11%
==========================================
Files 148 148
Lines 27196 27196
==========================================
+ Hits 21614 21646 +32
+ Misses 5582 5550 -32
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6344/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=footer). Last update [99f73bc...cfa9adc](https://codecov.io/gh/huggingface/transformers/pull/6344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,343 | closed | The default cache directory is lack of disk capacity, I need change the configure of the default cache directory. | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 08-08-2020 08:00:49 | 08-08-2020 08:00:49 | Please try to write better posts in the future. This is just lazy.
You can set the directory for a cache with the `TRANSFORMERS_CACHE` environment variable.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
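To expand on the `TRANSFORMERS_CACHE` suggestion above, a minimal sketch (the path is a placeholder); the variable has to be set before `transformers` is imported, or you can pass `cache_dir` per call:
```python
import os
os.environ["TRANSFORMERS_CACHE"] = "/mnt/big_disk/hf_cache"  # placeholder path on a larger disk

from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-uncased")  # weights are now cached under the new location
# alternatively: AutoModel.from_pretrained("bert-base-uncased", cache_dir="/mnt/big_disk/hf_cache")
```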
|
transformers | 6,342 | closed | [marian] converter supports models from new Tatoeba project | - no state dict change, just need to read model metadata from a new path
- we only accept models with 7 letter names... like `ara-eng`. This is the new format.
- added integration test for `ara-eng`
Done:
- [x] upload 300 models: all new ones that were not "dominated" by a model we already have, where dominated means the same langpair but the existing model has a higher BLEU score.
Todo:
- [ ] switch integration test for `ara-eng` -> `ar-en`.
- [ ] automated model cards with correct `tags`, more info on all possible language codes.
- [ ] automated conflict resolution: Don't convert models that are worse than predecessors.
- [ ] decide what to do about naming: move all to 3 letter/all to 2 letter?
- [ ] notebook -> pyfile
- [ ] tweet
cc @julien-c
Dict where keys are old names, and values are new names, filtered to situations where new name has higher BLEU than old name:
```
{'bg-es': 'bul-spa',
'es-eu': 'spa-eus',
'eu-es': 'eus-spa',
'es-bg': 'spa-bul',
'ilo-en': 'ilo-eng',
'es-mk': 'spa-mkd',
'es-ca': 'spa-cat',
'es-af': 'spa-afr',
'lt-es': 'lit-spa',
'bn-en': 'ben-eng',
'th-en': 'tha-eng',
'fr-ca': 'fra-cat',
'ga-en': 'gle-eng',
'en-ga': 'eng-gle',
'ko-fi': 'kor-fin',
'es-uk': 'spa-ukr',
'gl-es': 'glg-spa',
'eo-sv': 'epo-swe',
'ca-de': 'cat-deu',
'az-en': 'aze-eng',
'sv-eo': 'swe-epo',
'de-is': 'deu-isl',
'ceb-en': 'ceb-eng',
'ca-fr': 'cat-fra',
'tl-en': 'tgl-eng',
'is-de': 'isl-deu',
'ko-en': 'kor-eng',
'is-es': 'isl-spa',
'es-gl': 'spa-glg',
'bg-fr': 'bul-fra',
'de-af': 'deu-afr',
'ko-es': 'kor-spa',
'es-is': 'spa-isl',
'af-es': 'afr-spa',
'gl-en': 'glg-eng',
'fi-en': 'fin-eng',
'en-bg': 'eng-bul',
'mk-es': 'mkd-spa',
'ka-en': 'kat-eng',
'en-eu': 'eng-eus',
'de-ca': 'deu-cat',
'ar-de': 'ara-deu'}
``` | 08-08-2020 07:24:20 | 08-08-2020 07:24:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=h1) Report
> Merging [#6342](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/fb7330b30ebfbb3f07b87203f0405ee09905eeda&el=desc) will **increase** coverage by `0.99%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6342 +/- ##
==========================================
+ Coverage 78.42% 79.41% +0.99%
==========================================
Files 156 156
Lines 28129 28129
==========================================
+ Hits 22061 22340 +279
+ Misses 6068 5789 -279
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.31% <0.00%> (-0.98%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `83.71% <0.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.35% <0.00%> (+0.19%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.44% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <0.00%> (+0.83%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.59% <0.00%> (+1.36%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.82% <0.00%> (+1.63%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6342/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=footer). Last update [fb7330b...c3288e2](https://codecov.io/gh/huggingface/transformers/pull/6342?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,341 | closed | [s2s] tiny QOL improvement: run_eval prints scores | It's annoying to have to cat a file to see the scores after calling run_eval.py | 08-08-2020 04:59:44 | 08-08-2020 04:59:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=h1) Report
> Merging [#6341](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/322dffc6c9a44fd504b24b0efcbcaa419b577a93&el=desc) will **increase** coverage by `0.13%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6341 +/- ##
==========================================
+ Coverage 78.37% 78.51% +0.13%
==========================================
Files 148 148
Lines 27196 27196
==========================================
+ Hits 21316 21354 +38
+ Misses 5880 5842 -38
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6341/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `97.41% <0.00%> (+32.94%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=footer). Last update [322dffc...37aad56](https://codecov.io/gh/huggingface/transformers/pull/6341?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,340 | closed | PegasusForConditionalGeneration (torch version) | This PR adds [pegasus](https://arxiv.org/abs/1912.08777), a SOTA summarization model ported from [tf1](https://github.com/google-research/pegasus), in collaboration with @JingqingZ .
More info on the model can be found in `pegasus.rst` under Files changed.
Config: [here](https://s3.amazonaws.com/models.huggingface.co/bert/google/pegasus-xsum/config.json)
#### TODO This PR:
- [x] convert to bart state dict format
- [x] working sentencepiece tokenizer
- [x] integration test with good summary on xsum data. (Haven't checked parity).
- [x] beam_alpha -> length_penalty approximation.
- [x] check xsum rouge with length penalty 1. 24.34 vs 24.56 Rouge 2 in paper (very good, no bug). Gap likely from different length penalty.
- [x] convert other checkpoints besides xsum
- [x] tokenizer must know max_source_length (`tokenizer_config.json`)
- [x] `model_doc/pegasus.rst` (document known fp16 issue)
- [x] move all checkpoints to `google/pegasus/{dataset}/`
- [ ] model_cards (S3)
#### Future PR(s):
- [ ] TF 2.0
- [ ] `tokenizer.add_tokens` doesn't work.
- [ ] support for finetuning pegasus-large (WIP see `finetune_pegasus.sh`)
- [ ] potentially add pegasus's `length_normalization` logic if it helps metrics substantially (over equivalent length_penalty).
- [ ] faster tokenizer tests (with smaller sentencepiece model.)
- [ ] try to find a clean way to add the pegasus length penalty.
- [ ] pick checkpoint for summarization pipeline default -- probably cnndm.
#### Known FP16 Issue
fp16 generation doesn't work for most sequences. We have an activation that is 101,610 in both fp32 and fp16 (the limit is 65,504).
In `#pegasus-collab`, the authors responded that they never used fp16 during pretraining/finetuning.
Things I tried that didn't help:
- never use `FusedLayerNorm`
- increase `layernorm_eps` to 1 (from 1e-5)
Things I haven't tried:
- change all softmaxes to dtype=torch.float32
- manually divide by 100 and finetune more with some loss that discourages large activations.
#### Implementation Choices
- I inherited from Bart with 0 change to bart, but added a new config/modeling file for namespace consistency/control.
- `PegasusTokenizer` inherits from `ReformerTokenizer` -- both just use a single `spiece.model`.
- added common test coverage for the tokenizer, not the model since it is 0 LOC.
- added integration tests for xsum.
### Inference API
datasets will vary between checkpoints, but otherwise, I think these are almost correct front matter
```
---
language: en
datasets:
- xsum
tags:
- summarization
---
```
This doesn't seem to be helping since [xsum](https://huggingface.co/google/pegasus-xsum) still thinks it's for mask filling.
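For reference, a minimal usage sketch of the xsum checkpoint (the source text is a placeholder):
```python
from transformers import PegasusTokenizer, PegasusForConditionalGeneration

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

src_text = ["PG&E stated it scheduled the blackouts in response to forecasts for high winds."]
batch = tokenizer(src_text, truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```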
| 08-08-2020 04:54:59 | 08-08-2020 04:54:59 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=h1) Report
> Merging [#6340](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f6cb0f806efecb64df40c946dacaad0adad33d53&el=desc) will **increase** coverage by `1.80%`.
> The diff coverage is `94.50%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6340 +/- ##
==========================================
+ Coverage 77.51% 79.32% +1.80%
==========================================
Files 150 153 +3
Lines 27789 27877 +88
==========================================
+ Hits 21542 22113 +571
+ Misses 6247 5764 -483
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <ø> (-0.91%)` | :arrow_down: |
| [src/transformers/configuration\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `90.90% <90.90%> (ø)` | |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `93.54% <93.54%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.27% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.33% <100.00%> (+0.15%)` | :arrow_up: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.23% <100.00%> (+0.48%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.61% <100.00%> (+14.65%)` | :arrow_up: |
| [src/transformers/modeling\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19wZWdhc3VzLnB5) | `100.00% <100.00%> (ø)` | |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.45% <100.00%> (-2.33%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| ... and [27 more](https://codecov.io/gh/huggingface/transformers/pull/6340/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=footer). Last update [f6cb0f8...95e8544](https://codecov.io/gh/huggingface/transformers/pull/6340?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I'm going to merge after 2 hours of docs work, then take another pass to document prepare_seq2seq_batch consistently when other tokenizers implement it. |
transformers | 6,339 | closed | refactor almost identical tests | In preparation for adding more schedulers, this PR refactors these almost identical tests.
Unfortunately we [can't use `pytest.mark.parametrize`](https://docs.pytest.org/en/latest/unittest.html#pytest-features-in-unittest-testcase-subclasses), so the only drawback is that it makes them all into a single test. It'd have been nice to parametrize instead. | 08-08-2020 03:36:10 | 08-08-2020 03:36:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=h1) Report
> Merging [#6339](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/322dffc6c9a44fd504b24b0efcbcaa419b577a93&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6339 +/- ##
==========================================
- Coverage 78.37% 78.34% -0.04%
==========================================
Files 148 148
Lines 27196 27196
==========================================
- Hits 21316 21307 -9
- Misses 5880 5889 +9
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6339/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=footer). Last update [322dffc...92d3825](https://codecov.io/gh/huggingface/transformers/pull/6339?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>could also modify `unwrap_schedule` and `unwrap_and_save_reload_schedule` to return a clean list of numbers, and then it'd be just:
```
for scheduler_func, data in scheds.items():
kwargs, expected_learning_rates = data
scheduler = scheduler_func(self.optimizer, **kwargs)
lrs_1 = unwrap_schedule(scheduler, self.num_steps)
self.assertListAlmostEqual(lrs_1, expected_learning_rates, tol=1e-2)
scheduler = scheduler_func(self.optimizer, **kwargs)
lrs_2 = unwrap_and_save_reload_schedule(scheduler, self.num_steps)
self.assertListEqual(lrs_1, lrs_2)
```
but perhaps it'd be less intuitive for those reading the test code.<|||||>Does this impact tracebacks in a bad way? Previously I would know which scheduler I broke if `test_warmup_constant_scheduler` failed.<|||||>That's super-imporant, @sshleifer, thank you for flagging that!
Added an assert msg to make it clear what fails, e.g. if I break data for the sake of demo, we now get:
```
for scheduler_func, data in scheds.items():
kwargs, expected_learning_rates = data
scheduler = scheduler_func(self.optimizer, **kwargs)
lrs_1 = unwrap_schedule(scheduler, self.num_steps)
self.assertEqual(len(lrs_1[0]), 1)
self.assertListAlmostEqual(
> [l[0] for l in lrs_1], expected_learning_rates, tol=1e-2, msg=f"failed for {scheduler_func}"
)
tests/test_optimization.py:126:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_optimization.py:92: in assertListAlmostEqual
self.assertAlmostEqual(a, b, delta=tol, msg=msg)
E AssertionError: 2.5 != 3.5 within 0.01 delta (1.0 difference) : failed for <function get_constant_schedule_with_warmup at 0x7f5da6f0bdd0>
```<|||||>hmm, not sure whether the last commit, to make the assert message even more specific, was needed.
Also, alternatively, I can move the code out of unittest class and then use pytest parametrization so it'll be self-documenting on assert. Ala: https://github.com/huggingface/transformers/blob/175cd45e13b2e33d1efec9e2ac217cba99f6ae58/examples/seq2seq/test_seq2seq_examples.py#L238
<|||||>LGTM as is, but won't merge it myself. |
transformers | 6,338 | closed | remove a TODO item to use a tiny model | as discussed with @sshleifer, removing this TODO to switch to a tiny model, since it won't be able to test the qualitative results of the evaluation (i.e. the results are meaningless). | 08-08-2020 00:57:38 | 08-08-2020 00:57:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=h1) Report
> Merging [#6338](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1f8e8265188de8b76f5c28539056d6eb772e4e0f&el=desc) will **increase** coverage by `0.32%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6338 +/- ##
==========================================
+ Coverage 78.79% 79.12% +0.32%
==========================================
Files 148 148
Lines 27196 27196
==========================================
+ Hits 21430 21519 +89
+ Misses 5766 5677 -89
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-69.11%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| ... and [4 more](https://codecov.io/gh/huggingface/transformers/pull/6338/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=footer). Last update [1f8e826...8721f03](https://codecov.io/gh/huggingface/transformers/pull/6338?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,337 | closed | [CI] add manual workflow dispatch option to github actions runners | https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/ | 08-08-2020 00:43:53 | 08-08-2020 00:43:53 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
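For reference, enabling the manual trigger is a one-line addition to a workflow file, e.g. (a minimal sketch; the cron entry just stands in for whatever schedule the runner already uses):
```yaml
on:
  workflow_dispatch:  # adds the manual "Run workflow" trigger in the Actions UI
  schedule:
    - cron: "0 0 * * *"
```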
|
transformers | 6,336 | closed | broken ONNX slow test | ```
def test_quantize_pytorch(self):
for model in OnnxExportTestCase.MODEL_TO_TEST:
path = self._test_export(model, "pt", 12)
> quantized_path = quantize(Path(path))
```
tests/test_onnx.py:75: `path` is None
https://github.com/huggingface/transformers/runs/960368281?check_suite_focus=true | 08-08-2020 00:42:31 | 08-08-2020 00:42:31 | This seems to be a bit of a flaky test, doesn't it?<|||||>There is strange try/except syntax in `_test_export` that I think can be trivially improved.
transformers | 6,335 | closed | delete unused tiny models | ```
[ok] bart-tiny-random/
[ok] tiny-marian-en-de/
[ok] tiny-mbart/
[deleted] distilbert_tiny_random/
[ok] tiny-ctrl/
PRE tiny-dbmdz-bert-large-cased-finetuned-conll03-english/
[ok] tiny-distilbert-base-cased-distilled-squad/
[ok] tiny-distilbert-base-cased/
[ok] tiny-distilbert-base-uncased-finetuned-sst-2-english/
[ok] tiny-distilroberta-base/
[ok] tiny-gpt2/
[ok] tiny-xlnet-base-cased/
```
and make sure the ones that remain are usable/have tokenizer files. | 08-08-2020 00:32:51 | 08-08-2020 00:32:51 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,334 | closed | [WIP] Avoid call to torch.triu | 08-07-2020 23:52:37 | 08-07-2020 23:52:37 | ||
transformers | 6,333 | closed | add tests/test_tokenization_reformer.py | I don't think there is any common test coverage for ReformerTokenizer. besides through integration tests.
Good source for copy/modification is `XLMRobertaTokenizationTest`
| 08-07-2020 22:51:14 | 08-07-2020 22:51:14 | I can help with this.<|||||>Awesome!<|||||>@sshleifer I put together the test code and find that the following test is failing:
```
self = < tests.test_tokenization_reformer.ReformerTokenizationTest
testMethod = test_torch_encode_plus_sent_to_model
@slow
@require_torch
def test_torch_encode_plus_sent_to_model(self):
import torch
from transformers import MODEL_MAPPING, TOKENIZER_MAPPING
MODEL_TOKENIZER_MAPPING = merge_model_tokenizer_mappings(MODEL_MAPPING, TOKENIZER_MAPPING)
tokenizers = self.get_tokenizers(do_lower_case=False)
for tokenizer in tokenizers:
with self.subTest(f"{tokenizer.__class__.__name__}"):
if tokenizer.__class__ not in MODEL_TOKENIZER_MAPPING:
return
config_class, model_class = MODEL_TOKENIZER_MAPPING[tokenizer.__class__]
config = config_class()
if config.is_encoder_decoder or config.pad_token_id is None:
return
model = model_class(config)
# Make sure the model contains at least the full vocabulary size in its embedding matrix
is_using_common_embeddings = hasattr(model.get_input_embeddings(), "weight")
assert (
(model.get_input_embeddings().weight.shape[0] >= len(tokenizer))
if is_using_common_embeddings
else True
)
AssertionError:
assert False
```
Upon further investigation I found a discrepancy between the pre-trained tokenizer and pre-trained model config around the pad token id and resulting vocab size. Please see the example below:
```python
from transformers import ReformerTokenizer, ReformerModel

model = ReformerModel.from_pretrained("google/reformer-crime-and-punishment")
tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
print(tokenizer.vocab_size)                          # 320
print(len(tokenizer))                                # 321
print(model.config.vocab_size)                       # 320
print(model.get_input_embeddings().weight.shape[0])  # 320
print(tokenizer.get_vocab()['<pad>'])                # 320
print(model.config.pad_token_id)                     # 0
print(tokenizer.get_vocab()['<unk>'])                # 0
```
What is your suggestion for moving forward?<|||||>My suggestion would be to check in `tokenization_utils_base ` how `__len__` works, and try to make it so that ReformerTokenizer's __len__ is 320.<|||||>@sshleifer Test merged.<|||||>Thx @D-Roberts ! |
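For context, the base class computes `len()` as the base vocab size plus the number of added tokens. A paraphrased, runnable sketch (not the verbatim source) of why `len(tokenizer)` came out as 321:
```python
# Paraphrased sketch of the base tokenizer's __len__ (not the verbatim source):
class TokenizerLenSketch:
    def __init__(self, vocab_size, added_tokens_encoder):
        self.vocab_size = vocab_size
        self.added_tokens_encoder = added_tokens_encoder  # tokens appended on top of the base vocab

    def __len__(self):
        return self.vocab_size + len(self.added_tokens_encoder)

# '<pad>' was registered as an *added* token instead of mapping into the 320-entry base vocab:
print(len(TokenizerLenSketch(320, {"<pad>": 320})))  # 321
print(len(TokenizerLenSketch(320, {})))              # 320  <- the desired behavior
```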
transformers | 6,332 | closed | [CI] Self-scheduled runner also pins torch | ```bash
pip install torch!=1.6.0 --no-cache-dir
```
| 08-07-2020 22:31:01 | 08-07-2020 22:31:01 | merging to fix CI<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=h1) Report
> Merging [#6332](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6695450a23545bc9d5416f39ab39609c7811c653&el=desc) will **increase** coverage by `0.57%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6332 +/- ##
==========================================
+ Coverage 78.54% 79.11% +0.57%
==========================================
Files 148 148
Lines 27196 27196
==========================================
+ Hits 21361 21517 +156
+ Misses 5835 5679 -156
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-70.97%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-32.95%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `94.72% <0.00%> (+22.87%)` | :arrow_up: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6332/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `94.63% <0.00%> (+70.08%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=footer). Last update [6695450...b9d3b99](https://codecov.io/gh/huggingface/transformers/pull/6332?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,331 | closed | Delete this line in label_smoothed_nll_loss | ```python
bs = pad_mask.long().sum()
``` | 08-07-2020 20:41:14 | 08-07-2020 20:41:14 | |
transformers | 6,330 | closed | BertForPreTraining with NSP | # ❓ Questions & Help
## Details
I am trying to train BERT from scratch following a modification of https://huggingface.co/blog/how-to-train, where I use a BertTokenizer and BertForPreTraining. The [documentation for BertForPreTraining](https://huggingface.co/transformers/model_doc/bert.html#transformers.BertForPreTraining) states that it has two heads on top for both pre-training processes (MLM and NSP), but [the example provided](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L874-L884) only provides an example of MLM.
Based on [a comment](https://github.com/huggingface/transformers/issues/2693#issuecomment-580870278) provided by @LysandreJik in a previous issue, it seems that none of the provided datasets (i.e. LineByLineTextDataset) will handle the NSP objective and this objective is excluded because the RoBERTa paper has proven that the NSP objective was not particularly helpful.
@LysandreJik additionally noted that anyone who wants to implement the NSP objective can do so by changing the dataset/training loop, and I was wondering if there were any plans to add support for NSP for the sake of completeness?
It seems that something similar to what is going on in a PR (https://github.com/huggingface/transformers/pull/6168) for Albert SOP can be done. Is this correct and can anyone provide me with some guidance moving forward? | 08-07-2020 18:00:19 | 08-07-2020 18:00:19 | Hi! Supporting the NSP objective is not on our roadmap, due to the reason you've linked and because of insufficient bandwidth.
However, similar to the work in #6168 for SOP, we're very open to contributions and would accept a PR adding the BERT NSP objective to the datacollators/datasets.<|||||>Awesome, I've been working on something similar. Will open a PR, thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@choidongyeon May I ask if the work on the dataset part using the BertForPreTraining APIs is finished? Any example code like run_mlm.py (is there a run_mlm_nsp.py?) would help. Looking forward to your reply, thanks!
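For reference, a minimal sketch of driving both heads of `BertForPreTraining` (the `labels` here are set naively to the unmasked input ids just to produce a loss; real MLM pretraining would mask tokens first):
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

# next_sentence_label: 0 means sentence B follows sentence A, 1 means B is random
encoding = tokenizer("The cat sat on the mat.", "Then it fell asleep.", return_tensors="pt")
outputs = model(
    **encoding,
    labels=encoding["input_ids"],            # MLM targets (normally the masked token ids)
    next_sentence_label=torch.tensor([0]),
)
loss = outputs[0]  # combined MLM + NSP loss (first element of the tuple output in transformers 3.x)
```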
transformers | 6,329 | closed | OSError: Model name 'lonePatient/albert_chinese_small' was not found in tokenizers model | my code:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("lonePatient/albert_chinese_small")
model = AutoModel.from_pretrained("lonePatient/albert_chinese_small")
model.save_pretrained("lonePatient+albert_chinese_small")
```
+++++++++++
```
Downloading: 100%|██████████| 633/633 [00:00<00:00, 113kB/s]
Traceback (most recent call last):
  File "hg_download.py", line 30, in <module>
    tokenizer = AutoTokenizer.from_pretrained("lonePatient/albert_chinese_small")
  File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_auto.py", line 217, in from_pretrained
    return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1140, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1239, in _from_pretrained
    raise EnvironmentError(
OSError: Model name 'lonePatient/albert_chinese_small' was not found in tokenizers model name list (albert-base-v1, albert-large-v1, albert-xlarge-v1, albert-xxlarge-v1, albert-base-v2, albert-large-v2, albert-xlarge-v2, albert-xxlarge-v2). We assumed 'lonePatient/albert_chinese_small' was a path, a model identifier, or url to a directory containing vocabulary files named ['spiece.model'] but couldn't find such vocabulary files at this path or url.
```
~/ub16_prj % | 08-07-2020 16:17:32 | 08-07-2020 16:17:32 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
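For what it's worth, the traceback hints at the cause: `AlbertTokenizer` looks for a sentencepiece `spiece.model`, while Chinese ALBERT checkpoints typically ship a BERT-style `vocab.txt`. A sketch of a likely workaround (unverified for this exact checkpoint):
```python
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("lonePatient/albert_chinese_small")  # BERT vocab instead of sentencepiece
model = AutoModel.from_pretrained("lonePatient/albert_chinese_small")
```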
|
transformers | 6,328 | closed | Small docfile fixes | Nothing major, just a few fixes to make the files work with the coming notebook conversion. | 08-07-2020 15:46:31 | 08-07-2020 15:46:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=h1) Report
> Merging [#6328](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2f2aa0c89cab9a77560e6845578f917a61081c67&el=desc) will **increase** coverage by `0.27%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6328 +/- ##
==========================================
+ Coverage 79.14% 79.42% +0.27%
==========================================
Files 148 148
Lines 27191 27191
==========================================
+ Hits 21521 21596 +75
+ Misses 5670 5595 -75
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `68.14% <0.00%> (-25.67%)` | :arrow_down: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6328/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=footer). Last update [2f2aa0c...71324f6](https://codecov.io/gh/huggingface/transformers/pull/6328?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,327 | closed | Batched pipeline | Hi,
Is there a way to run batches with QuestionAnsweringPipeline rather than just one example?
Thanks. | 08-07-2020 15:10:56 | 08-07-2020 15:10:56 | Yes, building off of the model and example [here](https://huggingface.co/twmkn9/albert-base-v2-squad2):
```
from transformers.pipelines import pipeline
model_name = "twmkn9/albert-base-v2-squad2"
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'Why is model conversion important?',
'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.'}, {
'question': 'What is the name of the repository ?',
'context': 'Pipeline have been included in the huggingface/transformers repository. '
}
res = nlp(QA_input, handle_impossible_answer=True)
print(res)
# [{'score': 0.2479676753282547, 'start': 59, 'end': 132, 'answer': 'gives freedom to the user and let people easily switch between frameworks.'}, {'score': 0.5168691277503967, 'start': 35, 'end': 71, 'answer': 'huggingface/transformers repository.'}]
```
<|||||>Hi.
I used your example for testing. It seems like even though I put multiple question-context pairs in as input, it really is just doing a one-by-one prediction on them in the background.
So for 1 example the inference time is: 0.56 sec
For 2 examples the inference time is: 1.05 sec
For 16 examples it is: 8.4 sec, etc.
Is there a way to do batch inference with the model to save some time? (I use a 12 GB GPU, transformers 2.4.0 or 3.2.0)
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Possible duplicate of #3007<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>still an open point. highly required, any information on the progress?<|||||>This is implemented in recent versions: https://huggingface.co/docs/transformers/master/en/main_classes/pipelines#pipeline-batching
cc @Narsil <|||||>For the sake of completion:
```python
from transformers import pipeline

model_name = "..."  # placeholder: any question-answering checkpoint
pipe = pipeline('question-answering', model=model_name, tokenizer=model_name)
questions = [{"question": "Who am I ?", "context": "There is something about me"}, .... ]
for answer in pipe(questions, batch_size=16):
    print(answer)
```
transformers | 6,326 | closed | Patch models | 08-07-2020 15:04:53 | 08-07-2020 15:04:53 | Side note, if you rebase, you can remove those models from the special ignore list in the `check_repo` script. |
|
transformers | 6,325 | closed | Text-to-SQL Query | # ❓ Questions & Help
Hello everyone, I have a task where I want to use NLP to convert text into a SQL query. Does anyone know how to do this or have any suggestions? Thanks. | 08-07-2020 13:18:05 | 08-07-2020 13:18:05 | pinging @mrm8488 .<|||||>You can try this model: https://huggingface.co/mrm8488/t5-base-finetuned-wikiSQL
Thanks for pinging me @patil-suraj <|||||>If you go to the model hub and type 'SQL', you get [this](https://huggingface.co/models?search=Sql). There are currently 3 variants of the T5 model fine-tuned on WikiSQL (a large dataset that contains sentence-SQL pairs).<|||||>Hi @mrm8488, I am having some issues with the "torch_xla" module. Is there any way to run this model locally on Windows, without this TPU module? Thanks.<|||||>Hi @thiagomoeng, can you try setting the `xla_device` argument in `config.json` to `False`.
```python
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL")
config.xla_device = False
model = T5ForConditionalGeneration.from_pretrained("mrm8488/t5-base-finetuned-wikiSQL", config=config)
```
<|||||>Of course I can, but the basic structure and most of the code are from the public Colab about fine-tuning T5 on TPU by @patil-suraj <|||||>Hi @mrm8488, I have SQL data on Oracle. Can you give me some guidance on how to prepare this SQL data to fine-tune your model? I am a beginner at training models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@thiagomoeng if you want to convert natural language to SQL, here is one implementation: https://github.com/abhijithneilabraham/tableQA
Drop me any comments on the slack channel in the readme there. |
transformers | 6,324 | closed | Create README.md | 08-07-2020 11:32:26 | 08-07-2020 11:32:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=h1) Report
> Merging [#6324](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7e9861f7f4ab137cf102dae9cf6957c1c402c022&el=desc) will **decrease** coverage by `0.12%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6324 +/- ##
==========================================
- Coverage 79.23% 79.11% -0.13%
==========================================
Files 148 148
Lines 27195 27195
==========================================
- Hits 21548 21515 -33
- Misses 5647 5680 +33
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.46% <0.00%> (+5.26%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+10.71%)` | :arrow_up: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `92.17% <0.00%> (+25.21%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=footer). Last update [7e9861f...66d050d](https://codecov.io/gh/huggingface/transformers/pull/6324?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,323 | closed | Hi, I am having trouble locating the transformers/examples/summarization/bart/ file. I was wondering if it has been renamed or changed? | # ❓ Questions & Help
## Details
**A link to original question on the forum/Stack Overflow**: | 08-07-2020 10:40:53 | 08-07-2020 10:40:53 | Think it was moved to the seq2seq directory: https://github.com/huggingface/transformers/tree/master/examples/seq2seq<|||||>Thanks! |
transformers | 6,322 | closed | Transformer-XL: Improved tokenization with sacremoses | Fixes #5136
As explained in the above issue, this PR fixes the tokenization of the TransfoXLTokenizer by using the sacremoses library with an extended feature of tokenizing comma-separated and floating point numbers. That way the input text is tokenized the same way as in the WikiText-103 dataset used for pretraining.
Changes in a nutshell:
* The TransfoXLTokenizer is now using sacremoses for tokenization
* Added tokenization of comma-separated and floating point numbers (see the sketch after this list).
* Removed prepare_for_tokenization() from tokenization_transfo_xl.py because punctuation is handled by sacremoses
* Added corresponding tests
* Removed test comparing TransfoXLTokenizer and TransfoXLTokenizerFast (as discussed in #5302)
* Added deprecation warning to TransfoXLTokenizerFast (as discussed in #5302)
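A quick way to eyeball the new number handling; the expected split follows the WikiText-103 `@,@`/`@.@` convention, which is my expectation here rather than something lifted from this PR's tests:
```python
# Sketch: numbers should come out split the way WikiText-103 was preprocessed,
# e.g. "23,000.99" -> "23 @,@ 000 @.@ 99" (expected output is an assumption).
from transformers import TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
print(tokenizer.tokenize("The price rose to $23,000.99 on Friday."))
```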
@TevenLeScao | 08-07-2020 09:26:47 | 08-07-2020 09:26:47 | `ci/circleci: check_code_quality` fails for me because Python 3.6 is not compatible with PyTorch 1.6. Any ideas how to fix this?<|||||>Pinging @n1t0 and @TevenLeScao (on holiday right now, will be back next week!)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=h1) Report
> Merging [#6322](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/930153e7d2d658267b7630a047a4bfc85b86042d?el=desc) will **increase** coverage by `0.37%`.
> The diff coverage is `96.96%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6322 +/- ##
==========================================
+ Coverage 79.36% 79.74% +0.37%
==========================================
Files 157 157
Lines 28569 28587 +18
==========================================
+ Hits 22675 22797 +122
+ Misses 5894 5790 -104
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `41.74% <96.96%> (-0.75%)` | :arrow_down: |
| [src/transformers/commands/env.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9lbnYucHk=) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [src/transformers/commands/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9fX2luaXRfXy5weQ==) | `0.00% <0.00%> (-100.00%)` | :arrow_down: |
| [src/transformers/commands/transformers\_cli.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy90cmFuc2Zvcm1lcnNfY2xpLnB5) | `0.00% <0.00%> (-87.50%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.94% <0.00%> (-74.32%)` | :arrow_down: |
| [src/transformers/commands/download.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9kb3dubG9hZC5weQ==) | `0.00% <0.00%> (-65.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/commands/serving.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9zZXJ2aW5nLnB5) | `0.00% <0.00%> (-55.89%)` | :arrow_down: |
| [src/transformers/commands/run.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9ydW4ucHk=) | `0.00% <0.00%> (-53.34%)` | :arrow_down: |
| [src/transformers/commands/user.py](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy91c2VyLnB5) | `0.00% <0.00%> (-36.56%)` | :arrow_down: |
| ... and [22 more](https://codecov.io/gh/huggingface/transformers/pull/6322/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=footer). Last update [930153e...3efbfcf](https://codecov.io/gh/huggingface/transformers/pull/6322?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,321 | closed | [Community notebooks] Add notebook on fine-tuning Electra and interpreting with IG | Adding a link to a community notebook containing an example of:
- fine-tuning Electra on GLUE SST-2 with Trainer,
- running Captum Integrated Gradients token importance attribution on the results (see the sketch at the end of this thread),
- visualizing attribution with captum.attr.visualization. | 08-07-2020 09:24:07 | 08-07-2020 09:24:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6321?src=pr&el=h1) Report
> Merging [#6321](https://codecov.io/gh/huggingface/transformers/pull/6321?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c72f9c90a160e74108d50568fa71e1f216949846&el=desc) will **decrease** coverage by `0.25%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6321?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6321 +/- ##
==========================================
- Coverage 79.52% 79.27% -0.26%
==========================================
Files 148 148
Lines 27194 27194
==========================================
- Hits 21627 21559 -68
- Misses 5567 5635 +68
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6321?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.43% <0.00%> (-7.42%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `38.73% <0.00%> (-3.76%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.00% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6321/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `95.77% <0.00%> (+35.21%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6321?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6321?src=pr&el=footer). Last update [c72f9c9...6505a38](https://codecov.io/gh/huggingface/transformers/pull/6321?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Hey @elsanns - thanks a lot for your notebook! It looks great :-)
Also cc @LysandreJik, you might be interested in this!<|||||>Thank you;) |
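For readers who want the gist without opening the notebook, a hedged sketch of the attribution step; the checkpoint, target class, and layer choice below are my assumptions, not taken from the notebook:
```python
# Sketch: per-token Integrated Gradients attribution on an Electra classifier.
import torch
from captum.attr import LayerIntegratedGradients
from transformers import ElectraForSequenceClassification, ElectraTokenizer

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForSequenceClassification.from_pretrained("google/electra-small-discriminator")
model.eval()

def forward_func(input_ids):
    return model(input_ids)[0]  # classification logits

input_ids = tokenizer.encode("a gripping, well-acted film", return_tensors="pt")
baseline = torch.full_like(input_ids, tokenizer.pad_token_id)  # all-PAD baseline

lig = LayerIntegratedGradients(forward_func, model.electra.embeddings)
attributions, delta = lig.attribute(
    input_ids, baselines=baseline, target=1, return_convergence_delta=True
)
print(attributions.sum(dim=-1).squeeze(0))  # per-token importance scores
```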
transformers | 6,320 | closed | Multi-gpu LM finetuning | Hello,
how can I run LM finetuning with more than one GPU? (Specifically, I want to train gpt2-medium on Google Cloud with four NVIDIA T4s, 64 GB.)
What arguments should I pass to the `run_language_modeling.py` script?
Thanks | 08-07-2020 08:10:46 | 08-07-2020 08:10:46 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
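For readers with the same question: the script's `Trainer` already uses all visible GPUs via `torch.nn.DataParallel` with no extra flags; for `DistributedDataParallel`, a launch along these lines should work (file names and hyperparameters below are placeholders, not verified values):
```bash
# Illustrative 4-GPU DDP launch; paths and hyperparameters are placeholders.
python -m torch.distributed.launch --nproc_per_node=4 run_language_modeling.py \
    --model_type gpt2 \
    --model_name_or_path gpt2-medium \
    --do_train \
    --train_data_file ./train.txt \
    --output_dir ./output
```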
|
transformers | 6,319 | closed | num_beams error in GPT2DoubleHead model | ## Environment info
- `transformers` version: 2.9.1
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@LysandreJik @patil-suraj
I am trying to use `model.generate()` for the GPT2DoubleHeadModel but the beam search is giving an error.
Setting the `num_beams > 1` results in the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1125, in generate
model_specific_kwargs=model_specific_kwargs,
File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1481, in _generate_beam_search
past = self._reorder_cache(past, beam_idx)
File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1551, in _reorder_cache
return tuple(layer_past.index_select(1, beam_idx) for layer_past in past)
File "/home/hdd1/vibhav/anaconda3/envs/vesnli/lib/python3.7/site-packages/transformers/modeling_utils.py", line 1551, in <genexpr>
return tuple(layer_past.index_select(1, beam_idx) for layer_past in past)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
```
However, things are working fine for `num_beams=1` and for GPT2LMHeadModel(both beam search and non beam search)
| 08-07-2020 07:27:08 | 08-07-2020 07:27:08 | encountered the same issue<|||||>I think @patrickvonplaten might have some ideas. |
transformers | 6,318 | closed | TFBert runs slower than keras-bert, any plan to speed up? | A classification task runs in 7 ms with keras-bert but 50+ ms with TFBert; both run on GPU. Details of the setup:
hidden layers: 6
max_seq_length: 64
cuda: 2080ti
From what I can see, most of the time is spent in the BERT encoder, averaging 6 ms per encoder layer, while the keras-bert encoder takes less than 1 ms per layer.
Is there any plan to solve this problem?
Thanks! | 08-07-2020 02:40:21 | 08-07-2020 02:40:21 | This is related: https://github.com/huggingface/transformers/pull/6877<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
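For anyone comparing, a rough, self-contained way to time a forward pass under the settings listed above (numbers will vary by hardware and TF version):
```python
# Rough timing sketch for a 6-layer TFBert forward pass at seq_len=64.
import time
import tensorflow as tf
from transformers import BertConfig, TFBertModel

model = TFBertModel(BertConfig(num_hidden_layers=6))
inputs = tf.ones((1, 64), dtype=tf.int32)

model(inputs)  # warm-up (builds weights, traces the graph)
start = time.perf_counter()
for _ in range(100):
    model(inputs)
print(f"{(time.perf_counter() - start) / 100 * 1000:.2f} ms per forward pass")
```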
|
transformers | 6,317 | closed | codecov invalid reports due to inconsistent code coverage outputs (non-idempotent test-suite) | Currently PRs get a codecov report
1. Observing various commits - especially pure doc commits - it's either not working right or it needs to be configured.
e.g., this PR has 0.00% change to the code:
https://github.com/huggingface/transformers/pull/6315/files
yet, codecov found -0.51% decrease in coverage - this makes no sense. (edit: it updates itself with other commits to master, now it shows -0.31%)
2. Does it have to send a notification and not just comment on the PR?
It appears that it can be finetuned, and notifications sent only if a desired threshold is passed: https://docs.codecov.io/docs/notifications#standard-notification-fields - so that it actually flags an issue when there is one.
Here is a ready conf file from a random project: https://github.com/zulip/zulip/blob/master/.codecov.yml
except perhaps adjusting threshold to 1%? (edited) and not sure whether we want it to comment by default. | 08-07-2020 02:38:13 | 08-07-2020 02:38:13 | Here is another gem - PR to remove a single comment from a test file https://github.com/huggingface/transformers/pull/6338 - guess what codecov's report was - it will increase coverage by 0.32%! Sounds like its output would make a pretty good RNG.<|||||>Pinging @thomasrockhu
We've seen such reports for a while now, could you explain why these diffs in coverage happen, or provide a link that explains why? Thank you!<|||||>I took a look at #6338 because that is extremely strange.
-- Codecov --
First, I took a look at the commits to make sure we were comparing against the right commit SHAs here: https://codecov.io/gh/huggingface/transformers/pull/6338/commits

which matches roughly to the commit stack on `master` (merging commits changes the SHA, I'm fairly sure, but the commit messages are consistent) https://github.com/huggingface/transformers/commits/master?after=3f071c4b6e36c4f2d4aee35d76fd2196f82b7936+34&branch=master

So, I thought that maybe we read the coverage reports wrong. I focused on this file `src/transformers/modeling_tf_electra.py`, because it had the most changes. Going into the build tab of the [base commit](https://codecov.io/gh/huggingface/transformers/commit/1f8e8265188de8b76f5c28539056d6eb772e4e0f/build) and the [head commit](https://codecov.io/gh/huggingface/transformers/commit/8721f03d83b58c52db266be1f10bc0de2dea5a10/build), I noticed that the coverage reports uploaded to `Codecov` show different coverages
**Base commit**

**Head commit**

To further confirm, I went into the CircleCI builds and compared the coverage generated by running `python -m pytest -n 8 --dist=loadfile -s ./tests/ --cov | tee output.txt`
**Base commit**
https://app.circleci.com/pipelines/github/huggingface/transformers/10177/workflows/c12a8e4b-4ec1-4c7c-be7a-e54b0d6b9835/jobs/70077

**Head commit**
https://app.circleci.com/pipelines/github/huggingface/transformers/10180/workflows/90610a5f-d9b1-4468-8c80-fcbd874dbe22/jobs/70104

I don't know the codebase well enough here, but my suspicion is that your test suite is not idempotent<|||||>As for notifications, could I get some more details here? One thing to note is `target` and `threshold` are not the same. `Target` is the coverage percentage to hit (like 80% of the total project), while `threshold` is the "wiggle" room (if set to 1%, it allows a 1% drop from the `target` to be considered acceptable)<|||||>> As for notifications, could I get some more details here?
My thinking was that the project could set a threshold so that when it's crossed codecov makes itself heard, say -1% decrease would raise a flag. That way codecov becomes a useful ally and not something that most start ignoring because it's always there. but that's just IMHO.
<|||||>> To further confirm, I went into the CircleCI builds and compared the coverage generated by running python -m pytest -n 8 --dist=loadfile -s ./tests/ --cov | tee output.txt
Thank you for pointing out how we could use coverage data to explain this discrepancy, @thomasrockhu
> I don't know the codebase well enough here, but my suspicion is that your test suite is not idempotent
Is there a tool that can narrow down which tests cause the non-idempotent behavior? Other than doing a binary search, which often fails in such a complex situation with many tests.
Thank you!
<|||||>If you are talking about a `notification` not in `GitHub` ([comments](https://docs.codecov.io/docs/pull-request-comments) and [status checks](https://docs.codecov.io/docs/commit-status)), you could do something like this in the [codecov.yml](https://github.com/huggingface/transformers/blob/master/codecov.yml) file
```
coverage:
notify:
{{ notification_provider (e.g. slack) }}:
default:
threshold: 1%
```
This should only notify in cases of a 1% drop. (https://docs.codecov.io/docs/notifications#threshold)<|||||>> Is there a tool, that can narrow down which tests cause the idempotent behavior? Other then doing a binary search, which often fails in such complex situation of many tests.
Unfortunately, if there is one, we are not aware of it. I wish we could be a little more helpful here right now.<|||||>> > Is there a tool, that can narrow down which tests cause the idempotent behavior? Other then doing a binary search, which often fails in such complex situation of many tests.
>
> Unfortunately, if there is one, we are not aware of it. I wish we could be a little more helpful here right now.
So, the brute force approach would be to run groups of tests on the same code base, comparing the coverage before and after, narrowing it down to the smallest group of tests that cause the coverage to vary - Am I correct?
Probably to make an intelligent guess instead of brute force, I'd look at the codecov bot reports for PRs that had no changes in code and yet wild swings were reported in some files. And from those files, consistently reported at the top, deduce the suspect tests.
edit: I looked a bit and most likely this issue has to do with TF tests, as most of the time the large coverage changes get reported in `src/transformers/modeling_tf_*py`, when the changes have nothing to do with TF.
<|||||>@thomasrockhu, I run and re-run a bunch of tests, comparing the coverage reports and I can't reproduce the suggested possible lack of idempotency in the test suite.
However, if I look at for example https://codecov.io/gh/huggingface/transformers/pull/6505 it says it doesn't have a base to compare to, yet it produces a (invalid) codecov report https://github.com/huggingface/transformers/pull/6505#issuecomment-674440545. So to me it tells that something else is broken. i.e. it's not comparing that PR to the base, but comparing it to some totally unrelated nearest code branch that codecov happened to have the coverage file for. Does it make sense?<|||||>Hi @stas00, basically what that's saying is that in this [PR](https://github.com/huggingface/transformers/pull/6505), GitHub told us the parent was `24107c2` (https://codecov.io/gh/huggingface/transformers/pull/6505/commits). Unfortunately, we did not receive coverage reports or the CI might have failed. So we took the next parent from the `master` branch

This is an unfortunate consequence of not having coverage for a base commit.<|||||>Thank you for confirming that, @thomasrockhu.
> Unfortunately, we did not receive coverage reports or the CI might have failed. So we took the next parent from the master branch
Given that the report is misleading then, would it be possible to let the user configure codecov to not provide any report in such situation or a note that a report couldn't be generated? And perhaps make that the default?
Or, perhaps, there should be a special internal action triggered that will go back to the base hash, run a CI on it, generate the coverage report, and now codecov can compare the 2 reports it was awesomely designed for. If that is possible at all.
It oddly seems to happen a lot, here is just a sample of a few recent PRs.
- https://codecov.io/gh/huggingface/transformers/pull/6494?src=pr&el=desc
- https://codecov.io/gh/huggingface/transformers/pull/6505?src=pr&el=desc
- https://codecov.io/gh/huggingface/transformers/pull/6504?src=pr&el=desc
- https://codecov.io/gh/huggingface/transformers/pull/6511?src=pr&el=desc
I did find a few recent ones that were fine, i.e. there was a base coverage report.<|||||>> Given that the report is misleading then, would it be possible to let the user configure codecov to not provide any report in such situation or a note that a report couldn't be generated? And perhaps make that the default?
@stas00, unfortunately this is not possible in the Codecov UI. It is possible on the comments sent by Codecov in PRs via [require_base](https://docs.codecov.io/docs/pull-request-comments#configuration).
> Or, perhaps, there should be a special internal action triggered that will go back to the base hash, run a CI on it, generate the coverage report, and now codecov can compare the 2 reports it was awesomely designed for. If that is possible at all.
We depend on users to make this determination. We often find that users will use [blocking status checks](https://docs.codecov.io/docs/commit-status#target) to enforce a failed commit which would imply that Codecov receives a coverage report.
> It oddly seems to happen a lot, here is just a sample of a few recent PRs.
Looking at these PRs, they all depend on the same commit `24107c2c83e79d195826f18f66892feab6b000e9` as their base, so it makes sense that it would be breaking for those PRs.<|||||>Thank you very much, @thomasrockhu! Going to try to fix this issue by adding `require_base=yes` as you suggested: https://github.com/huggingface/transformers/pull/6553
Thank you for your awesome support!
<|||||>For sure, let me know if there's anything else I can do to help!<|||||>@thomasrockhu, could you please have another look at the situation of this project?
After applying https://github.com/huggingface/transformers/pull/6553 it should now not generate invalid reports when the base is missing - this is good.
However, the problem of code coverage diff when there should be none is still there. e.g. here are some recent examples of pure non-code changes:
- https://codecov.io/gh/huggingface/transformers/pull/6650/changes
- https://codecov.io/gh/huggingface/transformers/pull/6650/changes
- https://codecov.io/gh/huggingface/transformers/pull/6629/changes
- https://codecov.io/gh/huggingface/transformers/pull/6649/changes
I did multiple experiments and tried hard to get the test suite to behave in a non-idempotent way, but I couldn't get any such results other than very minor 1-line differences in coverage. This was done on the same machine. I'm not sure how to approach this issue - perhaps CI ends up running different PRs on different types of hardware/different libraries - which perhaps could lead to significant discrepancies in coverage.
If changes in hardware and system software libraries could cause such an impact, is there some way of doing a fingerprinting of the host setup so that we know the report came from the same type of setup?
Thank you!
<|||||>Apologies here @stas00 this got lost. Do you have a more recent pull request to take a look at so I can dig into the logs?<|||||>Yes, of course, @thomasrockhu.
Here is a fresh one: https://codecov.io/gh/huggingface/transformers/pull/6852 (no code change)
Let me know if it'd help to have a few.<|||||>@stas00 this is really strange. I was focusing in on `src/transformers/trainer.py`
Most recently on `master`, [this commit](https://codecov.io/gh/huggingface/transformers/src/367235ee52537ff7cada5e1c5c41cdd78731f092/src/transformers/trainer.py) is showing much lower coverage than normal (13.55% vs ~50%)
I'm comparing it to the commit [right after](https://codecov.io/gh/huggingface/transformers/commit/a497dee6f52f3b8f308675a50601added7e738c3)
The [CI build](https://app.circleci.com/pipelines/github/huggingface/transformers/11349/workflows/9ba002b6-c63e-4078-96d0-0feb988b304f/jobs/79674/steps) for the first, shows that there are fewer tests run by 1

Compared to the `a497de` [run](https://app.circleci.com/pipelines/github/huggingface/transformers/11356/workflows/1ba4518f-e8d0-4970-9ef6-b8bba290f9bb/jobs/79726).

Maybe there's something here?<|||||>Thank you, @thomasrockhu!
This is definitely a great find. I'm requesting to add `-rA` to `pytest` runs https://github.com/huggingface/transformers/pull/6861
then we can easily diff which tests aren't being run. I will follow up once this is merged and we have new data to work with.<|||||>OK, the `-rA` report is active now, so we can see and diff the exact tests that were run and skipped.
Have a look at these recent ones with no code changes:
- https://github.com/huggingface/transformers/pull/6861
- https://github.com/huggingface/transformers/pull/6867
I double checked that same number of tests were run in both, but codecov report is reporting huge coverage differences.
This is odd:
- https://codecov.io/gh/huggingface/transformers/pull/6861/changes
- https://codecov.io/gh/huggingface/transformers/pull/6867/changes
It seems to be reporting a huge number of changes, which mostly cancel each other.
<|||||>@thomasrockhu? Please, let me know when you can have a look at it - otherwise your logs will be gone again and the provided examples per your request will be unusable again. Thanks.<|||||>Hi @stas00, apologies I took a look a few days ago, but I really couldn't find a good reason or another step to figure out what is going on in your testing setup. I'll take a look again today.<|||||>@thomasrockhu, thank you for all your attempts so far. As you originally correctly guessed `transformers` tests suite is not idempotent. I finally was able to reproduce that first in a large sub-set of randomly run tests and then reduced it to a very small sub-set. So from here on it's totally up to us to either sort it out or let `codecov` go.
# reproducing the problem
note: this is not the only guilty sub-test, there are others (I have more sub-groups that I haven't reduced to a very small sub-set yet), but it's good enough to demonstrate the problem and see if we can find a solution.
## Step 1. prep
```
pip install pytest-flakefinder pytest-randomly
```
note: make sure you `pip uninstall pytest-randomly` when you're done here, since it'll randomize your tests w/o asking you - i.e. no flags to enable it - you installed it, all your tests suites are now random.
**why randomize? because `pytest -n auto` ends up running tests somewhat randomly across the many processors**
`flakefinder` is the only pytest plugin that I know of that allows repetition of unittests, but this one you can leave around - it doesn't do anything on its own, unless you tell it to.
## Case 1. multiprocess
We will run 2 sub-tests in a random order:
```
export TESTS="tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs \
tests/test_benchmark_tf.py::TFBenchmarkTest::test_trace_memory"
pytest $TESTS --cov --flake-finder --flake-runs=5 | tee k1; \
pytest $TESTS --cov --flake-finder --flake-runs=5 | tee k2; \
diff -u k1 k2 | egrep "^(\-|\+)"
```
and we get:
```
--- k1 2020-09-11 20:00:32.246210967 -0700
+++ k2 2020-09-11 20:01:31.778468283 -0700
-Using --randomly-seed=1418403633
+Using --randomly-seed=1452350401
-src/transformers/benchmark/benchmark_tf.py 152 62 59%
-src/transformers/benchmark/benchmark_utils.py 401 239 40%
+src/transformers/benchmark/benchmark_tf.py 152 50 67%
+src/transformers/benchmark/benchmark_utils.py 401 185 54%
-src/transformers/configuration_t5.py 32 16 50%
+src/transformers/configuration_t5.py 32 4 88%
-src/transformers/modeling_tf_t5.py 615 526 14%
+src/transformers/modeling_tf_t5.py 615 454 26%
-src/transformers/modeling_tf_utils.py 309 214 31%
+src/transformers/modeling_tf_utils.py 309 212 31%
-TOTAL 32394 24146 25%
+TOTAL 32394 23994 26%
-================== 10 passed, 3 warnings in 71.87s (0:01:11) ===================
+======================= 10 passed, 3 warnings in 58.82s ========================
```
Whoah! A non-idempotent test suite it is! A whopping 1% change in coverage with no change in code.
Saving the seeds I'm now able to reproduce this at will by adding the specific seeds of the first run:
```
export TESTS="tests/test_benchmark_tf.py::TFBenchmarkTest::test_inference_encoder_decoder_with_configs \
tests/test_benchmark_tf.py::TFBenchmarkTest::test_trace_memory"
pytest $TESTS --cov --flake-finder --flake-runs=5 --randomly-seed=1418403633 | tee k1; \
pytest $TESTS --cov --flake-finder --flake-runs=5 --randomly-seed=1452350401 | tee k2; \
diff -u k1 k2 | egrep "^(\-|\+)"
```
getting the same results.
## Case 2. randomization issue
Here are some other tests with the same problem, but the cause is different - randomization
```
CUDA_VISIBLE_DEVICES="" pytest -n 3 --dist=loadfile tests/test_data_collator.py --cov | tee c1; \
CUDA_VISIBLE_DEVICES="" pytest -n 3 --dist=loadfile tests/test_data_collator.py --cov | tee c2; \
diff -u c1 c2 | egrep "^(\-|\+)"
```
this time w/o using flake-finder, but instead relying on `-n 3` + randomly.
```
--- c1 2020-09-11 19:00:00.259221772 -0700
+++ c2 2020-09-11 19:00:14.103276713 -0700
-Using --randomly-seed=4211396884
+Using --randomly-seed=3270809055
-src/transformers/data/datasets/language_modeling.py 168 23 86%
+src/transformers/data/datasets/language_modeling.py 168 25 85%
-src/transformers/tokenization_utils_base.py 750 321 57%
+src/transformers/tokenization_utils_base.py 750 316 58%
-TOTAL 32479 23282 28%
+TOTAL 32479 23279 28%
-======================= 9 passed, 13 warnings in 13.10s ========================
+======================= 9 passed, 13 warnings in 13.44s ========================
```
a much smaller diff, but a diff nevertheless
Next is to try to resolve this or give up codecov.
The preliminary reading points the blaming finger to `multiprocessing` (`Pool`, and others).
Thank you for reading.
<|||||>@stas00 this is absolutely incredible. I'll admit that I wouldn't be able to have found this myself, you've done a hell of an investigation here. How can I be useful?<|||||>Thank you for the kind words, @thomasrockhu. I have a solution for the random issue (need to set a fixed seed before the test), but not yet for the multiproc. It's all the tools that fork sub-processes that are the issue potentially, as they don't all get accounted for consistently. I need more time staring at the screen doing experiments.
But I do need your help here: https://codecov.io/gh/huggingface/transformers/pull/7067/changes
What does it mean? If you look at +X/-X - they all are identical numbers, and should add up to 0. Yet, we get 2.41% diff in coverage. How does that get calculated and why are those identical numbers but flipped up - clearly there is something off there, not sure if it's related to coverage as they are perfectly complementary.
I did see many others cases where they weren't complementary, but in this case it's 100% so. Ideas?
Or perhaps if I rephrase this: how on that screen can I see the 2.41% difference if I look at it as a black box. I imagine the numbers are the same, but perhaps they are not the same lines in the code, hence the difference. But it's impossible to see that from that presentation. Clicking on the specific diff makes no sense to me. it's just one screen of one type/color - I can't see the actual diff.<|||||>@thomasrockhu? Could you please have a look that issue mentioned in my last comment? Thank you.<|||||>@stas00 apologies will take a look today/tomorrow<|||||>@stas00, so the +X/-X are actually showing coverage change for that file. So as an example,

you see -1 -> +1. This means in total, one line that was not covered is now covered (this is not always a zero-sum game if a line is removed). You can see that two lines have added coverage (red to green) and one line has no coverage (green to red).
So taking the total over all those changes actually leads to a -777 line coverage drop. You can see that in the commits of this PR
base -> https://codecov.io/gh/huggingface/transformers/tree/8fcbe486e1592321e868f872545c8fd9d359a515

head -> https://codecov.io/gh/huggingface/transformers/tree/a4dd71ef19033ec8e059a0a76c7141a8a5840e66

Does this make more sense?<|||||>The case you're are showing makes total sense. I'm absolutely clear on that one.
But your example doesn't work for https://codecov.io/gh/huggingface/transformers/pull/7067/changes
Let's pick a small one: `Changes in src/transformers/modelcard.py`

As you can see there is only addition, I don't see any subtraction. i.e I only see red lines - where are the green ones? If it's +2/-2 I'd expect to see 2 in red and 2 in green. Does it make sense which parts of the reports I'm struggling to understand?
<|||||>@stas00 I see the source of your confusion. This is not a code diff, it's a coverage diff. What you are seeing is two lines that were previously covered (the green 2) are now no longer covered (the red 2). The +/- signs are confusing, I'm going to bring that back to my team. Does this make sense now?<|||||>Absolutely. I know it's a coverage diff.
If I may suggest it'd be much clearer if it were to only show +1 / -1 if there was one line covered anew and another different line was not, like a normal diff would. And only +1 / 0 and 0 / -1 in the other cases.
So in the particular case of [the pull we are discussing]((https://codecov.io/gh/huggingface/transformers/pull/7067/changes) it's most likely just a bunch of +Xs/0 for most of the coverage and then a few 0/-Xs at the end.
Thank you for clarifying.<|||||>Hi @stas00, yeah you are right. I've passed along the feedback to our product team to re-think that widget in the next iteration of our UI and that page in particular. Thanks for your help here!<|||||>Thank you, @thomasrockhu |
transformers | 6,316 | closed | Dataloader number of workers in Trainer | https://github.com/huggingface/transformers/blob/175cd45e13b2e33d1efec9e2ac217cba99f6ae58/src/transformers/trainer.py#L252
If you want to use the Trainer from trainer.py, you are limited to 0 dataloader workers.
However, even if I change the source code to use 10 workers for the dataloader, the model still uses the same thread.
| 08-07-2020 02:29:13 | 08-07-2020 02:29:13 | +1.
I suppose it should not be difficult to add an additional argument to ```self.args``` specifying the number of workers to use, and to use it as:
```
return DataLoader(
    self.train_dataset,
    batch_size=self.args.train_batch_size,
    sampler=train_sampler,
    collate_fn=self.data_collator,
    drop_last=self.args.dataloader_drop_last,
    num_workers=self.args.num_workers,
)
```
Is there any particular reason why this was not yet implemented? <|||||>+1<|||||>+1<|||||>No particular reason why it was not implemented, would welcome a PR! |
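Until that lands, a hedged workaround sketch (the subclass name is mine, not part of transformers; attribute names are standard `torch.utils.data.DataLoader` ones):
```python
# Sketch: wrap the Trainer's dataloader to force num_workers > 0.
from torch.utils.data import DataLoader
from transformers import Trainer

class MultiWorkerTrainer(Trainer):  # hypothetical helper class
    def get_train_dataloader(self) -> DataLoader:
        dl = super().get_train_dataloader()
        return DataLoader(
            dl.dataset,
            batch_size=self.args.train_batch_size,
            sampler=dl.sampler,
            collate_fn=self.data_collator,
            drop_last=self.args.dataloader_drop_last,
            num_workers=4,  # pick what your machine supports
        )
```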
transformers | 6,315 | closed | [examples] consistently use --gpus, instead of --n_gpu | - some docs were wrongly suggesting to use `--n_gpu`, when the code is `--gpus`
- `examples/distillation/` had `--n_gpu`, in the code - switched it and the doc to `--gpus` | 08-07-2020 02:02:07 | 08-07-2020 02:02:07 | I had to update all the files since n_gpu seems like a common param.<|||||>You could create an issue, suggesting to use `n_gpu` everywhere instead, supporting it with some stats that would be in favor of this naming. As long as it's consistent across the project either way works, IMHO. |
transformers | 6,314 | closed | [pl] restore lr logging behavior for glue, ner examples | 2 more fixes for https://github.com/huggingface/transformers/pull/6027
1. restore the original code and add what was there already, instead of a complex line of code.
2. restore removed `rate` field - solve the missing bit
| 08-07-2020 01:54:39 | 08-07-2020 01:54:39 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6314?src=pr&el=h1) Report
> Merging [#6314](https://codecov.io/gh/huggingface/transformers/pull/6314?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/be1520d3a3c09d729649c49fa3163bd938b6a238&el=desc) will **decrease** coverage by `0.85%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6314?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6314 +/- ##
==========================================
- Coverage 79.93% 79.08% -0.86%
==========================================
Files 153 153
Lines 27888 27888
==========================================
- Hits 22293 22054 -239
- Misses 5595 5834 +239
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6314?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `66.66% <0.00%> (-32.50%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-27.52%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.08% <0.00%> (-1.39%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `97.41% <0.00%> (+32.94%)` | :arrow_up: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6314/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6314?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6314?src=pr&el=footer). Last update [be1520d...6321287](https://codecov.io/gh/huggingface/transformers/pull/6314?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>How do we know this works?<|||||>> How do we know this works?
I described what it did in the first comment:
1. use public API instead of digging into PL internals
2. restoring bits removed by https://github.com/huggingface/transformers/pull/6027 you have to compare against the original (pre-6027) see the second to last part of the diff: https://github.com/huggingface/transformers/pull/6027/files#diff-6cf9887b73b621b2d881039a61ccfa5fR47
```
# tensorboard_logs = {"loss": loss, "rate": self.lr_scheduler.get_last_lr()[-1]}
tensorboard_logs = {"loss": loss}
```
why `rate` was removed?
i.e. this PR is restoring, not changing anything. PR6027 did change behavior w/o testing the change.<|||||>note: This does not affect seq2seq/ because of `Seq2SeqLoggingCallback`<|||||>Thanks @stas00 ! |
transformers | 6,313 | closed | Error trying to import SquadDataset | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-108-generic-x86_64-with-glibc2.10
- Python version: 3.8.2
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@sgugger @julien-c
## Information
I am trying to follow the run_squad_trainer example. However, I am unable to import SquadDataset from transformers. I tried updating to 3.0.2 but got the same error.
https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py
```
from transformers import SquadDataset
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-6-13f8e9ce9352> in <module>
----> 1 from transformers import SquadDataset
ImportError: cannot import name 'SquadDataset' from 'transformers' (/home/brian/miniconda3/envs/ML38/lib/python3.8/site-packages/transformers/__init__.py)
```
## Expected behavior
Import runs without error. | 08-07-2020 00:20:07 | 08-07-2020 00:20:07 | Hi @brian8128 , `SquadDataset` was added after the 3.0.2 release. You'll need to install from source to use it<|||||>Thanks! It's working now. You guys have written some code around this stuff! |
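For completeness, installing from source at the time meant something like the following (standard pip-from-git; the exact commit you end up on will differ):
```bash
# Install the development version directly from the master branch.
pip install git+https://github.com/huggingface/transformers.git
```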
transformers | 6,312 | closed | clarify shuffle | 08-07-2020 00:18:28 | 08-07-2020 00:18:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6312?src=pr&el=h1) Report
> Merging [#6312](https://codecov.io/gh/huggingface/transformers/pull/6312?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0eecaceac7a2cb3c067a435a7571a2ee0de619b9?el=desc) will **increase** coverage by `0.24%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6312?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6312 +/- ##
==========================================
+ Coverage 79.72% 79.96% +0.24%
==========================================
Files 157 157
Lines 28586 28586
==========================================
+ Hits 22790 22859 +69
+ Misses 5796 5727 -69
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6312?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.00% <0.00%> (-0.67%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.06% <0.00%> (-0.35%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.94% <0.00%> (+0.27%)` | :arrow_up: |
| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6312/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6312?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6312?src=pr&el=footer). Last update [0eecace...5871cac](https://codecov.io/gh/huggingface/transformers/pull/6312?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,311 | closed | modify ``val_loss_mean`` | Revised in response to this warning:
***/lib/python3.*/site-packages/pytorch_lightning/utilities/distributed.py:25: RuntimeWarning: The metric you returned 1.234 must be a `torch.Tensor` instance, checkpoint not saved HINT: what is the value of val_loss in validation_epoch_end()? | 08-07-2020 00:11:50 | 08-07-2020 00:11:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
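The shape of the fix the warning points at, as a sketch (a standard pytorch-lightning 0.8-style hook; the dict keys are assumed from the warning, not taken from this PR's diff):
```python
# Keep val_loss as a torch.Tensor so checkpointing can compare metrics.
import torch

def validation_epoch_end(self, outputs):
    val_loss_mean = torch.stack([x["val_loss"] for x in outputs]).mean()
    return {"val_loss": val_loss_mean, "log": {"val_loss": val_loss_mean}}
```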
|
transformers | 6,310 | closed | collision between different cl arg definitions in examples | The `examples` have an inconsistency in how the cl args are defined and parsed. Some rely on PL's main args, as `finetune.py` does: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L410
```
parser = argparse.ArgumentParser()
parser = pl.Trainer.add_argparse_args(parser)
```
others, like `run_pl_glue.py`, rely on `lightning_base.py`'s main args: https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_pl_glue.py#L176
```
parser = argparse.ArgumentParser()
add_generic_args(parser, os.getcwd())
```
now that we pushed `--gpus` into `lightning_base.py`'s main args, the scripts that run PL's main args collide and we get:
```
argparse.ArgumentError: argument --gpus: conflicting option string: --gpus
```
i.e. PL already supplies `--gpus` and many other args that some of the scripts in `examples` re-define.
So either the example scripts need to stop using `pl.Trainer.add_argparse_args(parser)` and rely exclusively on `lightning_base.add_generic_args`, or we need a different clean approach. It appears that different scripts have different needs arg-wise. But they all use `lightning_base`.
The problem got exposed in: https://github.com/huggingface/transformers/pull/6027 and https://github.com/huggingface/transformers/pull/6307 | 08-06-2020 23:01:42 | 08-06-2020 23:01:42 | Here is a potential idea of how to keep all the common cl arg definitions in `BaseTransformer` and then let each example subclass tell which ones it wants to support, w/o needing to duplicate the same thing everywhere.
```
import argparse
# removes an option from the parser after parser.add_argument's are all done
#https://stackoverflow.com/a/49753634/9201239
def remove_option(parser, arg):
for action in parser._actions:
if (vars(action)['option_strings']
and vars(action)['option_strings'][0] == arg) \
or vars(action)['dest'] == arg:
parser._remove_action(action)
for action in parser._action_groups:
vars_action = vars(action)
var_group_actions = vars_action['_group_actions']
for x in var_group_actions:
if x.dest == arg:
var_group_actions.remove(x)
return
# another way to remove an arg, but perhaps incomplete
#parser._handle_conflict_resolve(None, [('--bar',parser._actions[2])])
# tell the parser which args to keep (the rest will be removed)
def keep_arguments(parser, supported_args):
for act in parser._actions:
arg = act.dest
if not arg in supported_args:
remove_option(parser, arg)
parser = argparse.ArgumentParser()
# superclass can register all kinds of options
parser.add_argument('--foo', help='foo argument', required=False)
parser.add_argument('--bar', help='bar argument', required=False)
parser.add_argument('--tar', help='bar argument', required=False)
# then a subclass can choose which of them it wants/can support
supported_args = ('foo bar'.split()) # no --tar please
keep_arguments(parser, supported_args)
args = parser.parse_args()
```
Granted, there is no public API to remove args once registered. This idea uses a hack that taps into an internal API.
----
Alternatively, `BaseTransformer` could maintain a dict of all the common args with help/defaults/etc w/o registering any of them, and then the subclass can just tell it which cl args it wants to be registered. This will be just a matter of formatting the dict and then a subclass would call:
```
# a potential new function to be called by a subclass
register_arguments(parser, 'foo bar'.split())
```
or if no abstraction is desired it could go as explicit as:
```
defs = self.args_def() # non-existing method fetching the possible args
parser.add_argument(defs['foo'])
parser.add_argument(defs['bar'])
```
but this probably defeats the purpose, just as well copy the whole thing.
---
One thing to consider in either solution is that a subclass may want to have different defaults, so the new API could provide for defaults override as well.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
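One more minimal option worth noting (my suggestion, not something the repo settled on): let argparse itself resolve duplicate definitions instead of removing actions by hand:
```python
# Sketch: conflict_handler="resolve" lets a later add_argument override an earlier one.
import argparse

parser = argparse.ArgumentParser(conflict_handler="resolve")
parser.add_argument("--gpus", type=int, default=0)  # e.g. registered by pl.Trainer.add_argparse_args
parser.add_argument("--gpus", type=int, default=1)  # re-definition wins instead of raising ArgumentError
print(parser.parse_args(["--gpus", "2"]).gpus)  # -> 2
```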
|
transformers | 6,309 | closed | pl version: examples/requirements.txt is single source of truth | PL git master is unstable:
```
cd examples/text-classification
./run_pl.sh
```
```
File "run_pl_glue.py", line 12, in <module>
from lightning_base import BaseTransformer, add_generic_args, generic_train
File "/mnt/nvme1/code/huggingface/transformers-master/examples/lightning_base.py", line 7, in <module>
import pytorch_lightning as pl
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/__init__.py", line 76, in <module>
__import__('pkg_resources').declare_namespace(__name__)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2301, in declare_namespace
_handle_ns(packageName, path_item)
File "/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pkg_resources/__init__.py", line 2234, in _handle_ns
loader.load_module(packageName)
File "/mnt/nvme1/code/github/00pytorch/pytorch-lightning/pytorch_lightning/__init__.py", line 56, in <module>
from pytorch_lightning.core import LightningDataModule, LightningModule
ImportError: cannot import name 'LightningDataModule' from 'pytorch_lightning.core' (/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/pytorch_lightning/core/__init__.py)
```
the scripts will now rely on:
```
grep pytorch-l examples/requirements.txt
```
```
pytorch-lightning==0.8.5
```
As for the requirement removed by this PR (when adding such a requirement, it also helps to note why it was added):
```
# Install newest ptl.
pip install -U git+http://github.com/PyTorchLightning/pytorch-lightning/
```
seems to no longer be needed - at least the code runs to completion. | 08-06-2020 22:18:13 | 08-06-2020 22:18:13 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6309?src=pr&el=h1) Report
> Merging [#6309](https://codecov.io/gh/huggingface/transformers/pull/6309?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/175cd45e13b2e33d1efec9e2ac217cba99f6ae58&el=desc) will **increase** coverage by `0.07%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6309?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6309 +/- ##
==========================================
+ Coverage 79.44% 79.52% +0.07%
==========================================
Files 148 148
Lines 27193 27193
==========================================
+ Hits 21604 21625 +21
+ Misses 5589 5568 -21
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6309?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `60.56% <0.00%> (-35.22%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.00%)` | :arrow_up: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6309/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `97.41% <0.00%> (+32.94%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6309?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6309?src=pr&el=footer). Last update [175cd45...6337bdf](https://codecov.io/gh/huggingface/transformers/pull/6309?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I am strongly in favor. |
transformers | 6,308 | closed | Debug flag to `run_language_modeling` triggers error | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-42-generic-x86_64-with-glibc2.29
- Python version: 3.8.2
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes, run_language_modeling.py
- Using distributed or parallel set-up in script?: no
### Who can help
I'd guess @sgugger or @julien-c
## Information
I'm using [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) and turned on debug output to double check things were working as I expected. Unfortunately, [trainer.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L628) keys off that debug option to invoke `xm.master_print(...)` and `xm`/`torch_xla.core.xla_model` isn't loaded because I'm not working on a TPU-based system.
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
All steps should be run on a system with a GPU but no TPU. Steps to reproduce the behavior:
1. Run `run_language_modeling.py` with the debug flag:
```sh
python run_language_modeling.py \
--output_dir ./output \
--model_type gpt2 \
--model_name_or_path gpt2 \
--do_train \
--train_data_file ./train.txt \
--learning_rate 1e-4 \
--num_train_epochs 1 \
--save_total_limit 2 \
--save_steps 200 \
--do_eval \
--eval_data_file ./eval.txt \
--debug
```
2. Allow the script to run.
The command will error towards the end with this traceback:
```sh
Epoch: 0%| | 0/1 [54:06<?, ?it/s]
Traceback (most recent call last):
File "run_language_modeling.py", line 281, in <module>
main()
File "run_language_modeling.py", line 245, in main
trainer.train(model_path=model_path)
File "/home/user/project/env/lib/python3.8/site-packages/transformers/trainer.py", line 570, in train
xm.master_print(met.metrics_report())
NameError: name 'xm' is not defined
```
## Expected behavior
The script exits without error.
| 08-06-2020 22:15:45 | 08-06-2020 22:15:45 | The documentation of `debug` clearly points out this parameter is only used for TPU-training:
```
debug (:obj:`bool`, `optional`, defaults to :obj:`False`):
When training on TPU, whether to print debug metrics or not.
```
It's called simply `debug` (and not `tpu_debug`) for harmonization with `TFTrainer`.<|||||>Ah, my bad for not fully reading the documentation. Would you be open to a PR with a guard or a more specific error message for this scenario? A minimal sketch of such a guard follows below.<|||||>Sure!<|||||>closed by #6390 |
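For illustration, a minimal sketch of the kind of guard discussed above. This is an assumption about the shape of the fix, not the actual patch merged in #6390, and `maybe_print_tpu_metrics` is a hypothetical helper name:
```python
from transformers.file_utils import is_torch_tpu_available

def maybe_print_tpu_metrics(args, logger):
    # Hypothetical guard: only touch torch_xla when it is actually available.
    if args.debug:
        if is_torch_tpu_available():
            import torch_xla.core.xla_model as xm
            import torch_xla.debug.metrics as met

            xm.master_print(met.metrics_report())
        else:
            logger.warning("--debug only prints TPU metrics; skipping on non-TPU hardware.")
```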
transformers | 6,307 | closed | fix the shuffle argument usage and the default | This is a follow-up to the recently merged PR https://github.com/huggingface/transformers/pull/6027
The `shuffle` argument wasn't handled correctly:
```
cd examples/text-classification
./run_pl.sh
```
```
TypeError: get_dataloader() missing 1 required positional argument: 'shuffle'
```
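For illustration, a minimal sketch of the kind of fix (hypothetical code, not this PR's actual diff): give `shuffle` a default in `get_dataloader` and pass it explicitly from the per-split helpers.
```python
from torch.utils.data import DataLoader

class BaseTransformer:
    # Hypothetical sketch; the real class in lightning_base.py has many more members.
    def get_dataloader(self, mode: str, batch_size: int, shuffle: bool = False) -> DataLoader:
        raise NotImplementedError("must be implemented by the concrete model")

    def train_dataloader(self) -> DataLoader:
        return self.get_dataloader("train", self.hparams.train_batch_size, shuffle=True)

    def val_dataloader(self) -> DataLoader:
        return self.get_dataloader("dev", self.hparams.eval_batch_size, shuffle=False)
```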
This PR fixes it (the sketch above illustrates the idea). | 08-06-2020 22:01:08 | 08-06-2020 22:01:08 | The merged https://github.com/huggingface/transformers/pull/6027 broke `examples/seq2seq/test_seq2seq_examples.py::test_finetune_lr_shedulers` - which I think was flagged by the failing CI of that PR.
yeah, PL already has `--gpus` - so it conflicts with the one added by 6027. So I will look at how to rework that need in a different way.
Added a skip for the failing test for now. Will fix once we've discussed how to proceed.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6307?src=pr&el=h1) Report
> Merging [#6307](https://codecov.io/gh/huggingface/transformers/pull/6307?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ffceef2042d5a1f2a2d70c8a0606551147dd6f8d&el=desc) will **increase** coverage by `0.26%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6307?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6307 +/- ##
==========================================
+ Coverage 79.14% 79.41% +0.26%
==========================================
Files 148 148
Lines 27193 27193
==========================================
+ Hits 21521 21594 +73
+ Misses 5672 5599 -73
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6307?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.20%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-5.17%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.40% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.84% <0.00%> (+7.41%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6307/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6307?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6307?src=pr&el=footer). Last update [ffceef2...0a83f75](https://codecov.io/gh/huggingface/transformers/pull/6307?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>> The merged #6027 broke `examples/seq2seq/test_seq2seq_examples.py::test_finetune_lr_shedulers` - which I think was flagged by failing CI of that PR - I will sort it out.
>
> yeah, PL already has `--gpus` - so it conflicts with the one added by 6027. So I will look at how to rework that need in a different way.
I think we can remove the `--gpus` argument from the `run_pl.sh` file; it does not have to be there for the example. But it has to be in the `generic_arguments`.
I agree on the shuffle. Thank you for this!<|||||>Please merge this asap, since master CI is currently breaking!
Let's continue the discussion here: https://github.com/huggingface/transformers/issues/6310<|||||>> I think we can remove the `--gpus` argument from the `run_pl.sh` file; it does not have to be there for the example. But it has to be in the `generic_arguments`.
no, the problem is elsewhere, see https://github.com/huggingface/transformers/issues/6310
> I agree on the shuffle. Thank you for this!
thank you for the kind words. |
transformers | 6,306 | closed | solving `make quality` failures | `make quality` has been failing for a while now with one warning and one failure:
1. isort warning:
with `isort==4.3.21` (the version required by the latest stable `pylint`), or with `setup.py`'s current pin
`git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort`
we get:
```
make style
```
```
/home/stas/anaconda3/envs/main/lib/python3.7/site-packages/setuptools/distutils_patch.py:26: UserWarning: Distutils was imported before Setuptools. This usage is discouraged and may exhibit undesirable behaviors or errors. Please use Setuptools' objects directly or at least import Setuptools first.
"Distutils was imported before Setuptools. This usage is discouraged "
[...]
```
If I install `isort==5.3.0` it now wants to reformat a whole bunch of imports:
```
ERROR: /mnt/nvme1/code/huggingface/transformers-unittests/examples/longform-qa/eli5_app.py Imports are incorrectly sorted and/or formatted.
ERROR: /mnt/nvme1/code/huggingface/transformers-unittests/examples/text-generation/pplm/run_pplm_discrim_train.py Imports are incorrectly sorted and/or formatted.
[...] some dozens of those
```
This version has deprecated the `--recursive` flag to `isort`, so once the code is re-formatted to appease its never-ending new rules, we can:
1. require `isort>=5.3.0` in `setup.py`'s `quality` section
2. remove the `--recursive` flag to `isort` in Makefile (I validated that just removing this deprecated flag won't change the configuration - it still checks the listed dirs recursively)
The only potential problem is if we also need to appease `pylint`, which wants `isort==4.3.21`.
----
2. flake8 failure: older `flake8` can't handle imports under `TYPE_CHECKING` - error:
```
flake8 examples templates tests src utils
tests/test_tokenization_common.py:31:5: F401 'transformers.PretrainedConfig' imported but unused
tests/test_tokenization_common.py:31:5: F401 'transformers.PreTrainedModel' imported but unused
tests/test_tokenization_common.py:31:5: F401 'transformers.TFPreTrainedModel' imported but unused
src/transformers/pipelines.py:77:5: F401 '.modeling_utils.PreTrainedModel' imported but unused
src/transformers/pipelines.py:78:5: F401 '.modeling_tf_utils.TFPreTrainedModel' imported but unused
```
`flake8-3.8.3` doesn't complain about these.
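For context, here is a minimal sketch of the `TYPE_CHECKING` pattern that the older `flake8` flags as F401; the function is purely illustrative, but `PreTrainedModel` and `AutoModel` are real `transformers` names matching the errors above:
```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported only for type annotations; old flake8 reports these as
    # "F401 imported but unused" even though the string hint below uses them.
    from transformers import PreTrainedModel

def load(name: str) -> "PreTrainedModel":
    from transformers import AutoModel

    return AutoModel.from_pretrained(name)
```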
Can we add:
```
diff --git a/setup.py b/setup.py
index 206c3e35..c33898b4 100644
--- a/setup.py
+++ b/setup.py
@@ -95,7 +95,7 @@ extras["quality"] = [
"black",
# "isort",
"isort @ git+git://github.com/timothycrosley/isort.git@e63ae06ec7d70b06df9e528357650281a3d3ec22#egg=isort",
- "flake8",
+ "flake8>=3.8.3",
]
extras["dev"] = extras["testing"] + extras["quality"] + extras["ja"] + ["scikit-learn", "tensorflow", "torch"]
```
 | 08-06-2020 20:02:39 | 08-06-2020 20:02:39 | Confirming the warning started to appear since I did a conda upgrade (for pytorch 1.6.0), but I never got the error here.<|||||>> Confirming the warning started to appear since I did a conda upgrade (for pytorch 1.6.0)
Thank you for validating this.
The warning is kind of like an error, since it's noisy, so there is no quick way to see if all is clean before committing.
> never got the error here.
You probably happened to have a newer `flake8`, hence suggesting a minimum requirement.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>this has been resolved. |
transformers | 6,305 | closed | Remove redundant line in run_pl_glue.py | 08-06-2020 19:36:17 | 08-06-2020 19:36:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6305?src=pr&el=h1) Report
> Merging [#6305](https://codecov.io/gh/huggingface/transformers/pull/6305?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/118ecfd4273b5381aeeb65476a01678c7a96ae3e&el=desc) will **increase** coverage by `0.57%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6305?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6305 +/- ##
==========================================
+ Coverage 78.15% 78.72% +0.57%
==========================================
Files 148 148
Lines 27193 27193
==========================================
+ Hits 21252 21407 +155
+ Misses 5941 5786 -155
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6305?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-22.88%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.18% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6305/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `94.63% <0.00%> (+70.08%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6305?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6305?src=pr&el=footer). Last update [118ecfd...31ae60c](https://codecov.io/gh/huggingface/transformers/pull/6305?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for contributing! |
|
transformers | 6,304 | closed | Add_argument ``gpus`` | 08-06-2020 19:29:42 | 08-06-2020 19:29:42 | should be fixed on master, let me know if not. |
|
transformers | 6,303 | closed | default `n_tpu_cores` in lightning_base.py | The original default `n_tpu_cores` value of `0` raises an error:
``pytorch_lightning.utilities.exceptions.MisconfigurationException: `tpu_cores` can only be 1, 8 or [<1-8>]``
It should be corrected to `None`; a sketch of the change follows below. | 08-06-2020 19:25:17 | 08-06-2020 19:25:17 | fixed on master, let me know if not. |
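For illustration, a minimal sketch of the corrected default (the exact `add_argument` call in `lightning_base.py` may differ; this only demonstrates the `0` -> `None` change):
```python
import argparse

parser = argparse.ArgumentParser()
# A default of None (instead of 0) means pytorch-lightning's tpu_cores
# validation is never triggered when no TPU is requested.
parser.add_argument("--n_tpu_cores", type=int, default=None)

args = parser.parse_args([])
assert args.n_tpu_cores is None  # off-TPU runs pass no value at all
```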