repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 5,400 | closed | Create model card for schmidek/electra-small-cased | 06-30-2020 16:32:16 | 06-30-2020 16:32:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=h1) Report
> Merging [#5400](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **increase** coverage by `0.39%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5400 +/- ##
==========================================
+ Coverage 77.69% 78.08% +0.39%
==========================================
Files 140 140
Lines 24334 24334
==========================================
+ Hits 18906 19001 +95
+ Misses 5428 5333 -95
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.10% <0.00%> (+0.28%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+8.92%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5400/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `94.52% <0.00%> (+17.80%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=footer). Last update [87716a6...8b18faa](https://codecov.io/gh/huggingface/transformers/pull/5400?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,399 | closed | Add support for past states | This adds support in the `Trainer` for models that can use past states. Since there is no way to know in advance where the output will be in the model (we can guess it's going to be the second, but that may not always be the case for all models or user-defined models), I added an argument to the `TrainingArguments` for the index of where to look at mems in the outputs.
If it's left at -1, nothing happens; if it's set, the `Trainer` will save the past mems in its state at each training step and add them to the inputs on the next step.
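A minimal sketch of what that mechanism amounts to (the argument name `past_index` and the loop below are illustrative assumptions, not the actual `Trainer` code):
```python
def train_with_past(model, train_dataloader, optimizer, past_index=-1):
    """Illustrative sketch only -- not the actual Trainer implementation."""
    past = None
    for inputs in train_dataloader:
        if past_index >= 0 and past is not None:
            inputs["mems"] = past                # feed the cached state back into the model
        outputs = model(**inputs)
        loss = outputs[0]
        if past_index >= 0:
            past = outputs[past_index]           # cache the past/mems for the next step
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```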
If this looks good to you, I'll add the same thing on the TF side before merging. | 06-30-2020 16:31:33 | 06-30-2020 16:31:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=h1) Report
> Merging [#5399](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/87716a6d072b2b66415ce43086c73b04e63fe0fe&el=desc) will **decrease** coverage by `0.09%`.
> The diff coverage is `13.33%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5399 +/- ##
==========================================
- Coverage 77.69% 77.60% -0.10%
==========================================
Files 140 140
Lines 24334 24362 +28
==========================================
- Hits 18906 18905 -1
- Misses 5428 5457 +29
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.98% <0.00%> (-0.88%)` | :arrow_down: |
| [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `51.16% <ø> (ø)` | |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.81% <21.42%> (-0.58%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `77.08% <100.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (ø)` | |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5399/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=footer). Last update [87716a6...7cce295](https://codecov.io/gh/huggingface/transformers/pull/5399?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I like it! This solution seems to be the cleanest one we have. We have to make sure that all models name their inner state `mems`.
GPT2 has a `past` parameter, but it should not be used for training as far as I know. <|||||>@jplu adding you as a reviewer for the TF side of things :-) |
transformers | 5,398 | closed | Inference time difference between pipeline and standalone model and tokenizer | # 🐛 Bug
## Information
Model I am using: distilbert-base-cased-distilled-squad
Language I am using the model on : English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
With the two examples provided in the Usage section [here](https://huggingface.co/transformers/usage.html#extractive-question-answering), the first method using the pipeline takes more time for the same context and question.
I used the same context and questions for both examples, and I used the same fine-tuned SQuAD model in both cases.
```
from transformers import pipeline
from time import perf_counter
nlp = pipeline("question-answering", model="/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad",tokenizer="/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad")
context = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
st = perf_counter()
nlp(question="How many pretrained models are available in Transformers?", context=context)
nlp(question="What does Transformers provide?", context=context)
nlp(question="Transformers provides interoperability between which frameworks?", context=context)
print(f'Time taken is {perf_counter()-st}')
```
The output was,
> Time taken is 1.5614857940000002
With some debug code added inside the pipeline, it seems that 99% of the time is spent when the input_ids are fed to the model to get the start and end tensors.
The next example with standalone model and tokenizers is pretty fast,
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
import torch
tokenizer = AutoTokenizer.from_pretrained("/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad")
model = AutoModelForQuestionAnswering.from_pretrained("/Users/arjun/Datasets/QuestionAnswering/distilbert-base-cased-distilled-squad")
text = r"""
🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU) and Natural
Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep interoperability between
TensorFlow 2.0 and PyTorch.
"""
questions = [
"How many pretrained models are available in Transformers?",
"What does Transformers provide?",
"Transformers provides interoperability between which frameworks?",
]
st = perf_counter()
for question in questions:
inputs = tokenizer.encode_plus(question, text, add_special_tokens=True, return_tensors="pt")
input_ids = inputs["input_ids"].tolist()[0]
text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
answer_start_scores, answer_end_scores = model(**inputs)
answer_start = torch.argmax(
answer_start_scores
) # Get the most likely beginning of answer with the argmax of the score
answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
print(f"Question: {question}")
print(f"Answer: {answer}\n")
print(f'Time taken is {perf_counter()-st}')
```
The output is,
> Time taken is 0.4920176359999999
I would like to know if my understanding is wrong here, because inside the pipeline flow the model and tokenizer are also initialized with AutoModelForQuestionAnswering and AutoTokenizer.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I was expecting that both the examples should take relatively equal amount of time.
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Darwin-18.0.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyTorch version (GPU?): 0.4.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No | 06-30-2020 13:18:18 | 06-30-2020 13:18:18 | Anyone looking into this issue ? :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,397 | closed | tokenizer started throwing this warning, ""Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior."" | Recently while experimenting, BertTokenizer started to throw this warning
```bash
Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior.
```
I know this warning asks me to provide a truncation value.
I'm asking here because this warning started this morning. | 06-30-2020 12:07:33 | 06-30-2020 12:07:33 | This is because we recently upgraded the library to version v3.0.0, which has an improved tokenizers API. You can either disable warnings or put `truncation=True` to remove that warning (as indicated in the warning).<|||||>how do you disable the warnings for this? I'm encountering the same issue. But I don't want to set the truncation=True<|||||>You can disable the warnings with:
```py
import logging
logging.basicConfig(level=logging.ERROR)
```<|||||>I've changed the logging level and removed max_length but am still getting this error:
WARNING:transformers.tokenization_utils_base:Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
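A more targeted alternative (assuming the warning really comes from the `transformers.tokenization_utils_base` logger shown in the log line above) is to raise only that logger's level:
```python
import logging

# The warning is emitted by this specific logger (see the "WARNING:..." prefix),
# so its level can be raised without silencing everything else.
logging.getLogger("transformers.tokenization_utils_base").setLevel(logging.ERROR)
```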
<|||||>On which version are you running? Can you try to install v3.0.2 to see if it fixes this issue?<|||||>I've tried with v3.0.2 and I'm getting the same warning messages even when I changed the logging level with the code snippet above.<|||||>@tutmoses @wise-east can you give us a self-contained code example reproducing the behavior?<|||||>I got the same question<|||||>update transformers library to v3 and explicitly provide "trucation=True" while encoding text using tokenizers<|||||>Could reproduce the error with this code:
```
from transformers.data.processors.utils import SingleSentenceClassificationProcessor
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
texts = ["hi", "hello", "salut", "bonjour"]
labels = [0, 0, 1, 1,]
processor = SingleSentenceClassificationProcessor().create_from_examples(texts, labels)
dataset = processor.get_features(tokenizer=tokenizer)
```<|||||>Hello,
Using the following command had solved the problem:
`import logging
logging.basicConfig(level = logging.ERROR)`
However, since today 15h40 (Paris time), it does not work anymore and the following warning continues to pop up until crashing Google Colab:
`Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'longest_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy more precisely by providing a specific strategy to `truncation`.
`
Could you please tell me how to solve it? I also tried to deactivate truncation from the encode_plus tokenizer:
` encoded_dict = tokenizer.encode_plus(
sent, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 128, # Pad & truncate all sentences.
pad_to_max_length = True,
return_attention_mask = True, # Construct attn. masks.
return_tensors = 'pt', # Return pytorch tensors.
truncation = False
)`
But it did not work.
Thanks for your help/replies,
----------EDIT---------------
I modified my code in the following way by setting "truncation = True" as suggested on this [post](https://github.com/Tiiiger/bert_score/pull/68). It worked perfectly! From what I understood, this should consider the max_lenght I'm applying and avoid the warning from comming up.
` encoded_dict = tokenizer.encode_plus(
sent, # Sentence to encode.
add_special_tokens = True, # Add '[CLS]' and '[SEP]'
max_length = 128, # Pad & truncate all sentences.
pad_to_max_length = True,
return_attention_mask = True, # Construct attn. masks.
return_tensors = 'pt', # Return pytorch tensors.
truncation = True
)`
J.<|||||>'truncation=True' solves the problem.
tokenizer = BertTokenizer.from_pretrained(cfg.text_model.pretrain)
lengths = [len(tokenizer.tokenize(c)) + 2 for c in captions]
captions_ids = [torch.LongTensor(tokenizer.encode(c, max_length=max_len, pad_to_max_length=True, truncation=True))
for c in captions]<|||||>Not an elegant solution, but you can
modify the transformers source code (`~/python/site-packages/transformers/tokenization_utils_base.py`) at line 1751 to avoid this warning:
```
if 0: #if verbose:
logger.warning(
"Truncation was not explicitely activated but `max_length` is provided a specific value, "
"please use `truncation=True` to explicitely truncate examples to max length. "
"Defaulting to 'longest_first' truncation strategy. "
"If you encode pairs of sequences (GLUE-style) with the tokenizer you can select this strategy "
"more precisely by providing a specific strategy to `truncation`."
)
truncation = "longest_first"
```<|||||>add 'truncation=True' to tokenizer.encode_plus(truncation=True).
work to me! |
transformers | 5,396 | closed | Create model card | Create model card for electicidad-small (Spanish Electra) fine-tuned on SQUAD-esv1 | 06-30-2020 12:05:42 | 06-30-2020 12:05:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=h1) Report
> Merging [#5396](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/331d8d2936e7a140225cf60301ba6469930fd216&el=desc) will **increase** coverage by `0.85%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5396 +/- ##
==========================================
+ Coverage 76.98% 77.84% +0.85%
==========================================
Files 138 138
Lines 24314 24314
==========================================
+ Hits 18719 18928 +209
+ Misses 5595 5386 -209
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+1.32%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.24% <0.00%> (+2.49%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+8.92%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+13.07%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.14% <0.00%> (+29.44%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5396/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=footer). Last update [331d8d2...3299725](https://codecov.io/gh/huggingface/transformers/pull/5396?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,395 | closed | [Almost all TF models] TF clean up: add missing CLM / MLM loss; fix T5 naming and keras compile | This PR aligns TF code more with PT code and adds full training support to all CLM and MLM models applying @jplu's loss design to the remaining models. In more detail the following things are included in the PR:
- Add `TFMaskedLanguageModelingLoss` and `TFCausalLanguageModelingLoss` to all CLM and MLM TF models. Only Transfo-XL and XLM are not included since they use adaptive softmax (TF Transfo-XL currently has no Adaptive Softmax implemented cc @TevenLeScao for notification)
- Change the value used to mask the CE loss from -1 to -100 to align with PyTorch; the tf_ner script is updated accordingly (cc @jplu). Using -1 is deprecated here and should be removed in a future version (a small sketch of the masking idea follows this list).
- Split Bert into BertForCLM and BertForMLM as was done in PyTorch (small break in backward compatibility here)
- Split TFAutoModelWithLMHead into TFAutoModelForCLM, ...ForMLM, ForSeq2Seq as was done in PyTorch to make TF ready for encoder-decoder wrapper.
- Add various tests for `modeling_tf_auto`.py e.g. that the mappings are correctly ordered
- Fix inconsistent naming in TF T5 and fix TF T5 keras compilation bug @sshleifer - encoder decoder tf related tests are fixed so should concern tf bart as well
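For intuition, here is a minimal sketch of the -100 masking idea behind these loss classes (purely illustrative; the real `TFCausalLanguageModelingLoss` / `TFMaskedLanguageModelingLoss` implementations differ in detail):
```python
import tensorflow as tf

def masked_lm_loss(labels, logits):
    # Illustrative sketch of the -100 masking convention, not the library's exact code:
    # positions whose label is -100 are dropped before computing the loss, as in PyTorch.
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    labels = tf.reshape(labels, (-1,))
    logits = tf.reshape(logits, (-1, logits.shape[-1]))
    active = tf.not_equal(labels, -100)
    return tf.reduce_mean(loss_fn(tf.boolean_mask(labels, active), tf.boolean_mask(logits, active)))
```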
TODO:
- [x] add labels to all tests where it applies
- [x] add CLM loss to all other models
- [x] add MLM loss to all other models
- [x] Clean TF T5
Future PR:
- [ ] Test that TF Trainer works well with all new CLM / MLM models - we should definitely start adding tests for TF Trainer as well @jplu @julien-c @LysandreJik
- [ ] TF Benchmark can now be done on training as well -> update the benchmark scripts | 06-30-2020 10:44:39 | 06-30-2020 10:44:39 | @LysandreJik @julien-c @jplu @thomwolf @sgugger - can you take a look at this example to check whether the CLM loss is correctly added? If yes, I will add this loss to all other CLM models and add tests.<|||||>Looks good to me!!<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=h1) Report
> Merging [#5395](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/21cd8c40862ba356096ab4cda31563ee3a35c1bb&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `77.17%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5395 +/- ##
==========================================
- Coverage 76.39% 76.35% -0.04%
==========================================
Files 141 141
Lines 24617 24868 +251
==========================================
+ Hits 18807 18989 +182
- Misses 5810 5879 +69
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.22% <ø> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `74.41% <ø> (ø)` | |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.86% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.02% <10.00%> (-0.91%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `23.38% <14.28%> (-0.24%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `63.03% <40.42%> (-9.47%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `96.97% <80.00%> (-1.40%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `90.90% <83.65%> (-0.53%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.88% <84.61%> (-0.23%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `76.47% <100.00%> (+0.48%)` | :arrow_up: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/5395/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=footer). Last update [21cd8c4...c25aa53](https://codecov.io/gh/huggingface/transformers/pull/5395?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Ok will add this for all TF CLM models then :-) and add tests. |
transformers | 5,394 | closed | Upload DistilBART artwork | w.r.t. #5278 | 06-30-2020 10:10:39 | 06-30-2020 10:10:39 | |
transformers | 5,393 | closed | GPT2Tokenizer.save_pretrained does not work in v3.0.0 | # 🐛 Bug
## Information
I am using GPT2Tokenizer and trying to save the tokenizer's information.
In v2.7, I always do
```python
>>> from transformers import GPT2Tokenizer
>>> a = GPT2Tokenizer('./tests/dataloader/dummy_gpt2vocab/vocab.json', './tests/dataloader/dummy_gpt2vocab/merges.txt')
>>> a.save_pretrained("./")
('./vocab.json', './merges.txt', './special_tokens_map.json', './added_tokens.json')
```
And I found it doesn't work after I upgraded to 3.0.
```python
>>> from transformers import GPT2Tokenizer
>>> a = GPT2Tokenizer('./tests/dataloader/dummy_gpt2vocab/vocab.json', './tests/dataloader/dummy_gpt2vocab/merges.txt')
>>> a.save_pretrained("./")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\site-packages\transformers\tokenization_utils_base.py", line 1362, in save_pretrained
f.write(json.dumps(tokenizer_config, ensure_ascii=False))
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\__init__.py", line 238, in dumps
**kw).encode(obj)
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "C:\Users\hzhwcmhf\anaconda3\envs\mix\lib\json\encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type AddedToken is not JSON serializable
>>> a.save_vocabulary("./")
('./vocab.json', './merges.txt')
```
I know there is ``save_vocabulary`` now, but has ``save_pretrained`` been removed, and should it not be used?
However, ``save_vocabulary`` does not save special tokens, so how can I save all of the tokenizer's information?
## Environment info
- `transformers` version: 3.0.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 06-30-2020 09:43:06 | 06-30-2020 09:43:06 | Hello! Do you mind giving me your tokenizers version?
It shouldn't fail like this, `save_pretrained` is still supported and the recommended way to save tokenizers.<|||||>``tokenizers-0.8.0rc4`` is installed when I use ``pip install transformers``.
Actually, I'm running a CI for my project. You can see more information here: https://travis-ci.com/github/thu-coai/cotk/builds/173607827<|||||>Hmm @mfuntowicz might have an idea about what's going on here?<|||||>The problem still exists in 3.0.1<|||||>@hzhwcmhf, I encounter the same issue while saving pretrained fast tokenizer based on `RobertaTokenizerFast`.
This happens due to adding non-serializable `AddedToken` instances to `kwargs` and later `init_kwargs`: https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_roberta.py#L316
https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_roberta.py#L321
`AddedToken` instances appear in `init_kwargs` property which is used to build a `tokenizer_config` which in its turn is being serialized into a `tokenizer_config_file`:
https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1355
https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1362
Note, that `AddedToken` instances are properly serialized to `special_tokens_map_file`:
https://github.com/huggingface/transformers/blob/d6b0b9d451e7ffe0da72a5f532cc9bec1563e801/src/transformers/tokenization_utils_base.py#L1368
## Workaround
As a workaround you can inspect `tokenizer.init_kwargs` property and update all items which are not JSON serializable (i.e. not strings in this case) before saving a pretrained tokenizer.
```python
tokenizer = ...
init_kwargs = {}
for key, value in tokenizer.init_kwargs.items():
    if isinstance(value, AddedToken):
        init_kwargs[key] = str(value)
    else:
        init_kwargs[key] = value
tokenizer.init_kwargs = init_kwargs
tokenizer.save_pretrained(<path>)
``` |
transformers | 5,392 | closed | Windows: No matching distribution found for lightning_base | # 🐛 Bug
I followed the seq2seq readme and wanted to try the sshleifer/distilbart-cnn-12-6 model for abstractive text summarization.
I got the bug above; it seems like lightning_base was part of this project before it was moved or removed.
## Information
Model I am using: sshleifer/distilbart-cnn-12-6
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
Cnn_dm
## To reproduce
Steps to reproduce the behavior:
1. Follow the instructions in the readme and prepare your environment & pull the latest master
2. Start summarization by using `./finetune.sh \
--data_dir $CNN_DIR \
--train_batch_size=1 \
--eval_batch_size=1 \
--output_dir=xsum_results \
--num_train_epochs 1 \
--model_name_or_path facebook/bart-large`
3. Receive the error
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I would expect the model to start inference
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: no
@sshleifer, you asked for beeing tagged on issues in the readme
| 06-30-2020 09:09:36 | 06-30-2020 09:09:36 | Also I am a bit worried, that my RTx 2070 with 8GB will be too small for training, since 13GB were recommended for a batch size of 1 with fp16. I appreciate any hints, what I could do to make it run. Thanks you<|||||>Also I followed the basic installation process from here:
https://github.com/huggingface/transformers/blob/master/examples/README.md#important-note
but I still get the same error<|||||>lightning_base is still there. Do you have a traceback?<|||||>Yep, thats what I get so far:
Traceback (most recent call last):
_File "finetune.py", line 15, in <module>
from lightning_base import BaseTransformer, add_generic_args, generic_train
ModuleNotFoundError: No module named 'lightning_base'_
<|||||>try `export PYTHONPATH="../":"${PYTHONPATH}"`
More info in `examples/seq2seq/README.md`<|||||>I had the same issue as you described @MichaelJanz, also on Windows10 with python 3.7.7.
To clarify my setup, I followed the instructions under "Important Note" on the [transformers/examples page](https://github.com/huggingface/transformers/tree/master/examples) and got an error that faiss was unable to be installed (faiss only supports Linux and MacOS currently I think). I removed faiss from the list of requirements at examples/requirements.txt and ran the example finetune.sh command. Similarly to Michael, I got an error that lightning_base was not found. Since "export" doesn't work on windows command line, I inserted two lines above the lightning_base import in finetune.py:
```
import sys
sys.path.insert(0, r'C:\Users\chris\transformers\examples
```
This solved the issue that lightning_base wasn't found, but I encountered a new error:
```
File "finetune.py", line 17, in <module>
from lightning_base import BaseTransformer, add_generic_args, generic_train
...
File "C:\Users\chris\transformers\env\lib\site-packages\tokenizers\__init__.py", line 17, in <module>
from .tokenizers import Tokenizer, Encoding, AddedToken
ModuleNotFoundError: No module named 'tokenizers.tokenizers'
```
Looking at the tokenizers package installed, I didn't see an additional folder labeled "tokenizers". The tokenizers version I have within my virtual environment is `tokenizers==0.8.0rc4`. @sshleifer , could you let me know what version of tokenizers you have in your environment? Let me know if you have any other suggestions about what might be happening (I worry that the problem lies with using Windows).
Edit: for context, I tried running the finetuning script within a Linux environment and had no problems, with the same `tokenizers==0.8.0rc4` version. I'm guessing that this whole issue is a Windows problem.<|||||>Yeah I have the same version of tokenizers, this seems like a windows problem. <|||||>To fix that for me, I decided to execute _finetune.sh_ directly. I had to insert
`export PYTHONPATH="$PATH:(absolute path to the examples folder)`
and `--data_dir (absolute path to example folder)`
However, thats an unpretty workaround, which works.
Then I got into an encoding error, which I had to change line 39 on utils.py to
`lns = lmap(str.strip, data_path.open(encoding="UTF-8").readlines())`
Then the finetuning process starts. It just is running horribly slow with no GPU usage with the warning:
`Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")`
So Gpu is not used. I already opened an [Issue](https://github.com/NVIDIA/apex/issues/905) on Nvidia/Apex about bulding Apex for Windows with Cuda extensions, but any hint here is appreciated:
I am thinking about switching to Ubuntu, since it seems like alot of errors have their origin in windows. Is that a recommended thing?
<|||||>Before you start running these commands: it probably end up _not_ working due to Mecab issues (see bottom).
- FAISS is currently not supported on Windows (though it does have [an open project](https://github.com/facebookresearch/faiss/projects/2)): remove from requirements
- instead of `wget` you can use `Invoke-WebRequest` on Powershell (after having `cd`'d into `examples/seq2seq`):
```ps
Invoke-WebRequest https://s3.amazonaws.com/datasets.huggingface.co/summarization/xsum.tar.gz -OutFile xsum.tar.gz
```
- To add the environment variable:
```ps
$env:Path += ";" + (Join-Path -Path (Get-Item .).FullName -ChildPath "xsum")
```
Rather than using the bash file (finetune.sh), I suggest that you open it and copy-paste the Python command that is in there, including the options that are already present, and add your own options after it (things like data dir, model name).
Before running the command, add `../` to PYTHONPATH:
```ps
$env:PythonPath += ";../"
```
After all that, you will probably still run into a problem involving mecab. It is used for Japanese tokenisation, and it is not easy to disable (it's also part of sacrebleu). Mecab has a new v1.0 release that works on Windows, however, it includes breaking changes in the rest of [transformers ](https://github.com/huggingface/transformers/pull/5375) as well as [sacrebleu](https://github.com/mjpost/sacrebleu/issues/94). This is unfortunate because such a small change means that many functionalities or examples cannot be used on Windows, _even if you do not use Japanese_. This is due to the nature of import. I'd rather have that these libraries are only imported when they are needed to maximise cross-platform usage.<|||||>@MichaelJanz How did you get passed the mecab issue? And does it run correctly without the AMP flag?
Running Ubuntu is one way to go. However, my personal recommendation is waiting a bit until WSL has full GPU support (which is currently in beta, [you can try it out!](https://developer.nvidia.com/cuda/wsl)). That way, you can still enjoy your good ol' Windows experience, and only open up an Ubuntu terminal when running your experiments.<|||||>@BramVanroy It runs well so far under windows, but I dont know what the AMP flag is for. Just the gpu support is missing in training, however gpu is available during testing (atleast I get some gpu usage there and high clock)
About mecab, I did not have any issues with it at all. In theory, training is working and possible, it just takes way too long.
Ty for the hint about WSL gpu support, I am just working on getting that to run<|||||>It is very odd how you did not have any issues with mecab because the pinned versions are not supported on Windows...
I meant the `--fp16` flag. If used, it will try to use AMP. But since you seem to have issues with AMP, you can try to remove `--fp16` from the command in `finetune.sh`.<|||||>I removed the --fp16 flag and the missing AMP_C message is gone. But still the gpu is not used. Could it be, that it is too small for that model, so it just uses the cpu? I dont know how Pytorch handles memory issues.
Thats my screen so far. As you can see, the gpu is not utilized.

<|||||>Are you sure? Can you let the code run for a couple of steps and then monitor the GPU? It is possible that the code first does some data preprocessing, which would be CPU-intensive, without GPU. Only when training really starts (and you see the steps moving) the GPU should be used.<|||||>@MichaelJanz export PYTHONPATH="$PATH:(absolute path to the examples folder)
do you mean something like this? export PYTHONPATH="$PATH:/content/transformers/examples"
I'm using google collab and the path to examples is /content/transformers/examples,
Sorry I'm a complete noob when it comes to python
-------------------------------------------------------------
Edit:
I think I fixed it by giving the path to finetune.py in finetune.sh :
python /content/transformers/examples/seq2seq/finetune.py \
--learning_rate=3e-5 \
--fp16 \
--gpus 1 \
--do_train \
--do_predict \
--n_val 1000 \
--val_check_interval 0.1 \
--sortish_sampler \
$@
However, I got a new error:
File "/content/transformers/examples/seq2seq/finetune.sh", line 7
--gpus 1 \
is it normal?
^<|||||>@Hildweig make sure your lightning example is up to date with examples/requirements.txt. Closing this. Pls make a new issue if you are still struggling to get things working :) |
transformers | 5,391 | closed | Training a GPT-2 from scratch on Greek text results in a low perplexity score of 7 after 15 epochs. Is that score normal? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-30-2020 08:37:47 | 06-30-2020 08:37:47 | I am trying to train a GPT-2 from scratch in Greek with an older version of the run_language_modeling.py script (https://github.com/huggingface/transformers/tree/master/examples/language-modeling), but I get a low perplexity score of 7 after 15 epochs.
My training data is about 4.6 GB, constructed as 5 sentences per line. The evaluation data is about 450 MB, constructed the same way. I use BPE for the encoding with a vocabulary of 22,000 merges.
For training, FP16 (O2) is used.
<|||||>I think it is normal.
For perplexity, the lower, the better.
By the way, can you share your script for preprocessing data?<|||||>@xx-zhou16 OK, I found a mistake I had: when I was computing the loss I wasn't ignoring the pad_token. Now I am training it again, and I think the perplexity score will stop at about 11, which is again low.
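A minimal sketch of that fix (a hypothetical helper, not taken from the script): padded positions get the label -100, which the model's cross-entropy loss ignores.
```python
def build_labels(input_ids, attention_mask):
    # Illustrative helper: copy the inputs as labels, but mark padded positions
    # with -100 so the cross-entropy loss inside the model ignores them.
    labels = input_ids.clone()
    labels[attention_mask == 0] = -100
    return labels
```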
I use the LineByLineTextDataset:
class LineByLineTextDataset(Dataset):
    def __init__(self, tokenizer: PreTrainedTokenizer, args, file_path: str, block_size=512):
        assert os.path.isfile(file_path)
        logger.info("Creating features from dataset file at %s", file_path)
        with open(file_path, encoding="utf-8") as f:
            lines = [line for line in f.read().splitlines() if (len(line) > 0 and not line.isspace())]
        self.examples = tokenizer.batch_encode_plus(lines, add_special_tokens=True, truncation=True, max_length=block_size)["input_ids"]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, i):
        return torch.tensor(self.examples[i], dtype=torch.long)


def load_and_cache_examples(args, tokenizer, evaluate=False):
    file_path = args.eval_data_file if evaluate else args.train_data_file
    if args.line_by_line:
        return LineByLineTextDataset(tokenizer, args, file_path=file_path, block_size=args.block_size)
Padding and creating an attention mask:
def collate(examples: List[torch.Tensor]):
    padding_value = 0 if tokenizer._pad_token is None else tokenizer.pad_token_id
    input_ids = pad_sequence(examples, batch_first=True, padding_value=padding_value)
    max_length = input_ids.shape[1]
    attention_mask = torch.stack([torch.cat([torch.ones(len(t), dtype=torch.long), torch.zeros(max_length - len(t), dtype=torch.long)]) for t in examples])
    return input_ids, attention_mask
And this is the evaluation, for the perplexity score computation:
for batch in tqdm(eval_dataloader, desc="Evaluating"):
    input_ids, attention_mask = batch
    inputs, labels = (input_ids, input_ids)
    inputs = inputs.to(args.device)
    labels = labels.to(args.device)
    attention_mask = attention_mask.to(args.device)
    with torch.no_grad():
        outputs = model(inputs, labels=labels, attention_mask=attention_mask)
        lm_loss = outputs[0].mean().item()
        eval_loss += lm_loss
    nb_eval_steps += 1

eval_loss = eval_loss / nb_eval_steps
perplexity = torch.exp(torch.tensor(eval_loss))<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,390 | closed | model.generate source code | Hi there,
I am trying to use BART for an NLG task. While reading the BART tutorial on the website, I couldn't find the definition of the `model.generate()` function. Could you please add some explanation of that? Thanks in advance | 06-30-2020 08:20:33 | 06-30-2020 08:20:33 | `generate()` is defined in the `PretrainedModel` class here https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_utils.py#L872. Given that `BartModel`, `BartForConditionalGeneration`, etc. inherit from `PretrainedModel`, they all have access to this generate function.
Hope this helps? <|||||>We actually just moved all of the `generate` functions to their own file:
https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101
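For reference, a minimal usage sketch of `generate()` with BART (the checkpoint and generation arguments here are just illustrative choices):
```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

input_ids = tokenizer.encode("My friends are cool but they eat too many carbs.", return_tensors="pt")
# generate() wraps the model's forward pass with beam search / sampling and stopping criteria
summary_ids = model.generate(input_ids, num_beams=4, max_length=40, early_stopping=True)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```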
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>> We actually just moved all of the `generate` functions to their own file:
>
> https://github.com/huggingface/transformers/blob/c4d4e8bdbd25d9463d41de6398940329c89b7fb6/src/transformers/generation_utils.py#L101
Hi, it seems that .generate() can only take input_ids as source input. I wonder whether input_embs can be used as input. |
transformers | 5,389 | closed | Raises PipelineException on FillMaskPipeline when there are != 1 mask_token in the input | 06-30-2020 08:16:16 | 06-30-2020 08:16:16 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=h1) Report
> Merging [#5389](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/64e3d966b1131c15b5905b1e1e582d4bebac1ef0&el=desc) will **decrease** coverage by `0.16%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5389 +/- ##
==========================================
- Coverage 77.75% 77.59% -0.17%
==========================================
Files 140 140
Lines 24373 24384 +11
==========================================
- Hits 18951 18920 -31
- Misses 5422 5464 +42
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.79% <100.00%> (+0.47%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.43% <0.00%> (+0.75%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `89.11% <0.00%> (+5.10%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `95.14% <0.00%> (+29.44%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5389/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+33.33%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=footer). Last update [64e3d96...56d9c36](https://codecov.io/gh/huggingface/transformers/pull/5389?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>no particular reason @LysandreJik other than developer lazyness (me)<|||||>> no particular reason @LysandreJik other than developer lazyness (me)
@julien-c
Hi Julien. I understand if the implementation of multiple tokens is too inconvenient, but I am sure that many would appreciate such a feature. <|||||>Hey @mfuntowicz , @julien-c,
> LGTM! Why don't we handle multiple tokens, by the way? The models should behave somewhat okay given that their pre-training hides more than one token, right?
I am implementing this feature in my local version, is there anything I should be careful of? I am just planning to remove the exception trigger and handle the multiple masked tokens output.
Also, let me know if you'd like a PR for this.
|
|
transformers | 5,388 | closed | Why does T5 not generate the whole next sentence as one of the pretraining losses? | T5 has the next-sentence-predict loss.
And the next-sentence-predict loss does not generate the whole next sentence; it only generates an is_next or not_next token.
Thank you very much. | 06-30-2020 07:34:59 | 06-30-2020 07:34:59 | Hi @guotong1988 what do you exactly mean by not generating the whole next sentence ?<|||||>Sorry, my mistake.
I meant: input the previous sentence and generate the next sentence, instead of inputting the two sentences.<|||||>I don't really understand the question here. Are you asking why T5 was not trained with the objective to generate the complete next sentence? <|||||>Thank you for your reply.
Yes. I am asking why T5 was not pre-trained with the objective to generate the complete next sentence?
Do you think generating the whole next sentence as one of the pretraining losses would improve pretraining performance?<|||||>
https://github.com/google-research/text-to-text-transfer-transformer/issues/286 |
transformers | 5,387 | closed | BART fine tuning on gpu issue | # ❓ Questions & Help
I am fine tuning BART by executing transformers/examples/seq2seq/finetune.sh as instructed.
I am getting the following error:
```
Traceback (most recent call last):
File "finetune.py", line 346, in <module>
main(args)
File "finetune.py", line 324, in main
logger=logger,
File "/data/t-angand/transformers/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 928, in fit
self.single_gpu_train(model)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 183, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1086, in run_pretrain_routine
False)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 291, in _evaluate
output = self.evaluation_forward(model, batch, batch_idx, dataloader_idx, test_mode)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 462, in evaluation_forward
output = model.validation_step(*args)
File "finetune.py", line 136, in validation_step
return self._generative_step(batch)
File "finetune.py", line 163, in _generative_step
generated_ids = self.model.generate(input_ids=source_ids, attention_mask=source_mask, use_cache=True,)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File "/data/t-angand/transformers/src/transformers/modeling_utils.py", line 1159, in generate
encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/data/t-angand/transformers/src/transformers/modeling_bart.py", line 303, in forward
inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File "/home/angand/.conda/envs/bart_env/lib/python3.6/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected object of device type cuda but got device type cpu for argument #3 'index' in call to _th_index_select
```
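For reference, here is a minimal standalone check I put together (a sketch, not part of the official finetune.py) which works when the inputs are explicitly moved to the model's device, which is what the last frame of the traceback suggests is missing:
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large").to(device)
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

batch = tokenizer("a short test sentence", return_tensors="pt")
# inputs must live on the same device as the model before calling generate()
generated = model.generate(
    input_ids=batch["input_ids"].to(device),
    attention_mask=batch["attention_mask"].to(device),
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```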
Thanks in advance. | 06-30-2020 06:56:36 | 06-30-2020 06:56:36 | Have you checked if your setup is working properly and if
`torch.cuda.is_available()`
returns True?<|||||>Yeah
It gives initial output as:
Some weights of the model checkpoint at facebook/bart-large were not used when initializing BartForConditionalGeneration: ['encoder.version', 'decoder.version']
- This IS expected if you are initializing BartForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPretraining model).
- This IS NOT expected if you are initializing BartForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/bart-large and are newly initialized: ['final_logits_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
Using APEX 16bit precision.
Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
Defaults for this optimization level are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Processing user overrides (additional kwargs that are not None)...
After processing overrides, optimization options are:
enabled : True
opt_level : O1
cast_model_type : None
patch_torch_functions : True
keep_batchnorm_fp32 : None
master_weights : None
loss_scale : dynamic
Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'",)
Validation sanity check: 0it [00:00, ?it/s]
Then it fails.<|||||>What's the output of `transformers-cli env`?<|||||>Sorry, I didn't get what you are asking for.
<|||||>Can you run the command `transformers-cli env` in terminal and paste the output here in code format (using ```)<|||||>It says command not found.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,386 | closed | bart-large-cnn training related information | # ❓ Questions & Help
Hi Team HuggingFace
Can we please know the training dataset, the total number of summary pairs used, and the configuration used to train the BART summarizer (facebook/bart-large-cnn)?
Thanks in advance!! | 06-30-2020 06:46:30 | 06-30-2020 06:46:30 | bart-large-cnn is trained on CNN/DM summarization dataset. See this for dataset info https://www.tensorflow.org/datasets/catalog/cnn_dailymail<|||||>Also available in the Hugging Face `nlp` [library](https://huggingface.co/datasets/cnn_dailymail) :)
You can explore the dataset [here](https://huggingface.co/nlp/viewer/?dataset=cnn_dailymail&config=3.0.0)
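For example, loading it programmatically looks roughly like this (a sketch using the `nlp` library; the `3.0.0` config name is the one from the viewer link above):
```python
# assumes: pip install nlp
import nlp

train = nlp.load_dataset("cnn_dailymail", "3.0.0", split="train")
print(len(train))                    # number of article/summary pairs in the train split
print(train[0]["article"][:200])     # source article
print(train[0]["highlights"])        # reference summary
```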
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>The name is very confusing if it's trained on CNN and Dailymail but named as CNN (only without DM). Please consider changing the name. <|||||>sorry for the confusion. it's too late to change it and the name is copied from fairseq. it's trained on the cnn/dailymail dataset.<|||||>I feel that you answered the question about the dataset, but what about the training configuration used? |
transformers | 5,385 | closed | Example: PyTorch Lightning returns missing attribute error (Token Classification) | Hi,
the following error message is currently thrown (version 3.0.0 of Transformers) when running the `run_ner_pl.sh` example from the token classification example:
```bash
Traceback (most recent call last):
File "run_pl_ner.py", line 198, in <module>
trainer = generic_train(model, args)
File "/mnt/transformers-stefan/examples/lightning_base.py", line 330, in generic_train
trainer.fit(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 918, in fit
self.single_gpu_train(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/distrib_parts.py", line 176, in single_gpu_train
self.run_pretrain_routine(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/trainer.py", line 1093, in run_pretrain_routine
self.train()
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/training_loop.py", line 335, in train
self.reset_train_dataloader(model)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/data_loading.py", line 189, in reset_train_dataloader
self.train_dataloader = self.request_dataloader(model.train_dataloader)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/trainer/data_loading.py", line 352, in request_dataloader
dataloader = dataloader_fx()
File "/mnt/transformers-stefan/examples/lightning_base.py", line 142, in train_dataloader
* float(self.hparams.num_train_epochs)
File "/usr/local/lib/python3.6/dist-packages/pytorch_lightning/utilities/parsing.py", line 116, in __getattr__
raise AttributeError(f'Missing attribute "{key}"')
AttributeError: Missing attribute "n_gpu"
```
It seems that `n_gpu` is set in the `lightning_base.py` file:
https://github.com/huggingface/transformers/blob/7f60e93ac5c73e74b5a00d57126d156be9dbd2b8/examples/lightning_base.py#L139-L142
The example is running when `n_gpu` is changed to `gpus`.
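For illustration, the change I have in mind is roughly the following (a standalone sketch with placeholder values, not the final patch; only the `n_gpu` → `gpus` rename is the actual fix):
```python
from argparse import Namespace

# placeholder hyperparameters standing in for self.hparams in lightning_base.py
hparams = Namespace(gpus=1, train_batch_size=32, num_train_epochs=3.0)
dataset_size = 10_000  # placeholder

num_devices = max(1, hparams.gpus)  # was: hparams.n_gpu -> AttributeError with recent PL
t_total = (dataset_size // (hparams.train_batch_size * num_devices)) * float(hparams.num_train_epochs)
print(t_total)
```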
I will prepare for fixing it and hopefully this does not introduce regression bugs for other PL examples 😅 | 06-30-2020 00:24:10 | 06-30-2020 00:24:10 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,384 | closed | Positional and Segment Embeddings in BERT | #### Question
I'm trying to train a BERT language model from scratch using the huggingface library. I'm not sure I completely understand how positional and segment embeddings work in BERT. The original BERT paper states that unlike transformers, positional and segment embeddings are learned. What exactly does this mean?
How do positional embeddings help in predicting masked tokens? Is the positional embedding of the masked token predicted along with the word?
How has this been implemented in the huggingface library? | 06-29-2020 23:55:09 | 06-29-2020 23:55:09 | They are not predicted, they are learnt alongside the token embeddings themselves. They work exactly the same w.r.t. training: they are all simply embeddings but with a different vocab size.
https://github.com/huggingface/transformers/blob/9a473f1e43221348334b9e7f95bb45770b7ef268/src/transformers/modeling_bert.py#L154-L156
The outputs of all three embeddings are summed up before being passed to the transformer layers.
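For illustration, a minimal sketch of that sum (hypothetical sizes, not the exact library code):
```python
import torch
import torch.nn as nn

vocab_size, max_positions, type_vocab_size, hidden_size = 30522, 512, 2, 768
word_embeddings = nn.Embedding(vocab_size, hidden_size)
position_embeddings = nn.Embedding(max_positions, hidden_size)      # learned, one vector per position
token_type_embeddings = nn.Embedding(type_vocab_size, hidden_size)  # learned, one vector per segment id

input_ids = torch.tensor([[101, 2023, 2003, 102]])
position_ids = torch.arange(input_ids.size(1)).unsqueeze(0)
token_type_ids = torch.zeros_like(input_ids)

embeddings = (
    word_embeddings(input_ids)
    + position_embeddings(position_ids)
    + token_type_embeddings(token_type_ids)
)
print(embeddings.shape)  # torch.Size([1, 4, 768])
```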
Positional embeddings can help because they basically highlight the position of a word in the sentence. A word in the first position likely has another meaning/function than the last one. Also, the same word likely will have a different syntactic function in the first vs. last position. Positional embeddings thus play some kind of syntactic role where they tell the model that a word can have a different meaning/syntactic function depending on its position.<|||||>Thank you for the explanation. What is the benefit of learning positional embeddings as opposed to the technique adopted by transformers where the positional encoding is directly added to each input embedding and not learnt?<|||||>When is a positional embedding directly added? An embedding is an embedding and as such always needs to be learned or transfered. <|||||>Section 3.5 of the paper '[Attention is All You Need](https://arxiv.org/pdf/1706.03762.pdf)' explains the positional encoding in the case of transformers. They use _'sine and cosine functions of different frequencies'_ to inject information about the position of the tokens. Learned positional embeddings do not seem to help in the case of the original transformers. BERT on the other hand 'learns' positional embeddings. I wanted to understand the benefits of learning these positional embeddings in BERT.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,383 | closed | Documentation for the Trainer API | This PR introduces documentation for the Trainer/TFTrainer classes and the argument dataclasses.
While documenting, I noticed that the method `Trainer.evaluate` and `TFTrainer.evaluate` had an argument `pred_loss_only` which was not used at all (my guess is that it's the same argument passed at init that is used). I removed that argument. I'm aware this is a breaking change but, as a user, I'd expect a hard error when passing something that is not used at all. Let me know if there was a good reason for it. | 06-29-2020 21:54:02 | 06-29-2020 21:54:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=h1) Report
> Merging [#5383](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.19%`.
> The diff coverage is `77.71%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5383 +/- ##
==========================================
- Coverage 77.01% 76.81% -0.20%
==========================================
Files 128 138 +10
Lines 21615 24314 +2699
==========================================
+ Hits 16646 18676 +2030
- Misses 4969 5638 +669
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |
| [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |
| ... and [157 more](https://codecov.io/gh/huggingface/transformers/pull/5383/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=footer). Last update [482a599...c56586e](https://codecov.io/gh/huggingface/transformers/pull/5383?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great! Much needed! Also adding @thomwolf who might want to chime in<|||||>The TF documentation looks much better!! :smile: Thanks @sgugger <|||||>Thanks for catching all my bad copy-pastes @jplu
Concerning `n_gpu`, I think the change of name to `n_replica`/`n_device` might be welcome, especially since it does not represent the same thing as `n_gpu` in the PyTorch `Trainer` (which is only more than 1 when using dp instead of ddp). But that probably would go better in another PR! |
transformers | 5,382 | closed | cannot import name 'AutoModelForSeq2SeqLM' from transformers | cannot import name 'AutoModelForSeq2SeqLM' from transformers | 06-29-2020 21:15:30 | 06-29-2020 21:15:30 | what's your `transformers-cli env` output? You should be on 3.0.0 or install from source for this to work. |
transformers | 5,381 | closed | AssertionError: Padding_idx must be within num_embeddings | ### Information
Model I am using: XLM
### To reproduce
```
from pytorch_transformers import XLMModel, XLMConfig, XLMTokenizer
model_class, tokenizer_class, pretrained_weights = (XLMModel, XLMTokenizer, 'xlm-mlm-en-2048')
config = XLMConfig.from_pretrained(pretrained_weights)
xlm_model = model_class.from_pretrained(pretrained_weights, config=config)
```
**Error:**
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/username/venv27/lib/python2.7/site-packages/pytorch_transformers/modeling_utils.py", line 536, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "/home/username/venv27/lib/python2.7/site-packages/pytorch_transformers/modeling_xlm.py", line 545, in __init__
self.embeddings = nn.Embedding(self.n_words, self.dim, padding_idx=self.pad_index)
File "/home/username/venv27/lib/python2.7/site-packages/torch/nn/modules/sparse.py", line 88, in __init__
assert padding_idx < self.num_embeddings, 'Padding_idx must be within num_embeddings'
AssertionError: Padding_idx must be within num_embeddings
```
### Expected behavior
Until recently (last I checked was 2 months ago, but since then I never updated anything), everything worked without issues and I was able to load the model with those 4 instructions easily (which is what I am expecting).
### Environment info
pytorch_transformers: 1.2.0
Python: 2.7
pytorch: 1.4.0 | 06-29-2020 19:29:03 | 06-29-2020 19:29:03 | I have no problem loading this in `transformers`. These files have not been updated in the past two months either; can you try doing the same in `transformers` or are you constrained by Python 2?<|||||>I'm constrained to Python 2 right now (due to a decently large codebase which I do not have enough time to upgrade to Py3 in these months), and that's why I'm using such an old version of the library (which is the latest available for Py2).<|||||>Hmm, okay I think I found the source of the issue. It seems the configuration file was indeed changed on April 24th in a non-backwards-compatible way (cc @julien-c). Very sorry about this.
In order to fix this you can use this JSON as a file on your machine:
```json
{
"architectures": [
"XLMWithLMHeadModel"
],
"emb_dim": 2048,
"n_layers": 12,
"n_heads": 16,
"dropout": 0.1,
"attention_dropout": 0.1,
"gelu_activation": true,
"sinusoidal_embeddings": false,
"asm": false,
"bos_index": 0,
"eos_index": 1,
"pad_index": 2,
"unk_index": 3,
"mask_index": 5,
"n_langs": 1,
"n_words": 30145
}
```
and then load it as you did, but replacing the weights name with the filepath:
```py
from pytorch_transformers import XLMModel, XLMConfig, XLMTokenizer
model_class, tokenizer_class, pretrained_weights = (XLMModel, XLMTokenizer, 'xlm-mlm-en-2048')
config = XLMConfig.from_pretrained(PATH_TO_FILE)
xlm_model = model_class.from_pretrained(pretrained_weights, config=config)
```<|||||>Everything's working now (both the four-liner and my codebase as well!). Thanks! <|||||>Glad you could make it work! |
transformers | 5,380 | closed | Unable to load pre-trained model/tokenizer when using Kubernetes | # 🐛 Bug
## Information
Model I am using: GPT-2
The problem arises when using:
* [Y] my own modified scripts: (give details below)
The tasks I am working on is:
* [Y] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a Kubernetes cluster, a Docker image with all needed dependencies, and a shared mount file-system for the cluster (the file-system contains the code and training data).
2. Create an `MPIJob` YAML file for running training.
3. Deploy the job on the cluster using `kubectl` or `helm`.
Here is the error I get (I have removed references to my custom code since that is irrelevant for this error):
```
Mon Jun 29 06:31:29 2020<stderr>:Traceback (most recent call last):
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 379, in _from_pretrained
Mon Jun 29 06:31:29 2020<stderr>: resolved_vocab_files[file_id] = cached_path(file_path, cache_dir=cache_dir, force_download=force_download, proxies=proxies, resume_download=resume_download)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 211, in cached_path
Mon Jun 29 06:31:29 2020<stderr>: resume_download=resume_download, user_agent=user_agent)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 312, in get_from_cache
Mon Jun 29 06:31:29 2020<stderr>: os.makedirs(cache_dir)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/lib/python3.6/os.py", line 220, in makedirs
Mon Jun 29 06:31:29 2020<stderr>: mkdir(name, mode)
Mon Jun 29 06:31:29 2020<stderr>:FileExistsError: [Errno 17] File exists: '/root/.cache/torch/transformers'
Mon Jun 29 06:31:29 2020<stderr>:
Mon Jun 29 06:31:29 2020<stderr>:During handling of the above exception, another exception occurred:
Mon Jun 29 06:31:29 2020<stderr>:
Mon Jun 29 06:31:29 2020<stderr>:Traceback (most recent call last):
Mon Jun 29 06:31:29 2020<stderr>: tokenizer = tokenizer_class.from_pretrained(args.model_checkpoint)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 302, in from_pretrained
Mon Jun 29 06:31:29 2020<stderr>: return cls._from_pretrained(*inputs, **kwargs)
Mon Jun 29 06:31:29 2020<stderr>: File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py", line 391, in _from_pretrained
Mon Jun 29 06:31:29 2020<stderr>: raise EnvironmentError(msg)
Mon Jun 29 06:31:29 2020<stderr>:OSError: Couldn't reach server at '{}' to download vocabulary files.
```
## Expected behavior
I wouldn't expect `FileExistsError: [Errno 17] File exists: '/root/.cache/torch/transformers'` to show up.
I also wouldn't expect `OSError: Couldn't reach server at '{}' to download vocabulary files.` to show up.
To debug whether there is a network issue with my containers causing the above `OSError`, I created a separate sample Pod and within the container, I started up python and tried to load a tokenizer, which worked just fine. So I don't think this is a network issue. I also don't quite understand why the server path is empty in the above log.
## Environment info
- `transformers` version: 2.3.0
- Platform: AWS Elastic Kubernetes Service (EKS) cluster
- Python version: 3.6
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
| 06-29-2020 19:10:16 | 06-29-2020 19:10:16 | Do you have a code sample to reproduce this?<|||||>@LysandreJik sure, create a file named `train.py` containing only the following 2 lines:
```
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
```
Create `helm/mpijob_chart/templates/job.yaml` as follows:
```
apiVersion: kubeflow.org/v1alpha2
kind: MPIJob
metadata:
name: {{ .Values.jobName }}
spec:
slotsPerWorker: {{ .Values.gpusPerNode }}
cleanPodPolicy: Running
backoffLimit: 0
mpiReplicaSpecs:
Worker:
replicas: {{ .Values.nodeCount }}
restartPolicy: Never
template:
spec:
nodeSelector:
beta.kubernetes.io/instance-type: {{ .Values.nodeType }}
volumes:
- name: fsx
persistentVolumeClaim:
claimName: fsx-claim
containers:
- image: {{ .Values.image }}
imagePullPolicy: Always
name: worker
# This will allocate 8 GPUs for each worker, helping Kubernetes place resources.
# Should match slotsPerWorker
resources:
limits:
nvidia.com/gpu: {{ .Values.gpusPerNode }}
# This mounts onto an external FSx volume, useful for storing the training data here.
volumeMounts:
- name: fsx
mountPath: /fsx
# This exposes the Docker container to listen to the mpirun ports
env:
- name: NCCL_SOCKET_IFNAME
value: ^lo,docker0
# Place the subdirectory on the Python path
- name: PYTHONPATH
value: {{ .Values.pythonPath }}
# Some initial preparation, sleep for 14 days so the container will stay alive and respond to mpirun
command: ["/bin/bash"]
args: ["-c", "sleep 1209600"]
Launcher:
replicas: 1
restartPolicy: Never
template:
spec:
nodeSelector:
beta.kubernetes.io/instance-type: {{ .Values.nodeType }}
volumes:
- name: fsx
persistentVolumeClaim:
claimName: fsx-claim
containers:
- image: {{ .Values.image }}
imagePullPolicy: Always
name: launcher
volumeMounts:
- mountPath: /fsx
name: fsx
# Wait 15 seconds so any additional dependencies being manually installed will finish on worker nodes
command: ["/bin/bash"]
args:
- "-c"
- "sleep 15 && \
mpirun -np {{ mul .Values.nodeCount .Values.gpusPerNode }} \
--allow-run-as-root \
--timestamp-output \
-bind-to none -map-by slot \
-x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
-mca pml ob1 -mca btl ^openib \
{{ .Values.command1 }} "
```
`helm/mpijob_chart/values.yaml` for the above `job.yaml`:
```
# Each job name must be unique to the Kubernetes cluster
jobName: gpt2-training
nodeCount: 1
# If gpusPerNode is too high, the job will pend forever.
gpusPerNode: 7
nodeType: p3dn.24xlarge
image: <REPLACE WITH YOUR OWN IMAGE HERE, I suggest building a custom image from Horovod base image with transformers installed in it>
pythonPath: /fsx/myCodeRepo
command1: "python /fsx/myCodeRepo/train.py &> /fsx/testout.txt"
```
For setting up the shared file-system for the cluster (i.e., FSx, the persistent volume and persistent volume claim YAMLs), the instructions/templates are here: https://github.com/kubernetes-sigs/aws-fsx-csi-driver/tree/master/examples/kubernetes/static_provisioning
Copy `train.py` to `/fsx/myCodeRepo/` in the shared file-system. Then you can use `helm install trainjob helm/mpijob_chart` to run the job.<|||||>Have you tried with changing the `cache_dir` when using `from_pretrained`?<|||||>@LysandreJik yes, I changed it to:
```
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", cache_dir="/fsx/transformers_cache/")
```
The error is still the same. Instead of giving `FileExistsError ` with `/root/.cache/torch/transformers`, it gives the error with `/fsx/transformers_cache/`. And the `OSError` is also thrown just like before.<|||||>@LysandreJik it seems that this error only happens when I have multiple processes during training. I tried a simple test of using 1 node and only 1 GPU in that node during training, and that is working totally fine.
UPDATE: As a temporary workaround, I kicked off a job with 1 node and 1 GPU, which let the pre-trained tokenizer/model caching finish cleanly. Then I killed that job immediately and kicked off a new one with N nodes, 8 GPUs. No errors this time. But this is kind of a hack. Do you have a better solution?
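One pattern I'm considering instead (a sketch that assumes Horovod is initialised in the training script, as in the MPIJob setup above; not tested here) is to let only rank 0 populate the shared cache and make the other ranks wait:
```python
import torch
import horovod.torch as hvd
from transformers import GPT2Tokenizer

hvd.init()
cache_dir = "/fsx/transformers_cache/"

if hvd.rank() == 0:
    # only one process downloads and writes into the shared cache
    GPT2Tokenizer.from_pretrained("gpt2", cache_dir=cache_dir)

# a dummy allreduce acts as a barrier, so every other rank waits for the download
hvd.allreduce(torch.zeros(1), name="wait_for_cache")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", cache_dir=cache_dir)
```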
transformers | 5,379 | closed | Cannot reduce n_ctx for distil gpt2 from 1024 to 256 | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): gpt2
Language I am using the model on (English, Chinese ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 06-29-2020 19:10:06 | 06-29-2020 19:10:06 | Hi! Do you mind filling in the template?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,378 | closed | Mention openAI model card and merge content | 06-29-2020 18:23:09 | 06-29-2020 18:23:09 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=h1) Report
> Merging [#5378](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `0.20%`.
> The diff coverage is `77.67%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5378 +/- ##
==========================================
- Coverage 77.01% 76.80% -0.21%
==========================================
Files 128 138 +10
Lines 21615 24314 +2699
==========================================
+ Hits 16646 18675 +2029
- Misses 4969 5639 +670
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |
| [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |
| ... and [158 more](https://codecov.io/gh/huggingface/transformers/pull/5378/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=footer). Last update [482a599...3595179](https://codecov.io/gh/huggingface/transformers/pull/5378?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,377 | closed | New tokenizer code in transformer 3.0.0 is creating error with old code | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): BeRT and GPT2 for Poly-encoder implementation
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Ubuntu V2.0 corpus dataset, implementing a Poly-encoder pipeline. Everything was already done; I was re-training the model to verify the results of the first training run.
## To reproduce
Steps to reproduce the behavior:
Happens when using some_tokenizer_fromHF.encode_plus(). Below is the eval script to test on custom input text; it is exactly the same as the one reading from the dataset during training (simplified for eval).
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
max_len = 128  # note: max_len is not defined in my original snippet; any reasonable value reproduces the warning
c_text="what is your name. I am Gloid"
context=tokenizer.encode_plus(c_text,
text_pair=None,
add_special_tokens=True,
max_length=max_len,
pad_to_max_length=False)
texts=["Obama", "Trump", "Eminem", "slender man", "Pewdiepie"]
for text in texts:
tokenized_dict = tokenizer.encode_plus(text,
text_pair=None,
add_special_tokens=True,
max_length=max_len,
pad_to_max_length=True)
```
The error is -
"Truncation was not explicitely activated but `max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length. Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may want to check this is the right behavior."
Repeated a total of 6 times, i.e., for every sequence passed into encode_plus
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Not to give this error, and return the input ids, segment ids, and input masks
The issue is completely identical to the closed issue - https://github.com/huggingface/transformers/issues/5155
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 06-29-2020 18:20:28 | 06-29-2020 18:20:28 | Hi, this is not an error but a warning. If you want to disable warnings, you can use the following:
```py
import logging
logging.basicConfig(level=logging.ERROR)
```<|||||>Oh, I'm sorry for writing "error" everywhere, but I want to know, is this default behaviour correct for BeRT, it says by default it'll use only_first<|||||>You can read the documentation concerning that method [here](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.__call__).
Here's the part you're probably interested in:
> ‘only_first’: truncate to a max length specified in max_length or to the max acceptable input length for the model if no length is provided (max_length=None). This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided,
Since you're only using a single sentence, it seems to be what you're looking for?<|||||>I am also concatenating multiple "context" sequences using [SEP] token, I'm just feeding sequences into encode_plus and stripping off the [CLS] token at the beginning and concatenating the rest with the old one, making it
[CLS]seq1[SEP]seq2[SEP]
I assume that even earlier, in the older version, when this warning wasn't being logged to the terminal, it still used only_first, didn't it?<|||||>Is there a reason why you're not using the `encode_plus` method with your pairs? The tokenizer will automatically build the pairs as the model expects them.
If you pass a single sequence and want to build them with the special tokens yourself, you can use the `add_special_tokens=False` flag. No need to strip the special tokens then.
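For reference, a small sketch of what I mean, reusing the strings from your example above (the `max_length` value is just illustrative):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# let encode_plus build [CLS] ... [SEP] ... [SEP] for you from a pair
pair = tokenizer.encode_plus(
    "what is your name. I am Gloid",
    text_pair="Obama",
    add_special_tokens=True,
    max_length=128,
    pad_to_max_length=True,
)

# or get raw ids without special tokens if you want to assemble the sequence yourself
raw_ids = tokenizer.encode_plus("Obama", add_special_tokens=False)["input_ids"]
```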
Be careful: since you're building the sequences yourself, if you truncate them some of the special tokens might be cut off.<|||||>This is my snippet, "texts" is a list of strings
```
def __call__(self, texts):
input_ids_list, segment_ids_list, input_masks_list = [], [], []
for text in texts[::-1][:self.max_history]:
tokenized_dict = self.tokenizer.encode_plus(text,
text_pair=None,
add_special_tokens=True,
max_length=self.max_len,
pad_to_max_length=False)
input_ids, input_masks = tokenized_dict['input_ids'], tokenized_dict['attention_mask']
segment_ids = [1] * len(input_ids)
if len(input_ids_list) > 0:
input_ids = input_ids[1:]
segment_ids = segment_ids[1:]
input_masks = input_masks[1:]
input_ids_list.extend(input_ids)
segment_ids_list.extend(segment_ids)
input_masks_list.extend(input_masks)
if len(input_ids_list) >= self.max_len:
input_ids_list = input_ids_list[:self.max_len - 1] + [self.sep_id]
segment_ids_list = segment_ids_list[:self.max_len]
input_masks_list = input_masks_list[:self.max_len]
break
input_ids_list += [self.pad_id] * (self.max_len - len(input_ids_list))
segment_ids_list += [0] * (self.max_len - len(segment_ids_list))
input_masks_list += [0] * (self.max_len - len(input_masks_list))
assert len(input_ids_list) == self.max_len
assert len(segment_ids_list) == self.max_len
assert len(input_masks_list) == self.max_len
return input_ids_list, segment_ids_list, input_masks_list
```
I'm not truncating anything after I create my full sequence. That suggestion about using text_pair was great, I don't know why I didn't think of that. Thank you
PS: Is this way of creating input ids correct? And in the older version, when this warning wasn't being logged to the terminal, was it using only_first even then?<|||||>In that snippet, are you trying to concatenate a lot of sequences together? If you have 10 sequences in your text, you want to have a giant `input_ids_list` containing all the 10 sequences separated by a separator token?<|||||>Yes, exactly, that's what I am doing, and I am then stripping off the earlier part because I am flipping the list too. Basically the list is a conversation where I am making a new token list out of the most recent N words of the conversation<|||||>For anyone stumbling across this issue and having problems with sentence pair classification in v3.0.0:
In v3.0.0, the default truncation strategy was changed, which causes code that used to work in v2.11.0 to break in some cases.
**v2.11.0**: default truncation strategy is `longest_first`
**v.3.0.0**: truncation strategy appears to default to `only_first`
For sentence pair classification in v3.0.0, this can result in a failure to truncate sentence pair to the supplied `max_length` parameter, which can break downstream model or other code:
```
W0702 12:56:50.435204 140139424331584 tokenization_utils_base.py:1447] Truncation was not explicitely activated but
`max_length` is provided a specific value, please use `truncation=True` to explicitely truncate examples to max length.
Defaulting to 'only_first' truncation strategy. If you encode pairs of sequences (GLUE-style) with the tokenizer you may
want to check this is the right behavior.
E0702 12:56:50.437675 140139424331584 tokenization_utils.py:784] We need to remove 25 to truncate the input but the first
sequence has a length 17. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance
'longest_first' or 'only_second'.
```
For example, the following code prints **32** in v2.11.0, but **57** in v3.0.0:
```python
text_a = '''Debunk this: Six Corporations Control $NUMBER$% Of The Media In America'''
text_b = '''
I can't believe people are missing the two obvious flaws in this analysis.
This infographic doesn't show that $NUMBER$ companies control $NUMBER$% of the media. '''
from transformers import *
t = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
output = t.encode_plus(text_a, text_b, max_length=32)
print(len(output['input_ids']))
```
The solution is to explicitly provide `truncation='longest_first'`, as indicated in the warning:
```python
output = t.encode_plus(text_a, text_b, max_length=32, truncation='longest_first')
```<|||||>Good point. We will release a patch to fix this breaking change (move back to having `longest_first` as default) plus the one mentioned in #5447 probably tomorrow or early next week. <|||||>@thomwolf: Thanks - changing the default back to `longest_first` may also address #5460 |
transformers | 5,376 | closed | Unable to load Longformer pretrained weights | Hi,
I am following the official example to load longformer model by:
```
from transformers import LongformerModel, LongformerTokenizer
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
model = LongformerModel.from_pretrained('allenai/longformer-base-4096')
```
I can load the tokenizer and config, but when I try to load the model, it gives me the error "OSError: Unable to load weights from pytorch checkpoint file. If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True." This happened with my other transformer models like BERT/XLNet, and I solved it with their corresponding "convert_tf_checkpoint_to_pytorch" functions. I don't understand why this happens with Longformer, as it is not from any TensorFlow module.
I'm using
- `transformers` version: 3.0.0
- Python version: 3.7.4
- PyTorch version: 1.2.0 | 06-29-2020 18:19:54 | 06-29-2020 18:19:54 | Hi! Do you have internet access? Does the folder `allenai/longformer-base-4096` exist on your machine?<|||||>Ahh...sorry I thought it was included in the library...Thank you for your reply!<|||||>It is included in the library :) I'm asking because it may be possible that you have a folder that has the same name, so the library is looking to load that folder instead of [the weights we have on S3.](https://huggingface.co/allenai/longformer-base-4096)<|||||>@LysandreJik I don't have any existing folder called 'allenai/longformer-base-4096' (or 'bert-base-uncased' for BERT) and I can't load pretrained weights until I download them to my local machines |
transformers | 5,375 | closed | Update dependency mecab to at least version 1.0 | Require at least version 1.0 of mecab, which comes with prebuilt wheels for all major platforms (OS X, Linux, Windows). This should do away with some reported incompatibility issues for non-linux systems.
See https://github.com/SamuraiT/mecab-python3/issues/31#issuecomment-651053281
Note: code untested but nothing much to test. [Version 1.0.0 is on PyPi](https://pypi.org/project/mecab-python3/1.0.0/) so it should work as written. | 06-29-2020 17:52:21 | 06-29-2020 17:52:21 | No, I pinned this on purpose because v1.0 has breaking changes. Tokenization is not done the same way (once you go through the obvious bugs) because the underlying dictionary seems different. Not sure how we can go to v1.0 until pretrained models using this tokenizer have been updated (if that's ever done).<|||||>Could you give more information so that I can post a new issue over at mecab? (Or even better, if you have the time, post the issue yourself?)<|||||>Well the breaking changes are numerous.
First, the library no longer provides a dictionary and requires installing another dependency:
```
pip install unidic-lite
```
Once the installation problem is passed, we need to fix the tokenizer by changing [this line](https://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/src/transformers/tokenization_bert_japanese.py#L207) to `token = line.split("\t")[0]` because the tokenizer now returns multiple tab-separated things.
Once this step is done, the [test_mecab_tokenizer](https://github.com/huggingface/transformers/blob/482a5993c20ef32512a42661bfa404516763b72e/tests/test_tokenization_bert_japanese.py#L90) still fails because the tokenization of " \tアップルストアでiPhone8 が \n 発売された 。 " is not ["アップルストア", "で", "iPhone", "8", "が", "発売", "さ", "れ", "た", "。"] anymore but ['アップル', 'ストア', 'で', 'iPhone', '8', 'が', '発売', 'さ', 'れ', 'た', '。'].
And I did not look at other tests yet, since that tokenization change alone is enough to warrant a pin on older releases, given the fact it will break models pretrained with that tokenizer.
(Note that the mecab dependency is only used for these tests anyway and that a Windows user calling pytest on the tests folder will have no issue since the tokenizer tests are ignored by default.)
<|||||>Gotcha. I would like to see the move to v1 in the future though, for the reasons mentioned before. Perhaps this is something to plan for transformers v4? That would give users plenty of time still to use their current models, and in having a new major version it is no surprise there can be compatibility changes.<|||||>Hello, I'm the mecab-python3 maintainer. Please feel free to tag me on any discussions about this if there's a way I can help.
I would strongly suggest switching to the new dictionary when you can - the dictionary you currently use, ipadic, hasn't been updated since at least 2007 and will never be updated again (the organization that created it no longer does that kind of work). In contrast Unidic is maintained by the NINJAL, the organization in charge of Japanese Universal Dependencies.<|||||>Hi @polm, thanks for getting involved! So from an accuracy point of view, the new dictionary is better than the previous version - presumably? The reason for updating, then, would be: better tokenisation, cross-platform pip install support. Downsides are that models that are already trained with the old tokenisation will behave differently if they use a different tokenizer. That's why I suggest to make the breaking change in v4 which is probably still some months away. If people want to use older models, they can always use an older version of mecab.
@sgugger I don't know if this is possible, but perhaps the model cards should include a section on which version of the library and/or dependencies (pip freeze) were used to train the model. This would also greatly help reproducibility issues.<|||||>ipadic was already pretty accurate, but yes, the new dictionary (Unidic) will at the very least be more accurate by virtue of having newer words, like 令和 (Reiwa), the current era name. You can read more about the different dictionaries [here](https://www.dampfkraft.com/nlp/japanese-tokenizer-dictionaries.html).
> @sgugger I don't know if this is possible, but perhaps the model cards should include a section on which version of the library and/or dependencies (pip freeze) were used to train the model. This would also greatly help reproducibility issues.
This is also very important. One of my objectives in packaging unidic in pip was to make it easier to use the same dictionary version between projects.<|||||>AFAICT having a model on the Hub that doesn't work properly across all versions of the library would be a first, and set a dangerous precedent. IMO moving to mecab >= 1 requires to retrain the model on the hub with the new tokenizer and pulling out the old ones, as I've stated before. I don't know if @thomwolf or @julien-c feel differently.<|||||>IMO in the future the library shouldn't prescribe what tokenizer to use, i.e. the model cards from @singletongue at https://huggingface.co/cl-tohoku could let users know (or maybe even programmatically check) that they need a specific version of `mecab<1.0`, and we would remove `mecab` from the dependencies (users can install it if they need it)
Same for sacremoses and sentencepiece which shouldn't be hard dependencies of the library anymore.
In the short term, I'm ok with pinning the dependency to <1.
Thoughts?<|||||>@julien-c Yes, I agree. This joins together what I wrote before: the ability/requirement for model cards to indicate some environment variables/dependencies that were used. So for instance, this could be part of the model card in a "dependency" section:
```
mecab==1.0.0
transformers>=2.8.1
```
<|||||>Then we just need to decide which version we use for the tests (since we can't have both mecab < 1 and mecab >= 1 at the same time.<|||||>Just a note, I am working on packaging ipadic the same way I packaged Unidic after several requests for backwards compatability, including from sacrebleu. That should be released in the next few weeks, see [this issue](https://github.com/SamuraiT/mecab-python3/issues/49). That way you'll get backward compatability and the better wheels in 1.0+. <|||||>I released ipadic on PyPI. It works with the latest mecab-python3. You can use it like this:
```
import MeCab
import ipadic
tagger = MeCab.Tagger(ipadic.MECAB_ARGS)
print(tagger.parse("図書館にいた事がバレた"))
```
I hope you'll change to Unidic as soon as is feasible anyway, but this way you can use the recent version of mecab-python3 with the old models.<|||||>I believe that since #6086 has been merged this can be closed.<|||||>Yes it can. |
transformers | 5,374 | closed | How to share model cards with the CLI | As seen offline with @julien-c, adding a new recommended way to upload model cards along with models. | 06-29-2020 17:41:58 | 06-29-2020 17:41:58 | I would reverse the two options for now (PR to the repo is still the recommended way at the moment)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=h1) Report
> Merging [#5374](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **increase** coverage by `0.17%`.
> The diff coverage is `77.67%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5374 +/- ##
==========================================
+ Coverage 77.01% 77.18% +0.17%
==========================================
Files 128 138 +10
Lines 21615 24314 +2699
==========================================
+ Hits 16646 18766 +2120
- Misses 4969 5548 +579
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |
| [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |
| ... and [157 more](https://codecov.io/gh/huggingface/transformers/pull/5374/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=footer). Last update [482a599...12d3941](https://codecov.io/gh/huggingface/transformers/pull/5374?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Switched the order. |
transformers | 5,373 | closed | Update README.md | - T5 pic uploaded to a more permanent place | 06-29-2020 17:37:02 | 06-29-2020 17:37:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=h1) Report
> Merging [#5373](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **decrease** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5373 +/- ##
==========================================
- Coverage 77.49% 77.42% -0.07%
==========================================
Files 138 138
Lines 24314 24314
==========================================
- Hits 18843 18826 -17
- Misses 5471 5488 +17
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `49.40% <0.00%> (-42.04%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `61.90% <0.00%> (-33.34%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `83.28% <0.00%> (+0.58%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/5373/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=footer). Last update [482a599...126ed9d](https://codecov.io/gh/huggingface/transformers/pull/5373?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,372 | closed | Albert pooling dimension mismatches | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): **albert-base-v2**
Language I am using the model on (English, Chinese ...): **English**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the following code snippet. You will need the **gdown** package. Note that I am using a modified version of modeling_albert.py to avoid the bug described [here](https://github.com/huggingface/transformers/pull/4095) and [here](https://github.com/huggingface/transformers/issues/1188).
```shell
# Install Transformers
pip install transformers
# Create a working folder (skip if using Colab)
mkdir /content/
# Clone BAT repository to the folder
git clone https://github.com/akkarimi/Adversarial-Training-for-ABSA.git
# Download patches for BAT scripts in src/
cd /content/
git clone https://github.com/williamsdaniel888/reimagined-octo-tribble.git
mv /content/reimagined-octo-tribble/modeling_albert.py /usr/local/lib/python3.6/dist-packages/transformers/modeling_albert.py
mv /content/reimagined-octo-tribble/absa_data_utils.py /content/Adversarial-Training-for-ABSA/src/absa_data_utils.py
mv /content/reimagined-octo-tribble/asc_bert_pt.py /content/Adversarial-Training-for-ABSA/src/asc_bert_pt.py
mv /content/reimagined-octo-tribble/bat_asc.py /content/Adversarial-Training-for-ABSA/src/bat_asc.py
mv /content/reimagined-octo-tribble/run_asc.py /content/Adversarial-Training-for-ABSA/src/run_asc.py
# Training
cd /content/Adversarial-Training-for-ABSA/script
chmod +x ./run_absa.sh
./run_absa.sh asc laptop_pt laptop pt_asc 1 0
```
2. Open **/content/Adversarial-Training-for-ABSA/run/pt_asc/laptop/1/train_log.txt** to observe the stack trace.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
**Stack trace:**
```
Traceback (most recent call last):
File "../src/run_asc.py", line 280, in <module>
main()
File "../src/run_asc.py", line 275, in main
train(args)
File "../src/run_asc.py", line 122, in train
_loss, adv_loss = model(input_ids, segment_ids, input_mask, label_ids)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/content/Adversarial-Training-for-ABSA/src/bat_asc.py", line 20, in forward
_loss = self.loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 932, in forward
ignore_index=self.ignore_index, reduction=self.reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2317, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 2113, in nll_loss
.format(input.size(0), target.size(0)))
ValueError: Expected input batch_size (1600) to match target batch_size (16).
```
A diagnostic printout of intermediate variables' sizes within the model's forward() method:
```
Shape of input_ids: torch.Size([16, 100])
Shape of embedding_output: torch.Size([16, 100, 128])
Shape of last encoded layer: torch.Size([16, 100, 768])
Shape of pooled_output: torch.Size([16, 100, 768])
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The pooling unit should pool over the second dimension of the last encoded layer, so that the shape of pooled_output is `torch.Size([16, 768])` instead of `torch.Size([16, 100, 768])`.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): 2.2.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
## Possible fix
I have worked around this issue by defining an AlbertPooler class for AlbertModel, similar to the BertPooler method for BertModel in modeling_bert.py:
```python
class AlbertPooler(nn.Module):
def __init__(self, config):
super().__init__()
self.dense = nn.Linear(config.hidden_size, config.hidden_size)
self.activation = nn.Tanh()
def forward(self, hidden_states):
# We "pool" the model by simply taking the hidden state corresponding
# to the first token.
first_token_tensor = hidden_states[:, 0]
pooled_output = self.dense(first_token_tensor)
pooled_output = self.activation(pooled_output)
return pooled_output
```
Make sure to then update **AlbertModel's init()** method:
```
self.pooler = AlbertPooler(config)
```
and **AlbertModel's forward()** method:
```python
pooled_output = self.pooler(sequence_output)
```
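If it helps anyone applying a similar patch, a quick shape check along these lines confirms that the pooler now returns a rank-2 tensor. This is only a minimal sketch; the `albert-base-v2` checkpoint and the sequence length of 100 are illustrative and not taken from the run above.
```python
import torch
from transformers import AlbertModel, AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
model = AlbertModel.from_pretrained("albert-base-v2")

# Dummy batch of 16 sequences padded/truncated to length 100, mirroring the shapes above
batch = tokenizer(
    ["a short example sentence"] * 16,
    padding="max_length",
    truncation=True,
    max_length=100,
    return_tensors="pt",
)

with torch.no_grad():
    sequence_output, pooled_output = model(**batch)[:2]

print(sequence_output.shape)  # torch.Size([16, 100, 768])
print(pooled_output.shape)    # torch.Size([16, 768]) once the pooler uses only the first token
```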
| 06-29-2020 17:36:32 | 06-29-2020 17:36:32 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,371 | closed | Update README.md | - Model pic uploaded to a more permanent place | 06-29-2020 17:33:18 | 06-29-2020 17:33:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=h1) Report
> Merging [#5371](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5371 +/- ##
==========================================
+ Coverage 77.49% 77.56% +0.06%
==========================================
Files 138 138
Lines 24314 24314
==========================================
+ Hits 18843 18858 +15
+ Misses 5471 5456 -15
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.93% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5371/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.75% <0.00%> (+2.05%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=footer). Last update [482a599...812f63a](https://codecov.io/gh/huggingface/transformers/pull/5371?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,370 | closed | Update README.md | - Fix missing ```-``` in language meta
- T5 pic uploaded to a more permanent place | 06-29-2020 17:29:25 | 06-29-2020 17:29:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=h1) Report
> Merging [#5370](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **increase** coverage by `0.34%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5370 +/- ##
==========================================
+ Coverage 77.49% 77.84% +0.34%
==========================================
Files 138 138
Lines 24314 24314
==========================================
+ Hits 18843 18928 +85
+ Misses 5471 5386 -85
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `90.22% <0.00%> (+0.31%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `83.84% <0.00%> (+1.53%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.20% <0.00%> (+2.17%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `89.95% <0.00%> (+2.28%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <0.00%> (+3.75%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.24% <0.00%> (+4.54%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.71% <0.00%> (+8.92%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/5370/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=footer). Last update [482a599...7a1f1c6](https://codecov.io/gh/huggingface/transformers/pull/5370?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,369 | closed | Update README.md | - Fix missing ```-``` in language meta
- T5 pic uploaded to a more permanent place | 06-29-2020 17:26:26 | 06-29-2020 17:26:26 | |
transformers | 5,368 | closed | Fix model card folder name so that it is consistent with model hub | 06-29-2020 15:44:12 | 06-29-2020 15:44:12 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=h1) Report
> Merging [#5368](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b9ee87f5c730d72b326ef65089a574a0b519e827&el=desc) will **decrease** coverage by `0.88%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5368 +/- ##
==========================================
- Coverage 77.49% 76.61% -0.89%
==========================================
Files 138 138
Lines 24314 24314
==========================================
- Hits 18843 18627 -216
- Misses 5471 5687 +216
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `24.19% <0.00%> (-74.20%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `17.45% <0.00%> (-21.94%)` | :arrow_down: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.10% <0.00%> (-17.35%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `65.26% <0.00%> (-11.58%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `26.31% <0.00%> (-1.32%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.68% <0.00%> (-0.72%)` | :arrow_down: |
| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/5368/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=footer). Last update [97f2430...37b7dc0](https://codecov.io/gh/huggingface/transformers/pull/5368?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,367 | closed | Add link to file and fix typos in model card | 06-29-2020 15:30:52 | 06-29-2020 15:30:52 | ||
transformers | 5,366 | closed | Doc for v3.0.0 | 06-29-2020 15:05:57 | 06-29-2020 15:05:57 | ||
transformers | 5,365 | closed | [seq2seq docs] Move evaluation down, fix typo | 06-29-2020 14:34:33 | 06-29-2020 14:34:33 | ||
transformers | 5,364 | closed | Layer #0 (named "roberta") expects 0 weight(s), but the saved weights have 199 element(s) | # ❓ Questions & Help
## Details
I want to build a new NLU model extended from TFRobertaPreTrainedModel.
I wrote the code with reference to the 'TFRobertaForSequenceClassification' class.
```python
from transformers import TFRobertaPreTrainedModel, TFRobertaMainLayer
class TFRobertaForNLU(TFRobertaPreTrainedModel):
def __init__(self, config, *inputs, **kwargs):
super().__init__(config, *inputs, **kwargs)
self.roberta = TFRobertaMainLayer(config, name="roberta")
    def call(
        self,
        inputs=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        output_attentions=None,
        output_hidden_states=None,
        labels=None,
        training=False,
    ):
pass
model = TFRobertaForNLU.from_pretrained("jplu/tf-xlm-roberta-base")
```
And I got an error:
```python
Traceback (most recent call last):
File "test.py", line 449, in <module>
File ".../lib/python3.7/site-packages/transformers/modeling_tf_utils.py", line 489, in from_pretrained
model.load_weights(resolved_archive_file, by_name=True)
File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 181, in load_weights
return super(Model, self).load_weights(filepath, by_name)
File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py", line 1175, in load_weights
saving.load_weights_from_hdf5_group_by_name(f, self.layers)
File ".../lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 751, in load_weights_from_hdf5_group_by_name
str(len(weight_values)) + ' element(s).')
ValueError: Layer #0 (named "roberta") expects 0 weight(s), but the saved weights have 199 element(s).
```
It seems like the 'roberta' layer is not properly initialized. Am I going wrong somewhere? | 06-29-2020 14:29:13 | 06-29-2020 14:29:13 | Hello, I'm facing the same issue. Did you find any solution? :)<|||||>Hello, facing the same problem<|||||>cc @Rocketknight1 @gante :raised_hands: <|||||>Directly using `TFRobertaMainLayer` is unusual and not recommended - @jessiewang158 can you try using `TFRobertaModel` instead?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
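As a follow-up to the suggestion above, a minimal sketch of wrapping `TFRobertaModel` as a sub-layer (instead of instantiating `TFRobertaMainLayer` directly) is shown below. The classification head, the number of labels and the `roberta-base` checkpoint name are illustrative assumptions, not part of the original report.
```python
import tensorflow as tf
from transformers import TFRobertaModel

class TFRobertaForNLU(tf.keras.Model):
    def __init__(self, pretrained_name="roberta-base", num_labels=2, **kwargs):
        super().__init__(**kwargs)
        # Pretrained weights are loaded here, so from_pretrained on the wrapper is not needed
        self.roberta = TFRobertaModel.from_pretrained(pretrained_name)
        self.classifier = tf.keras.layers.Dense(num_labels, name="classifier")

    def call(self, inputs, training=False):
        sequence_output = self.roberta(inputs, training=training)[0]
        # Classify from the representation of the first (<s>) token
        return self.classifier(sequence_output[:, 0, :])

model = TFRobertaForNLU()
```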
transformers | 5,363 | closed | [Benchmark] Readme for benchmark | cc @sshleifer, @julien-c, @clmnt - shifting the discussion about the README.md here to include the previous PR: #5360
in v3.0.0 | 06-29-2020 14:11:19 | 06-29-2020 14:11:19 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=h1) Report
> Merging [#5363](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bcc35cd693cc0f62d2b4853cd8a5db8608a4abd&el=desc) will **decrease** coverage by `0.87%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5363 +/- ##
==========================================
- Coverage 77.65% 76.77% -0.88%
==========================================
Files 138 138
Lines 24314 24314
==========================================
- Hits 18881 18668 -213
- Misses 5433 5646 +213
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `92.39% <0.00%> (-1.33%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.24% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.40% <0.00%> (+0.71%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5363/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=footer). Last update [4bcc35c...a73a58e](https://codecov.io/gh/huggingface/transformers/pull/5363?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@julien-c - should we go for this solution for now instead of letting users adapt a giant csv file?
We can always later adapt the way we present the data since it's just a README<|||||>sounds good |
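For contributors filling in that README, results can be produced with the benchmark utilities along the lines of the sketch below; the model id, batch size and sequence length are placeholders.
```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["bert-base-uncased"],  # placeholder model id
    batch_sizes=[8],
    sequence_lengths=[128],
)
benchmark = PyTorchBenchmark(args)
results = benchmark.run()  # prints speed and memory tables that can be pasted into the README
```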
transformers | 5,362 | closed | Pin mecab for now | This fixes the tests locally, so will hopefully fix the CI :-) | 06-29-2020 13:46:53 | 06-29-2020 13:46:53 | |
transformers | 5,361 | closed | [WIP] update pl=0.8.5 | 1. update to use from_argparse_args
2. deleted args that were manual but that the trainer already has.
3. fixed optimizer_step.
4. moved the LR logging to the correct hook
5. dropped the rank_zero_only stuff since that's not needed. All loggers and print ops happen only on rank_zero already. This was redundant.
6. proper use of setup()
@sshleifer let me know how I can test these changes | 06-29-2020 12:42:23 | 06-29-2020 12:42:23 | @sshleifer this looks good now :)
```
Epoch 1: 0%| | 2/13382 [00:01<3:15:14, 1.14it/s, loss=171200.000, v_num=1t6n9wkw]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0
Epoch 1: 0%| | 3/13382 [00:02<2:28:51, 1.50it/s, loss=135029.328, v_num=1t6n9wkw]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0
Epoch 1: 0%| | 4/13382 [00:02<2:06:06, 1.77it/s, loss=109544.000, v_num=1t6n9wkw]Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2048.0
Epoch 1: 20%|███████████████████████▉ | 2672/13382 [23:08<1:32:43, 1.93it/s, loss=7298.400, v_num=1t6n9wkw[[AEpoch 1: 50%|████████████████████████████████████████████████████████████▎ | 6674/13382 [55:41<55:58, 2.00it/s, loss=28663.199, v_num=1t6n9wkw]
Epoch 1: 50%|████████████████████████████████████████████████████████████▎ | 6675/13382 [55:46<56:02, 1.99it/s, loss=28663.199, v_num=1t6n9wkw]
Validating: 76%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 48/63 [03:33<01:07, 4.52s/it]
Epoch 1: 53%|████████████████████████████████████████████████████████████████▎ | 7114/13382 [59:07<52:05, 2.01it/s, loss=28257.600, v_num=1t6n9wkw]
```
**This PR:**
- Fixes the lightning_base.py
- adds best practices to it
- fixes seq2seq example
**TODO:**
- fix the other examples (separate PR likely?)
**Comments**
- There is a lot of flag mapping that kind of starts to limit what users can do. (accumulate_grad_batches vs whatever). It's not the worst thing, but it adds unnecessary points of failure.
- data: Download data in prepare_data, use setup to split/etc and assign.
<|||||>- Do you have a ROUGE-2 score for that run? The TQDM reported loss is much higher.
- Does multi-gpu + `trainer.test` work?
<|||||>I think you also need to fix examples/seq2seq/test_seq2seq_examples.py
and `examples/requirements.txt`. But I would only do that after you've verified
that val-rouge2 doesn't get worse on single and multi-gpu. You can see metrics in `output_dir/metrics.json`.
<|||||>@sshleifer ok, this is verified to work with 2 GPUs wnb and apex... what do you want to do now?<|||||>You also need to run `make style` to satisfy the check_code_quality CI job, and potentially examples/requirements.txt.<|||||>@sshleifer 0.8.5 is live. we can merge and finish this now :)
Then update the rest of the examples and we should be golden.
I'll fix the style stuff this weekend<|||||>A very similar change was merged @moscow25 |
transformers | 5,360 | closed | [Docs] Benchmark docs | This PR updates the docs for benchmarks and adds a README.md where the community can post their benchmark results.
Would be happy about Feedback from @sgugger and @LysandreJik .
@LysandreJik - I deleted the part about "This work was done by [Timothy Liu](https://github.com/tlkh)." because the links were broken. | 06-29-2020 12:12:43 | 06-29-2020 12:12:43 | Actually will make a separate PR for the examples README.md - to have the docs in v3.0.0.<|||||>@sgugger right now it's not ignored, so the slow test will fail because the output isn't the same. I don't think it's too big of a deal though, we can fix that after the release with only partial testing of the file.<|||||>Thanks for the review. Addressed them and also renamed the classes for consistency. |
transformers | 5,359 | closed | Segmentation fault (core dumped) after importing transformers | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.import torch
2.import transformers
3.torch.rand(4,5)
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
import torch
import transformers
torch.rand(4,5)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I got an error 'segmentation fault (core dumped)' while trying to generate a tensor after importing transformers, but if I removed 'import transformers', the tensor could be generated.

## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Ubuntu 18.04.4 LTS
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0+cu100
- Tensorflow version (GPU?): 2.2.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 06-29-2020 11:46:22 | 06-29-2020 11:46:22 | Hi, I believe this error may be due to [the sentencepiece version 0.1.92 which causes a segmentation fault.](https://github.com/huggingface/transformers/issues/4857).<|||||>> Hi, I believe this error may be due to [the sentencepiece version 0.1.92 which causes a segmentation fault.](https://github.com/huggingface/transformers/issues/4857).
Thank you very much, this error disappeared when sentencpiece was downgraded to 0.1.91.
BTW, is there some debugging method to find this root cause when we encounter segmentation fault? <|||||>Not that I know of :man_shrugging: <|||||>It works. GREAT!!!!<|||||>The error still persists after 6 months. Maybe the version 0.1.91 should be hardcoded in package requirements<|||||>It was for the last 3.x versions. For version 4.x onwards, `sentencepiece` is not a requirement anymore. |
transformers | 5,358 | closed | Do T5 have the next-sentence-predict loss? | Input the previous sentence and predict/generate the next sentence.
Thank you very much. | 06-29-2020 11:26:59 | 06-29-2020 11:26:59 |
Hi @guotong1988 It's possible to do these tasks with text-to-text approach.
for predicting next sentence you can process input like this
`sentence1: sentence1_text sentence2: sentence2_text`
and ask the model to predict `true` if sentence2 is the next sentence else `false`
and for generating next sentence, provide first sentence as input and next sentence as the target.<|||||>Thank you. It is not generating the whole next sentence. But it is predicting is_next or not_next. |
transformers | 5,357 | closed | Fix table format fot test tesults | 06-29-2020 11:10:17 | 06-29-2020 11:10:17 | ||
transformers | 5,356 | closed | Create model card | 06-29-2020 11:08:56 | 06-29-2020 11:08:56 | Pretty cool!<|||||>Thank you @julien-c |
|
transformers | 5,355 | closed | Update Bertabs example to work again | Fix the bug 'Attempted relative import with no known parent package' when using the bertabs example. Also change the used model from bertabs-finetuned-cnndm, since it seems not be accessible anymore | 06-29-2020 08:34:33 | 06-29-2020 08:34:33 | There are `remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization` and also `remi/bertabs-finetuned-extractive-abstractive-summarization`. Which should be used here? @sshleifer <|||||>Also #5234<|||||>Model remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization did not provide GPU support, which I (newbie) would expect from cnndm prefix.
I had to use remi/bertabs-finetuned-extractive-abstractive-summarization for GPU support.<|||||>Thanks! Circleci wants you to run `make style` I believe.<|||||>Ty, will do the next time. Is there a guideline for how to properly set up pull requests? I was not able to find one<|||||>Yes: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests |
transformers | 5,354 | closed | Added data collator for XLNet language modeling and related calls | Added `DataCollatorForXLNetLanguageModeling` in `data/data_collator.py` to return necessary inputs (applies masking and generates revelant tensors i.e. input_ids, perm_mask, target_mask and labels as per https://github.com/zihangdai/xlnet/blob/master/data_utils.py) for language modeling training with XLNetLMHeadModel. Also added related arguments, logic and calls in `examples/language-modeling/run_language_modeling.py`.
Resolves: #4739, #2008 (partially) | 06-29-2020 05:31:01 | 06-29-2020 05:31:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=h1) Report
> Merging [#5354](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `1.15%`.
> The diff coverage is `82.51%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5354 +/- ##
==========================================
+ Coverage 76.18% 77.33% +1.15%
==========================================
Files 138 141 +3
Lines 24292 24660 +368
==========================================
+ Hits 18506 19071 +565
+ Misses 5786 5589 -197
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.21% <0.00%> (-0.45%)` | :arrow_down: |
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `80.85% <ø> (ø)` | |
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `16.53% <4.34%> (-2.16%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `61.06% <17.30%> (-37.33%)` | :arrow_down: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.09% <20.00%> (-0.36%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `37.96% <30.00%> (-1.43%)` | :arrow_down: |
| [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <44.44%> (-3.71%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.80% <66.66%> (+25.94%)` | :arrow_up: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.37% <77.96%> (-0.86%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.67% <85.67%> (ø)` | |
| ... and [39 more](https://codecov.io/gh/huggingface/transformers/pull/5354/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=footer). Last update [28a690a...3397fb4](https://codecov.io/gh/huggingface/transformers/pull/5354?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks a lot for the PR @shngt - this looks really cool :-)
It won't actually be that easy to make this work with the trainer, since we will have to pass `mems` to the `model(...)` and retrieve it from the outputs. This won't be solved in a nice clean way IMO.
We will have to adapt this function here: https://github.com/huggingface/transformers/blob/331d8d2936e7a140225cf60301ba6469930fd216/src/transformers/trainer.py#L572
Because we don't have `namedtuples` (another case why we should have `namedtuples` :D @thomwolf @LysandreJik)
I would suggest to add the following lines after this one: https://github.com/huggingface/transformers/blob/331d8d2936e7a140225cf60301ba6469930fd216/src/transformers/trainer.py#L580 for the moment (not very pretty to this :-/):
```python
outputs = model(**inputs)
loss = outputs[0] # model outputs are always tuple in transformers (see doc)
...
if model.config_class.__name__ in ["XLNetConfig", "TransfoXLConfig"]:
mems = outputs[1]
else:
mems = None
return loss, mems
```
I don't see a better solution at the moment. What do you think @LysandreJik @julien-c ?
Would it be ok for you to add a couple of "not-so-pretty" if-statements to the trainer?
@shngt - Let's first decide on how to adapt the `Trainer` before continuing.<|||||>Wait for #5399 to be merged :-) <|||||>Sorry for the mess, I didn't know rebasing would do this. Repeating the commit message for clarity:
Changed the name of `DataCollatorForXLNetLanguageModeling` to the more general `DataCollatorForPermutationLanguageModelling`.
Removed the `--mlm` flag requirement for the new collator and defined a separate `--plm_probability` flag for its use.
CTRL uses a CLM loss just like GPT and GPT-2, so should work out of the box with this script (provided `past` is taken care of
similar to `mems` for XLNet). Added a few words in the comments to reflect this.
Changed calls and imports appropriately.
<|||||>Yeah rebasing can be dangerous, happens to us all the time :D.
#5399 is merged so it should be possible to add the XLNet data collator.
I would here to copy the only file that is changed from the branch `src/transformers/data/data_collator.py` (I tihnk) and then open a new PR where you paste in this file. Then this PR should be clean. We can close this PR then since it seems to be borked. |
transformers | 5,353 | closed | Create model card for asafaya/bert-large-arabic | 06-29-2020 00:57:48 | 06-29-2020 00:57:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=h1) Report
> Merging [#5353](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `1.73%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5353 +/- ##
==========================================
+ Coverage 76.18% 77.91% +1.73%
==========================================
Files 138 138
Lines 24292 24292
==========================================
+ Hits 18506 18928 +422
+ Misses 5786 5364 -422
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.39% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <0.00%> (+25.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.98% <0.00%> (+55.48%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5353/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=footer). Last update [28a690a...4558772](https://codecov.io/gh/huggingface/transformers/pull/5353?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,352 | closed | Create model card for asafaya/bert-mini-arabic | 06-29-2020 00:55:17 | 06-29-2020 00:55:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=h1) Report
> Merging [#5352](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **decrease** coverage by `0.03%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5352 +/- ##
==========================================
- Coverage 76.18% 76.14% -0.04%
==========================================
Files 138 138
Lines 24292 24292
==========================================
- Hits 18506 18497 -9
- Misses 5786 5795 +9
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.95% <0.00%> (-0.59%)` | :arrow_down: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <0.00%> (+25.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5352/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.98% <0.00%> (+55.48%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=footer). Last update [28a690a...62e5950](https://codecov.io/gh/huggingface/transformers/pull/5352?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! Added references to the datasets, feel free to update. |
|
transformers | 5,351 | closed | Create model card for asafaya/bert-medium-arabic | 06-29-2020 00:54:53 | 06-29-2020 00:54:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=h1) Report
> Merging [#5351](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `0.97%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5351 +/- ##
==========================================
+ Coverage 76.18% 77.15% +0.97%
==========================================
Files 138 138
Lines 24292 24292
==========================================
+ Hits 18506 18743 +237
+ Misses 5786 5549 -237
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.69% <0.00%> (-29.45%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `33.56% <0.00%> (-8.93%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `84.75% <0.00%> (-2.79%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `79.72% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.05% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `95.77% <0.00%> (+23.94%)` | :arrow_up: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/5351/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=footer). Last update [28a690a...17af457](https://codecov.io/gh/huggingface/transformers/pull/5351?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! Feel free to open another PR for tweaks if needed |
|
transformers | 5,350 | closed | Move tests/utils.py -> transformers/testing_utils.py | Fixes #5350
The motivation is to allow examples/ tests to use these utilities.
for both groups of tests, the import is
```python
from transformers.testing_utils import slow
```
Motivation: I was about to rewrite the @slow decorator today and felt that this was cleaner. | 06-29-2020 00:21:14 | 06-29-2020 00:21:14 | CI failure is spurious<|||||>no objection<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=h1) Report
> Merging [#5350](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/28a690a80e6c8dbcb50b5628ef853146e1940125&el=desc) will **increase** coverage by `1.86%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5350 +/- ##
==========================================
+ Coverage 76.18% 78.04% +1.86%
==========================================
Files 138 139 +1
Lines 24292 24339 +47
==========================================
+ Hits 18506 18996 +490
+ Misses 5786 5343 -443
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `80.85% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `26.92% <0.00%> (-68.47%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `43.47% <0.00%> (-49.57%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.95% <0.00%> (-0.59%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `57.27% <0.00%> (+24.54%)` | :arrow_up: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `93.75% <0.00%> (+25.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.98% <0.00%> (+55.48%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/5350/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.72% <0.00%> (+73.10%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=footer). Last update [28a690a...4b09e67](https://codecov.io/gh/huggingface/transformers/pull/5350?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,349 | closed | T5 FP16: bad generations from my converted checkpoint | `t5-base` can generate reasonable summaries in fp16, but my checkpoint `sshleifer/t5-base-cnn` cannot. Is there something besides the conversion script I need to do to make fp16 work?
[Colab](https://colab.research.google.com/drive/1k8agNDPdzfF38aTrz5o1kcqvWDHUxAnD?usp=sharing) showing good generations for t5-base, bad for my checkpoint.
Without colab,
```python
from transformers import *
device = 'cuda'
tokenizer = T5Tokenizer.from_pretrained('t5-base')
my_model = T5ForConditionalGeneration.from_pretrained('sshleifer/t5-base-cnn').to(device)
TXT = """summarize: Marseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that "so far no videos were used in the crash investigation." He added, "A person who has such a video needs to immediately give it to the investigators." Robin\'s comments follow claims by two magazines, German daily Bild and French Paris Match, of a cell phone video showing the harrowing final seconds from on board Germanwings Flight 9525 as it crashed into the French Alps. All 150 on board were killed. Paris Match and Bild reported that the video was recovered from a phone at the wreckage site. The two publications described the supposed video, but did not post it on their websites. The publications said that they watched the video, which was found by a source close to the investigation. "One can hear cries of \'My God\' in several languages," Paris Match reported. "Metallic banging can also be heard more than three times, perhaps of the pilot trying to open the cockpit door with a heavy object. Towards the end, after a heavy shake, stronger than the others, the screaming intensifies. Then nothing." "It is a very disturbing scene," said Julian Reichelt, editor-in-chief of Bild online. An official with France\'s accident investigation agency, the BEA, said the agency is not aware of any such video. Lt. Col. Jean-Marc Menichini, a French Gendarmerie spokesman in charge of communications on rescue efforts around the Germanwings crash site, told CNN that the reports were "completely wrong" and "unwarranted." Cell phones have been collected at the site, he said, but that they "hadn\'t been exploited yet." Menichini said he believed the cell phones would need to be sent to the Criminal Research Institute in Rosny sous-Bois, near Paris, in order to be analyzed by specialized technicians working hand-in-hand with investigators. But none of the cell phones found so far have been sent to the institute, Menichini said. Asked whether staff involved in the search could have leaked a memory card to the media, Menichini answered with a categorical "no." Reichelt told "Erin Burnett: Outfront" that he had watched the video and stood by the report, saying Bild and Paris Match are "very confident" that the clip is real. He noted that investigators only revealed they\'d recovered cell phones from the crash site after Bild and Paris Match published their reports."""
batch = tokenizer.batch_encode_plus([TXT], return_tensors='pt').to(device)
tokenizer.batch_decode(my_model.generate(**batch, skip_special_tokens=True)) # way shorter than t5-base but english
# ['prosecutor in crash investigation says he is not aware of any video footage from on board ']
my_model = my_model.half()
my_model.generate(**batch, skip_special_tokens=True)
# [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
```
**Update:** On master +brutasse, (torch 1.5), t5-base produces bad generations in fp16. They are similar to the recently converted checkpoint -all bos. | 06-29-2020 00:06:51 | 06-29-2020 00:06:51 | Not sure whether I can help you here. I don't think fp16 is really stable for T5 no? <|||||>Definitely not stable. But you had an idea for how to fix related to layernorm if i recall?
<|||||>As it is now layer norm is always done in fp32: https://github.com/huggingface/transformers/blob/9a473f1e43221348334b9e7f95bb45770b7ef268/src/transformers/modeling_t5.py#L157
But even that does not seem to be enough to make fp16 work all the time<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,348 | closed | T5 Warning: embeddings are not initialized | ```python
from transformers import *
model = T5ForConditionalGeneration.from_pretrained("t5-base")
```
Is this concerning?
```
Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at t5-base and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight', 'lm_head.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
Relatedly, I am porting a summarization checkpoint and wondering whether I should initialize lm_head. | 06-28-2020 23:34:11 | 06-28-2020 23:34:11 | This [comment](https://github.com/huggingface/transformers/issues/3553#issuecomment-624306027) answers this issue. |
transformers | 5,347 | closed | Training with a large dataset | I'm currently following the tutorial on how to train a new language model ([here](https://huggingface.co/blog/how-to-train)) and facing some issues on Colab because of my large training corpus (+40 mil lines, 5 GB).
I tried to use IterableDataset so I can load data on the fly, and I'm getting this error when I try to train with the script provided in the tutorial:
"ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7f777bdbe710>"
What's the best way to resolve this? | 06-28-2020 23:12:13 | 06-28-2020 23:12:13 | Hi, can you share more details, (all the details asked in the issue template for instance).
In particular, the exact command line you are using, the full error message, etc...<|||||>Hi! Thank you for your reply. My apologies for not using the template.
I'm following the tutorial on Colab and switched LineByLineTextDataset to the following custom dataset class to process each line on the fly rather than loading everything in memory:
```
from torch.utils.data import IterableDataset
class CustomIterableDataset(IterableDataset):
def __init__(self, filename, tokenizer, block_size, len):
self.filename = filename
self.tokenizer = tokenizer
self.block_size = block_size
self.len = len
def preprocess(self, text):
batch_encoding = self.tokenizer(text.strip("\n"), add_special_tokens=True, truncation=True, max_length=self.block_size)
return torch.tensor(batch_encoding["input_ids"])
def line_mapper(self, line):
return self.preprocess(line)
def __iter__(self):
file_itr = open(self.filename, encoding="utf-8")
mapped_itr = map(self.line_mapper, file_itr)
return mapped_itr
def __len__(self):
return self.len
dataset = CustomIterableDataset("large_corpus.txt", tokenizer=tokenizer, block_size=128, len=40228303)
```
I run the rest of the script until this part:
```
%%time
trainer.train()
```
The error looks like this:
```
ValueError Traceback (most recent call last)
<ipython-input-31-0c647bc3a8b8> in <module>()
----> 1 get_ipython().run_cell_magic('time', '', 'trainer.train()')
5 frames
<decorator-gen-60> in time(self, line, cell, local_ns)
<timed eval> in <module>()
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __init__(self, dataset, batch_size, shuffle, sampler, batch_sampler, num_workers, collate_fn, pin_memory, drop_last, timeout, worker_init_fn, multiprocessing_context)
177 raise ValueError(
178 "DataLoader with IterableDataset: expected unspecified "
--> 179 "sampler option, but got sampler={}".format(sampler))
180 elif batch_sampler is not None:
181 # See NOTE [ Custom Samplers and IterableDataset ]
ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7f1eaf8fa128>
```
Please let me know if there's anything else I can provide to help you understand this problem.<|||||>That's because unfortunately the trainer cannot be currently used with an `IterableDataset`, because the `get_train_dataloader` method creates a `DataLoader` with a sampler, while `IterableDataset` may not be used with a sampler. You could override the trainer and reimplement that method as follows:
```py
def get_train_dataloader(self) -> DataLoader:
if self.train_dataset is None:
raise ValueError("Trainer: training requires a train_dataset.")
if is_tpu_available():
train_sampler = get_tpu_sampler(self.train_dataset)
else:
train_sampler = (
RandomSampler(self.train_dataset)
if self.args.local_rank == -1
else DistributedSampler(self.train_dataset)
)
data_loader = DataLoader(
self.train_dataset,
batch_size=self.args.train_batch_size,
sampler=train_sampler if not isinstance(self.train_dataset, IterableDataset) else None,
collate_fn=self.data_collator.collate_batch,
)
```
Note how we're not passing the sampler if it's an `IterableDataset`.<|||||>@LysandreJik Thank you so much for your help. I just started training with your suggestion.
For future reference, I had to make the following trivial adjustments to the snippet to make it work with the current master version.
```
from torch.utils.data.dataset import IterableDataset
def get_train_dataloader(self) -> DataLoader:
if self.train_dataset is None:
raise ValueError("Trainer: training requires a train_dataset.")
if is_torch_tpu_available():
train_sampler = get_tpu_sampler(self.train_dataset)
else:
train_sampler = (
RandomSampler(self.train_dataset)
if self.args.local_rank == -1
else DistributedSampler(self.train_dataset)
)
data_loader = DataLoader(
self.train_dataset,
batch_size=self.args.train_batch_size,
sampler=train_sampler if not isinstance(self.train_dataset, IterableDataset) else None,
collate_fn=self.data_collator,
)
return data_loader
```<|||||>oops yes forgot the return statement :sweat_smile: glad you got it to work!<|||||>Thanks! It was super helpful. Actually, I just had to change is_tpu_available() -> is_torch_tpu_available() and collate_fn=self.data_collator.collate_batch -> collate_fn=self.data_collator. I wasn't sure if these were from different versions. <|||||>> I'm currently following the tutorial on how to train a new language model ([here](https://huggingface.co/blog/how-to-train)) and facing some issues on Colab because of my large training corpus (+40 mil lines, 5 GB).
>
> I tried to use IterableDataset so I can load data on the fly, and I'm getting this error when I try to train with the script provided in the tutorial:
> "ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.RandomSampler object at 0x7f777bdbe710>"
>
> What's the best way to resolve this?
Hi hgjlee,
Have you solved this problem? Could you please share how you load a huge dataset for pretraining?
My corpus is 10 GB spread across hundreds of txt files, and I have no idea how to load these files for pretraining.
Thanks in advance for your help. |
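For anyone hitting the multi-file case above, a rough sketch (untested, the glob pattern is a placeholder) of how the `CustomIterableDataset` from earlier in this thread could be extended to stream many text files:
```python
import glob
import itertools

class MultiFileIterableDataset(CustomIterableDataset):
    def __init__(self, file_pattern, tokenizer, block_size, len):
        # Reuse the tokenization logic of CustomIterableDataset; the single filename is unused here
        super().__init__(None, tokenizer, block_size, len)
        self.filenames = sorted(glob.glob(file_pattern))  # e.g. "corpus_dir/*.txt"

    def __iter__(self):
        # Chain the per-file line iterators into one stream and tokenize on the fly
        lines = itertools.chain.from_iterable(
            open(name, encoding="utf-8") for name in self.filenames
        )
        return map(self.line_mapper, lines)
```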
transformers | 5,346 | closed | Upload model card with the CLI | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Upload model card README.md with transformers CLI.
## Motivation
Some applications may require to upload multiple models. It would be nice to be able to upload associated model card with model/tokenizer/config files.
At the moment a PR is required making it more difficult for users to upload their models and nearly impossible for apps relying on training and uploading a lot of different models.
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I could help propose a PR for the client side.
| 06-28-2020 23:03:30 | 06-28-2020 23:03:30 | This is not currently documented (cc @sgugger) but it's already possible, see for instance https://huggingface.co/Helsinki-NLP/opus-mt-fr-en
At some point it might become the preferred way of publishing a model card too.<|||||>Will update the tutorial today.<|||||>Thanks, this is great! |
transformers | 5,345 | closed | Massive text generation slowdown when using repetition_penalty param on GPU | # 🐛 Bug
## Information
Text generation when using the `repetition_penalty` takes about 2x-10x longer on a GPU, which is disproportionate and implies the GPU may not be used in that instance.
Reported from https://github.com/minimaxir/aitextgen/issues/34
The `enforce_repetition_penalty_` function at
https://github.com/huggingface/transformers/blob/08c9607c3d025f9f1a0c40e6d124d5d5d446208e/src/transformers/modeling_utils.py#L817
may be using CPU ops instead of Tensor ops.
## To reproduce
Demo Colab notebook w/ time benchmarks: https://colab.research.google.com/drive/1SzYUEC0xikHN8OEp2WOTWHlT6JxT9rRf?usp=sharing
| 06-28-2020 19:05:50 | 06-28-2020 19:05:50 | Thanks for the notebook - I can reproduce the results both on a local GPU and on colab. You are right, this slowdown is disproportionate!
I'm suspecting the double for loop to be the reason for the heavy slow down. It might be possible to replace the two for loops by some smart matrix operations, but not sure.
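For illustration, a rough and untested sketch of what a vectorized version could look like (the helper name and shapes are my own assumption, not the library implementation):
```python
import torch

def apply_repetition_penalty(lprobs: torch.Tensor, prev_tokens: torch.Tensor, penalty: float):
    # lprobs: (batch_size, vocab_size) next-token scores; prev_tokens: (batch_size, seq_len) generated ids
    gathered = lprobs.gather(1, prev_tokens)
    # Mirror the loop logic: divide positive scores by the penalty, multiply negative ones
    penalized = torch.where(gathered < 0, gathered * penalty, gathered / penalty)
    # Duplicate token ids write identical values, so the in-place scatter stays well defined
    lprobs.scatter_(1, prev_tokens, penalized)
    return lprobs
```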
Also pinging our PyTorch GPU master @mfuntowicz - do you maybe have some insight here?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,344 | closed | [examples] fix example links | Fix links for summarization and translation examples in BIG TABLE OF TASKS.
Regarding issue #5309
@sshleifer | 06-28-2020 13:57:00 | 06-28-2020 13:57:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=h1) Report
> Merging [#5344](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/98109464c12619c4164ba7714f3e5526a290239a&el=desc) will **decrease** coverage by `1.10%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5344 +/- ##
==========================================
- Coverage 77.91% 76.80% -1.11%
==========================================
Files 138 138
Lines 24282 24282
==========================================
- Hits 18920 18651 -269
- Misses 5362 5631 +269
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `24.19% <0.00%> (-74.20%)` | :arrow_down: |
| [...rc/transformers/data/datasets/language\_modeling.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2xhbmd1YWdlX21vZGVsaW5nLnB5) | `34.69% <0.00%> (-57.15%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `50.74% <0.00%> (-35.83%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `17.45% <0.00%> (-21.94%)` | :arrow_down: |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `32.10% <0.00%> (-17.35%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `65.26% <0.00%> (-11.58%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `91.78% <0.00%> (-2.74%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.41% <0.00%> (-2.36%)` | :arrow_down: |
| [src/transformers/data/processors/utils.py](https://codecov.io/gh/huggingface/transformers/pull/5344/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvdXRpbHMucHk=) | `26.31% <0.00%> (-1.32%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=footer). Last update [9810946...fc8f199](https://codecov.io/gh/huggingface/transformers/pull/5344?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! |
transformers | 5,343 | closed | [Reformer] Simpler reverse sort backward implementation | After reviewing code of similar PyTorch implementation: https://github.com/lucidrains/reformer-pytorch/pull/104 and original trax code again: https://github.com/google/trax/blob/master/trax/layers/research/efficient_attention.py#L1265-L1266, there is a simpler way to implement the backward reverse sort function.
The code is taken from https://github.com/lucidrains/reformer-pytorch/blob/42a8682ff8e7cec3122eff6febc9087f1c53f370/reformer_pytorch/reformer_pytorch.py#L414.
This simpler code is also implemented in the Reformer test branch and all tests are checked for correctness: https://github.com/huggingface/transformers/tree/branch_to_save_trax_integration_tests
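For intuition only (this is not the PR diff itself), the underlying trick is that a sort can be undone with a single gather on the inverse permutation; a tiny self-contained sketch:
```python
import torch

x = torch.randn(4, 16)
sort_idx = x.argsort(dim=-1)           # permutation applied in the forward pass
x_sorted = x.gather(-1, sort_idx)
undo_idx = sort_idx.argsort(dim=-1)    # inverse permutation
x_restored = x_sorted.gather(-1, undo_idx)
assert torch.allclose(x, x_restored)
```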
**Note**: There was no bug in the code before / the logic of the code has not changed. This PR just makes the backward function of ReverseSort simpler.
| 06-28-2020 12:11:15 | 06-28-2020 12:11:15 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=h1) Report
> Merging [#5343](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9fe09cec76efa1e221c3fd6eb8520ba0a911f092&el=desc) will **increase** coverage by `0.40%`.
> The diff coverage is `66.66%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5343 +/- ##
==========================================
+ Coverage 77.93% 78.33% +0.40%
==========================================
Files 138 138
Lines 23860 23851 -9
==========================================
+ Hits 18595 18684 +89
+ Misses 5265 5167 -98
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `89.18% <66.66%> (+0.97%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.41% <0.00%> (-0.74%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `91.82% <0.00%> (+0.31%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `90.86% <0.00%> (+0.91%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5343/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.68% <0.00%> (+28.02%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=footer). Last update [9fe09ce...d53f899](https://codecov.io/gh/huggingface/transformers/pull/5343?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,342 | closed | Added support for XLNet language modelling training in examples | Updated src/transformers/data/data_collator.py with a new XLNet-specific collator that applies masking and generates revelant tensors (input_ids, perm_mask, target_mask, labels) as per https://github.com/zihangdai/xlnet/blob/master/data_utils.py. Also added relevant calls and imports in examples/language-modeling/run_language_modeling.py. Relevant issues #4739 #2008 | 06-28-2020 09:19:27 | 06-28-2020 09:19:27 | CircleCI says:
examples/language-modeling/run_language_modeling.py:29: in <module>
from transformers import (
E ImportError: cannot import name 'DataCollatorForXLNetLanguageModeling'
How can I fix this?<|||||>`DataCollatorForXLNetLanguageModeling` needs to be imported into `src/transformers/__init__.py` to be imported `from transformers` |
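For reference, the fix is roughly a one-line addition in `src/transformers/__init__.py` next to the existing data collator imports (a sketch; the exact surrounding import block may differ):
```python
from .data.data_collator import DataCollatorForXLNetLanguageModeling
```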
transformers | 5,341 | closed | In Tensorflow the serving is very slow | # ❓ I'm using the gpt2 model, In Tensorflow the serving is very slow, Tensorflow is much faster.
## Details
I use tf.saved_model to save the model:
`tf.saved_model.save(model, export_dir="/model/2/")`
Then TensorFlow loads the model back:
```
imported = tf.saved_model.load("/model/2/")
f = imported.signatures["serving_default"]
```
The speed of calling the model directly in TensorFlow is much higher than that of the loaded SavedModel (serving) version.
In the same model without the use of transformers, tensorFlow and TensorFlow serve at the same speed | 06-28-2020 07:43:47 | 06-28-2020 07:43:47 | I don't get it |
transformers | 5,340 | closed | GPT2Tokenizer remove the ' ' (space) if it is at the end of text? | # ❓ Questions & Help
GPT2Tokenizer remove the ' ' (space) if it is at the end of text?
## Details
<!-- Description of your issue -->
Hello~ I'm using transformers v2.11.0 to run some GPT2 model.
When I tested the code, I found that the token id list generated by GPT2Tokenizer had a confusing result when I input a text string ending with a space.
To reproduce:
```
from transformers import GPT2Tokenizer
tokenizer_file = 'path_to_your_tokenizer_files'
tokenizer = GPT2Tokenizer.from_pretrained(tokenizer_file)
# case 1: without a space at the end input text string
text_1 = 'import tensorflow as'
encoded_1 = tokenizer.encode(text_1, add_special_tokens=False)
print(encoded_1)
# case 2: with a space at the end input text string
text_2 = 'import tensorflow as ' # noted there is a space at the end
encoded_2 = tokenizer.encode(text_2, add_special_tokens=False)
print(encoded_2)
```
the output is confusing
```
# case 1:
[618, 1969, 573]
# case 2:
[618, 1969, 573]
```
Both of these cases output the same token id list, which for my tokenizer is [618, 1969, 573]. This means the space at the end of case 2 is ignored in the tokenization process.
I debug the code and found that the space is ignored at this step
https://github.com/huggingface/transformers/blob/b42586ea560a20dcadb78472a6b4596f579e9043/src/transformers/tokenization_utils.py#L1288
Does this step have some purpose?
I also tried the ByteLevelBPETokenizer in tokenizers project
```
from tokenizers.implementations import ByteLevelBPETokenizer
tokenizer_file = 'path_to_your_tokenizer_files'
tokenizer = ByteLevelBPETokenizer(tokenizer_file)
# case 1: without a space at the end input text string
text_1 = 'import tensorflow as'
encoded_1 = tokenizer.encode(text_1)
print(len(encoded_1))
print(encoded_1.ids)
print(encoded_1.tokens)
# case 2: with a space at the end input text string
text_2 = 'import tensorflow as '
encoded_2 = tokenizer.encode(text_2)
print(len(encoded_2))
print(encoded_2.ids)
print(encoded_2.tokens)
```
the outputs are as expected:
```
# case 1
3
[618, 5015, 573]
['import', 'Ġtensorflow', 'Ġas']
# case 2
4
[618, 5015, 573, 231]
['import', 'Ġtensorflow', 'Ġas', 'Ġ']
```
| 06-28-2020 07:14:56 | 06-28-2020 07:14:56 | Hi @carter54,
Thanks for your issue! I think you're correct. This was a bug in a previous version.
Note that for v3.0.0 we have done a major refactoring that corrected this error as far as I can see.
If you take a look at this line in current master in this line: https://github.com/huggingface/transformers/blob/9d9b872b66f9ab9b7b7c73f2c00985dd92c4121b/src/transformers/tokenization_utils.py#L310
You can see that the whitespace is now only stripped when `all_special_tokens_extended.get(tok, None).lstrip)` is explicitly set to `True` which is not the default case for `gpt2`. So using these versions:
```
tokenizers.__version__: 0.8.0.rc3
```
and
```
transformers.__version__: 3.0.1 (master)
```
I cannot reproduce the error.
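A quick way to check this on your side, using the stock `gpt2` vocabulary instead of the custom tokenizer files (the ids will differ from your run, only the lengths matter):
```python
from transformers import GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
a = tok.encode("import tensorflow as")
b = tok.encode("import tensorflow as ")  # trailing space
print(len(a), len(b))  # on >= 3.0.0 the second should be one token longer (the trailing "Ġ")
```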
Could you update transformers via:
```
pip install transformers --upgrade
```
and check if the error persists? I think v3.0.0 should have fixed it :-) <|||||>@patrickvonplaten Yes, I can see it is fixed in V3.0.0.
Thanks~ |
transformers | 5,339 | closed | Predefined tasks in T5 | ```
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
},
"translation_en_to_de": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to German: "
},
"translation_en_to_fr": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to French: "
},
"translation_en_to_ro": {
"early_stopping": true,
"max_length": 300,
"num_beams": 4,
"prefix": "translate English to Romanian: "
}
},
```
The config file generated while saving a T5 model mentions the above-predefined tasks which can be done by just prepending the inputs with appropriate prefixes. I wanted to know if there is a list that shows any other predefined tasks which can be performed by T5. | 06-28-2020 06:54:46 | 06-28-2020 06:54:46 | You can take a look the paper to find out other tasks. AFAIK it includes GLUE tasks, and SQuAD QA as well.<|||||>https://arxiv.org/pdf/1910.10683
The examples for many tasks start on page 45 and end on page 52<|||||>Thanks for the help. I will refer to the paper. |
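For completeness, a small usage sketch of the predefined prefixes from the config above (standard `t5-base` names assumed):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

params = model.config.task_specific_params["translation_en_to_de"]
input_ids = tokenizer.encode(params["prefix"] + "The house is wonderful.", return_tensors="pt")
outputs = model.generate(
    input_ids,
    num_beams=params["num_beams"],
    max_length=params["max_length"],
    early_stopping=params["early_stopping"],
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```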
transformers | 5,338 | closed | Confuse by "All learning rates are 0" | transformers/examples/seq2seq/finetune.py 219row :
if max(scheduler.get_last_lr()) > 0:
warnings.warn("All learning rates are 0")
Why? Is there something I missed?
| 06-28-2020 03:57:23 | 06-28-2020 03:57:23 | I've got the same problem here. Have you solved it?<|||||>Same here. Any solution to solve? Or is this just a warning message that can be ignored?<|||||>Pinging @sshleifer <|||||>Ignore it. I will try to get it out of the code. I think the lr starts at 0 and climbs up for `warmup_steps` iterations. |
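For anyone puzzled by the message, a small sketch showing why the first logged learning rate is 0 with a warmup schedule (dummy model, made-up step counts):
```python
import torch
from transformers import get_linear_schedule_with_warmup

optimizer = torch.optim.AdamW(torch.nn.Linear(2, 2).parameters(), lr=3e-5)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=500, num_training_steps=10_000)
print(scheduler.get_last_lr())  # [0.0] before the first step, which is what the warning reacts to
for _ in range(500):
    optimizer.step()
    scheduler.step()
print(scheduler.get_last_lr())  # back to roughly 3e-5 once warmup is done
```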
transformers | 5,337 | closed | arxiv-ai-gpt2 model card | Add model card and generation script for model arxiv_ai_gpt2. | 06-28-2020 00:11:05 | 06-28-2020 00:11:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=h1) Report
> Merging [#5337](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efae6645e223f29cf05eeafe95105a9f869b66dd&el=desc) will **decrease** coverage by `0.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5337 +/- ##
==========================================
- Coverage 77.69% 77.51% -0.19%
==========================================
Files 138 138
Lines 24291 24291
==========================================
- Hits 18872 18828 -44
- Misses 5419 5463 +44
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `49.40% <0.00%> (-42.04%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `57.14% <0.00%> (-38.10%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.56% <0.00%> (-1.92%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+0.72%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.09% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5337/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.98% <0.00%> (+74.19%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=footer). Last update [1af58c0...bf2c4ac](https://codecov.io/gh/huggingface/transformers/pull/5337?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for the reply!
Yeah, if that is not allowed, I'll just remove it and provide it somewhere else. I'll also remove the extra lines in the model card.
---
Just removed the unnecessary lines. If the code is not appropriate, I'll also remove that immediately.<|||||>> Thanks for the reply!
>
> Yeah, if that is not allowed, I'll just remove it an provide it somewhere else. I'll also remove the extra lines in the model card.
>
> Just removed the unnecessary lines. If the code is not appropriate, I'll also remove that immediately.
IMO, it makes sense if you upload the code to Github Gist and add a link here. Adding code in this folder is not favorable because we won't be able to maintain any code here and these codes are excluded from the unit testing!<|||||>Got it and I just removed it. I'll provide a link to GitHub gist in the model card later when it's ready. Sorry for the inconvenience.<|||||>Nice! Also made me think of @LysandreJik's [`lysandre/arxiv-nlp`](https://huggingface.co/lysandre/arxiv-nlp) |
transformers | 5,336 | closed | BillSum dataset finetuning | (from distilbart issue).
Hey @sshleifer, thanks for the distilled BART version. I was able to fine-tune it with the same script on the BillSum dataset as T5, but the numbers are very different between the two. I just wanted to understand whether I might be doing something wrong when fine-tuning distilBART; does it require student training every time?
Reference numbers on BillSum Dataset:
T5-base:
avg_train_loss = tensor(1.5333, device='cuda:0')
avg_val_loss = tensor(1.4528, device='cuda:0')
epoch = 1
loss = tensor(1.6734, device='cuda:0')
rouge1 = 0.49188267841912325
rouge2 = 0.26436589848185027
rougeL = 0.3591894400892483
train_loss = tensor(1.6734, device='cuda:0')
val_loss = tensor(1.4528, device='cuda:0')
dBART-cnn-12-6:
avg_train_loss = tensor(1.3013, device='cuda:0')
avg_val_loss = tensor(1.4013, device='cuda:0')
epoch = 1
loss = tensor(1.4901, device='cuda:0')
rouge1 = 0.3681518923769047
rouge2 = 0.15683286277623087
rougeL = 0.23453727441540043
train_loss = tensor(1.4901, device='cuda:0')
val_loss = tensor(1.4013, device='cuda:0')
PS. I am using a modified version of the older finetune.py so it doesn't have Rouge for validation epochs.
Thanks | 06-27-2020 21:08:46 | 06-27-2020 21:08:46 | Interesting!
I wonder what you get starting from bart-large.
If you use the new finetune.py with --logger wandb I could more easily see your logs and suggest hyperparameters.
<|||||>Thanks! and Sure, let me do that and share the results soon!<|||||>Took me a while to get to this :/ the numbers look as expected when fine-tuned with the latest finetune.py! :)
Must have been a bug in my modified script, updated that too and confirmed the results:
dBART:
"val_avg_loss": 1.3925694227218628,
"val_avg_loss": 1.3925694227218628,
"val_avg_rouge1": 0.576209431885421,
"val_avg_rouge2": 0.34462863342832595,
"val_avg_rougeL": 0.3999874418853423,
"val_avg_gen_time": 6.152730942964554,
"val_avg_summ_len": 132.768
T5:
"val_avg_loss": 1.6064366102218628,
"val_avg_rouge1": 0.5291717003599621,
"val_avg_rouge2": 0.3049920551697407,
"val_avg_rougeL": 0.37437278542325914,
"val_avg_gen_time": 2.408391712649965,
"val_avg_summ_len": 171.18881856540085
Just a quick query though if you could help me out, for a lot of the predictions in the resulting test_generation file, I am getting summaries starting with lower case, and then generating a coherent summary from there on.
I looked at my dataset and the code and fine-tuned the models again after some pre-processing but still the same result.
Any ideas?? This was for both T5 and dBart btw.
Thanks!
<|||||>Can I see an example of an input, output pair that demonstrates the issue you are facing?<|||||>Here you go...apologies I should have shared the examples in the previous post itself :)
**Example 1:
Src :** This Act may be cited as the "Military Call-Up Relief Act". (a) Waiver For Certain Distributions. (1) In general. Section 72(t)(2) of the Internal Revenue Code of 1986 is amended by adding at the end the following: (G) Distributions to individuals performing national emergency active duty. Any distribution to an individual who, at the time of the distribution, is a member of a reserve component called or ordered to active duty pursuant to a provision of law referred to in section 101(a)(B) of title 10, United States Code, during the period of the national emergency declared by the President on September 14, 2001. (2) Waiver of underpayment penalty. Section 6654(e)(3) of such Code is amended by adding at the end the following: (C) Certain early withdrawals from retirement plans. No addition to tax shall be imposed under subsection (a) with respect to any underpayment to the extent such underpayment was created or increased by any distribution described in section 72(t)(2)(G). (3) Effective date. The amendments made by this subsection shall apply to distributions made to an individual after September 13, 2001. (b) Catch-up Contributions Allowed. (1) Individual retirement accounts. Section 219(b)(5) of the Internal Revenue Code of 1986 is amended by adding at the end the following: (D) Catch-up contributions for certain distributions. In the case of an individual who has received a distribution described in section 72(t)(2)(G), the deductible amount for any taxable year shall be increased by an amount equal to (i) the aggregate amount of such distributions made with respect to such individual, over " the aggregate amount of such distributions previously taken into account under this subparagraph or section 414(w). (2) Roth iras. Section 408A(c) of such Code is amended by redesignating paragraph (7) as paragraph (8) and by inserting after paragraph (6) the following: (7) Catch-up contributions for certain distributions. Any contribution described in section 219(b)(5)(D) shall not be taken into account for purposes of paragraph (2). (3) Employer plans. Section 414 of such Code is amended by adding at the end the following: (w) Catch-up contributions for certain distributions. (1) In general. An applicable employer plan shall not be treated as failing to meet any requirement of this title solely because the plan permits an applicable participant to make additional elective deferrals in any plan year. (2) Limitation on amount of additional deferrals. (A) In general. A plan shall not permit additional elective deferrals under paragraph (1) for any year in an amount greater than the lesser of (i) the applicable dollar amount, or " the excess of (I) the participant's compensation (as defined in section 415(c) for the year, over " any other elective deferrals of the participant for such year which are made without regard to this subsection. (B) Applicable dollar amount. For purposes of this paragraph, the applicable dollar amount with respect to a participant shall be an amount equal to (i) the aggregate amount of distributions described in section 72(t)(2)(G) made with respect to such participant, over " the aggregate amount of such distributions previously taken into account under this subsection or section 219(b)(5)(B). (3) Treatment of contributions. Rules similar to the rules of paragraphs (3) and (4) of subsection (v) shall apply with respect to contributions made under this subsection. (4) Definitions. 
For purposes of this subsection, the terms 'applicable employer plan' and 'elective deferral' have the same meanings given such terms in subsection (v)(6). (4) Conforming amendment. Section 414(v)(2)(A) of such Code is amended by inserting (other than deferrals under subsection " after "deferrals". (5) Effective date. The amendments made by this subsection shall apply to contributions in taxable years ending after December 31, 2001.
**Ground Truth:** Amend the internal revenue code of 1986 to provide a waiver of the early withdrawal penalty for distributions from qualified retirement plans to individuals called to active duty during the national emergency declared by the president on september 14, 2001, and for other purposes. Military Call-up Relief Act - Amends the Internal Revenue Code to waive the ten percent early withdrawal penalty for distributions from qualified retirement plans to individuals called to active duty during the national emergency declared by the President on September 14, 2001.
**Prediction:** military call-up relief act - Amends the Internal Revenue Code to allow a tax-exempt distribution to an individual who is a member of a reserve component called or ordered to active duty during the period of the national emergency declared by the President on September 14, 2001. Requires an employer plan to not be treated as failing for purposes of this Act. Provides for a catch-up contribution for certain distributions.
**Example 2:
Src:** This Act may be cited as the "National Climate Service Act of 2009". The Congress finds the following: (1) Weather, climate change, and climate variability affect public safety, environmental services and security, human health, agriculture, energy use, water resources, and other factors vital to national security and human welfare. (2) Climate forecasts create opportunities for society to prepare, potentially reducing the costs of climate-related events, as well as serving national needs related to enhancing economic growth, managing risk, protecting life and property, and promoting environmental stewardship. (3) Information on predicted climate and climate impacts is not being fully disseminated or used well, despite the increasing predictability of climate. (4) The United States lacks adequate research, infrastructure, and coordinated outreach and communication mechanisms to meet national climate monitoring, prediction, and decision support needs for adapting to and mitigating the impacts of climate change and climate variability. (5) Increasing societal resilience to climate impacts requires understanding climate trends and variations as well as possible, understanding the impacts of climate on human and nonhuman systems, providing decision-relevant tools based on that information, and increasing society's capacity to act on that information. It is the purpose of this Act to establish a National Climate Service that will assist the Nation and the world in understanding, anticipating, and responding to climate, climate change, and climate variability and their impacts and implications. The Service shall inform the public through the sustained production and delivery of authoritative, timely, useful information about impacts on local, State, regional, tribal, national, and global scales. The Service shall be user-centric, by ensuring that the information is accessible, consistent with users' ability to respond, and based on user needs and limitations. The Service shall provide such usable information through a sustained network of observations, modeling, and research activities. (a) Establishment. (1) In general. The Secretary of Commerce shall establish within the Climate Program Office of the National Oceanic and Atmospheric Administration a National Climate Service not later than one year after the date of enactment of this Act. The Service shall include a national center and a network of regional and local facilities for operational climate observation, modeling, and research. (2) General purpose. The Service shall inform the public through the sustained production and delivery of authoritative, timely, useful information about impacts on local, State, regional, tribal, national, and global scales. (3) Specific services. The Service, at minimum, shall (A) serve as a clearinghouse and technical access point to stakeholders for regionally and nationally relevant information on climate, climate impacts, and adaptation, developing comprehensive databases of information relevant to specific regional and national stakeholder needs. (B) provide education on climate impacts, vulnerabilities, and application of climate information in decisionmaking. (C) design decision-support tools that facilitate use of climate information in stakeholders' near-term operations and long-term planning (D) facilitate user access to climate and climate impacts experts for technical assistance in use of climate information and to inform the climate forecast community of their information needs. 
(E) provide researcher, modeler, and observations experts access to users to help guide direction of research, modeling, and observation activities. And (F) propose and evaluate adaptation strategies for climate variability and change. (4) Specific functions. The Service, at minimum, shall (A) integrate global, national, and regional observations to produce information and assessments of use to stakeholders and researchers, (B) develop climate models for decision support. (C) perform basic and applied research on climate dynamics and impacts relevant to stakeholder interests. (D) create and maintain an operational delivery system and facilitate transition of new climate applications products to Service member agencies. (E) establish an atmospheric monitoring and verification program utilizing aircraft, satellite, ground sensors, ocean and coastal observing systems, and modeling capabilities to monitor, measure, and verify greenhouse gas concentrations and emissions throughout the global oceans and atmosphere. (F) develop and maintain a dialog among research teams, Federal agencies, and stakeholders for developing information relevant for planning and decisionmaking. (G) identify climate-related vulnerabilities and build national capacity to increase resilience. (H) articulate regional and national climate issues and concerns in regional and national policy arenas and facilitate regional-national communications on Service needs and performance. And (I) outreach to stakeholder groups. (b) Action Plan. Within 1 year after the date of enactment of this Act, the Secretary of Commerce shall submit to the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Science and Technology of the House of Representatives a plan of action for the National Climate Service. The plan, at a minimum, shall (1) provide for the interpretation and communication of climate data, conditions, predictions, projections, and risks on an ongoing basis to decision and policy makers at the local, regional, and national levels. (2) design, deploy, and operate a national climate observing system that closes gaps in existing coverage. (3) support infrastructure and ability to archive and ensure the quality of climate data, and make federally funded model simulations and other relevant climate information available from Global Change Research Program activities and other sources. (4) include a program for long-term stewardship, quality control, development of relevant climate products, and efficient access to all relevant climate data, products, and model simulations. (5) establish a national coordinated modeling strategy, including a national climate modeling center to provide a dedicated capability for modeling and forecasting scenarios, and a regular schedule of projections on long-term and short- term time horizons over a range of scales, including regional scales. (6) improve integrated modeling, assessment, and predictive capabilities needed to document and forecast climate changes and impacts, and to guide national, regional, and local planning and decisionmaking. (7) provide a system of regular consultation and coordination with Federal agencies, States, tribes, nongovernmental organizations, the private sector, and the academic community to ensure (A) that the information requirements of these groups are well incorporated. 
And (B) timely and full sharing, dissemination and use of climate information and services in risk preparedness, planning, decisionmaking, and early warning and natural resources management, both domestically and internationally. (8) develop standards, evaluation criteria, and performance objectives to ensure that the Service meets the evolving information needs of the public, policy makers, and decisionmakers in the face of a changing climate, (9) develop funding estimates to implement the plan. And support competitive research programs that will improve elements of the Service described in this Act through the Climate Program Office within the Service headquarter function. (c) Director. The Administrator shall appoint a Director of the Service, who shall oversee all processes associated with managing the organization and executing the functions and actions described in this Act. (d) National Climate Service Advisory Council. The Administrator shall, in consultation with the Chairmen and ranking minority members of the Committee on Commerce, Science, and Transportation of the Senate and the Committee on Science and Technology of the House of Representatives, and the National Academy of Sciences, appoint the membership of a National Climate Service Advisory Council, with members serving 4-year terms, that shall include a diverse membership from appropriate Federal, State, and local government, universities, and nongovernment and private sectors who use climate information and cover a range of sectors, such as water, drought, fisheries, coasts, agriculture, health, natural resources, transportation, and insurance. The Council shall advise the Director of the Service of key priorities in climate-related issues that require the attention of the Service. The Council shall be responsible for promoting coordination across regional, national, and international concerns and the assessment of evolving information needs. Functions vested in any Federal officer or agency by this Act or under the program established under this Act may be exercised through the facilities and personnel of the agency involved or, to the extent provided or approved in advance in appropriation Acts, by other persons or entities under contracts or grant arrangements entered into by such officer or agency. The Secretary of Commerce shall prepare and submit to the President and the Congress, not later than March 31 of each year, a report on the activities conducted pursuant to this Act during the preceding fiscal year, including (1) a summary of the achievements of the National Climate Service during the previous fiscal year. And (2) an analysis of the progress made toward achieving the goals and objectives of the Service. (1) Administrator. The term "Administrator" means the Administrator of the National Oceanic and Atmospheric Administration. (2) Advisory council. The term "Advisory Council" refers to the National Climate Service Advisor Council. (3) Climate change. The term "climate change" means any change in climate over time, whether due to natural variability or as a result of human activity. (4) Director. The term "Director" means the director of the National Oceanic and Atmospheric Administration's National Climate Service. (5) Secretary. The term "Secretary" means the Secretary of Commerce. (6) Service. The term "Service" means the National Oceanic and Atmospheric Administration's National Climate Service. 
There are authorized to be appropriated to the Secretary to carry out this Act (1) $300,000,000 for fiscal year 2011. (2) $350,000,000 for fiscal year 2012. (3) $400,000,000 for fiscal year 2013. (4) $450,000,000 for fiscal year 2014. (5) $500,000,000 for fiscal year 2015.
**Ground Truth:** Provide for the establishment of a national climate service, and for other purposes. National Climate Service Act of 2009 - Requires the Secretary of Commerce to establish within the Climate Program Office of the National Oceanic and Atmospheric Administration a National Climate Service that includes a national center and a network of regional and local facilities for operational climate observation, modeling, and research. Requires the Service to: (1) inform the public about climate impacts. (2) serve as a clearinghouse and technical access point to stakeholders for information on climate, climate impacts, and adaptation, and relevant comprehensive databases of information. (3) provide education on climate impacts, vulnerabilities, and application of climate information in decisionmaking. (4) design decision-support tools that facilitate use of climate information in stakeholders' near-term operations and long-term planning. (5) facilitate user access to climate experts for technical assistance in the use of climate information and to inform the climate forecast community of their information needs. (6) provide researcher, modeler, and observations experts access to users to help guide direction of their activities. And (7) propose and evaluate adaptation strategies for climate variability and change. Sets forth the Service's functions, including establishing an atmospheric monitoring and verification program utilizing aircraft, satellite, ground sensors, ocean and coastal observing systems, and modeling capabilities to monitor, measure, and verify greenhouse gas concentrations and emissions throughout the oceans and atmosphere. Requires the Secretary to report to specified congressional committees on a plan of action for the Service. Requires the Administrator of NOAA to appoint a Director of the Service. Requires the Director to appoint members of a National Climate Service Advisory Council to promote coordination across regional, national, and international concerns and assess information needs.
**Prediction:** the Secretary of Commerce shall establish within the Climate Program Office of the National Oceanic and Atmospheric Administration a National Climate Service that will assist the Nation and the world in understanding, anticipating, and responding to climate, climate change, and climate variability and their impacts and implications. The Service shall inform the public through the sustained production and delivery of authoritative, timely, useful information about impacts on local, State, regional, tribal, national, and global scales. The service shall be user-centric, by ensuring that the information is accessible, consistent with users' ability to respond, and based on user needs and limitations. <|||||>I encountered the similar with finetune with Fairseq version bart-large, https://github.com/pytorch/fairseq/issues/2347. Looks like it is a in complete sentence or starting from a middle of a sentence.
It will be great if you shed some light on this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,335 | closed | BertForPreTraining and BertModel when loading TF checkpoints | Hello,
I want to load tensorflow checkpoints and when I run code below
```python
config = BertConfig.from_json_file('./bert_config.json')
config.output_hidden_states = True
bert_model = BertForPreTraining.from_pretrained('./model.ckpt', from_tf=True, config=config)
......
self.bert = bert_model
.......
bert_seq_out, _ = self.bert(input_ids, token_type_ids=segment_ids, attention_mask=input_mask,
output_hidden_states=False)
```
got error like this
```
Traceback (most recent call last):
File "train_tf.py", line 738, in <module>
neg_log_likelihood = model.neg_log_likelihood(input_ids, segment_ids, input_mask, label_ids)
File "train_tf.py", line 605, in neg_log_likelihood
bert_feats = self._get_bert_features(input_ids, segment_ids, input_mask)
File "train_tf.py", line 542, in _get_bert_features
bert_seq_out, _ = self.bert(input_ids, token_type_ids=segment_ids, attention_mask=input_mask, output_hidden_states=False)
File "/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 541, in __call__
result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'output_hidden_states'
```
I think it is because I use `BertForPreTraining` instead of `BertModel`,
but when I use `BertModel`, I got error like below
```
/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
592 return modules[name]
593 raise AttributeError("'{}' object has no attribute '{}'".format(
--> 594 type(self).__name__, name))
595
596 def __setattr__(self, name, value):
AttributeError: 'BertModel' object has no attribute 'bias'
```
How can I fix and load TF checkpoint correctly?
any help will be appreciated | 06-27-2020 14:21:00 | 06-27-2020 14:21:00 | Hi! What is your library version? The `output_hidden_states` as argument to the call method was only added in the v3.0.0 version, which was released this morning. |
transformers | 5,334 | closed | Add "labels" functionality for all TF Causal LM and Masked LM models | # 🚀 Feature request
Currently, it is not possible to calculate the loss of a CLM or MLM model using TF:
```python
from transformers import TFBertForMaskedLM, BertTokenizerFast
model = TFBertForMaskedLM.from_pretrained("bert-base-uncased")
tok = BertTokenizerFast.from_pretrained("bert-base-uncased")
input_ids = tok("This is a test string.").input_ids
loss = model(input_ids, labels=input_ids)[0] # This is currently not possible
```
Currently the `call(...)` function does not accept a `labels` input and does not return the
loss. This should be implemented similar to how it was done for `BertForSequenceClassification`:
https://github.com/huggingface/transformers/blob/393b8dc09a97197df1937a7e86c0c6b4ce69c7e9/src/transformers/modeling_tf_bert.py#L918
All CLM and MLM TF models should be updated to accept `labels` as an input and return the corresponding loss.
## Motivation
1. This allows to use TFTrainer for CLM and MLM models.
2. This aligns TF with PT API
## Your contribution
Starting from 29.06/30.06 I want to implement all these features (with some help from @jplu if possible ;-))
Pinging @LysandreJik @julien-c @sgugger for notification.
| 06-27-2020 14:06:03 | 06-27-2020 14:06:03 | Awesome! I'm in!!! <|||||>Yes please! Happy to help if you need it. |
transformers | 5,333 | closed | XLNet with high CPU usage | The same issue as these two:
https://github.com/huggingface/transformers/issues/1722#issue-517212978
https://github.com/huggingface/transformers/issues/1529#issue-507567684 | 06-27-2020 13:15:58 | 06-27-2020 13:15:58 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,332 | closed | Link to the example/summarization in doc is broken | The link to the example/summarization folder in [the-big-table-of-tasks](https://huggingface.co/transformers/examples.html#the-big-table-of-tasks) is broken because the folder's name has been changed to seq2seq | 06-27-2020 10:34:49 | 06-27-2020 10:34:49 | Duplicate of #5309 |
transformers | 5,331 | closed | Adds train_batch_size, eval_batch_size, and n_gpu to to_sanitized_dict output for logging. | Closes #5330
```
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("dir")
>>> args.to_sanitized_dict()
{
"output_dir": "dir",
"overwrite_output_dir": False,
"do_train": False,
...
"train_batch_size": 8,
"eval_batch_size": 8,
"n_gpu": 0,
}
``` | 06-27-2020 04:13:54 | 06-27-2020 04:13:54 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=h1) Report
> Merging [#5331](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0267668c3d648c6e41afda97f5df8671ee880ac3&el=desc) will **decrease** coverage by `1.03%`.
> The diff coverage is `80.40%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5331 +/- ##
==========================================
- Coverage 77.01% 75.97% -1.04%
==========================================
Files 128 138 +10
Lines 21615 24292 +2677
==========================================
+ Hits 16646 18455 +1809
- Misses 4969 5837 +868
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2FsYmVydC5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2N0cmwucHk=) | `97.05% <ø> (ø)` | |
| [src/transformers/configuration\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rpc3RpbGJlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VuY29kZXJfZGVjb2Rlci5weQ==) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.22% <ø> (ø)` | |
| [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JvYmVydGEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.42% <ø> (ø)` | |
| ... and [170 more](https://codecov.io/gh/huggingface/transformers/pull/5331/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=footer). Last update [1af58c0...161e09d](https://codecov.io/gh/huggingface/transformers/pull/5331?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>It seems that this is also true for the wandb config logging, but instead it uses `vars(args)` instead of `args.to_sanitized_dict()`. Is there any reason the following line is not using`to_sanitized_dict()`?
https://github.com/huggingface/transformers/blob/1af58c07064d8f4580909527a8f18de226b226ee/src/transformers/trainer.py#L330
Here's a diff between my new implementation of `arg.to_sanitized_dict()` and `vars(args)`:
```
>>> from transformers import TrainingArguments
>>> args = TrainingArguments("some_dir")
>>> set(vars(args).items()) - set(args.to_sanitized_dict().items())
{
("__cached__setup_devices", (device(type="cpu"), 0)),
("tpu_num_cores", None),
("per_gpu_train_batch_size", None),
("per_gpu_eval_batch_size", None),
("save_total_limit", None),
}
>>> set(args.to_sanitized_dict().items()) - set(vars(args).items())
{
("tpu_num_cores", "None"),
("per_gpu_train_batch_size", "None"),
("train_batch_size", 8),
("n_gpu", 0),
("eval_batch_size", 8),
("per_gpu_eval_batch_size", "None"),
("save_total_limit", "None"),
}
```
The main difference is that in `to_sanitized_dict()` the `__cached__setup_devices` property is not saved, and instead of `None`, the value is stringified to `"None"` (and of course, with my new changes `train_batch_size`, `eval_batch_size`, and `n_gpu` are also recorded).
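For reference, here is a minimal sketch (my own illustration, not the actual `transformers` implementation) of the kind of sanitization that would produce the values above; TensorBoard hparams only accept simple types, so `None` ends up stringified. It continues the REPL session from the diff:
```python
def sanitize(value):
    # keep types TensorBoard can log directly, stringify everything else (e.g. None -> "None")
    return value if isinstance(value, (bool, int, float, str)) else str(value)

sanitized = {k: sanitize(v) for k, v in vars(args).items() if not k.startswith("_")}
```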
Gonna add another commit to make the wandb config logging use `to_sanitized_dict`.<|||||>Hi @JetRunner , was hoping I can get an update on this PR. The test fail seems to be unrelated to my committed code.<|||||>@jaymody OK please wait for another approval |
transformers | 5,330 | closed | Better hyperparameter tensorboard logging in Trainer. | # 🚀 Feature request
`Trainer` should write `train_batch_size` and `eval_batch_size` to tensorboard. Currently only the `per_device_` batch sizes are logged as hyperparameters, which means that unless you know how many GPUs you trained on (which is also not logged), you can't know the actual batch sizes used in training.
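For context, the effective sizes I would like to see logged are just the per-device values scaled by the device count. A rough sketch (with `args` a `TrainingArguments` instance; the exact scaling rule is my assumption of the intended behavior):
```python
train_batch_size = args.per_device_train_batch_size * max(1, args.n_gpu)
eval_batch_size = args.per_device_eval_batch_size * max(1, args.n_gpu)
```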
## Your contribution
Submitted a PR: [Adds train_batch_size, eval_batch_size, and n_gpu to to_sanitized_dict output for logging.](https://github.com/huggingface/transformers/pull/5331#issue-440857548)
| 06-27-2020 04:11:53 | 06-27-2020 04:11:53 | |
transformers | 5,329 | closed | Add option to keep tb_writer open after training is done. | Currently, `Trainer` automatically closes the tensorboard writer `tb_writer` after the training loop is done. Often I'll want to add extra observations, metrics, data, etc. to the tensorboard event file that require the trained model (i.e. after the training loop is done). For example:
```python
tb_writer = SummaryWriter(log_dir="some_dir")
# do some stuff with tb_writer before training
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
compute_metrics=compute_metrics,
tb_writer=tb_writer,
)
trainer.train()
# do extra stuff with tb_writer after the model is trained (that requires a trained model)
```
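Concretely, the kind of post-training logging I have in mind looks like this (a sketch using the standard `torch.utils.tensorboard.SummaryWriter` API; the tag names are just examples):
```python
eval_metrics = trainer.evaluate()
for name, value in eval_metrics.items():
    tb_writer.add_scalar(f"final_eval/{name}", value)
```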
## Your contribution
Submitted a PR: [Adds option to keep tb_writer open after training finishes](https://github.com/huggingface/transformers/pull/5328#issue-440856807) | 06-27-2020 04:06:45 | 06-27-2020 04:06:45 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,328 | closed | Adds option to keep tb_writer open after training finishes | Set the `close_tb_writer` parameter to `False` in `Trainer.train()` to keep the tensorboard writer open.
```python
tb_writer = SummaryWriter(log_dir="some_dir")
# do some stuff with tb_writer before training
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=test_dataset,
compute_metrics=compute_metrics,
tb_writer=tb_writer,
)
trainer.train(close_tb_writer=False)
# you can now use tb_writer even after training!
```
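One usage note (my own addition, assuming the standard `torch.utils.tensorboard` API): with `close_tb_writer=False` the caller becomes responsible for flushing and closing the writer once done, e.g.:
```python
tb_writer.add_text("notes", "post-training analysis complete")
tb_writer.flush()
tb_writer.close()
```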
Closes #5329 | 06-27-2020 04:05:17 | 06-27-2020 04:05:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=h1) Report
> Merging [#5328](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/393b8dc09a97197df1937a7e86c0c6b4ce69c7e9&el=desc) will **increase** coverage by `0.37%`.
> The diff coverage is `50.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5328 +/- ##
==========================================
+ Coverage 77.54% 77.91% +0.37%
==========================================
Files 138 138
Lines 24284 24284
==========================================
+ Hits 18831 18922 +91
+ Misses 5453 5362 -91
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <50.00%> (ø)` | |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+0.72%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+1.47%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `95.23% <0.00%> (+38.09%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5328/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=footer). Last update [393b8dc...d390b0d](https://codecov.io/gh/huggingface/transformers/pull/5328?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@JetRunner I'm not exactly sure why the circleci build failed. I had the same result from this pull request https://github.com/huggingface/transformers/pull/5331, but I can't figure out how if at all it's related to my added code.<|||||>It’s not, don’t worry.
Let’s see if we can get that fixed. Otherwise, we’ll still be able to merge this PR since the CI failure has nothing to do with your added code.<|||||>@sgugger do you want to take a look at this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 5,327 | closed | [mBART] skip broken forward pass test, stronger integration test | (1) I fixed the config of mbart to go from a BLEU score of 12 to a BLEU score of 26. This requires setting `decoder_start_token_id=lang_code['ro']` and then filtering that token out when we decode.
(2) I added another, harder integration test case. The expected result is only one word different from the ground truth. This also adds implicit coverage for batched generation.
(3) Tried for a while to fix the broken slow GPU test `test_enro_forward`.
On commits where the test passed, it doesn't pass any more. And the model hadn't been updated on S3 since March 26. So for now we skip it. | 06-26-2020 23:18:11 | 06-26-2020 23:18:11 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=h1) Report
> Merging [#5327](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/efae6645e223f29cf05eeafe95105a9f869b66dd&el=desc) will **increase** coverage by `0.21%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5327 +/- ##
==========================================
+ Coverage 77.69% 77.90% +0.21%
==========================================
Files 138 138
Lines 24291 24292 +1
==========================================
+ Hits 18872 18924 +52
+ Misses 5419 5368 -51
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `93.87% <100.00%> (+6.37%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.26% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+0.14%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+0.72%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.77% <0.00%> (+2.22%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5327/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=footer). Last update [1af58c0...e077948](https://codecov.io/gh/huggingface/transformers/pull/5327?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks good to me |
transformers | 5,326 | closed | In the run_ner.py example, give the optional label arg a default value | Otherwise, if label is not specified, the following error occurs:
Traceback (most recent call last):
File "run_ner.py", line 303, in <module>
main()
File "run_ner.py", line 101, in main
model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1]))
File "/home/user/anaconda3/envs/bert/lib/python3.7/site-packages/transformers/hf_argparser.py", line 159, in parse_json_file
obj = dtype(**inputs)
TypeError: __init__() missing 1 required positional argument: 'labels' | 06-26-2020 23:07:14 | 06-26-2020 23:07:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=h1) Report
> Merging [#5326](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.28%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5326 +/- ##
==========================================
+ Coverage 77.42% 77.70% +0.28%
==========================================
Files 138 138
Lines 24284 24284
==========================================
+ Hits 18803 18871 +68
+ Misses 5481 5413 -68
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.65% <0.00%> (-0.73%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.48% <0.00%> (+1.32%)` | :arrow_up: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5326/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=footer). Last update [5543b30...bb7d305](https://codecov.io/gh/huggingface/transformers/pull/5326?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>LGTM! |
transformers | 5,325 | closed | Added a model card README.md for my pretrained model. | 06-26-2020 22:14:20 | 06-26-2020 22:14:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=h1) Report
> Merging [#5325](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.47%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5325 +/- ##
==========================================
+ Coverage 77.42% 77.90% +0.47%
==========================================
Files 138 138
Lines 24284 24284
==========================================
+ Hits 18803 18919 +116
+ Misses 5481 5365 -116
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.48% <0.00%> (+1.32%)` | :arrow_up: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5325/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=footer). Last update [5543b30...523ec13](https://codecov.io/gh/huggingface/transformers/pull/5325?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Nice work. What's your intended use case, @Pradhy729? Topic classification? Something else?<|||||>Thanks! A few different use cases actually. Better entity recognition, topic classification, sentiment extraction and hopefully play around with summarization as well. |
|
transformers | 5,324 | closed | More model cards | 06-26-2020 21:11:11 | 06-26-2020 21:11:11 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=h1) Report
> Merging [#5324](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.05%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5324 +/- ##
==========================================
+ Coverage 77.42% 77.48% +0.05%
==========================================
Files 138 138
Lines 24284 24284
==========================================
+ Hits 18803 18816 +13
+ Misses 5481 5468 -13
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.93% <0.00%> (+0.66%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5324/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+1.47%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=footer). Last update [5543b30...711a3d4](https://codecov.io/gh/huggingface/transformers/pull/5324?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for fixing my typos :-) |
|
transformers | 5,323 | closed | New model sharing tutorial | This PR removes the old serialization guide which was a bit outdated and adds the new instructions to the current model sharing tutorial (rewritten in rst to have proper links to the rest of the docs).
Also it adds a link to the trainer tutorial recently merged in the quicktour.
Preview is [here](https://53867-155220641-gh.circle-artifacts.com/0/docs/_build/html/model_sharing.html) | 06-26-2020 19:30:10 | 06-26-2020 19:30:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=h1) Report
> Merging [#5323](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.48%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5323 +/- ##
==========================================
+ Coverage 77.42% 77.91% +0.48%
==========================================
Files 138 138
Lines 24284 24284
==========================================
+ Hits 18803 18921 +118
+ Misses 5481 5363 -118
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <0.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+1.47%)` | :arrow_up: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5323/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=footer). Last update [5543b30...00ff8c3](https://codecov.io/gh/huggingface/transformers/pull/5323?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>very cool! Let's share it with the hugging face twitter account when it's live to get more people to share their models on the hub!<|||||>That would be now then, it's live in the [master docs](https://huggingface.co/transformers/master/model_sharing.html). |
transformers | 5,322 | closed | examples/seq2seq/run_eval.py fixes and docs | 06-26-2020 19:25:23 | 06-26-2020 19:25:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=h1) Report
> Merging [#5322](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5543b30aa6b52da3c8f7d9e525b0edc26226d717&el=desc) will **increase** coverage by `0.45%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5322 +/- ##
==========================================
+ Coverage 77.42% 77.88% +0.45%
==========================================
Files 138 138
Lines 24284 24284
==========================================
+ Hits 18803 18914 +111
+ Misses 5481 5370 -111
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `66.25% <0.00%> (-32.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.74% <0.00%> (+0.58%)` | :arrow_up: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `87.50% <0.00%> (+58.65%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5322/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `95.38% <0.00%> (+68.46%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=footer). Last update [5543b30...eae8c2d](https://codecov.io/gh/huggingface/transformers/pull/5322?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,321 | closed | [{m}bart] Fix final_logits bias warning | This warning is safe to ignore, but can be easily fixed.
```bash
Some weights of BartForConditionalGeneration were not initialized from the model checkpoint at facebook/mbart-large-en-ro and are newly initialized: ['final_logits_bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
``` | 06-26-2020 19:24:12 | 06-26-2020 19:24:12 | also
```bash
Some weights of the model checkpoint at facebook/mbart-large-en-ro were not used when initializing BartForConditionalGeneration: ['lm_head.weight']
```<|||||>Hey @sshleifer
Want me to pick this one up?<|||||>It's non-trivial. You probably have to update S3. I would pick another.<|||||>Maybe try to understand why
```
tests/test_pipelines.py::MonoColumnInputTestCase::test_torch_fill_mask_results
```
has started failing, half of #5265 |
transformers | 5,320 | closed | Reformer model axial.position.shape config not working | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
I am trying to play around with the config parameters of the Reformer model.
I changed the axial position shape parameter to:
"axial_pos_shape": [
64,
256
]
and set "max_position_embeddings": 8192 (which I took to be the product of the axial_pos_shape values).
But I get the following error:
**RuntimeError: Error(s) in loading state_dict for ReformerForSequenceClassification:
size mismatch for reformer.embeddings.position_embeddings.weights.0: copying a param with shape torch.Size([512, 1, 64]) from checkpoint, the shape in current model is torch.Size([64, 1, 64]).
size mismatch for reformer.embeddings.position_embeddings.weights.1: copying a param with shape torch.Size([1, 1024, 192]) from checkpoint, the shape in current model is torch.Size([1, 256, 192]).**
The rest of the parameters are unchanged. What's wrong with the config?
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 06-26-2020 18:34:19 | 06-26-2020 18:34:19 | Either my calculator and me are wrong or the max pos embedding is not matching the axial pos shape
<|||||>> Either my calculator and me are wrong or the max pos embedding is not matching the axial pos shape
>
> 
Definitely, your calculator looks like it's in the learning phase! :)
I am wondering: if I load the model with
model = ReformerForSequenceClassification.from_pretrained("./cnp", num_labels = 2, output_attentions = False, output_hidden_states = False,)
I assume the config will not be overridden; I mean, it will complain about the config not being the default one.
Not sure, but I am trying
model = ReformerForSequenceClassification(ReformerConfig())
model.load_state_dict(torch.load("./cnp"))
which should load the custom model.
<|||||>Could you upload the trained model ?
Then it's easier to reproduce the error and play around with the code<|||||>> Could you upload the trained model ?
> Then it's easier to reproduce the error and play around with the code
The model is trained on very few samples (~100) and on the office network, which unfortunately does not allow uploads to the outside world.
I will try to upload one model after training it on Colab.<|||||>
config = ReformerConfig()
config.max_position_embeddings = 8192
config.axial_pos_shape=[64, 128]
model = ReformerForSequenceClassification(config)
model.load_state_dict(torch.load("./cnp/pytorch_model.bin"))
This is how I am trying to load the config; I tried another approach as well, but that did not work either.
<|||||>@as-stevens are you training a Reformer Model from scratch?<|||||>Regarding your code:
```python
config = ReformerConfig()
config.max_position_embeddings = 8192
config.axial_pos_shape=[64, 128]
model = ReformerForSequenceClassification(config)
model.load_state_dict(torch.load("./cnp/pytorch_model.bin"))
```
please note that it is not recommended instantiating a config and then later changing already set attributes. It's better to do the following:
```
config = ReformerConfig(max_position_embeddings=8192, axial_pos_shape=[64,128])
```
This code for example works:
```python
from transformers import ReformerModel, ReformerConfig
config = ReformerConfig(max_position_embeddings=8192, axial_pos_shape=[64, 128])
model = ReformerModel(config)
model.eval()
input_ids = torch.tensor([10 * [2, 4]])
model(input_ids)
```<|||||>If you load a model from your checkpoint: "./cnp/pytorch_model.bin" the axial position embedding weights have to have the same dimensions in order for this to work<|||||>Closing this for now. Feel free to re-open if you encounter other problems. Also make sure to have read the corresponding documentation of Reformer: https://huggingface.co/transformers/model_doc/reformer.html#axial-positional-encodings .<|||||>@patrickvonplaten Thank you! Let me try your suggestion and run the classifier. Update you with the findings. |
transformers | 5,319 | closed | Fix `xxx_length` behavior when using XLNet in pipeline | When using a `pipeline` for text generation with XLNet and Transformer-XL, the pipeline adds a long prompt at the beginning to help the model. This interferes with `min_length` and `max_length`, so I fixed that by adding the length of the prompt to those kwargs (if passed).
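A minimal sketch of the adjustment (function and variable names here are illustrative, not the exact pipeline code):
```python
def shift_length_kwargs(tokenizer, padding_text: str, generate_kwargs: dict) -> dict:
    # the prepended XLNet/Transformer-XL prompt consumes part of the generation budget,
    # so user-supplied min/max lengths are shifted by the prompt's token length
    prefix_length = len(tokenizer.encode(padding_text))
    for key in ("min_length", "max_length"):
        if key in generate_kwargs:
            generate_kwargs[key] += prefix_length
    return generate_kwargs
```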
Also made the prompt use the tokenizer eos token instead of putting the three possibilities at the end. | 06-26-2020 18:17:50 | 06-26-2020 18:17:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=h1) Report
> Merging [#5319](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bf0d12c220cfd19025736c488bdabda9efd20b9e&el=desc) will **increase** coverage by `1.33%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5319 +/- ##
==========================================
+ Coverage 77.91% 79.24% +1.33%
==========================================
Files 138 138
Lines 24284 24290 +6
==========================================
+ Hits 18920 19249 +329
+ Misses 5364 5041 -323
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `75.45% <0.00%> (-0.97%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <0.00%> (-3.49%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `38.44% <0.00%> (-0.95%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.03% <0.00%> (-0.59%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.68% <0.00%> (-0.03%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.33% <0.00%> (ø)` | |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <0.00%> (ø)` | |
| [src/transformers/data/processors/glue.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvZ2x1ZS5weQ==) | `49.44% <0.00%> (ø)` | |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (ø)` | |
| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/5319/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=footer). Last update [bf0d12c...d3d5d50](https://codecov.io/gh/huggingface/transformers/pull/5319?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Warnings might scare the user that something went wrong, especially as there is no way to remove that behavior in the current API. Maybe documenting it properly is enough?
Merging in the meantime because I need this to write model cards for XLNet. |
transformers | 5,318 | closed | [CI] GH-runner stores artifacts like CircleCI | Split this out to merge when GH runner is working. | 06-26-2020 18:00:17 | 06-26-2020 18:00:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=h1) Report
> Merging [#5318](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c4d4e8bdbd25d9463d41de6398940329c89b7fb6&el=desc) will **increase** coverage by `1.17%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5318 +/- ##
==========================================
+ Coverage 77.90% 79.07% +1.17%
==========================================
Files 140 138 -2
Lines 24334 24078 -256
==========================================
+ Hits 18957 19040 +83
+ Misses 5377 5038 -339
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `87.50% <0.00%> (-6.38%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <0.00%> (-3.49%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `77.90% <0.00%> (-3.08%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-2.84%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-1.49%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.21% <0.00%> (-1.02%)` | :arrow_down: |
| [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `40.67% <0.00%> (-0.99%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.16% <0.00%> (-0.56%)` | :arrow_down: |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.19% <0.00%> (-0.44%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `75.63% <0.00%> (-0.36%)` | :arrow_down: |
| ... and [45 more](https://codecov.io/gh/huggingface/transformers/pull/5318/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=footer). Last update [c4d4e8b...54a74d9](https://codecov.io/gh/huggingface/transformers/pull/5318?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,317 | closed | Clearer lr schedule math | 06-26-2020 16:58:55 | 06-26-2020 16:58:55 | looks neat<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 5,316 | closed | [pl_examples] default warmup steps=0 | 06-26-2020 16:55:27 | 06-26-2020 16:55:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=h1) Report
> Merging [#5316](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/79a82cc06aaa68088639bf9bb000752cfd33a8c6&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5316 +/- ##
=======================================
Coverage 79.29% 79.30%
=======================================
Files 138 138
Lines 24282 24282
=======================================
+ Hits 19254 19256 +2
+ Misses 5028 5026 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5316/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=footer). Last update [79a82cc...ee4cc93](https://codecov.io/gh/huggingface/transformers/pull/5316?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 5,315 | closed | Add BART-base modeling and configuration | Fix 404 Not Found for BART-base | 06-26-2020 16:46:23 | 06-26-2020 16:46:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=h1) Report
> Merging [#5315](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/9022ef021a56db975d25c7108cbd19d0dd399174&el=desc) will **increase** coverage by `0.88%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5315 +/- ##
==========================================
+ Coverage 77.08% 77.97% +0.88%
==========================================
Files 138 138
Lines 23841 23841
==========================================
+ Hits 18379 18590 +211
+ Misses 5462 5251 -211
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `93.75% <ø> (ø)` | |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.24% <ø> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `76.42% <0.00%> (-0.39%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.38% <0.00%> (-0.24%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.00% <0.00%> (+0.29%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.90% <0.00%> (+1.38%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5315/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `94.92% <0.00%> (+75.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=footer). Last update [9022ef0...b9cc4aa](https://codecov.io/gh/huggingface/transformers/pull/5315?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,314 | closed | Transformer-XL not working with DistributedDataParallel | # 🐛 Bug
## Information
I'm using the pretrained `TransfoXLLMHeadModel` for finetuning but I get an error when trying to use it with pytorch's `DistributedDataParallel`:
> self.reducer.prepare_for_backward(list(_find_tensors(output)))
RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument `find_unused_parameters=True` to `torch.nn.parallel.DistributedDataParallel`; (2) making sure all `forward` function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's `forward` function. Please include the loss function and the structure of the return value of `forward` of your module when reporting this issue (e.g. list, dict, iterable). (prepare_for_backward at /opt/conda/conda-bld/pytorch_1579022034529/work/torch/csrc/distributed/c10d/reducer.cpp:514)
I'm using the pytorch distributed launch utility.
I've already gone through similar issues regarding this topic, but none of the solutions solved my problem. My guess is that this is caused by the returned `mems`: `DistributedDataParallel` might think that they require a gradient.
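For reference, the workaround the error message itself suggests (standard PyTorch API; `local_rank` is whatever `torch.distributed.launch` passes in) looks like the sketch below, though I'd still like to understand the root cause:
```python
import torch

model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[local_rank],
    output_device=local_rank,
    find_unused_parameters=True,  # let DDP skip parameters that did not contribute to the loss
)
```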
Model I am using: Transformer-XL
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.11.0
- Platform: Linux-4.15.0-108-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes `torch.distributed.launch`
| 06-26-2020 16:34:46 | 06-26-2020 16:34:46 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have the same kind of problem with transformers 3.3.1 and a BertModel, but the error is now an inplace operation on torch.cuda.LongTensor problem, something related to EmbeddingBackward.<|||||>Could you please open a new issue with the full stacktrace + the information related to your environment?<|||||>Done as issue #7848. |
transformers | 5,313 | closed | Model cards for finance-koelectra models | Added new cards for the models. | 06-26-2020 15:33:34 | 06-26-2020 15:33:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=h1) Report
> Merging [#5313](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/08c9607c3d025f9f1a0c40e6d124d5d5d446208e&el=desc) will **increase** coverage by `1.88%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5313 +/- ##
==========================================
+ Coverage 77.41% 79.29% +1.88%
==========================================
Files 138 138
Lines 24282 24282
==========================================
+ Hits 18798 19255 +457
+ Misses 5484 5027 -457
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (+0.36%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.62% <0.00%> (+1.41%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.37% <0.00%> (+1.44%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.74% <0.00%> (+1.69%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (+2.20%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `78.86% <0.00%> (+2.27%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `99.15% <0.00%> (+2.53%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `70.93% <0.00%> (+6.97%)` | :arrow_up: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.41% <0.00%> (+9.68%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `91.43% <0.00%> (+42.03%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/5313/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=footer). Last update [08c9607...f17b437](https://codecov.io/gh/huggingface/transformers/pull/5313?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,312 | closed | Add benchmark notebook | This PR adds a general notebook on how to use the Benchmark tools | 06-26-2020 15:32:27 | 06-26-2020 15:32:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=h1) Report
> Merging [#5312](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/135791e8ef12802ceb21a4abbb3a93f7da1bf390&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5312 +/- ##
=======================================
Coverage 79.30% 79.30%
=======================================
Files 138 138
Lines 24283 24283
=======================================
Hits 19258 19258
Misses 5025 5025
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.62% <0.00%> (-0.15%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/5312/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `78.93% <0.00%> (+0.19%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=footer). Last update [135791e...487fa69](https://codecov.io/gh/huggingface/transformers/pull/5312?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,311 | closed | AllenNLP SPECTER model | # 🌟 New model addition
SPECTER: Document-level Representation Learning using Citation-informed Transformers
## Model description
[SPECTER](https://arxiv.org/pdf/2004.07180.pdf) by AllenAI is a new method to generate document-level embedding of scientific documents based on pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph. It is based on SciBERT.
This model is showing us promising results for scientific paper similarity tasks. However, the repo is quite rough around the edges and inference performance could be better. According to the AllenNLP repo issues, they do not plan to adapt the models to ONNX runtime.
It would be great to have a plug'n'play implementation for Transformers. Instructions on how to do this are also welcome 🤝
## Open source status
* [x] the model implementation is available: https://github.com/allenai/specter
* [x] the model weights are available: see repo above
* [x] who are the authors: AllenAI, @armancohan, @sergeyf
| 06-26-2020 15:05:32 | 06-26-2020 15:05:32 | Wonderful if we can use embeddings of scientific documents in transformers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>SPECTER is a great work. Is there any plan to have the Huggingface version of SPECTER?<|||||>This should be addressed now: https://huggingface.co/allenai/specter
How to use: https://github.com/allenai/specter |
transformers | 5,310 | closed | overflowing tokens are now always returned | closes https://github.com/huggingface/transformers/issues/5293
@thomwolf may be superseded by https://github.com/huggingface/transformers/pull/5308/files | 06-26-2020 14:47:10 | 06-26-2020 14:47:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=h1) Report
> Merging [#5310](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/79a82cc06aaa68088639bf9bb000752cfd33a8c6&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5310 +/- ##
=======================================
Coverage 79.29% 79.30%
=======================================
Files 138 138
Lines 24282 24282
=======================================
+ Hits 19254 19256 +2
+ Misses 5028 5026 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.14% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5310/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.30% <0.00%> (+1.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=footer). Last update [79a82cc...5583df5](https://codecov.io/gh/huggingface/transformers/pull/5310?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Yes, it's in #5308 :-) |
transformers | 5,309 | closed | Links out of date under transformers/examples/README.md | # 🐛 Typos
## Information
Under the Big Table of Tasks in `transformers/examples/README.md`, the links for summarization and translation are out of date. They should lead to `examples/seq2seq` instead of `examples/summarization` and `examples/translation`. | 06-26-2020 14:36:23 | 06-26-2020 14:36:23 | |
transformers | 5,308 | closed | [tokenizers] Updates data processors, docstring, examples and model cards to the new API | Updates the data-processors to the new recommended tokenizers' API instead of the old one.
Also updates the docstrings, examples, and model cards that were using the old API.
Supersedes #5310
@sshleifer you have a couple of methods that only your models use (bart and marian). I'm not sure about the consequences of updating those APIs, so I'll let you update them. Here is the doc on the new tokenizer API if you need it: https://huggingface.co/transformers/master/preprocessing.html
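In practice, the recommended updates below amount to a change along these lines (illustrative sketch only; `AutoTokenizer` and `bert-base-uncased` are just example choices, not part of this PR's diff):
```python
# Illustrative sketch of the migration to the new tokenizer API.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Old style (the pattern being replaced):
# enc = tokenizer.encode_plus("Hello world", max_length=8, pad_to_max_length=True)

# New style: call the tokenizer directly and be explicit about padding/truncation.
enc = tokenizer(
    "Hello world",
    padding="max_length",  # or True / "longest" when batching
    truncation=True,
    max_length=8,
)
print(enc["input_ids"])
```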
Recommended updates:
- use `__call__` instead of `encode_plus` and `batch_encode_plus`
- use `padding` and `truncation` instead of `max_length` only and `pad_to_max_length` | 06-26-2020 14:27:14 | 06-26-2020 14:27:14 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=h1) Report
> Merging [#5308](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/135791e8ef12802ceb21a4abbb3a93f7da1bf390&el=desc) will **decrease** coverage by `2.10%`.
> The diff coverage is `58.82%`.
[](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5308 +/- ##
==========================================
- Coverage 79.30% 77.20% -2.11%
==========================================
Files 138 138
Lines 24283 24285 +2
==========================================
- Hits 19258 18749 -509
- Misses 5025 5536 +511
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `28.66% <0.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <ø> (ø)` | |
| [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `80.73% <ø> (ø)` | |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `87.65% <ø> (ø)` | |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `96.62% <ø> (-2.54%)` | :arrow_down: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `96.04% <ø> (-1.70%)` | :arrow_down: |
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `80.62% <ø> (ø)` | |
| [src/transformers/modeling\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mbGF1YmVydC5weQ==) | `84.37% <ø> (ø)` | |
| [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `85.94% <ø> (ø)` | |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `93.11% <ø> (ø)` | |
| ... and [47 more](https://codecov.io/gh/huggingface/transformers/pull/5308/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=footer). Last update [135791e...5b1fbb2](https://codecov.io/gh/huggingface/transformers/pull/5308?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@sshleifer @patrickvonplaten and @yjernite I updated your examples (seq2seq and eli5). Maybe you want to check it.<|||||>> @sshleifer @patrickvonplaten and @yjernite I updated your examples (seq2seq and eli5). Maybe you want to check it.
lgtm! |
transformers | 5,307 | closed | More model cards | 06-26-2020 14:09:04 | 06-26-2020 14:09:04 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=h1) Report
> Merging [#5307](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5307 +/- ##
=======================================
Coverage 79.30% 79.30%
=======================================
Files 138 138
Lines 24283 24283
=======================================
+ Hits 19257 19258 +1
+ Misses 5026 5025 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5307/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `79.59% <0.00%> (+0.33%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=footer). Last update [88d7f96...0bf35c5](https://codecov.io/gh/huggingface/transformers/pull/5307?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks great!
<|||||>That's awesome, thanks! |
|
transformers | 5,306 | closed | [Generation] fix docs for decoder_input_ids | Fix docs for decoder input ids. | 06-26-2020 12:14:43 | 06-26-2020 12:14:43 | Pinging @sshleifer, @sgugger and @LysandreJik for notification.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=h1) Report
> Merging [#5306](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/135791e8ef12802ceb21a4abbb3a93f7da1bf390&el=desc) will **decrease** coverage by `0.84%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #5306 +/- ##
==========================================
- Coverage 79.30% 78.45% -0.85%
==========================================
Files 138 138
Lines 24283 24283
==========================================
- Hits 19258 19052 -206
- Misses 5025 5231 +206
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.77% <ø> (ø)` | |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/5306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.26% <ø> (+0.12%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/5306/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `20.78% <0.00%> (-74.20%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=footer). Last update [135791e...d4e9c3f](https://codecov.io/gh/huggingface/transformers/pull/5306?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 5,305 | closed | Does BERT public an embedding file like glove.840B.300d.txt? | # ❓ Questions & Help
Hi, I have seen that BERT publishes pretrained language models like bert-base-uncased. Does BERT provide an embedding file like glove.840B.300d.txt? Many thanks.
| 06-26-2020 09:49:08 | 06-26-2020 09:49:08 | BERT is a fine-tuning-based model. As far as I'm aware, there are no 'embedding files' as such. You could generate embeddings relevant to your case by feeding the model your text (see the documentation) and using the outputs.<|||||>> BERT is a fine-tuning-based model. As far as I'm aware, there are no 'embedding files' as such. You could generate embeddings relevant to your case by feeding the model your text (see the documentation) and using the outputs.
Many thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
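For anyone landing here later, a minimal sketch of generating your own embeddings with BERT (the model name, pooling strategy, and exact calls below are illustrative assumptions, not an official recipe):
```python
# Illustrative sketch only: extract contextual embeddings from BERT.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()

input_ids = tokenizer.encode("Transformers are great for NLP.", return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids)

token_embeddings = outputs[0]                      # shape: (batch, seq_len, hidden_size)
sentence_embedding = token_embeddings.mean(dim=1)  # mean pooling is one simple choice among many
print(sentence_embedding.shape)                    # e.g. torch.Size([1, 768])
```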
|
transformers | 5,304 | closed | Bert Abs not using GPU | # 🐛 Bug
## Information
Model I am using: Bert abs (https://github.com/huggingface/transformers/tree/master/examples/seq2seq/bertabs)
-> remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization model
Language I am using the model on: English
The problem arises when using:
* [x] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Install Conda Environment with proper pytorch installation
2. Verify that `torch.cuda.is_available()` returns True
3. Start run_summarization.py as stated in the Readme
## Expected behavior
I would expect the model to use the GPU for training/generation. Instead I see very high CPU usage, GPU usage is nearly zero, and the GPU clock stays idle. Since I verified that the GPU and CUDA are available, I expect them to be used.
`--no_cuda` is not set, so the default of False is used.
## Environment info
- `transformers` version: 2.11.0
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 06-26-2020 09:00:35 | 06-26-2020 09:00:35 | This can be resolved by not using the remi/bertabs-finetuned-cnndm-extractive-abstractive-summarization model, but https://huggingface.co/remi/bertabs-finetuned-extractive-abstractive-summarization instead. Given the cnndm prefix I would have expected it to be the GPU version, but the opposite is the case.
Changing the model fixes the error |
transformers | 5,303 | closed | Attempted relative import with no known parent package | # 🐛 Bug
## Information
Model I am using: Bertabs (https://github.com/huggingface/transformers/tree/master/examples/seq2seq/bertabs)
Language I am using the model on: English
The problem arises when using:
[x ] the official example scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
Hi, I did as the README says, but the following error is thrown when I start training via
`python run_summarization.py --documents_dir "data/stories" --summaries_output_dir "out" --no_cuda false --batch_size 4 --min_length 50 --max_length 200 --beam_size 5 --alpha 0.95 --block_trigram true`:
`File "run_summarization.py", line 15, in <module>`
`    from .utils_summarization import (`
`ImportError: attempted relative import with no known parent package`
I am using Win10, and an Anaconda Env.
Steps to reproduce the behavior:
1. Install a new Anaconda Env with torch
2. Do as the Readme says
3. Put the code into a file or start it via console.
## Expected behavior
Training starts, or at least an error message appears that I can resolve.
## Environment info
Using Windows 10, and an Anaconda Env.
Thank you
| 06-26-2020 08:02:26 | 06-26-2020 08:02:26 | Changing line 15 in run_summarization.py from
`from .utils_summarization import (` to `from utils_summarization import (`
fixed that error<|||||>@sshleifer - can you take a look here maybe? <|||||>I have a similar error. When I changed `from .utils_summarization import (` to `from utils_summarization import (`, I got another error:
/content/transformers/examples/seq2seq/bertabs
Traceback (most recent call last):
File "run_summarization.py", line 12, in <module>
from modeling_bertabs import BertAbs, build_predictor
File "/content/transformers/examples/seq2seq/bertabs/modeling_bertabs.py", line 30, in <module>
from configuration_bertabs import BertAbsConfig
File "/content/transformers/examples/seq2seq/bertabs/configuration_bertabs.py", line 19, in <module>
from transformers import PretrainedConfig
ModuleNotFoundError: No module named 'transformers'
Can you help me please?<|||||>@Hildweig did you install the dependencies with
`pip install -r requirements.txt`?<|||||>Also, to make the example run, I had to change the model used in run_summarization.py to
remi/bertabs-finetuned-extractive-abstractive-summarization<|||||>I did install the requirements, thank you! Changing the model in run_summarization.py fixed the issue! However, the output is very bad and makes no sense; this is what I got for a good text:
the u.s. is on the way to meet with the u.s.. the u.s. has pledged to help ease ease ease the situation. the u.n. is the most most most likely to provide provide provide a detailed detailed detailed detail. the u..s. is due to the u.s. , and and the u.s. will provide provide some areas to help
------------------------------------------------
Did you get a good summary?<|||||>Why are you guys using bertabs?
It seems not to work very well, according to #3647.
@MichaelJanz would you be willing to contribute a PR with your changes?<|||||>@sshleifer what do you suggest using for abstractive summarization? <|||||>@Hildweig Depends what your goal is?
For getting good scores on the cnndm dataset, I'd recommend `sshleifer/distilbart-cnn-12-6` and the finetuning in `examples/seq2seq/finetune.py`.
For your own data, start with a cnn checkpoint if you want 3 sentence summaries and an xsum checkpoint if you want 1 sentence summaries.
For running experiments to see how summarization finetuning works, you can start from `bart-large`, but these experiments are slow.
To make a useful open source contribution, you could try modifying/changing hyperparameters in `./train_distilbart_cnn.sh` to try to get a high score. Bonus points if you use `--logger wandb_shared`.
Also, I recently updated the setup instructions and script. It's now on master [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md). There are some tips in there that try to cover common cases.
Speed/Rouge tradeoffs of the different bart variants are detailed [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0&range=B2:G12)
[tweet](https://twitter.com/sam_shleifer/status/1276160367853547522?s=20) describing the distilbart project from a high level.
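As a rough illustration of the first suggestion, inference with that checkpoint could look like the sketch below (the generation lengths are arbitrary assumptions, not recommended values):
```python
# Illustrative sketch only: summarize a document with a distilbart checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

document = "..."  # your article text here
summary = summarizer(document, max_length=142, min_length=56)
print(summary[0]["summary_text"])
```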
<|||||>@Hildweig I tested different input data and it performed (qualitatively considered) well for a random book review: https://bookpage.com/reviews/25272-deb-caletti-girl-unframed-ya#.Xvmf-CgzaUm
The result was:
_sydney has to spend the summer with her mother , the once - famous movie star , lila shore , at her sumptuous mansion in san francisco 's exclusive sea cliff neighborhood. sydney has a lot of freedom to explore the city on her own ,
which is how she meets nicco and begins a relationship that will unexpectedly change all of their lives forever. in the book , sydney 's story appears to suggest that
the book itself is her testimony about the lead - up to a terrible crime. as a woman in the world , it often means being looked at but not seen by it , she says_
However, using larger texts like a Sherlock Holmes novel did not work well (judged qualitatively, without any metrics).
@sshleifer
Sure, I can create a PR.
Also, thank you for the hint; I am grateful for any advice I can get for my master's thesis.
May I ask you about your advice on my goal?
I want to build a system that is capable of creating summaries of book reviews for Instagram posts in German. I am thinking of using German BERT (or similar) and fine-tuning it on a dataset I still have to collect. Do you have any advice you would like to share?<|||||>However, the generated text does not look abstractive, but rather extractive.<|||||>PR created in [#5355](https://github.com/huggingface/transformers/pull/5355)<|||||>@sshleifer thank you! My goal is to fine-tune it on the DUC dataset.
@MichaelJanz in my case I have to summarize huge documents (17 pages or so).<|||||>Hi @sshleifer and @MichaelJanz,
This seems to be a problem with the Python package structure. I am getting a similar error, but with the token_classification/run_ner.py file.
File "examples/token-classification/run_ner.py", line 42, in <module>
from modeling_auto import AutoModelForTokenClassification
File "transformers/src/transformers/modeling_auto.py", line 22, in <module>
from .configuration_auto import (
ImportError: attempted relative import with no known parent package
I have not installed the transformers library using pip because I want to use the local code (cloned from the transformers repository). After reading various Stack Overflow suggestions (https://stackoverflow.com/questions/16981921/relative-imports-in-python-3 and https://napuzba.com/a/import-error-relative-no-parent), I believe that when I import the transformers package locally from my own directory, it is not able to read and load transformers as a package.
I am using Python 3.7.
Can you please suggest how to load transformers as a package from the local code?
Thanks... |
transformers | 5,302 | closed | Transformer-XL: Fixed tokenization of brackets, numbers etc. | Fixes #5136
As explained in the above issue, this PR fixes the tokenization of the `TransfoXLTokenizer` by using the `sacremoses` library with an extended feature of tokenizing comma-separated and floating-point numbers. That way the input text is tokenized the same way as in the WikiText-103 dataset used for pretraining.
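Roughly, the intended behaviour looks like the sketch below (this is an illustration, not the actual implementation in this PR; the number-splitting regexes are assumptions based on how WikiText-103 renders numbers with `@,@` / `@.@` tokens):
```python
# Illustrative sketch only: Moses tokenization plus WikiText-103-style number splitting.
import re
from sacremoses import MosesTokenizer

moses = MosesTokenizer(lang="en")

def tokenize_like_wikitext(text):
    # Moses handles brackets and punctuation, e.g. "(hello)" -> "(", "hello", ")".
    out = " ".join(moses.tokenize(text, escape=False))
    # Split numbers the way the pretraining corpus does: "1,000.5" -> "1 @,@ 000 @.@ 5".
    out = re.sub(r"(\d)\,(\d)", r"\1 @,@ \2", out)
    out = re.sub(r"(\d)\.(\d)", r"\1 @.@ \2", out)
    return out.split()

print(tokenize_like_wikitext("It cost $1,000.50 (roughly)."))
```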
Open questions:
- [ ] Should the method `prepare_for_tokenization` be removed, since it just issues a warning for no whitespace before punctuation? This is solved by the `MosesTokenizer` anyway. | 06-26-2020 08:01:25 | 06-26-2020 08:01:25 | Tests fail due to the test in `test_tokenization_fast`, but I have not touched that file.
For me, the problem is in [test_tokenization_fast.py line 45](../tree/master/tests/test_tokenization_fast.py#L45). It works for me if I replace the path `tests/fixtures/...` with `./fixtures/...`
Should I fix this?<|||||>Hey! I'm interested in this as I have noticed the same phenomenon. Some of those files were moved recently; could you rebase from master to check whether that fixes the pathing issue?<|||||>That's already looking better! If the tests still fail, it might be because the current expected outputs of the tests are incorrect. If your results are better than the expected outputs, that should be clear quickly.<|||||>Is this fine if there are 229 files which have changed? If I do a compare from my repo with the master branch, it's only the 2 as before...<|||||>For me, the reason for failure is still the same as mentioned above: `test_tokenization_fast.py:45: FileNotFoundError`
```
> with open("tests/fixtures/sample_text.txt", encoding="utf-8") as f_data:
E FileNotFoundError: [Errno 2] No such file or directory: 'tests/fixtures/sample_text.txt'
```<|||||>That's weird, I'm seeing
`AssertionError: Sequences differ: [100,[42 chars]832, 6193, 43, 24, 24, 24, 24, 24, 24, 29289, [4136 chars]1, 3] != [100,[42 chars]832, 24, 24, 29289, 476, 35, 24, 19, 5387, 812[3333 chars], 24]`
in both CI tests, at `tests/test_tokenization_fast.py:62`<|||||>Hmm strange... Maybe it's because I executed the tests in PyCharm.
**Edit**: In PyCharm the root path for the tests was configured to be `.../tests/`, so that was the problem for me earlier.
Ok if I run it in the console I see the same result as you above!<|||||>@TevenLeScao
> If the tests still fail, it might be because the current expected outputs of the tests are incorrect.
How can I fix this?<|||||>The issue seems to be that the Rust and Python tokenizers don't agree as the issue is also present in the Rust tokenizers. @n1t0 @Pierrci maybe could you take a look at this?<|||||>I have no idea what changed unfortunately, but yes the rust tokenizers and the python one should have the same output for the tests to pass.<|||||>Does it make sense if the `TransfoXLTokenizerFast` would also use the tokenizer from `sacremoses`?<|||||>Not really. The fast tokenizers are implemented in Rust, not Python, so it's not possible to use sacremoses. Also, the fast tokenizers keep track of the offsets/alignment information, and I guess this is not something that sacremoses provides, does it? I'm really not familiar with it so I have no idea if it would be easy to re-implement otherwise.<|||||>What exactly do you mean with offsets/alignment information? I'm not familiar with Rust nor the fast tokenizers in general. <|||||>Hi again, is there any update on the issue with the fast tokenizers?
Should I maybe open a new PR which is up to date with the master branch so that we get rid of the messy commit history and lots of "changes"?<|||||>Hi @RafaelWO , sorry for the delay.
- A PR without the extra commits would definitely help
- In the end we couldn't really decide on a solution: I think the consensus with @n1t0 and @thomwolf was that deleting the Rust TransfoXL tokenizer was the best but it's pretty harsh in terms of BC-breaking. I think the best would be to give it a deprecation warning for a bit and remove the Rust = Python tokenizer equality test for TransfoXL in the meantime.<|||||>- I will reopen one
- Sounds good, I will post my changes in a new PR and then we can discuss the deletion of the Rust tokenizer<|||||>Reopened in PR #6322 |
transformers | 5,301 | closed | Request for inclusion of PEGASUS for text summarization by Google. | # 🚀 Feature request
Google has recently released PEGASUS, a new model for text summarization: https://ai.googleblog.com/2020/06/pegasus-state-of-art-model-for.html. I think it would be interesting to have it included in the Transformers library. The code and checkpoints are here: https://github.com/google-research/pegasus. This model seems to accomplish human-like summarization. | 06-26-2020 07:53:14 | 06-26-2020 07:53:14 | We are currently working on it as far as I know :-) @sshleifer <|||||>Duplicate of #4918, but yes it is WIP! |