repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 4,292 | closed | Fix special token doc | closes #4274 | 05-11-2020 19:04:33 | 05-11-2020 19:04:33 | |
transformers | 4,291 | closed | Fix quickstart | GPT-2 now requires the entire context to be passed, even when specifying the past since #3734
closes #4259 | 05-11-2020 18:07:45 | 05-11-2020 18:07:45 | |
transformers | 4,290 | closed | [Marian Fixes] prevent predicting pad_token_id before softmax, support language codes, name multilingual models | (this should probably be 3 PRs, sorry).
TLDR: This gets us to equal or better BLEU for 99+% of the converted models.
### Issue 1: non-breaking Generation Issue
`pad_token_id` is predicted with score 0, but we never want to generate it.
Previously, we set its score to inf after softmax, so probabilities summed to something much less than 1. At the end of long generations, the probabilities got infinitesimally small, and BLEU suffered.
Solution: set `logits[:, pad_token_id] = -float('inf')` before calling `log_softmax`
Also, renames `prepare_scores_for_generation` -> `prepare_logits_for_generation` because it is now called on logits.
- Bart outputs are unchanged, other models unaffected.
- This could also be done for other filters, like bad_words_ids, but results may change slightly.
- This works fine with fp16
- Improves BLEU scores for 50% of Marian models, often by a huge amount for those with long examples in the test set. No change for other models.
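To illustrate the fix with toy tensors (a sketch, not the actual generation code):
```python
import torch
import torch.nn.functional as F

# Toy example: 5-token vocabulary, pretend id 4 is pad_token_id.
logits = torch.randn(1, 5)
pad_token_id = 4

# Mask the pad token *before* the (log-)softmax so the remaining
# probabilities still sum to 1 instead of slowly leaking mass.
logits[:, pad_token_id] = -float("inf")
log_probs = F.log_softmax(logits, dim=-1)
print(log_probs.exp().sum(-1))  # tensor([1.0000])
```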
### Issue 2: Multilingual Models require language codes.
`>>pt<< Don't spend so much time watching TV.`
The language code at the beginning, which tells the model to generate in Portuguese, is not known to sentencepiece, so `MarianTokenizer` must manually extract it and add it as its own piece.
We do this using `re.match` inside `_tokenize`.
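For illustration, a rough sketch of that idea (the exact pattern and method in `MarianTokenizer` may differ):
```python
import re

LANG_CODE_RE = re.compile(r"^(>>[a-z]+<<)\s*")  # e.g. ">>pt<<", ">>fr<<"

def split_language_code(text):
    match = LANG_CODE_RE.match(text)
    if match is None:
        return None, text
    return match.group(1), text[match.end():]

print(split_language_code(">>pt<< Don't spend so much time watching TV."))
# ('>>pt<<', "Don't spend so much time watching TV.")
```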
### Issue 3: Opus' Multilingual models have very long names
We remap them to much shorter names, like `Helsinki-NLP/opus-mt-en-ROMANCE` in `convert_marian_to_pytorch.py`, and document the naming scheme in `docs/`.
### Working snippet:
```
from transformers import MarianMTModel, MarianTokenizer
text = ['>>fr<< this is a sentence in english that we want to translate to french',
'>>pt<< This should go to portuguese',
'>>es<< And this to Spanish'
]
model_name = 'Helsinki-NLP/opus-mt-en-ROMANCE'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)
translated = model.generate(**tokenizer.prepare_translation_batch(text))
[tokenizer.decode(t, skip_special_tokens=True) for t in translated]
=> ["c'est une phrase en anglais que nous voulons traduire en franΓ§ais",
'Isto deve ir para o portuguΓͺs.',
'Y esto al espaΓ±ol']
``` | 05-11-2020 17:59:40 | 05-11-2020 17:59:40 | The slow bart summarization tests all pass with this? <|||||>yes!<|||||>LGTM<|||||>Correct, @LysandreJik not a breaking change.
<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4290?src=pr&el=h1) Report
> Merging [#4290](https://codecov.io/gh/huggingface/transformers/pull/4290?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7d7fe4997f83d6d858849a659302b9fdc32c3337&el=desc) will **increase** coverage by `0.06%`.
> The diff coverage is `95.65%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4290?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4290 +/- ##
==========================================
+ Coverage 78.16% 78.22% +0.06%
==========================================
Files 120 120
Lines 20005 20013 +8
==========================================
+ Hits 15636 15655 +19
+ Misses 4369 4358 -11
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4290?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `88.88% <75.00%> (+26.38%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/4290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `96.79% <100.00%> (+0.56%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `91.08% <100.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/4290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `87.14% <100.00%> (+6.19%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/4290/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4290?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4290?src=pr&el=footer). Last update [7d7fe49...54d704c](https://codecov.io/gh/huggingface/transformers/pull/4290?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Merging in advance of release tomorrow. Feel free to leave comments and I will fix them! |
transformers | 4,289 | closed | CamemBERT does not make use of Token Type IDs | CamemBERT, like RoBERTa, does not make use of token type IDs.
closes https://github.com/huggingface/transformers/issues/4153
closes #4163
| 05-11-2020 17:07:12 | 05-11-2020 17:07:12 | |
transformers | 4,288 | closed | TAPAS: Weakly Supervised Table Parsing via Pre-training | # 🌟 TAPAS (Weakly Supervised Table Parsing via Pre-Training)
## Model description
**TAPAS** [ArXiV Paper](https://arxiv.org/pdf/2004.02349.pdf) [Google AI Blog](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html?utm_campaign=Learning%20Posts&utm_content=128690771&utm_medium=social&utm_source=linkedin&hss_channel=lcp-9358547), an approach to question answering over tables without generating logical forms. TAPAS trains from weak supervision, and predicts the denotation by selecting table cells and optionally applying a corresponding aggregation operator to such selection. TAPAS extends BERTβs architecture to encode tables as input, initializes from an effective joint pre-training of text segments and tables crawled from Wikipedia, and is trained end-to-end. We experiment with three different semantic parsing datasets, and find that TAPAS outperforms or rivals semantic parsing models by improving state-of-the-art accuracy on SQA from 55.1 to 67.2 and performing on par with the state-of-the-art on WIKISQL and WIKITQ, but with a simpler model architecture. We additionally find that transfer learning, which is trivial in our setting, from WIKISQL to WIKITQ, yields 48.7 accuracy, 4.2 points above the state-of-the-art.
## Open source status
* [ X ] the model implementation is available: [GitHub repo](https://github.com/google-research/tapas/tree/master/tapas/models)
* [ X ] the model weights are available: in the "Data" section at [their GitHub page](https://github.com/google-research/tapas)
* [ X ] who are the authors: Jonathan Herzig, Paweł Krzysztof Nowak, Thomas Muller, Francesco Piccinno, Julian Martin Eisenschlos | 05-11-2020 15:31:56 | 05-11-2020 15:31:56 | Duplicate of [https://github.com/huggingface/transformers/issues/4166](url) |
transformers | 4,287 | closed | T5 fp16 forward yields nan | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): T5
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
I use pytorch-lightning to manage fp16. This is the minimal example that reproduces the result.
```python
from transformers import T5Model, T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5Model.from_pretrained("t5-base").cuda().half()
text = "hello world!"
inputs = tokenizer.encode(text, return_tensors="pt").cuda()
out = model(input_ids=inputs, decoder_input_ids=inputs)
print(out[0][:, :, :10])
```
output:
```
tensor([[[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan],
[nan, nan, nan, nan, nan, nan, nan, nan, nan, nan]]], device='cuda:0',
dtype=torch.float16, grad_fn=<SliceBackward>)
```
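(Not in the original snippet: to localize where the non-finite values first appear, one can register forward hooks; a rough debugging sketch, assuming the same model and inputs as above:)
```python
import torch
from transformers import T5Model, T5Tokenizer

def add_nan_hooks(model):
    # Print the modules whose outputs contain non-finite values.
    def hook(module, inputs, output):
        outs = output if isinstance(output, tuple) else (output,)
        for t in outs:
            if isinstance(t, torch.Tensor) and not torch.isfinite(t).all():
                print("non-finite output in", module.__class__.__name__)
    for module in model.modules():
        module.register_forward_hook(hook)

tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5Model.from_pretrained("t5-base").cuda().half()
add_nan_hooks(model)
inputs = tokenizer.encode("hello world!", return_tensors="pt").cuda()
model(input_ids=inputs, decoder_input_ids=inputs)
```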
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Get non-nan values.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.0
- Platform: Linux-4.15.0-88-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 05-11-2020 14:12:28 | 05-11-2020 14:12:28 | Thanks for the detailed error description @binshengliu! I linked a PR that should fix it :-) <|||||>Hi, there's still some chance we get `nan` values during training. It does not always happen but is still pretty easy to reproduce. My current version is `2.11.0`.
```
import torch
from transformers import T5Model, T5Tokenizer
torch.manual_seed(0)
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5Model.from_pretrained("t5-base").cuda().half()
text = "hello world!"
model.train()
inputs = tokenizer.encode(text, return_tensors="pt").cuda()
for idx in range(1000):
out = model(input_ids=inputs, decoder_input_ids=inputs)
if torch.isnan(out[0]).any():
print(idx)
print(out[0][:, :, :10])
exit()
```
Output:
```
143
tensor([[[nan, nan, nan, nan, nan, nan, nan, nan, nan, 0.],
[nan, nan, nan, 0., nan, nan, nan, nan, nan, 0.],
[nan, nan, 0., nan, nan, 0., 0., 0., nan, nan]]], device='cuda:0',
dtype=torch.float16, grad_fn=<SliceBackward>)
```<|||||>Sorry I just noticed you commented https://github.com/huggingface/transformers/issues/4586#issuecomment-639704855. I can confirm the issue I encountered is caused by the same reason.<|||||>Hi, I have the same problem. I get NAN when using fp16. And when I set fp16=False, the NAN problem is gone. <|||||>Yeah that's still an open problem....not sure how easy it will be to solve it, see: https://github.com/huggingface/transformers/issues/4586#issuecomment-639704855<|||||>Same when fine-tuning GPT Neo. |
transformers | 4,286 | closed | Reformer enwik8 - Model card | Create a model card with char level tokenizer for reformer enwik8. | 05-11-2020 14:12:01 | 05-11-2020 14:12:01 | |
transformers | 4,285 | closed | Default code snippet for shared model doesn't work because of different model and tokenizer types | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): GPT2 with AlbertTokenizer
Language I am using the model on (English, Chinese ...): Ukrainian
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
When I share a GPT2 model trained with an AlbertTokenizer on huggingface.co, the default code snippet doesn't work. The reason is that AutoTokenizer detects the tokenizer type from the model type.
## To reproduce
Steps to reproduce the behavior:
1. Go to https://huggingface.co/Tereveni-AI/gpt2-124M-uk-fiction
2. Copy code from default snippet `How to use this model directly from the 🤗/transformers library`
3. Try to execute snippet code
```
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("Tereveni-AI/gpt2-124M-uk-fiction")
Downloading: 100%|██████████| 645/645 [00:00<00:00, 89.4kB/s]
Downloading: 100%|██████████| 121/121 [00:00<00:00, 18.8kB/s]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/oleksandrbushkovskyi/Projects/transformers-ua/venv/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 200, in from_pretrained
return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/Users/oleksandrbushkovskyi/Projects/transformers-ua/venv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 898, in from_pretrained
return cls._from_pretrained(*inputs, **kwargs)
File "/Users/oleksandrbushkovskyi/Projects/transformers-ua/venv/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 1051, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/Users/oleksandrbushkovskyi/Projects/transformers-ua/venv/lib/python3.7/site-packages/transformers/tokenization_gpt2.py", line 151, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
## Expected behavior
I would expect `AutoTokenizer` to detect `spiece.model` and initialize `AlbertTokenizer`.
Even better if `AutoTokenizer` read tokenizer type explicitly defined in `tokenizer_config.json`.
## Environment info
- `transformers` version: 2.9.0
- Platform: Darwin-19.4.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
| 05-11-2020 13:53:41 | 05-11-2020 13:53:41 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have the exact same issue. When I share GPT2 model trained with AlbertTokenizer on huggingface.co the default code snippet doesn't work. The reason is that AutoTokenizer detects tokenizer type by model type. Is there any workaround?<|||||>Hi @obsh and @khashei β yes you need to add a `tokenizer_class` attribute to your model's config.json:
> tokenizer_class (str, optional) – The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default).
https://huggingface.co/transformers/main_classes/configuration.html
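A minimal sketch of doing that (paths are placeholders; whether `tokenizer_class` is honored depends on the installed `transformers` version):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("path/to/local/model")
config.tokenizer_class = "AlbertTokenizer"     # attribute named in the docs quoted above
config.save_pretrained("path/to/local/model")  # rewrites config.json with the new field
```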
cc @patrickvonplaten |
transformers | 4,284 | closed | [Marian] Fix typo in docstring | 05-11-2020 13:32:51 | 05-11-2020 13:32:51 | ||
transformers | 4,283 | closed | [TF 2.2 compat] use tf.VariableAggregation.ONLY_FIRST_REPLICA | This PR updates the TF gradient accumulator optimization to make it compliant with the new TF 2.2. | 05-11-2020 13:21:42 | 05-11-2020 13:21:42 | Good to be merged for me :) We should update the `setup.py` in same time to do not limit anymore to TF 2.1. Should it be done in this PR? /cc @julien-c @LysandreJik <|||||>I think you can cc @LysandreJik @julien-c <|||||>Yes can you do it in the same PR? and it'll be ready to merge.<|||||>Nice! Unfortunately there are other issues with TF2.2, hence why the tests fail and why the version was pinned. I'm patching it now.<|||||>Yep I think @mfuntowicz is working on it as it is related to the Tokenizer. At least we have identified why it is not working :) |
transformers | 4,282 | closed | [Reformer] Add Enwiki8 Reformer Model - Adapt convert script | This PR adapts the reformer conversion script to the newest trax code and adds `reformer-enwik8` to the docs. | 05-11-2020 12:48:59 | 05-11-2020 12:48:59 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4282?src=pr&el=h1) Report
> Merging [#4282](https://codecov.io/gh/huggingface/transformers/pull/4282?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b290c32e1617fa74f44ccd8b83365fe764437be9&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4282?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4282 +/- ##
==========================================
+ Coverage 78.39% 78.40% +0.01%
==========================================
Files 120 120
Lines 19932 19932
==========================================
+ Hits 15626 15628 +2
+ Misses 4306 4304 -2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4282?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `93.10% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4282/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.38% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4282?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4282?src=pr&el=footer). Last update [b290c32...5babcfc](https://codecov.io/gh/huggingface/transformers/pull/4282?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,281 | closed | Model card for bert-turkish-question-answering question-answering model | First model card for bert-turkish-question-answering question-answering model | 05-11-2020 12:35:38 | 05-11-2020 12:35:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4281?src=pr&el=h1) Report
> Merging [#4281](https://codecov.io/gh/huggingface/transformers/pull/4281?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b290c32e1617fa74f44ccd8b83365fe764437be9&el=desc) will **decrease** coverage by `0.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4281?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4281 +/- ##
==========================================
- Coverage 78.39% 78.39% -0.01%
==========================================
Files 120 120
Lines 19932 19932
==========================================
- Hits 15626 15625 -1
- Misses 4306 4307 +1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4281?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4281/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4281?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4281?src=pr&el=footer). Last update [b290c32...2222fd5](https://codecov.io/gh/huggingface/transformers/pull/4281?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Awesome, thanks for sharing! Did you fine-tune https://huggingface.co/dbmdz/bert-base-turkish-cased?
cc @stefan-it <|||||>[Model page](https://huggingface.co/lserinol/bert-turkish-question-answering)<|||||>> Awesome, thanks for sharing! Did you fine-tune https://huggingface.co/dbmdz/bert-base-turkish-cased?
>
> cc @stefan-it
Yes |
transformers | 4,280 | closed | Update model-card | This PR is to:
- Fix the XTREME link.
- add the number of evaluation documents for each language.
- fix the usage piece of code to specify which framework to use. | 05-11-2020 12:20:50 | 05-11-2020 12:20:50 | |
transformers | 4,279 | closed | Documentation: fix links to NER examples | Hi,
this PR just fixes some links related to the NER examples :) | 05-11-2020 09:53:05 | 05-11-2020 09:53:05 | π
|
transformers | 4,278 | closed | Could you help with a huggingface/transformers pretrained models download link? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I am a natural language processing researcher from Lenovo Research in Beijing. My colleagues and I really like your huggingface/transformers framework and have been using it. I am currently writing a series of Chinese blog posts about transformers, hoping to promote it to more Chinese friends. I want to download all of your pretrained models for study and experiments, but due to network problems in China, the download would be very slow and unstable. At the same time, manually downloading them one by one is very troublesome. Could you kindly provide a unified download link from which I can download all models?
Thank you.
@julien-c
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
| 05-11-2020 07:41:58 | 05-11-2020 07:41:58 | Hi, you can find the model identifiers in this JSON: https://huggingface.co/api/models
Then you can get all files URLs by removing `bert/` from the objects' keys and prefixing with URL endpoint "https://cdn.huggingface.co/"<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,277 | closed | Use BART for longer documents | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
The fairseq folks say we can fine-tune the BART model with a longer seq_len on our custom training data. They pre-trained BART with a seq_len of 512 and use a seq_len of 1024 during fine-tuning. One can raise it further (say 2048) and fine-tune.
For the above, we would need to adjust the positional embeddings by either:
1. learning them from scratch, or
2. copying the 512 pre-trained BART positions into the first 512 entries of your 2048 positional embedding,
with 2 being recommended. If I have to do it, any pointers on where to change the code?
https://github.com/pytorch/fairseq/issues/1685 | 05-11-2020 07:15:15 | 05-11-2020 07:15:15 | @thomwolf @sshleifer <|||||>Yeah, here is working code for the copy approach, starting with bart-large-cnn weights.
```python
from copy import deepcopy
from transformers import BartForConditionalGeneration
# Get original model
model = BartForConditionalGeneration.from_pretrained('bart-large-cnn')
sd = model.state_dict()
shorter_pos_embeds = sd['model.encoder.embed_positions.weight']
new_config = model.config
new_config.max_position_embeddings = 2048
new_model = BartForConditionalGeneration(new_config)
# If you want to learn everything from scratch, you can stop here.
correctly_shaped_pos_weight = new_model.model.encoder.embed_positions.weight
correctly_shaped_pos_weight[:shorter_pos_embeds.shape[0]] = shorter_pos_embeds
sd['model.decoder.embed_positions.weight'] = correctly_shaped_pos_weight
sd['model.encoder.embed_positions.weight'] = correctly_shaped_pos_weight
new_model.load_state_dict(sd, strict=True)
```<|||||>@sshleifer
Can we use this same approach for T5 as well?<|||||>Does it make sense to initialize the weights by copying the first 512 weights multiple times?
Like: `[[0:512], [0:512], [0:512], [0:512]]`
So the model just has to learn the "relativeness" between positions? <|||||>I am editing finetune.py
```
model = SummarizationTrainer(args)
sd = model.model.state_dict()
shorter_pos_embeds = sd['model.encoder.embed_positions.weight']
new_config = model.config
new_config.max_position_embeddings = 4096
new_model = BartForConditionalGeneration(new_config)
correctly_shaped_pos_weight = new_model.model.encoder.embed_positions.weight
correctly_shaped_pos_weight[:shorter_pos_embeds.shape[0]] = shorter_pos_embeds
sd['model.decoder.embed_positions.weight'] = correctly_shaped_pos_weight
sd['model.encoder.embed_positions.weight'] = correctly_shaped_pos_weight
new_model.load_state_dict(sd, strict=True)
model.model = new_model.cuda()
trainer = generic_train(model, args)
```
```
INFO:transformers.tokenization_utils:loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-vocab.json from cache at /u/tuhin1/.cache/torch/transformers/1ae1f5b6e2b22b25ccc04c000bb79ca847aa226d0761536b011cf7e5868f0655.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b
INFO:transformers.tokenization_utils:loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-merges.txt from cache at /u/tuhin1/.cache/torch/transformers/f8f83199a6270d582d6245dc100e99c4155de81c9745c6248077018fe01abcfb.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
INFO:transformers.modeling_utils:loading weights file https://cdn.huggingface.co/facebook/bart-large/pytorch_model.bin from cache at /u/tuhin1/.cache/torch/transformers/2e7cae41bb1dd1f18e498ff4ff0ea85f7e9bc2b637439e2d95c485c5d5bdd579.5be2a88ec29f5969270f98902db392beab8be8a6a7ecc588d410ada3e32c4263
INFO:transformers.modeling_utils:Weights of BartForConditionalGeneration not initialized from pretrained model: ['final_logits_bias']
INFO:transformers.modeling_utils:Weights from pretrained model not used in BartForConditionalGeneration: ['encoder.version', 'decoder.version']
INFO:lightning:GPU available: True, used: True
INFO:lightning:CUDA_VISIBLE_DEVICES: [0]
INFO:lightning:Using 16bit precision.
/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:23: RuntimeWarning: You have defined a `val_dataloader()` and have defined a `validation_step()`, you may also want to define `validation_epoch_end()` for accumulating stats.
warnings.warn(*args, **kwargs)
Traceback (most recent call last):
File "/dccstor/tuhinstor/transformers/examples/finetune1.py", line 196, in <module>
main(args)
File "/dccstor/tuhinstor/transformers/examples/finetune1.py", line 177, in main
trainer = generic_train(model, args)
File "/dccstor/tuhinstor/transformers/examples/lightning_base.py", line 278, in generic_train
trainer.fit(model)
File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 765, in fit
self.single_gpu_train(model)
File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 484, in single_gpu_train
self.optimizers, self.lr_schedulers, self.optimizer_frequencies = self.init_optimizers(model)
File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/trainer/optimizers.py", line 18, in init_optimizers
optim_conf = model.configure_optimizers()
File "/dccstor/tuhinstor/transformers/examples/lightning_base.py", line 87, in configure_optimizers
optimizer = AdamW(optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon)
File "/dccstor/tuhinstor/transformers/src/transformers/optimization.py", line 117, in __init__
super().__init__(params, defaults)
File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/torch/optim/optimizer.py", line 51, in __init__
self.add_param_group(param_group)
File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/torch/optim/optimizer.py", line 202, in add_param_group
raise ValueError("can't optimize a non-leaf Tensor")
ValueError: can't optimize a non-leaf Tensor
```
Any idea to resolve this @sshleifer<|||||>Update I fixed it
```
model = SummarizationTrainer(args)
sd = model.model.state_dict()
shorter_pos_embeds = sd['model.encoder.embed_positions.weight']
new_config = model.config
new_config.max_position_embeddings = 3000
new_model = BartForConditionalGeneration(new_config)
correctly_shaped_pos_weight = new_model.model.encoder.embed_positions.weight.cuda()
correctly_shaped_pos_weight[:shorter_pos_embeds.shape[0]] = shorter_pos_embeds.cuda()
sd['model.decoder.embed_positions.weight'] = correctly_shaped_pos_weight
sd['model.encoder.embed_positions.weight'] = correctly_shaped_pos_weight
new_model.load_state_dict(sd, strict=True)
model.model = new_model.cuda()
trainer = generic_train(model, args)
```
However, I get OOM. I have a 32 GB NVIDIA V100; I feel it caches a bit too much. Any workaround?
RuntimeError: CUDA out of memory. Tried to allocate 550.00 MiB (GPU 0; 31.72 GiB total capacity; 26.46 GiB already allocated; 81.88 MiB free; **4.04 GiB cached**)<|||||>My batch size is 1<|||||>@tuhinjubcse could you solve the OOM error? is it possible for you to share your working script for finetuning with longer sequence? appreciate it.<|||||>> I am editing finetune.py
>
> ```
> model = SummarizationTrainer(args)
> sd = model.model.state_dict()
> shorter_pos_embeds = sd['model.encoder.embed_positions.weight']
> new_config = model.config
> new_config.max_position_embeddings = 4096
> new_model = BartForConditionalGeneration(new_config)
> correctly_shaped_pos_weight = new_model.model.encoder.embed_positions.weight
> correctly_shaped_pos_weight[:shorter_pos_embeds.shape[0]] = shorter_pos_embeds
> sd['model.decoder.embed_positions.weight'] = correctly_shaped_pos_weight
> sd['model.encoder.embed_positions.weight'] = correctly_shaped_pos_weight
> new_model.load_state_dict(sd, strict=True)
> model.model = new_model.cuda()
> trainer = generic_train(model, args)
> ```
>
> ```
> INFO:transformers.tokenization_utils:loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-vocab.json from cache at /u/tuhin1/.cache/torch/transformers/1ae1f5b6e2b22b25ccc04c000bb79ca847aa226d0761536b011cf7e5868f0655.ef00af9e673c7160b4d41cfda1f48c5f4cba57d5142754525572a846a1ab1b9b
> INFO:transformers.tokenization_utils:loading file https://s3.amazonaws.com/models.huggingface.co/bert/roberta-large-merges.txt from cache at /u/tuhin1/.cache/torch/transformers/f8f83199a6270d582d6245dc100e99c4155de81c9745c6248077018fe01abcfb.70bec105b4158ed9a1747fea67a43f5dee97855c64d62b6ec3742f4cfdb5feda
> INFO:transformers.modeling_utils:loading weights file https://cdn.huggingface.co/facebook/bart-large/pytorch_model.bin from cache at /u/tuhin1/.cache/torch/transformers/2e7cae41bb1dd1f18e498ff4ff0ea85f7e9bc2b637439e2d95c485c5d5bdd579.5be2a88ec29f5969270f98902db392beab8be8a6a7ecc588d410ada3e32c4263
> INFO:transformers.modeling_utils:Weights of BartForConditionalGeneration not initialized from pretrained model: ['final_logits_bias']
> INFO:transformers.modeling_utils:Weights from pretrained model not used in BartForConditionalGeneration: ['encoder.version', 'decoder.version']
> INFO:lightning:GPU available: True, used: True
> INFO:lightning:CUDA_VISIBLE_DEVICES: [0]
> INFO:lightning:Using 16bit precision.
> /dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:23: RuntimeWarning: You have defined a `val_dataloader()` and have defined a `validation_step()`, you may also want to define `validation_epoch_end()` for accumulating stats.
> warnings.warn(*args, **kwargs)
> Traceback (most recent call last):
> File "/dccstor/tuhinstor/transformers/examples/finetune1.py", line 196, in <module>
> main(args)
> File "/dccstor/tuhinstor/transformers/examples/finetune1.py", line 177, in main
> trainer = generic_train(model, args)
> File "/dccstor/tuhinstor/transformers/examples/lightning_base.py", line 278, in generic_train
> trainer.fit(model)
> File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 765, in fit
> self.single_gpu_train(model)
> File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 484, in single_gpu_train
> self.optimizers, self.lr_schedulers, self.optimizer_frequencies = self.init_optimizers(model)
> File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/pytorch_lightning/trainer/optimizers.py", line 18, in init_optimizers
> optim_conf = model.configure_optimizers()
> File "/dccstor/tuhinstor/transformers/examples/lightning_base.py", line 87, in configure_optimizers
> optimizer = AdamW(optimizer_grouped_parameters, lr=self.hparams.learning_rate, eps=self.hparams.adam_epsilon)
> File "/dccstor/tuhinstor/transformers/src/transformers/optimization.py", line 117, in __init__
> super().__init__(params, defaults)
> File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/torch/optim/optimizer.py", line 51, in __init__
> self.add_param_group(param_group)
> File "/dccstor/tuhinstor/condatuhin/envs/BARTQA/lib/python3.7/site-packages/torch/optim/optimizer.py", line 202, in add_param_group
> raise ValueError("can't optimize a non-leaf Tensor")
> ValueError: can't optimize a non-leaf Tensor
> ```
>
> Any idea to resolve this @sshleifer
Which file are you editing? |
transformers | 4,276 | closed | Update README.md | 05-11-2020 06:53:09 | 05-11-2020 06:53:09 | ||
transformers | 4,275 | closed | training script for any tensorflow based pre-trained model for question-answer | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
I am trying to train a question-answering model from scratch using the TensorFlow backend, but I am getting unexpected results. It would be a great help if the Hugging Face team or anyone else could share the training script of any TensorFlow-based pre-trained model for question answering (like 'bert-large-uncased-whole-word-masking-finetuned-squad' or 'distilbert-base-uncased-distilled-squad', etc.)
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
https://stackoverflow.com/questions/61708227/how-to-train-any-hugging-face-transformer-model-eg-distilbert-for-question-ans | 05-11-2020 04:33:15 | 05-11-2020 04:33:15 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,274 | closed | Wrong returns of API:get_special_tokens_mask() | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
I use the transformers tokenizer and created a mask using the get_special_tokens_mask API.
[My Code](https://github.com/ishikawa-takumi/transformers-sample/blob/master/tokenizer.ipynb)
In the [RoBERTa Doc](https://huggingface.co/transformers/model_doc/roberta.html), the documented return of this API is "A list of integers in the range [0, 1]: 0 for a special token, 1 for a sequence token". But it seems that this API actually returns 0 for a sequence token and 1 for a special token.
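A minimal example of the observed behaviour (output values are illustrative):
```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
ids = tokenizer.encode("Hello world")  # adds <s> ... </s>
mask = tokenizer.get_special_tokens_mask(ids, already_has_special_tokens=True)
print(mask)  # e.g. [1, 0, 0, 1] -> 1 marks the special tokens, the opposite of the doc
```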
## To reproduce
Steps to reproduce the behavior:
1. Use get_special_tokens_mask().
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Returns of this API is "A list of integers in the range [0, 1]: 0 for a special token, 1 for a sequence token".
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
| 05-11-2020 03:49:36 | 05-11-2020 03:49:36 | Yes, this is https://github.com/huggingface/transformers/issues/1784 come again, the issue is in the doc, not in the code<|||||>Indeed, thanks for raising this issue. |
transformers | 4,273 | closed | Add migrating from `pytorch-transformers` | "Migrating from pytorch-transformers to transformers" is missing in the main document. It is available in the main `readme` thought. Just move it to the document. | 05-11-2020 00:57:07 | 05-11-2020 00:57:07 | LGTM |
transformers | 4,272 | closed | FileNotFoundError when running distributed Trainer | # π Bug
## Information
I'm running the language modeling example in distributed mode and am getting FileNotFoundError in the Trainer.
The error :
```
Traceback (most recent call last):
File "run_language_modeling.py", line 292, in <module>
main()
File "run_language_modeling.py", line 257, in main
trainer.train(model_path=model_path)
File "/home/shaoyent/anaconda3/envs/bert/lib/python3.7/site-packages/transformers/trainer.py", line 451, in train
torch.save(optimizer.state_dict(), os.path.join(output_dir, "optimizer.pt"))
File "/home/shaoyent/anaconda3/envs/bert/lib/python3.7/site-packages/torch/serialization.py", line 327, in save
with _open_file_like(f, 'wb') as opened_file:
File "/home/shaoyent/anaconda3/envs/bert/lib/python3.7/site-packages/torch/serialization.py", line 212, in _open_file_like
return _open_file(name_or_buffer, mode)
File "/home/shaoyent/anaconda3/envs/bert/lib/python3.7/site-packages/torch/serialization.py", line 193, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '/home/shaoyent/transformers/examples/language-modeling/output/checkpoint-500/optimizer.pt'
```
The error seems to be the Trainer saving checkpoints in each [local master](https://github.com/huggingface/transformers/blob/b290c32e1617fa74f44ccd8b83365fe764437be9/src/transformers/trainer.py#L413) while the directory is created by [world master](https://github.com/huggingface/transformers/blob/b290c32e1617fa74f44ccd8b83365fe764437be9/src/transformers/trainer.py#L448) causing a race condition.
I think the suggested method is to handle save checkpoints in world master, since weights should be synchronized after backprop.
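A rough sketch of that idea (illustrative only, not the actual `Trainer` code; `save_model` is the Trainer's existing method):
```python
import torch.distributed as dist

def save_checkpoint(trainer, output_dir):
    # Only the global rank-0 process writes; everyone else waits at a barrier
    # so the files exist before any process tries to read them.
    if not dist.is_initialized() or dist.get_rank() == 0:
        trainer.save_model(output_dir)
    if dist.is_initialized():
        dist.barrier()
```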
| 05-11-2020 00:30:20 | 05-11-2020 00:30:20 | Yes, I think you're right. Relaunching training from a checkpoint would however require the user to copy the weights to other machines, right?<|||||>> Yes, I think you're right. Relaunching training from a checkpoint would however require the user to copy the weights to other machines, right?
Yes, loading from checkpoint can be done on all nodes.
<|||||>So I think we should save checkpoints and final models on each **local master**.
Do you agree?<|||||>Would there be a difference in checkpoints between each local master ?
If they're identical I'd think it be best to reduce write overhead and save only once (i.e world master)<|||||>No difference, but the issue is that you won't be able to restart training from a checkpoint (unless you copy it around) if we only save to world master<|||||>Ah, I see, thanks for the clarification.
I'm assuming a shared file system so it's all saving and loading from the same file.
I guess for other cluster configurations that makes sense, but care would have to be taken to avoid write/mkdir conflicts from multiple processes.<|||||>I see. Thinking about it more, yes, I think it like it if we only save from world_master. (even if it adds one step for some users). Thanks for your insight. |
transformers | 4,271 | closed | Evaluating a checkpoint | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. Make sure to tag your question with the
right deep learning framework as well as the huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
If your question wasn't answered after a period of time on Stack Overflow, you
can always open a question on GitHub. You should then link to the SO question
that you posted.
-->
## Details
<!-- Description of your issue -->
I have trained a NER model with the following command
python3.6 examples/token-classification/run_ner.py --save_steps 100 --data_dir $DIR --model_type bert --model_name_or_path bert-base-cased --output_dir $DIR/model --do_train --do_eval --do_predict --overwrite_output_dir --labels $DIR/labels.txt --evaluate_during_training --max_seq_length 300
Now, I am trying to evaluate and predict at checkpoint 500 (already evaluated during training using --evaluate_during_training argument) using the following command
python3.6 examples/token-classification/run_ner.py --data_dir $DIR --model_type bert --model_name_or_path bert-base-cased --output_dir $DIR/model/checkpoint-500 --do_eval --do_predict --labels $DIR/labels.txt --max_seq_length 300
However, I don't get the same performance as during training (as evaluated at the same checkpoint with the --evaluate_during_training argument); the performance I get is far lower.
Any idea about it?
<!-- You should first ask your question on SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on Stack Overflow**:
| 05-10-2020 21:11:16 | 05-10-2020 21:11:16 | Yes: you need to use the path to the saved model as the `--model_name_or_path` argument.
`--model_name_or_path` is the argument that conditions which model is loaded.
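For example, reusing the command from the question above (paths are the asker's):
python3.6 examples/token-classification/run_ner.py --data_dir $DIR --model_type bert --model_name_or_path $DIR/model/checkpoint-500 --output_dir $DIR/model/checkpoint-500 --do_eval --do_predict --labels $DIR/labels.txt --max_seq_length 300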
Will close this for now. |
transformers | 4,270 | closed | Add MultipleChoice to TFTrainer [WIP] | Working on getting the TF trainer running on TPUs for multiple choice. I'm quite the newb so input is needed.
- [x] Add 1 GPU strategy
- [x] Args for TPU Name
- [x] Add MPC example for TFTrainer
- [x] Fix docstrings
Examples:
GPU on Swag
https://colab.research.google.com/drive/1X9d1TY1I7r8d3rCJ9vWiy0UgvJM6BqQ4
TPU on Swag (fails, haven't looked into it, might be my mistake):
https://colab.research.google.com/drive/1X9d1TY1I7r8d3rCJ9vWiy0UgvJM6BqQ4#scrollTo=2r1Rng8NWzit
Glue on TPU (fails, havent looked into it, might be my mistake)
https://colab.research.google.com/drive/1f2x5wPSaBV0NbGCpMyUPXGUSMEGjSHeX#scrollTo=YDhXmegN3WWj | 05-10-2020 20:56:57 | 05-10-2020 20:56:57 | Hey!
Thanks a lot for your contribution on the trainer :+1:
Your PR looks great! Just have a small comment, see above. Don't hesitate if you have questions!<|||||>#4252
```
05/11/2020 19:59:05 - INFO - oauth2client.transport - Attempting refresh to obtain initial access_token
INFO:tensorflow:Found TPU system:
05/11/2020 19:59:05 - INFO - tensorflow - Found TPU system:
INFO:tensorflow:*** Num TPU Cores: 8
05/11/2020 19:59:05 - INFO - tensorflow - *** Num TPU Cores: 8
INFO:tensorflow:*** Num TPU Workers: 1
05/11/2020 19:59:05 - INFO - tensorflow - *** Num TPU Workers: 1
INFO:tensorflow:*** Num TPU Cores Per Worker: 8
05/11/2020 19:59:05 - INFO - tensorflow - *** Num TPU Cores Per Worker: 8
```
So its grabbing by name now by specifying --tpu_name<|||||>How do I see what fails with the code quality in __init__.py? I do run make style hehe
Great work with the trainer @jplu I saw your other plans for it. Would be very neat.
Ill mess around with the TFdataset and tpus on another branch, i need to read up on some stuff. Is there anywhere i can see the plans for a framework agnostic approach for the datasets?<|||||>Thanks :)
For the style I always do it manually in this following order:
```
isort --recursive examples templates tests src utils
black --line-length 119 --target-version py35 examples templates tests src utils
flake8 examples templates tests src utils
```
Then git add/commit/push.
What is currently working in your MPC example? Single GPU and CPU?<|||||>> Thanks :)
>
> For the style I always do it manually in this following order:
>
> ```
> isort --recursive examples templates tests src utils
> black --line-length 119 --target-version py35 examples templates tests src utils
> flake8 examples templates tests src utils
> ```
>
> Then git add/commit/push.
>
> What is currently working in your MPC example? Single GPU and CPU?
Yep only tested on my trusty rtx2080 and in colab on gpu.<|||||>Very Good!
You should also update the examples [README](https://github.com/huggingface/transformers/blob/master/examples/README.md) to add the check to the multiple_choice TFTrainer support :wink: <|||||>Can you also add a TF section in the multiple_choice [README](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice) plz<|||||>> Can you also add a TF section in the multiple_choice [README](https://github.com/huggingface/transformers/tree/master/examples/multiple-choice) plz
I should sleep on this. Thanks alot for the help!<|||||>Looks very good! Let me know once you think it can be merged.<|||||>@ViktorAlm I think it's great if you link to your Colab! Maybe just save it to a repo so that one can take a look at the history (and open issues if it ever breaks:)<|||||>None of the failing flake8 code quality seems to be related to me. If this works I think its done for now.<|||||>Looks good to me!
@julien-c and @LysandreJik should double check and merge if it seems good for them.<|||||>Maybe add a link to the syn dataset you added. It's [that one](https://aclweb.org/aclwiki/TOEFL_Synonym_Questions_(State_of_the_art)) right?<|||||>In the Colab ideally we would expose/expand a little bit more what happens in the example scripts (see what we did in the Colab for GLUE for instance), i.e. the calls to Trainer/TFTrainer.
We can improve/expand this in the future though @ViktorAlm @jplu <|||||>> Maybe add a link to the syn dataset you added. It's [that one](https://aclweb.org/aclwiki/TOEFL_Synonym_Questions_(State_of_the_art)) right?
Its my own dataset which ill release once I'm done |
transformers | 4,269 | closed | [Jax] top_k_top_p | 05-10-2020 20:34:23 | 05-10-2020 20:34:23 | ||
transformers | 4,268 | closed | KlDivBackward | Hi,
I fine-tune BERT and use KLDivLoss instead of CrossEntropyLoss in binary classification. When I call
`loss.backward()`
I get this error:

Thank you in advance | 05-10-2020 19:34:14 | 05-10-2020 19:34:14 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,267 | closed | useless chg | 05-10-2020 18:02:04 | 05-10-2020 18:02:04 | ||
transformers | 4,266 | closed | [CI] remove run_tests_torch_and_tf | Problem:
This job is rerunning all tests, after `run_tests_torch` and `run_tests_tf` with both packages installed. Meaning that it ~doubles the runtime of CI and doesn't find very many bugs.
Solution: This changes `run_tests_torch_and_tf` to only run the tests that require both TF and torch | 05-10-2020 17:59:01 | 05-10-2020 17:59:01 | (fixed git history locally for future PRs)<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4266?src=pr&el=h1) Report
> Merging [#4266](https://codecov.io/gh/huggingface/transformers/pull/4266?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3487be75ef8f0d51c51208f266644fc04a947085&el=desc) will **decrease** coverage by `0.84%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4266?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4266 +/- ##
==========================================
- Coverage 78.39% 77.55% -0.85%
==========================================
Files 120 120
Lines 19932 19932
==========================================
- Hits 15626 15458 -168
- Misses 4306 4474 +168
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4266?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4266/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `8.72% <0.00%> (-81.21%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4266/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `85.71% <0.00%> (-10.00%)` | :arrow_down: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4266/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.19% <0.00%> (-2.63%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4266/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `73.44% <0.00%> (-2.30%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4266/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.41% <0.00%> (-1.38%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4266/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.51% <0.00%> (-0.58%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4266/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (ΓΈ)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4266?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4266?src=pr&el=footer). Last update [3487be7...4aab0f5](https://codecov.io/gh/huggingface/transformers/pull/4266?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I found two common tests that require both packages:
```
test_batch_encode_plus_tensors
test_pt_tf_model_equivalence
```
So I run test_pt_tf_model_equivalence for all test_modeling_tf* files, and
test_batch_encode_plus_tensors for all files.
Still saves quite a bit of time.
<|||||>Kinda reluctant to do this change because then we need to manually maintain over time a whitelist of tests, which complicates the (already convoluted) CI setup. I'm also worried we might miss some side-effects of having both frameworks installed.
What time do those tests take?<|||||>6 mins / 13 mins, at least once per PR.
I think it's a good tradeoff.
- we are losing 0 coverage.
- people rarely add new tests that check equivalence of torch and tf. We would rarely need to update the whitelist.
- For most bugs, we would all be able to reproduce it locally where we have both frameworks installed.
Don't feel too strongly, of course.
- Also happy to move the old job to less often then every push.<|||||>What does 6 mins / 13 mins mean?<|||||>Before this change `run_test_torch_and_tf` takes 6 mins, and the whole CircleCI suite takes 13mins.<|||||>How do you measure the whole CircleCI suite, given that the tests are (AFAIK) parallelized?<|||||>By looking at circleci output like [this](https://app.circleci.com/pipelines/github/huggingface/transformers/5174/workflows/3aecb0e9-14f0-4762-ad6a-73b60db2d626) and guessing
<|||||>[This](https://app.circleci.com/pipelines/github/huggingface/transformers/6581/workflows/b114b306-2553-42cf-bb67-ba507e051929) is after pipelines improvements, so I may be overestimating the benefit of the change. Although it seems ridiculous that `check_code_quality` took 3 mins.<|||||>@julien-c seems like we don't want to do this? Happy to abandon. |
transformers | 4,265 | closed | [DOCS] BertModel documentation doesn't mention the usage of tokenizers for creating input_ids | Documentation of https://huggingface.co/transformers/model_doc/bert.html#transformers.BertModel.forward currently doesn't mention the usage of `tokenizers.BertWordPieceTokenizer` for instance when I am using BertModel.
| 05-10-2020 17:53:39 | 05-10-2020 17:53:39 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,264 | closed | Align sentiment-analysis' tokenizer (currently cased) to the model (uncased) | `sentiment-analysis` pipeline has a tokenizer using a cased vocab while the model is uncased.
This PR makes the tokenizer uncased to match the actual model's vocab. | 05-10-2020 17:22:50 | 05-10-2020 17:22:50 | LGTM, or even better we could duplicate those tokenizer files into `distilbert-base-uncased-finetuned-sst-2-english` as now we kind of assume that a model == model + tokenizer.<|||||>Merging this for now and can update to @julien-c's proposal in a future PR<|||||>Yeah, just wanted to ping @mfuntowicz on this model/tokenizer subject <|||||>Hi guys, I pulled from master after the merge of the PR but the behaviour is still the same. It doesn't look like aligning the tokenizer resolved this issue. |
transformers | 4,263 | closed | Sentiment Analysis Pipeline is predicting incorrect sentiment in 2.9 | # π Bug
Sentiment Analysis Pipeline is predicting incorrect sentiment. Everything seems to be NEGATIVE.
## To reproduce
Steps to reproduce the behavior:
Left a Colab Notebook here for reproducing: https://colab.research.google.com/drive/1hIwyBOGFpcLMUNmaKf5UYb0wPeYb4yKS?usp=sharing
`pip install -qU transformers`
`from transformers import pipeline`
`sentiment_model29 = pipeline('sentiment-analysis')`
`sentiment_model29('I hate this movie')`
Returns: [{'label': 'NEGATIVE', 'score': 0.703369677066803}]
| 05-10-2020 10:50:43 | 05-10-2020 10:50:43 | This PR should fix the problem:
https://github.com/huggingface/transformers/pull/4264
A mismatched tokenizer is hooked up with the pipeline.<|||||>yes, you are right.
I also tested all sentences both positive and negative. But it is showing NEGATIVE all the time.<|||||>> This PR should fix the problem:
> #4264
>
> A mismatched tokenizer is hooked up with the pipeline.
This PR didn't resolve the issue. I pulled from master but behaviour still the same. Have you tested?<|||||>Yes, I'm always getting NEGATIVE class regardless of the text<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Same issue...incorrect sentiment for "nlp("i hate you")" shows [{'label': 'POSITIVE', 'score': 0.536677360534668}]<|||||>in 3.1, I always get label="POSITIVE". Must be a bug somewhere<|||||>I get Positive label with high score(0.99) for neutral statements like "This is just a statement" or "My name is this".
I don't know what's wrong. I used the base example:
```python
from transformers import pipeline
sentiment_model = pipeline('sentiment-analysis')
sentiment_model('This is just a statement.')
```
transformers | 4,262 | closed | NER Models not using BIO tagging scheme | Hello,
I have noticed that most (all?) pre-trained models uploaded seem not to use the BIO tagging scheme, instead returning `I-xxx` and `O` for all categories (`B-xxx` is not used). I believe this is just caused by the way these models have been trained. This makes using these models in actual pipelines difficult as entity parsing becomes impossible (two consecutive entities cannot be differentiated from a multi-tokens entity).
Are there any plans for updating these models with a BIO labelling scheme?
@stefan-it including you in the discussion as you have uploaded standard models used in the default pipelines | 05-10-2020 08:29:36 | 05-10-2020 08:29:36 | Hi @guillaume-be , only the CoNLL-2003 models are using this IOB1 tagging scheme, because it is used in the original dataset. You could convert it (+ needs proper testing) but this requires a modification of the original dataset.
I could train models e.g. on the WikiANN corpus (see [here](https://www.aclweb.org/anthology/P17-1178/) and WikiANN with balanced train/dev/test splits [here](https://www.aclweb.org/anthology/P19-1015/)) for English, but this corpus is then just a "silver standard" :)<|||||>Hi @stefan-it ,
Thank you for your response. Assuming the CoNLL-03 model is in IOB1 tagging scheme, it would still use `B-*` labels in case of multi-token entities. For example:
```
Alex I-PER
is O
going O
to O
Los B-LOC
Angeles I-LOC
```
However, the model currently outputs the following for this example:
```
Alex I-PER
is O
going O
to O
Los I-LOC
Angeles I-LOC
```
This means that it is impossible to differentiate multi-token entities (such as Los Angeles) from multiple consecutive entities, e.g. `Alex Bob and Amy are going to Los Angeles`. So far I have never seen the model uploaded generate a `B-*` token. <|||||>`B-*` is only used when an entity follows right after another entity, e.g. different political parties, like this example from the training dataset:
```bash
MAY NNP I-NP O
1996 CD I-NP O
CDU NNP I-NP I-ORG
/ SYM O I-ORG
CSU NNP I-NP I-ORG
SPD NNP I-NP B-ORG
FDP NNP I-NP B-ORG
Greens NNP I-NP B-ORG
PDS NNP I-NP B-ORG
```
Just have a look at the last column. `CDU/CSU` can be considered as one entity, but right after it another political party `SPD` is followed, so it is tagged with a beginning `B-` :)<|||||>@stefan-it thank you for the clarification!<|||||>@stefan-it are there plans to [merge this pr](https://github.com/huggingface/transformers/pull/3957)? it could help with this issue. i have it myself as i need to pool entity embeddings. https://github.com/huggingface/transformers/issues/3548<|||||>@petulla Should be merged soon π |
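To make the IOB1 behaviour described above concrete, here is a small illustrative sketch (assumed helper, not code from the repository) of how token/tag pairs can be grouped into entities when `I-` can open an entity and `B-` only marks the boundary between two adjacent entities of the same type:

```python
def iob1_to_entities(tokens, tags):
    """Group (token, tag) pairs into entities under the IOB1 convention."""
    entities, current = [], None
    for token, tag in zip(tokens, tags):
        if tag == "O":
            if current is not None:
                entities.append(current)
            current = None
            continue
        prefix, label = tag.split("-", 1)
        # start a new entity on B-, on a label change, or right after O
        if current is None or prefix == "B" or current["label"] != label:
            if current is not None:
                entities.append(current)
            current = {"label": label, "tokens": [token]}
        else:
            current["tokens"].append(token)
    if current is not None:
        entities.append(current)
    return entities

print(iob1_to_entities(["Alex", "is", "going", "to", "Los", "Angeles"],
                       ["I-PER", "O", "O", "O", "I-LOC", "I-LOC"]))
# [{'label': 'PER', 'tokens': ['Alex']}, {'label': 'LOC', 'tokens': ['Los', 'Angeles']}]
```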
transformers | 4,261 | closed | I am training a bert model for question classification task, now it is a binary classifier. 0 for Descriptive question and 1 for not descriptive. While testing I am getting the following error :- | I have generated my model with the help of around 1700 questions
But when I am testing it for one question at a time and running
outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_masks)
It is giving me the following error
ValueError: Wrong shape for input_ids (shape torch.Size([1])) or attention_mask (shape torch.Size([1]))
b_input_ids are the padded ids of the input question.
b_input_masks is the attention mask for b_input_ids.
Any idea how this error can be resolved?
| 05-10-2020 08:14:04 | 05-10-2020 08:14:04 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>use torch.unsqueeze(0) on b_input_ids,b_input_masks so that they become of shape (batch_size(=1),dim)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
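A self-contained sketch of the fix suggested above: the model expects a batch dimension, so a single example must have shape `(1, seq_len)`. The model name and example text below are placeholders:

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.eval()

ids = tokenizer.encode("Is this a descriptive question?", add_special_tokens=True)
b_input_ids = torch.tensor(ids).unsqueeze(0)   # (1, seq_len) instead of (seq_len,)
b_input_masks = torch.ones_like(b_input_ids)   # attention mask with the same shape

with torch.no_grad():
    logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_masks)[0]
print(logits.shape)  # torch.Size([1, 2])
```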
|
transformers | 4,260 | closed | Strange Behaviour using Transformer pipeline with FLASK | # ❓ Questions & Help
I am building a web front-end with Flask to run the Transformers sentiment-analysis pipeline over the text columns of an Excel file.
However, the first attempt at running the Flask app gives the following error:
ModuleNotFoundError: No module named 'tensorflow_core._api'
After a few days without success, and as mentioned in other threads, the error can be avoided (or rather ignored) by changing the Flask debug flag from True to False, as below:
app.run(host='0.0.0.0', debug=False)
It then runs: the web page opens, the Excel sheet is uploaded to the server, and the Transformers pipeline is called.
Now I am facing very strange behaviour: the pipeline seems to work, but its results are different from a standalone run.
For instance, a standalone run gives:
Downloading: 100%|██████████| 230/230 [00:00<00:00, 255kB/s]
amazing job
----------------
[{'label': 'POSITIVE', 'score': 0.99987316}]
sounds misleading
----------------
[{'label': 'NEGATIVE', 'score': 0.9996892}]
amazing job
----------------
[{'label': 'POSITIVE', 'score': 0.99987316}]
sounds cheap
----------------
[{'label': 'NEGATIVE', 'score': 0.99980205}]
Uninteresting.
----------------
[{'label': 'NEGATIVE', 'score': 0.9997152}]
This is an Interesting item.
----------------
[{'label': 'POSITIVE', 'score': 0.99984246}]
very good
----------------
[{'label': 'POSITIVE', 'score': 0.999852}]
this is inappropriate for you
----------------
[{'label': 'NEGATIVE', 'score': 0.99970394}]
this is appropriate for you
----------------
[{'label': 'POSITIVE', 'score': 0.99979025}]
very bad
----------------
[{'label': 'NEGATIVE', 'score': 0.99978846}]
The same inputs, after calling the pipeline from the Flask app, give:
amazing job | 0.793442070484161 | NEGATIVE
sounds misleading | 0.940393328666687 | POSITIVE
amazing job | 0.793442070484161 | NEGATIVE
sounds cheap | 0.934843122959137 | NEGATIVE
Uninteresting. | 0.995108544826508 | NEGATIVE
This is an Interesting item. | 0.946804642677307 | NEGATIVE
very good | 0.77636182308197 | POSITIVE
this is inappropriate for you | 0.945333242416382 | NEGATIVE
this is appropriate for you | 0.942687690258026 | NEGATIVE
very bad | 0.623483657836914 | NEGATIVE
Your help and comments are greatly appreciated.
| 05-10-2020 05:48:15 | 05-10-2020 05:48:15 | Hi @nimahm2020,
Thanks for reporting.
It seems the tokenizer used for the default Sentiment Analysis pipeline was not aligned with the actual model. More precisely, the model is using an **uncased** vocab while the tokenizer is **cased**.
I've opened a PR(#4264) to solve the issue, if you can/want to give it a try and let us know if it solves the issue :).<|||||>> Hi @nimahm2020,
>
> Thanks for reporting.
> It seems the tokenizer used for the default Sentiment Analysis pipeline was not aligned with the actual model. More precisely, the model is using an **uncased** vocab while the tokenizer is **cased**.
>
> I've opened a PR(#4264) to solve the issue, if you can/want to give it a try and let us know if it solves the issue :).
Hi @mfuntowicz
In terms of accuracy, I accidentally found that Transformers 2.8 provides the same accuracy as before. It seems that nlp = pipeline('sentiment-analysis') in Transformers 2.9 performs a bit worse.
In brief, Transformers only works with Flask after the following modifications, and I have no clue why they should be needed:
1- run the Flask app with debug=False
2- install Transformers 2.8 (NOT the latest version, 2.9)
A deeper explanation and comments are always appreciated.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,259 | closed | Fix bug in sample code | 05-09-2020 22:13:16 | 05-09-2020 22:13:16 | It seems this example was not updated after #3734. This is not the correct way to do it, as it now requires the user to specify the entire context all the time. Will open a PR now.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
|
transformers | 4,258 | closed | Model call with `inputs_embeds` does not match that with `input_ids` | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using: my own modified scripts: (details below)
## To reproduce
Steps to reproduce the behavior:
```
import tensorflow as tf
from transformers import BertConfig, BertTokenizer, TFBertModel
bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
input_ids = tf.constant(bert_tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]
attention_mask = tf.stack([tf.ones(shape=(len(sent),)) for sent in input_ids])
token_type_ids = tf.stack([tf.ones(shape=(len(sent),)) for sent in input_ids])
config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)
bert_model = TFBertModel.from_pretrained('bert-base-uncased', config=config)
result = bert_model(inputs={'input_ids': input_ids,
'attention_mask': attention_mask,
'token_type_ids': token_type_ids})
inputs_embeds = result[-1][0]
result2 = bert_model(inputs={'inputs_embeds': inputs_embeds,
'attention_mask': attention_mask,
'token_type_ids': token_type_ids})
print(tf.reduce_sum(tf.abs(result[0] - result2[0]))) # 458.2522, should be 0
```
## Expected behavior
The HuggingFace BERT TensorFlow implementation allows us to feed in a precomputed embedding in place of the embedding lookup that is native to BERT. This is done using the model's `call` method's optional parameter `inputs_embeds` (in place of `input_ids`). To test this out, I wanted to make sure that if I did feed in BERT's embedding lookup, I would get the same result as having fed in the `input_ids` themselves.
The result of BERT's embedding lookup can be obtained by setting the BERT configuration parameter `output_hidden_states` to `True` and extracting the first tensor from the last output of the call method. (The remaining 12 outputs correspond to each of the 12 hidden layers.)
Thus, I wrote the above code to test my hypothesis. The output of the `call` method is a tuple, and the first element of this tuple is the output of the last layer of BERT. Thus, I expected `result[0]` and `result2[0]` to match. Why is this not the case?
## Environment info
- `transformers` version: 2.5.1
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-debian-stretch-sid
- Python version: 3.6.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.1.0 (True)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-09-2020 21:11:42 | 05-09-2020 21:11:42 | I was incorrectly treating the output of the first layer as the actual embedding lookup when it is in fact the embedding lookup *plus* positional and token type embeddings. See [here](https://stackoverflow.com/a/61704348/424306) for more details. |
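For reference, a minimal PyTorch sketch of the point made in the resolution above: `inputs_embeds` expects the raw word-embedding lookup (e.g. via `get_input_embeddings()`), not the first entry of `hidden_states`, since position and token-type embeddings are added inside the model. Shown with the PyTorch classes for brevity; the TF attribute names differ slightly.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")
model.eval()  # disable dropout so both passes are deterministic

input_ids = torch.tensor([tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)])

# word-embedding lookup only; the model adds position/token-type embeddings itself
word_embeds = model.get_input_embeddings()(input_ids)

out_from_ids = model(input_ids=input_ids)[0]
out_from_embeds = model(inputs_embeds=word_embeds)[0]
print(torch.sum(torch.abs(out_from_ids - out_from_embeds)))  # ~0
```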
transformers | 4,257 | closed | added functionality for electra classification head | Added functionality for an electra classification head following the google repo.
Tested and trains fine with the run_glue script. I was testing on my own repurposed classification problems so i didnt evaluate the actual GLUE accuracies, but can confirm that it is getting competitive results on my own problems.
Saw a few requests for this, thought it could benefit the community.
Originally I used a tanh activation in the classification head. (which in my tests reliably performed better than a gelu activation) But looking over the google code, it seems that they used gelu, so to keep the integrity of their paper im committing gelu here.
I also opted to make a ClassificationHead class instead of integrating a pooling layer in the base model class because i wasn't sure if that would affect other things, or run time in non sequence-classification tasks. | 05-09-2020 19:33:42 | 05-09-2020 19:33:42 | This is great, thanks @liuzzi!
We would need to add the corresponding tests. Can you do it, or would you like me to do it? If so, may I push directly on your fork?<|||||>@LysandreJik sure! i just invited you to collaborate. I haven't been inside the transformers testing code so it'd probably be much faster for you to do. Let me know if you have any questions/if i can help.<|||||>I just pushed the tests on it, thanks for your contribution!<|||||>Pushed a style change, rebased and merged into master. Thanks @liuzzi ! |
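As an illustration of the kind of head being discussed (a sketch only, not the exact code merged in this PR): take the first-token hidden state, project it, apply GELU and dropout, then project to the label space.

```python
import torch.nn as nn

class ElectraStyleClassificationHead(nn.Module):
    """Sketch of a classification head for a model without a pooler."""
    def __init__(self, hidden_size, num_labels, dropout_prob=0.1):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.activation = nn.GELU()
        self.dropout = nn.Dropout(dropout_prob)
        self.out_proj = nn.Linear(hidden_size, num_labels)

    def forward(self, sequence_output):
        x = sequence_output[:, 0]            # hidden state of the first ([CLS]) token
        x = self.dropout(x)
        x = self.activation(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)
```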
transformers | 4,256 | closed | Vaswani's Transformer (Encoder-Decoder) decoding methods | Hi,
I recently studied this blog https://huggingface.co/blog/how-to-generate for using different decoding methods with PreTrained Models.
I would like to ask if it is possible to use those methods with the simple Encoder-Decoder Transformer model (as mentioned in Vaswani's paper) (as it is available in pytorch library "torch.nn") or I have to implement those decoding methods by my own.
Thank you in advance. | 05-09-2020 18:50:15 | 05-09-2020 18:50:15 | Hello @manzar96
The `.generate` method can be used with any HF model with a LM head on top.<|||||>Ok thank you.
So it can be used only with those models https://pytorch.org/hub/huggingface_pytorch-transformers/ (having a LM head on top) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
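A minimal sketch of the `.generate()` usage mentioned above, using a model with an LM head (the model name and decoding parameters are arbitrary choices):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")
model = GPT2LMHeadModel.from_pretrained("distilgpt2")

input_ids = tokenizer.encode("The transformer architecture", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=30, num_beams=5,
                            no_repeat_ngram_size=2, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```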
|
transformers | 4,255 | closed | bert uncased models needs lower casing option turned on in the example code. | The command to replicate bert-base-uncased model and bert-large-uncased-whole-word-masking model results on Squad requires --do_lower_case to be used. If not, the result is low. | 05-09-2020 15:53:27 | 05-09-2020 15:53:27 | Those models should contain config that initializes their tokenizer with the correct `do_lower_case=True` option. Is this not the case?
i.e. we are moving the tokenizer configuration out of those example scripts (see updated scripts for `Trainer` at https://github.com/huggingface/transformers/tree/master/examples).<|||||>I haven't gone through the recent update sorry.
It looks like the [examples page](https://github.com/huggingface/transformers/tree/master/examples/) has completely changed now.
But in the folder of [question-answering](https://github.com/huggingface/transformers/tree/master/examples/question-answering), the configuration to replicate results still remains the same.
Shall I modify this there and create a request ?
Because the `do_lower_case` argument is a `store_true` argument and not a `boolean` with default as `true`.
I spent 3 runs to realise the error because it worked correctly with cased models.
Reference https://github.com/huggingface/transformers/blob/4658896ee105c504eba23db2efefc5fe14191b81/examples/question-answering/run_squad.py#L579
<|||||>Closing this b/c #4245 was merged
(we still need to investigate why the lowercasing is not properly populated by the model's config)
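A quick sketch of how to check the lowercasing behaviour on the tokenizer side (just an illustration of the flag, not the SQuAD script itself):

```python
from transformers import BertTokenizer

tok_lower = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
tok_cased = BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=False)

print(tok_lower.tokenize("SQuAD Needs Lowercasing"))  # pieces come out lower-cased
print(tok_cased.tokenize("SQuAD Needs Lowercasing"))  # mixed case matches the uncased vocab poorly
```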
|
transformers | 4,254 | closed | 'utf-8' codec can't decode byte 0x80 in position 229888: invalid start byte | os : google colab
from transformers import *
tokenizer = AutoTokenizer.from_pretrained('https://s3-us-west-2.amazonaws.com/ai2-s2-research/scibert/huggingface_pytorch/scibert_scivocab_uncased.tar')

| 05-09-2020 14:53:03 | 05-09-2020 14:53:03 | I don't think you can load .tars directly. Simply use the name as described in [the model page](https://huggingface.co/allenai/scibert_scivocab_uncased), and it should work.
```python
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
```<|||||>Thank You @BramVanroy .
Its working. |
transformers | 4,253 | closed | Conversion script to export transformers models to ONNX IR. | This PR adds conversion script to export our models to ONNX IR.
Plan is to support both PyTorch and TensorFlow:
- [x] PyTorch
- [x] TensorFlow
TensorFlow currently blocked because of an issue in the conversion script provided by ONNX (seems fixed on master : https://github.com/onnx/tensorflow-onnx/issues/876)
_PS: PR recreated because the first one got corrupted (#4176)._ | 05-09-2020 13:30:48 | 05-09-2020 13:30:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4253?src=pr&el=h1) Report
> Merging [#4253](https://codecov.io/gh/huggingface/transformers/pull/4253?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c558761ad11beba5e70dd505e1c97bdc4eb6ee9c&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4253?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4253 +/- ##
=======================================
Coverage 78.16% 78.16%
=======================================
Files 120 120
Lines 20005 20005
=======================================
Hits 15637 15637
Misses 4368 4368
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4253?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4253?src=pr&el=footer). Last update [c558761...c558761](https://codecov.io/gh/huggingface/transformers/pull/4253?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>For the notebook:
```
"environ[\"OMP_NUM_THREADS\"] = str(cpu_count(logical=True))\n",
"environ[\"OMP_WAIT_POLICY\"] = 'ACTIVE'
```
It is better to place the environ setting before importing onnxruntime. I used to find that placing these settings after importing onnxruntime does not work.
```
options.execution_mode = ExecutionMode.ORT_SEQUENTIAL
```
ORT_SEQUENTIAL is the default value so the line is optional.<|||||>@tianleiwu I agree with your regarding optional argument, especially GPT2 and the past context which might provide a significant boost in performance.
For the scope of this release, I decided to not provide support for this, as it would have required much more thinking / changes in the library.
We might have a look at possible solutions for this in short/mid term π |
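A minimal sketch of the ordering point above: the OMP environment variables must be set before `onnxruntime` is imported (the model path is a placeholder for a file produced by the conversion script):

```python
from os import environ
from psutil import cpu_count

# Set threading-related variables *before* importing onnxruntime, as suggested above.
environ["OMP_NUM_THREADS"] = str(cpu_count(logical=True))
environ["OMP_WAIT_POLICY"] = "ACTIVE"

from onnxruntime import InferenceSession, SessionOptions

options = SessionOptions()  # ExecutionMode.ORT_SEQUENTIAL is already the default
session = InferenceSession("onnx/bert-base-cased.onnx", options)  # placeholder path
```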
transformers | 4,252 | closed | How to specify a TPU IP for run_tf_glue.py? | Hi team!
I have a VM on Google Cloud and a TPU. When running your example script `run_tf_glue.py` there is no option for specifying a TPU IP. It seems like the TPU cannot be auto-discovered and then training falls back to training on CPU π’ | 05-09-2020 13:20:41 | 05-09-2020 13:20:41 | TPUs can be autodiscovered if you use them through AI-platform by running the trainer with `gcloud ai-platform jobs submit training ...` This is the advised usage of TPUs I put in the trainer.
Nevertheless if your want to use it through its name, this is done in the following PR #4270 thanks to @ViktorAlm<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,251 | closed | Encoder - Decoder loading wrong weights and missing decoders | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Roberta as encoder and Bert as decoder
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. https://github.com/ThilinaRajapakse/simpletransformers/pull/360
## Expected behavior
Predictions for the same string before saving and after loading the saved model are completely different.
I expected to get the same results both times.
Furthermore, your tutorial on Medium is not working because of missing decoder models.
Link to medium: https://medium.com/huggingface/encoder-decoders-in-transformers-a-hybrid-pre-trained-architecture-for-seq2seq-af4d7bf14bb8
There is missing an argument in the forward function:
`TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states'`
## Environment info
- `transformers` version:
- Platform: Linux
- Python version: 3.7
- PyTorch version (GPU?): GPU 1.4.0
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 05-09-2020 12:53:43 | 05-09-2020 12:53:43 | |
transformers | 4,250 | closed | [Request] NER Scripts on CoNLL 2003 dataset | # 🚀 Feature request
## Motivation
The current NER scripts (data preparation, training, and evaluation) are for the GermEval 2014 dataset.
It is very nice. However, according to the original BERT paper, experiments are done on the CoNLL-2003 dataset. It would be much better if the NER script could reproduce the results on this dataset.
## Your contribution
Looking forward to the feedback. Thanks in advance!
| 05-09-2020 11:10:26 | 05-09-2020 11:10:26 | Hi @gyuwankim the problem with English CoNLL dataset is, that it is not freely available (because it requires access to Reuters corpora in order to create it from scratch with the provided scripts).
You can find the dataset somehow on GitHub, but legally this is very dubious (and you have no guarantee that it is the original dataset).
That's the reason why I did use the GermEval 2014 dataset :)<|||||>But what you think about adding a detailed description of training a NER model on the WNUT17 dataset?
Data can be accessed here: https://github.com/juand-r/entity-recognition-datasets/tree/master/data/WNUT17
I could add a short example with some resuls (for other Transformer-based models as well). So we have both German and an English example of how to use the NER script :)
/cc @julien-c <|||||>Additionally, the BERT evaluation strategy cannot be simply reproduced, see this [thread](https://github.com/allenai/allennlp/pull/2067#issuecomment-443961816) on the problem.
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,249 | closed | [docs] fix typo | replace sequence classification with question answering in AutoModelForQuestionAnswering class | 05-09-2020 10:16:42 | 05-09-2020 10:16:42 | |
transformers | 4,248 | closed | sentiment analysis pipeline provides option to output the original rating than positive/negative tags | It'd be helpful to provide an option for sentiment analysis pipeline output the original ratings rather than the positive/negative tags.
Also the documentation doesn't really explain what the output scores mean.. | 05-09-2020 10:16:16 | 05-09-2020 10:16:16 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,247 | closed | removed variable api from pplm example | Same as previous [PR](https://github.com/huggingface/transformers/pull/4156) | 05-09-2020 07:03:53 | 05-09-2020 07:03:53 | |
transformers | 4,246 | closed | Structuring Dataset for Fine Tuning GPT-2 with Song Lyrics? | I'm looking to fine tune a GPT-2 model with several songs from the same artist. However, I'm not exactly sure how I should be feeding nearly 300 songs into the model to learn from.
Currently, I'm breaking the lyrics into blocks of 100 and feeding them into my model:
```
lyrics = open('/JayZ_lyrics.txt', "r", encoding="utf-8").read()
lyrics = lyrics.split('\n')
lyrics = ''.join(lyrics)
tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenized_text = tokenizer.encode(lyrics)
examples = []
block_size = 100
for i in range(0, len(tokenized_text)-block_size+1, block_size):
examples.append(tokenized_text[i:i+block_size])
inputs, labels = [], []
for ex in examples:
inputs.append(ex[:-1])
labels.append(ex[1:])
```
Then, I feed my dataset into the model and train on it.
```
dataset= tf.data.Dataset.from_tensor_slices((inputs,labels))
BATCH_SIZE = 16
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
optimizer = tf.keras.optimizers.Adam(epsilon=1e-08, clipnorm=1.0)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
metric = tf.keras.metrics.SparseCategoricalAccuracy('accuracy')
model = TFGPT2LMHeadModel.from_pretrained('distilgpt2')
model.compile(optimizer=optimizer, loss=[loss, None, None, None, None, None, None], metrics=[metric])
model.fit(dataset, epochs=4, callbacks=[LRFinder()])
```
Is this the right approach? Or, should I turn every song into a vector, find the len of longest song, pad the other songs to keep uniformity amongst vector lengths, feed in dataset.
Also, should I include a BOS and/or EOS token `<|endoftext|>` for each block that I create? | 05-09-2020 03:06:09 | 05-09-2020 03:06:09 | By the way, did this approach work for you? I tried a similar one and I couldn't stop getting an error about incompatible output shapes when calculating sparse categorical accuracy.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Any solution or suggestions yet? @1337-Pete |
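One common approach to the EOS question above (an assumption, not an official recommendation) is to join the songs with the tokenizer's `eos_token` before chunking, so block boundaries still see document separators:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("distilgpt2")

songs = ["lyrics of the first song ...", "lyrics of the second song ..."]  # placeholder lyrics
text = tokenizer.eos_token.join(songs) + tokenizer.eos_token  # eos_token is "<|endoftext|>"
tokenized_text = tokenizer.encode(text)
```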
transformers | 4,245 | closed | Add back option --do_lower_case in SQuAD for uncased models | The option `--do_lower_case` is currently needed by the uncased models (i.e., bert-base-uncased, bert-large-uncased). The results w/ and w/o this option are shown below.
The option was deleted in the current v2.9.0 version so this PR tries to add it back. This would not affect the cased models (i.e., RoBERTa, XLNet).
Results of SQuAD v1.1:
bert-base-uncased **without** `--do_lower_case`: 'exact': 73.83, 'f1': 82.22
bert-base-uncased **with** `--do_lower_cas`e: 'exact': 81.02, 'f1': 88.34 | 05-09-2020 02:37:49 | 05-09-2020 02:37:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4245?src=pr&el=h1) Report
> Merging [#4245](https://codecov.io/gh/huggingface/transformers/pull/4245?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7b75aa9fa55bee577e2c7403301ed31103125a35&el=desc) will **increase** coverage by `0.42%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4245?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4245 +/- ##
==========================================
+ Coverage 78.39% 78.82% +0.42%
==========================================
Files 120 120
Lines 19925 19925
==========================================
+ Hits 15620 15705 +85
+ Misses 4305 4220 -85
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4245?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.48% <0.00%> (+0.11%)` | :arrow_up: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/4245/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `56.27% <0.00%> (+27.65%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4245?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4245?src=pr&el=footer). Last update [7b75aa9...f1caefd](https://codecov.io/gh/huggingface/transformers/pull/4245?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,244 | closed | Allow gpt2 to be exported to valid ONNX | Problem: gpt2 model and other models using gelu_new or gelu_fast cannot be exported to valid ONNX model.
Causes:
PyTorch will export some constants like 1 or 2 as int64, while ONNX operators like [Add](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Add) and [Pow](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Pow) requires data type matches between two inputs. It is invalid to have one input as float and another as int64.
For GPT2 model, another issue is the ONNX operator [Where](https://github.com/onnx/onnx/blob/master/docs/Operators.md#Where). It requires the first input to be bool tensor but currently it is uint8 tensor.
Add explicit data type cast could solve these problems. It is verified in exporting 3 pretrained models to ONNX model and loading by ONNX Runtime: albert-base-v2, distilgpt2, gpt2. | 05-09-2020 00:39:14 | 05-09-2020 00:39:14 | @LysandreJik failing unittest seems unrelated to the change-set introduced in this PR<|||||>Indeed @mfuntowicz |
transformers | 4,243 | closed | Distributed eval: SequentialDistributedSampler + gather all results | As discussed in #3944
In our [**`Trainer`**](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py), in torch.distributed mode, eval/predict are currently running on a single node. This PR attempts to fix this.
This relies on:
- a custom `DistributedSampler` named SequentialDistributedSampler that makes it easier to collate inference results at the end of the loop
- some tricky code to gather numpy arrays back to a single node (or to all of them).
I might be missing something simpler here. For instance in `pytorch/xla` there seems to be a [built-in function to do this](https://github.com/huggingface/transformers/blob/7b75aa9fa55bee577e2c7403301ed31103125a35/src/transformers/trainer.py#L656-L659).
Please let me know if that's the case π | 05-09-2020 00:18:17 | 05-09-2020 00:18:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4243?src=pr&el=h1) Report
> Merging [#4243](https://codecov.io/gh/huggingface/transformers/pull/4243?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2d05480174c736399c212b3d60b8774fbbf2b6c2&el=desc) will **decrease** coverage by `0.13%`.
> The diff coverage is `32.05%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4243?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4243 +/- ##
==========================================
- Coverage 78.15% 78.02% -0.14%
==========================================
Files 120 120
Lines 20053 20105 +52
==========================================
+ Hits 15673 15686 +13
- Misses 4380 4419 +39
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4243?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.67% <32.05%> (-1.78%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4243/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.48% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4243?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4243?src=pr&el=footer). Last update [2d05480...3e32aea](https://codecov.io/gh/huggingface/transformers/pull/4243?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
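For readers curious what gathering evaluation results across processes can look like, a hedged sketch of the tensor-gathering idea from the PR description (simplified; the actual `Trainer` code may differ):

```python
import torch
import torch.distributed as dist

def distributed_concat(tensor, num_total_examples):
    """Gather a tensor from every process and trim the padding that a
    distributed sampler adds to make the shards evenly sized."""
    output_tensors = [tensor.clone() for _ in range(dist.get_world_size())]
    dist.all_gather(output_tensors, tensor)
    concat = torch.cat(output_tensors, dim=0)
    return concat[:num_total_examples]
```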
|
transformers | 4,242 | closed | Reformer self attention mask value not converted in apex half precision | # 🐛 Bug
## Information
Model I am using Reformer:
Language I am using the model on English:
The problem arises when using:
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Install the latest transformers 2.9.0 and apex
```
from transformers import ReformerModel, ReformerTokenizer
import torch
from apex import amp
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerModel.from_pretrained('google/reformer-crime-and-punishment').cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0).cuda() # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
## Expected behavior
```
File "/home/model/env/lib/python3.7/site-packages/transformers/modeling_reformer.py", line 840, in forward
query_key_dots = torch.where(mask, query_key_dots, mask_value)
RuntimeError: expected scalar type Half but found Float
```
- `transformers` version:
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.5.0 cuda
- Tensorflow version (GPU?): No tensorflow
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 05-08-2020 19:04:34 | 05-08-2020 19:04:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4242?src=pr&el=h1) Report
> Merging [#4242](https://codecov.io/gh/huggingface/transformers/pull/4242?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7b75aa9fa55bee577e2c7403301ed31103125a35&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4242?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4242 +/- ##
=======================================
Coverage 78.39% 78.39%
=======================================
Files 120 120
Lines 19925 19925
=======================================
+ Hits 15620 15621 +1
+ Misses 4305 4304 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4242?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/4242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `88.17% <0.00%> (ΓΈ)` | |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (ΓΈ)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4242/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.38% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4242?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4242?src=pr&el=footer). Last update [7b75aa9...a14259c](https://codecov.io/gh/huggingface/transformers/pull/4242?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,241 | closed | Reformer self attention mask value not converted in apex half precision | # 🐛 Bug
## Information
Model I am using Reformer:
Language I am using the model on English:
The problem arises when using:
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Install the latest transformers 2.9.0 and apex
```
from transformers import ReformerModel, ReformerTokenizer
import torch
from apex import amp
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerModel.from_pretrained('google/reformer-crime-and-punishment').cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=0.0001)
model, optimizer = amp.initialize(model, optimizer, opt_level='O1')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0).cuda() # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0] # The last hidden-state is the first element of the output tuple
```
## Expected behavior
```
File "/home/model/env/lib/python3.7/site-packages/transformers/modeling_reformer.py", line 840, in forward
query_key_dots = torch.where(mask, query_key_dots, mask_value)
RuntimeError: expected scalar type Half but found Float
```
- `transformers` version: 2.9.0 (latest source from github)
- Platform: Ubuntu 18.04
- Python version: 3.7
- PyTorch version (GPU?): 1.5.0 cuda
- Tensorflow version (GPU?): No tensorflow
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
| 05-08-2020 19:03:02 | 05-08-2020 19:03:02 | I have submitted a pull request in #4242 <|||||>Awesome I will take a look - thanks for the PR :-) |
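A minimal sketch of the kind of dtype fix involved (illustrative only; the PR's actual change may differ): create the mask fill value with the same dtype and device as the attention scores before calling `torch.where`.

```python
import torch

def masked_scores(query_key_dots, mask):
    # Build the fill value with the same dtype/device as the scores, so the same
    # code path works in fp32 and in fp16 (e.g. under apex O1).
    # -1e4 stands in for the model's large negative masking constant.
    mask_value = torch.tensor(-1e4, dtype=query_key_dots.dtype, device=query_key_dots.device)
    return torch.where(mask, query_key_dots, mask_value)

scores = torch.randn(1, 4, 8)       # would be torch.float16 under mixed precision
mask = torch.rand(1, 4, 8) > 0.5    # boolean mask of the same shape
print(masked_scores(scores, mask).shape)
```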
transformers | 4,240 | closed | RuntimeError: expected device cpu but got device cuda:0 | I am training a RoBERTa model and running the script examples/run_language_modeling.py
The following error occurs when I am trying to resume training.
Traceback (most recent call last):
File "examples/run_language_modeling.py", line 284, in <module>
main()
File "examples/run_language_modeling.py", line 254, in main
trainer.train(model_path=model_path)
File "/home/socian-pc1/anaconda3/envs/XformerEnv/lib/python3.6/site-packages/transformers/trainer.py", line 326, in train
optimizer.step()
File "/home/socian-pc1/anaconda3/envs/XformerEnv/lib/python3.6/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/home/socian-pc1/anaconda3/envs/XformerEnv/lib/python3.6/site-packages/transformers/optimization.py", line 155, in step
exp_avg.mul_(beta1).add_(1.0 - beta1, grad)
RuntimeError: expected device cpu but got device cuda:0
My config
python examples/run_language_modeling.py \
--train_data_file $TRAIN_FILE \
--eval_data_file $TEST_FILE \
--output_dir ./MyRobertaOutput \
--model_name_or_path ./MyRoBERTa/checkpoint-570000 \
--config_name ../xformer_output \
--tokenizer_name ../xformer_output \
--mlm \
--do_train \
--do_eval \
--line_by_line \
--learning_rate 1e-5 \
--num_train_epochs 2 \
--save_total_limit 20 \
--save_steps 5000 \
--per_gpu_train_batch_size 6 \
--warmup_steps=10000 \
--logging_steps=100 \
--gradient_accumulation_steps=4 \
--seed 666 --block_size=512
| 05-08-2020 18:44:21 | 05-08-2020 18:44:21 | Try initialize the model in the trainer script from transformers with self.model = model.cuda()<|||||>I am getting the same error. Is there work around ?
What @tebandesade mentioned didnt work out for.<|||||>I faced the same problem with RoBERTa pretraining, however inserting line model = model.cuda() before trainer in run_language_modeling.py file helped me
@tebandesade, Thank you!
<|||||>Hello! I'm having trouble reproducing that on master. Do you mind installing from source and letting me know if you still have the issue? Thank you<|||||>Hi @LysandreJik, installing from source doesn't fix the issue, though @tebandesade's suggestion works fine.
@octalpixel try editing this line, it shall work;
https://github.com/huggingface/transformers/blob/62427d0815825436fa55b43725f44776e94abb65/src/transformers/trainer.py#L145<|||||>I am getting this error too.
I think the issue is optimizers are [setup](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L242) from `self.model` which is in cpu, but the model is [moved to device](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L338) afterwards. Which is why `self.model = model.cuda()` fixes the error.
<|||||>Should be fixed on master thanks to @shaoyent . Give it a try and let us know:)<|||||>> I am getting this error too.
>
> I think the issue is optimizers are [setup](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L242) from `self.model` which is in cpu, but the model is [moved to device](https://github.com/huggingface/transformers/blob/3e0f06210646a440509efa718b30d18322d6a830/src/transformers/trainer.py#L338) afterwards. Which is why `self.model = model.cuda()` fixes the error.
It works if you didn't install transformers package, or you will need to modify the installed package file. |
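The general principle behind the fix discussed above, as a short sketch: move the model to its device before the optimizer is created, so the optimizer state lives on the same device as the parameters it updates.

```python
import torch
from transformers import AdamW, BertForMaskedLM

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

model.to(device)                                 # move first ...
optimizer = AdamW(model.parameters(), lr=1e-5)   # ... then build the optimizer
```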
transformers | 4,239 | closed | distilbert-base-uncased | # 🐛 Bug/Question
## Information
I am trying to execute:
```python
import ktrain
from ktrain import text
MODEL_NAME = 'distilbert-base-uncased'
t = text.Transformer(MODEL_NAME, maxlen=500, classes=np.unique(y_train))
```
I get the following error:
OSError: Model name 'distilbert-base-uncased' was not found in tokenizers model name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). We assumed 'distilbert-base-uncased' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.
ktrain version 0.14.6 transformers version 2.8.0
Libraries was installed using pip install. I am following the tutorial : https://towardsdatascience.com/text-classification-with-hugging-face-transformers-in-tensorflow-2-without-tears-ee50e4f3e7ed
Any help would be appreciated. | 05-08-2020 18:32:20 | 05-08-2020 18:32:20 | hi! did you find a solution? I got the same problem!<|||||>I have the same problem with a freshly installed version. Any ideas?<|||||>This can happen if there is a network or firewall issue that prevents the `transformers` library from downloading the required files. To replicate on a laptop, you can delete the cache (e.g., `~/.cache/torch/transformers`), turn off wifi, and run this:
```python
from transformers import *
tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased')
```
If following the above steps, you will see the exact same error as posted above (i.e., `OSError: Model name 'distilbert-base-uncased' was not found in tokenizers model name list `).
As a workaround for machines with download issues, you can download the tokenizer files (e.g., `vocab.txt`) and model files manually using machine with access, copy folder to machine with firewall/network issues, and then point `transformers` to the folder containing the downloaded files (e.g., `DistilBertTokenizer.from_pretrained('path/to/folder)`).
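A hedged sketch of that workaround (the local path is a placeholder for the folder you copied over, containing vocab.txt, config.json and the model weights):

```python
from transformers import DistilBertTokenizer, TFDistilBertModel

local_dir = "./distilbert-base-uncased"  # placeholder path to the copied folder
tokenizer = DistilBertTokenizer.from_pretrained(local_dir)
model = TFDistilBertModel.from_pretrained(local_dir)
```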
See [this ktrain FAQ entry](https://github.com/amaiya/ktrain/blob/master/FAQ.md#how-do-i-use-ktrain-without-an-internet-connection) for more information on workarounds/solutions on using `transformers` without internet or behind firewall. While the entry focuses on using `transformers` from within the *ktrain* library, most information applies generally to `transformers`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This is resolved if you ensure you use transformers==3.4.0 |
transformers | 4,238 | closed | [tests] make pipelines tests faster with smaller models | - speedup: 8mins -> 90s on 1 CPU
- besides `test_fill_mask` the removed coverage was not meaningful, only shapes were being checked.
- Would be happy to continue for TF or get help.
- could use these in some examples tests also.
| 05-08-2020 15:47:43 | 05-08-2020 15:47:43 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4238?src=pr&el=h1) Report
> Merging [#4238](https://codecov.io/gh/huggingface/transformers/pull/4238?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/274d850d340c0004ee63627c88c433d0e55f5876&el=desc) will **increase** coverage by `0.79%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4238?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4238 +/- ##
==========================================
+ Coverage 77.54% 78.33% +0.79%
==========================================
Files 120 120
Lines 19918 19918
==========================================
+ Hits 15445 15603 +158
+ Misses 4473 4315 -158
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4238?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (-3.61%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.36% <0.00%> (-0.61%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.17% <0.00%> (-0.20%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `40.17% <0.00%> (+0.58%)` | :arrow_up: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `81.78% <0.00%> (+1.37%)` | :arrow_up: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `75.73% <0.00%> (+2.29%)` | :arrow_up: |
| [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.81% <0.00%> (+2.62%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `95.71% <0.00%> (+10.00%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4238/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `89.93% <0.00%> (+81.20%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4238?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4238?src=pr&el=footer). Last update [274d850...4196a86](https://codecov.io/gh/huggingface/transformers/pull/4238?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Looks good to me!<|||||>please wait before merging, I'd like to take a look<|||||>Yes, @julien-c I will keep all the old integration test logic in @slow.
<|||||>(Feel free to merge when this is done) |
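A sketch of how such a tiny test model can be produced (illustrative; the checkpoints actually uploaded may use different sizes):

```python
from transformers import BertConfig, BertModel

tiny_config = BertConfig(hidden_size=32, num_hidden_layers=2,
                         num_attention_heads=2, intermediate_size=37)
tiny_model = BertModel(tiny_config)          # randomly initialised, so it loads and runs fast
tiny_model.save_pretrained("./tiny-bert")    # upload this folder under a user namespace on S3
```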
transformers | 4,237 | closed | Only self-attention returned when output_attentions = True for BERT | # 🐛 Bug
```
if self.output_attentions:
all_attentions = all_attentions + (layer_outputs[1],)
```
should be
```
if self.output_attentions:
all_attentions = all_attentions + (layer_outputs[1:],)
```
## Information
Line 412 [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py
)
I assume that there is no reason to exclude cross-attention from all_attentions.
| 05-08-2020 14:46:26 | 05-08-2020 14:46:26 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,236 | closed | Pipelines: use tokenizer.max_len | fix #4224 by passing tokenizer.max_len to `tokenizer.batch_encode_plus`
| 05-08-2020 14:29:26 | 05-08-2020 14:29:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4236?src=pr&el=h1) Report
> Merging [#4236](https://codecov.io/gh/huggingface/transformers/pull/4236?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cf08830c2868c5335b1657f95779f370862e3a28&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4236?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4236 +/- ##
==========================================
- Coverage 78.38% 78.37% -0.02%
==========================================
Files 120 120
Lines 19918 19918
==========================================
- Hits 15613 15611 -2
- Misses 4305 4307 +2
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4236?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.56% <ΓΈ> (+0.19%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.48% <0.00%> (-0.49%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4236/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4236?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4236?src=pr&el=footer). Last update [cf08830...158cd06](https://codecov.io/gh/huggingface/transformers/pull/4236?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This PR is actually very related to this issue: #4501<|||||>Abandoning. |
transformers | 4,235 | closed | Error importing Reformer model | # 🐛 Bug
## Information
I am trying to import the Reformer model as in here:
https://huggingface.co/transformers/model_doc/reformer.html
```python
from transformers import ReformerModel, ReformerConfig
```
However, I am getting the following error:
ImportError: cannot import name 'ReformerModel' from 'transformers'
The problem arises when using:
* [ β] the official example scripts: (give details below)
* [ X ] my own modified scripts: (give details below)
- `transformers` version: 2.8.0
| 05-08-2020 14:13:01 | 05-08-2020 14:13:01 | I think you only need to update the library.
Version 2.9, released yesterday, contains the Reformer model.<|||||>Thanks,
Installing Version 2.9.0 solved the issue |
transformers | 4,234 | closed | [infra] make a tiny "distilroberta-base" to speed up test_pipelines.py and test_examples.py | put it in a user namespace on s3 like `sshleifer/tiny-distilroberta`
doing the same for distilbert-base would probably also help. | 05-08-2020 14:10:53 | 05-08-2020 14:10:53 | |
transformers | 4,233 | closed | How can I mute the info from BertTokenizerFast.encode_plus | # 🐛 Bug
## Information
Model I am using:
Bert
Language I am using the model on (English, Chinese ...):
Chinese
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```python
tokenizer.encode_plus(...)
```
Every time I run this line, the info "Fast tokenizers add special tokens by default. To remove special tokens, please specify`add_special_tokens=False` during the initialisation rather than when calling `encode`,`encode_plus` or `batch_encode_plus`." will come out.
Even though I have already set the add_special_tokens to False
```python
tokenizer = BertTokenizerFast.from_pretrained(model_path, add_special_tokens = False)
```
The info can be very annoying when I run an iteration loop. How can I mute this info.
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.4.0-87-generic-x86_64-with-debian-stretch-sid
- Python version: 3.6.9
- PyTorch version (GPU?): 1.4.0+cu100 (True)
- Tensorflow version (GPU?): 2.0.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
| 05-08-2020 14:01:54 | 05-08-2020 14:01:54 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,232 | closed | MarianMTModel: Runtime Errors | 11/1010 models cannot be loaded because of state_dict mismatches (not tokenizer).
```python
failing = ['sv-tpi',
'fi-en',
'sv-ty',
'sv-toi',
'en-st',
'en-tl',
'sv-ts',
'sv-tvl',
'en-fi',
'en-ml',
'fi-sv']
for mname in failing:
slug = f'opus-mt-{mname}'
MarianMTModel.from_pretrained(slug)
```
`en-st` is a slightly different error:
```
'Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.'),
``` | 05-08-2020 13:09:46 | 05-08-2020 13:09:46 | Fixed on S3.
|
transformers | 4,231 | closed | Strange difference between NER results in 2.8.0 and previous versions | # 🐛 Bug
## Information
Model I am using: Albert
Language I am using the model on: English and Portuguese
The problem arises when using:
* [ ] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Not sure if this is a bug, but for the same seed and hyperparameters, I'm getting quite different results for different benchmarks on transformers 2.8.0 and 2.7.0.
Steps to reproduce the behavior:
1. For 2.7.0: `python examples/ner/run_ner.py --data_dir <data-dir> --labels <labels-path> --model_type albert --model_name_or_path <albert-model> --output_dir <output-dir> --max_seq_length 512 --overwrite_output_dir --do_train --do_eval --do_predict --per_gpu_train_batch_size 16 --per_gpu_eval_batch_size 8 --learning_rate 5e-05 --num_train_epochs 5 --save_steps 878 --seed 1`
2. For 2.8.0: `python examples/ner/run_ner.py config.json`. Here's the config file for each:
2.1. For CoNLL:
```
{
"data_dir": "/home/corpora/ner/conll2003",
"labels": "/home/corpora/ner/conll2003/labels.txt",
"model_name_or_path": "albert-base-v2",
"output_dir": "/home/models/albert/conll2003",
"max_seq_length": 512,
"num_train_epochs": 5,
"per_gpu_train_batch_size": 16,
"save_steps": 878,
"seed": 1,
"do_train": true,
"do_eval": true,
"do_predict": true,
"overwrite_output_dir": true
}
```
2.2. For HAREM:
```
{
"data_dir": "/home/corpora/ner/harem/selective",
"labels": "/home/corpora/ner/harem/selective/labels.txt",
"model_name_or_path": "/home/models/albert/base_pt_brwac_en/albert_ckp_1176k",
"output_dir": "/home/models/albert/base_pt_brwac_en/albert_ckp_1176k/harem",
"max_seq_length": 512,
"num_train_epochs": 5,
"per_gpu_train_batch_size": 16,
"save_steps": 878,
"seed": 1,
"do_train": true,
"do_eval": true,
"do_predict": true,
"overwrite_output_dir": true
}
```
3. For English CoNLL-2003 used albert-base-v2, and for Portuguese HAREM used a checkpoint for a version we're training.
## Expected behavior
Following are the results I'm getting for each of the 4 runs. I think the results should be a lot closer to each other, considering the same hyperparameters and seeds are being used, right?
1. 2.7.0 / CoNLL-2003:
```
INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='/home/corpora/ner/conll2003', device=device(type='cuda'), do_eval=True, do_lower_case=False, do_predict=True, do_train=True, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, keep_accents=None, labels='/home/corpora/ner/conll2003/labels.txt', learning_rate=5e-05, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_seq_length=512, max_steps=-1, model_name_or_path='albert-base-v2', model_type='albert', n_gpu=1, no_cuda=False, num_train_epochs=5.0, output_dir='/home/models/albert/conll2003-2.7.0', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=16, save_steps=878, seed=1, server_ip='', server_port='', strip_accents=None, tokenizer_name='', use_fast=None, warmup_steps=0, weight_decay=0.0)
INFO - __main__ - f1 = 0.8424940638466273
INFO - __main__ - loss = 0.184873234364002
INFO - __main__ - precision = 0.836973615236764
INFO - __main__ - recall = 0.8480878186968839
```
2. 2.8.0 / CoNLL-2003:
```
INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/home/models/albert/conll2003', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=False, per_gpu_train_batch_size=16, per_gpu_eval_batch_size=8, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=5, max_steps=-1, warmup_steps=0, logging_dir=None, logging_first_step=False, logging_steps=500, save_steps=878, save_total_limit=None, no_cuda=False, seed=1, fp16=False, fp16_opt_level='O1', local_rank=-1)
INFO - __main__ - precision = 0.890553401480437
INFO - __main__ - recall = 0.8946529745042493
INFO - __main__ - f1 = 0.892598480833775
INFO - __main__ - loss = 0.15480460636704224
```
3. 2.7.0 / HAREM
```
INFO - __main__ - Training/evaluation parameters Namespace(adam_epsilon=1e-08, cache_dir='', config_name='', data_dir='/home/corpora/ner/harem/selective', device=device(type='cuda'), do_eval=True, do_lower_case=False, do_predict=True, do_train=True, eval_all_checkpoints=False, evaluate_during_training=False, fp16=False, fp16_opt_level='O1', gradient_accumulation_steps=1, keep_accents=None, labels='/home/corpora/ner/harem/selective/labels.txt', learning_rate=5e-05, local_rank=-1, logging_steps=500, max_grad_norm=1.0, max_seq_length=512, max_steps=-1, model_name_or_path='albert-base-v2', model_type='albert', n_gpu=1, no_cuda=False, num_train_epochs=5.0, output_dir='/home/models/albert/eduardo/base_pt_brwac_en/albert_ckp_1176k/harem-2.7.0', overwrite_cache=False, overwrite_output_dir=True, per_gpu_eval_batch_size=8, per_gpu_train_batch_size=16, save_steps=878, seed=1, server_ip='', server_port='', strip_accents=None, tokenizer_name='', use_fast=None, warmup_steps=0, weight_decay=0.0)
INFO - __main__ - f1 = 0.805560264451602
INFO - __main__ - loss = 0.11168152010335869
INFO - __main__ - precision = 0.8184636582845333
INFO - __main__ - recall = 0.7930574098798397
```
4. 2.8.0 / HAREM
```
INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/home/models/albert/eduardo/base_pt_brwac_en/albert_ckp_1176k/harem', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=False, per_gpu_train_batch_size=16, per_gpu_eval_batch_size=8, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=5, max_steps=-1, warmup_steps=0, logging_dir=None, logging_first_step=False, logging_steps=500, save_steps=878, save_total_limit=None, no_cuda=False, seed=1, fp16=False, fp16_opt_level='O1', local_rank=-1)
INFO - __main__ - precision = 0.7228915662650602
INFO - __main__ - recall = 0.7209612817089452
INFO - __main__ - f1 = 0.7219251336898397
INFO - __main__ - loss = 0.14575799524457683
```
## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-99-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.5
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, Tesla V100-SXM2
- Using distributed or parallel set-up in script?: No
 | 05-08-2020 12:12:21 | 05-08-2020 12:12:21 | FYI, ran in 2.9.0 as well and got the same results.<|||||>Hi @pvcastro , so the CoNLL results are looking good π€ because there's an improvement with 2.8.
Regarding HAREM: could you verify that there's a `tokenizer_config.json` located in the `/home/models/albert/base_pt_brwac_en/albert_ckp_1176k` folder? Is it an uncased or a cased model?<|||||>Hi @stefan-it !
There's an improvement for CoNLL, but does this improvement make sense? Shouldn't the results be closer to each other?
As for HAREM, there's a tokenizer_config.json, but it's empty...there are only the {}. The model is **cased**.<|||||>Could you use the following `tokenizer_config.json`:
```json
{"do_lower_case": false, "max_len": 512, "init_inputs": [], "strip_accents":false}
```
By default ALBERT models are lower-cased (if you don't specify it in that configuration):
https://github.com/huggingface/transformers/blob/26dad0a9fab0694ce1128d91838b9ca78e1554f6/src/transformers/tokenization_albert.py#L117
(Because the original ALBERT model is an uncased model).
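For reference, the same override expressed in code when loading the tokenizer directly (just a sketch, using the checkpoint path from your config above):
```python
from transformers import AlbertTokenizer

# do_lower_case=False matches the cased checkpoint; without it the ALBERT default lower-cases the input.
tokenizer = AlbertTokenizer.from_pretrained(
    "/home/models/albert/base_pt_brwac_en/albert_ckp_1176k",
    do_lower_case=False,
)
```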
Then please re-run the experiment (and also clear the `cached_*` files in your data folder/examples folder); this should hopefully improve performance :)<|||||>Oh I see @stefan-it , it worked! Got 80.55 for HAREM doing this. So the reason this happened after the version change is that, before, do_lower_case was passed as False by default from run_ner.py, and this no longer happens in 2.8.0. And for CoNLL the effect was the opposite: I should have been setting --do_lower_case for 2.7.0, since the default in that version is False and the model is uncased.
Thanks!<|||||>Hi @stefan-it !
If you don't mind me asking this in this closed issue, now I'm facing the same problem concerning a portuguese electra model I'm benchmarking.
I did what you suggested for ALBERT model, and for ELECTRA I'm doing the same thing...I create a tokenizer_config.json doing the same thing. However, for ELECTRA I'm getting a huge difference between the NER models trained using transformers 2.9.1 (happened with 2.7.0 and 2.8.0 too) and the ones trained with the chunk task in electra's original repository.
This is what I'm running there as parameter:
`python /home/repositorios/electra/run_finetuning.py --data-dir /home/models/electra/electra_pt_small_pp_uncased_512/electra_ckp_100k/ner/harem/selective/electra_pt_small_pp_uncased_512 --model-name electra_pt_small_pp_uncased_512 --hparams {"task_names": ["chunk"], "model_size": "small", "max_seq_length": 512, "learning_rate": 3e-4, "num_train_epochs": 6, "train_batch_size": 32, "do_lower_case": true}`
And this is what I use for transformers:
`python /home/repositorios/transformers/examples/token-classification/run_ner.py --model_type electra --model_name_or_path /home/models/electra/electra_pt_small_pp_uncased_512/electra_ckp_100k --do_train --do_eval --do_predict --data_dir /home/corpora/ner/harem/selective --labels /home/corpora/ner/harem/selective/labels.txt --num_train_epochs 6 --per_gpu_train_batch_size 32 --per_gpu_eval_batch_size 32 --max_seq_length 512 --output_dir /home/models/electra/electra_pt_small_pp_uncased_512/electra_ckp_100k/ner/harem/selective --evaluate_during_training --save_steps 1000 --logging_steps 1000 --overwrite_output_dir --overwrite_cache`
This is the tokenizer_config.json used for this checkpoint:
`{"do_lower_case": true, "max_len": 512, "init_inputs": [], "strip_accents":false}`
This is what I do to convert the electra tf checkpoint to pytorch:
`python ${REPO_DIR}/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py --tf_checkpoint_path ${CKP_DIR} --pytorch_dump_path ${CKP_DIR}/pytorch_model.bin --discriminator_or_generator discriminator --config_file ${CKP_DIR}/config.json`
I use the very same electra config.json for both.
For this same checkpoint and parameters above, I get 36,33% using transformers and 71.42% using the electra repository. Do you know anything I should be doing differently? <|||||>Hi @pvcastro
for an uncased model the convention is to use the "strip accents" feature, so please try to set `strip_accents": true` in your tokenizer config. Hopefully this solves the problem :)<|||||>Hi @stefan-it , thanks for the response!
I changed this single property in my tokenizer_config.json as you suggested, but made no difference. Got the exact same results :thinking: <|||||>Just to be clear, I'm using overwrite-output-dir and overwrite-cache to make sure that I'm not using any old files, and the tokenizer_config.json in the output-dir is updated with strip_accents true.<|||||>That's really strange. Could you also set the same learning rate as in the ELECTRA example (Transformers uses 5e-5 as default).
I had a look at the ELECTRA fine-tuning example a while ago -> I did implement an own evaluation class that uses `seqeval` like in Transformers. The results were a bit different between `seqeval` and ELECTRA evaluation output, but definitely not ~35%!<|||||>Oh, that helped a lot. Now it went to 65.41%. Do you think the remaining difference is due to the evaluation differences?
Strange that for english CoNLL 2003 you're able to get good results using the standard 5e-5 learning rate, right?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,230 | closed | example updated to use generation pipeline | 05-08-2020 11:27:14 | 05-08-2020 11:27:14 | That's great, thanks! cc @patrickvonplaten for the text-generation pipeline |
|
transformers | 4,229 | closed | Problem with BertTokenizer using additional_special_tokens | # π Bug
## Information
Model I am using: Bert (bert-base-uncased)
Language I am using the model on: English
The problem arises when using:
* [ ] the official example scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. import transformers
2. from transformers import BertTokenizer
3. specify additional_special_tokens as ["[E11]", "[E12]", "[E21]", "[E22]"]
4. instantiate tokenizer as `tokenizer = BertTokenizer.from_pretrained('bert-base-uncased', do_lower_case=True, additional_special_tokens=additional_special_tokens)`
5. tokenize test string with the special tokens `'[E11] Tom Thabane [E12] resigned in October last year to form the [E21] All Basotho Convention [E22] -LRB- ABC -RRB- , crossing the floor with 17 members of parliament , causing constitutional monarch King Letsie III to dissolve parliament and call the snap election .'` using `tokenizer.tokenize(test_string)`
## Expected behavior
I would expect that the tokenization would give me regular tokens from wordpiece and the special tokens would be kept intact as [E11], [E12], etc, but instead I get:
`['[', 'e', '##11', ']', 'tom', 'tha', '##bane', '[', 'e', '##12', ']', 'resigned', 'in', 'october', 'last', 'year', 'to', 'form', 'the', '[', 'e', '##21', ']', 'all', 'bas', '##otho', 'convention', '[', 'e', '##22', ']', '-', 'l', '##rb', '-', 'abc', '-', 'rr', '##b', '-', ',', 'crossing', 'the', 'floor', 'with', '17', 'members', 'of', 'parliament', ',', 'causing', 'constitutional', 'monarch', 'king', 'lets', '##ie', 'iii', 'to', 'dissolve', 'parliament', 'and', 'call', 'the', 'snap', 'election', '.']`
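Here is the above condensed into one snippet, in case it helps to reproduce quickly (same setup as the steps listed):
```python
from transformers import BertTokenizer

additional_special_tokens = ["[E11]", "[E12]", "[E21]", "[E22]"]
tokenizer = BertTokenizer.from_pretrained(
    "bert-base-uncased",
    do_lower_case=True,
    additional_special_tokens=additional_special_tokens,
)
print(tokenizer.tokenize("[E11] Tom Thabane [E12] resigned in October last year to form the [E21] All Basotho Convention [E22] ."))
# I would expect ['[E11]', 'tom', 'tha', '##bane', '[E12]', ...] but the special tokens get split up instead.
```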
I'm trying to run a training from [https://github.com/mickeystroller/R-BERT](https://github.com/mickeystroller/R-BERT), and reported this to the author, but he seems to get the proper results, even though we're both using transformers 2.8.0:
His results:

My results:

## Environment info
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-99-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.5
- PyTorch version (GPU?): 1.3.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes, GTX 1070 with CUDA 10.1.243
- Using distributed or parallel set-up in script?: No
Here's the output from R-BERT's author as well:
- `transformers` version: 2.8.0
- Platform: Linux-4.15.0-72-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.4
- PyTorch version (GPU?): 1.4.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
The author from R-BERT seems to be using `tokenizers version 0.5.2`, and mine is 0.7.0. I tried downgrading mine to 0.5.2 to see if I would get the same results he did, but this doesn't seem to work because it's not compatible with transformers 2.8.0, as can be seen below. I have no Idea how he was able to use both together:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-1-514ca6d60059> in <module>
----> 1 import transformers
2 from transformers import BertTokenizer
/media/discoD/anaconda3/envs/fast-bert/lib/python3.7/site-packages/transformers/__init__.py in <module>
53 from .configuration_xlm_roberta import XLM_ROBERTA_PRETRAINED_CONFIG_ARCHIVE_MAP, XLMRobertaConfig
54 from .configuration_xlnet import XLNET_PRETRAINED_CONFIG_ARCHIVE_MAP, XLNetConfig
---> 55 from .data import (
56 DataProcessor,
57 InputExample,
/media/discoD/anaconda3/envs/fast-bert/lib/python3.7/site-packages/transformers/data/__init__.py in <module>
4
5 from .metrics import is_sklearn_available
----> 6 from .processors import (
7 DataProcessor,
8 InputExample,
/media/discoD/anaconda3/envs/fast-bert/lib/python3.7/site-packages/transformers/data/processors/__init__.py in <module>
3 # module, but to preserve other warnings. So, don't check this module at all.
4
----> 5 from .glue import glue_convert_examples_to_features, glue_output_modes, glue_processors, glue_tasks_num_labels
6 from .squad import SquadExample, SquadFeatures, SquadV1Processor, SquadV2Processor, squad_convert_examples_to_features
7 from .utils import DataProcessor, InputExample, InputFeatures, SingleSentenceClassificationProcessor
/media/discoD/anaconda3/envs/fast-bert/lib/python3.7/site-packages/transformers/data/processors/glue.py in <module>
21
22 from ...file_utils import is_tf_available
---> 23 from ...tokenization_utils import PreTrainedTokenizer
24 from .utils import DataProcessor, InputExample, InputFeatures
25
/media/discoD/anaconda3/envs/fast-bert/lib/python3.7/site-packages/transformers/tokenization_utils.py in <module>
27 from typing import List, Optional, Sequence, Tuple, Union
28
---> 29 from tokenizers import AddedToken, Encoding
30 from tokenizers.decoders import Decoder
31 from tokenizers.implementations import BaseTokenizer
ImportError: cannot import name 'AddedToken' from 'tokenizers' (/media/discoD/anaconda3/envs/fast-bert/lib/python3.7/site-packages/tokenizers/__init__.py)
```
 | 05-08-2020 10:38:28 | 05-08-2020 10:38:28 | I've also encountered this issue.
In my case, with `transformers v2.8.0` the special tokens are handled correctly (each is kept as a single token).
But from `v2.9.0`, it splits the special tokens like `'[', 'e', '##11', ']'`. It's quite weird...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,228 | closed | Reformer tokenizer not working | # π Bug
`TextDataset` throws out of index error.
## Information
```
model_args = ModelArguments(model_name_or_path = 'Reformer',
config_name = 'ReformerConfig',
tokenizer_name = 'ReformerTokenizer')
training_args = TrainingArguments(output_dir = '/tmp', do_train = True,
evaluate_during_training = True,
do_eval = True,
do_predict = False)
data_args = DataTrainingArguments(train_data_file = TRAIN_DATA_DIR,
eval_data_file = EVAL_DATA_DIR,
line_by_line = True)
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment",
cache_dir=model_args.cache_dir)
train_dataset = (
get_dataset(data_args, tokenizer=tokenizer, local_rank=training_args.local_rank)
if training_args.do_train
else None
)
eval_dataset = (
get_dataset(data_args, tokenizer=tokenizer, local_rank=training_args.local_rank, evaluate=True)
if training_args.do_eval
else None
)
```
It throws this:
```
~/.local/lib/python3.6/site-packages/transformers/tokenization_utils.py in truncate_sequences(self, ids, pair_ids, num_tokens_to_remove, truncation_strategy, stride)
2044 for _ in range(num_tokens_to_remove):
2045 if pair_ids is None or len(ids) > len(pair_ids):
-> 2046 overflowing_tokens = [ids[-1]] + overflowing_tokens
2047 ids = ids[:-1]
2048 else:
IndexError: list index out of range
```
When `line_by_line` is kept False,
```
train_dataset[0]
```
returns
```
~/.local/lib/python3.6/site-packages/transformers/data/datasets/language_modeling.py in __getitem__(self, i)
72
73 def __getitem__(self, i) -> torch.Tensor:
---> 74 return torch.tensor(self.examples[i], dtype=torch.long)
75
76
IndexError: list index out of range
```
Model I am using (Bert, XLNet ...): Reformer
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name) wikitext-2
* [ ] my own task or dataset: (give details below)
## Expected behavior
`TextDataset` should behave like a standard PyTorch `Dataset`.
## Environment info
- `transformers` version: 2.9
- Platform: Ubuntu 18.04
- Python version: 3.8
- PyTorch version (GPU?): 1.5
- Tensorflow version (GPU?): NA
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-08-2020 10:16:58 | 05-08-2020 10:16:58 | |
transformers | 4,227 | closed | 2.9.0 Padding Bug | # π Bug
## Information
Model I am using (Bert, XLNet ...): **Roberta with RobertaForSequenceClassification head**
Language I am using the model on (English, Chinese ...): **Polish**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
**The problem arises when performing a forward pass while training. I assume something is wrong with padding, though with 2.8.0 version it works correctly.**
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
**I'm using pre-trained roBERTa model for classification task. Details can be found in the notebook below.**
## To reproduce
Steps to reproduce the behavior:
1. Run https://colab.research.google.com/drive/1w54ScQ3-dh99-cO4OGG6-IdP4JRfki59?usp=sharing
Error message:
```
56 # Perform a forward pass
---> 57 loss, logits = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
...
2033 """
2034 # The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
-> 2035 mask = input_ids.ne(padding_idx).int()
2036 incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
2037 return incremental_indices.long() + padding_idx
TypeError: ne() received an invalid combination of arguments - got (NoneType), but expected one of:
* (Tensor other)
didn't match because some of the arguments have invalid types: (!NoneType!)
* (Number other)
didn't match because some of the arguments have invalid types: (!NoneType!)
```
## Expected behavior
It should work without errors. The problem arises after installing version 2.9.0; with version 2.8.0 it works correctly.
## Environment info
- `transformers` version: **2.9.0**
- Platform: Google colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0+cu101
- Tensorflow version (GPU?): 2.2.0rc4
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: n/a
| 05-08-2020 09:12:07 | 05-08-2020 09:12:07 | Can you provide a smaller reproduction case?<|||||>```
# Downloading pre-trained model
!wget https://github.com/sdadas/polish-roberta/releases/download/models/roberta_base_transformers.zip
!unzip roberta_base_transformers.zip -d roberta_base_transformers
!pip install transformers -q
from tokenizers import SentencePieceBPETokenizer
from tokenizers.processors import RobertaProcessing
from transformers import RobertaModel, AutoModel
# Loading tokenizer
model_dir = "roberta_base_transformers"
tokenizer = SentencePieceBPETokenizer(f"{model_dir}/vocab.json", f"{model_dir}/merges.txt")
getattr(tokenizer, "_tokenizer").post_processor = RobertaProcessing(sep=("</s>", 2), cls=("<s>", 0))
# Loading model
model: RobertaModel = AutoModel.from_pretrained(model_dir)
input = tokenizer.encode("Ala ma kota.")
output = model(torch.tensor([input.ids]))[0]
print(output[0][1])
```
<|||||>same here!
transformers 2.8.0 works fine, but transformers >= 2.9.0 produces exactly the same error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Troubleshooting:
```
position_ids = create_position_ids_from_input_ids(input_ids, self.padding_idx).to(input_ids.device)
mask = input_ids.ne(padding_idx).int()
...
ne() received an invalid combination of arguments - got (NoneType)
```
Just modify `"pad_token_id": null` to `"pad_token_id": 1` in `config.json`.
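Equivalently, without editing the file by hand (a sketch; `model_dir` is the extracted `roberta_base_transformers` folder from the snippet above):
```python
from transformers import AutoConfig, AutoModel

config = AutoConfig.from_pretrained(model_dir)
config.pad_token_id = 1  # the shipped config.json has null here
model = AutoModel.from_pretrained(model_dir, config=config)
```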
transformers | 4,226 | closed | Simplify cache vars and allow for TRANSFORMERS_CACHE env | As it currently stands, "TRANSFORMERS_CACHE" is not an accepted environment variable. It seems that these variables were not updated when moving from pytorch_transformers to transformers. In addition, the fallback procedure could be improved and simplified. Pathlib seems redundant here.
This PR:
- adds the TRANSFORMERS_CACHE environment variable
- changes the fall-back order of the cache env variables (usage sketch below)
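For reference, the intended usage is simply to point the cache somewhere before `transformers` is imported (a small sketch; the variable is the one this PR adds):
```python
import os
os.environ["TRANSFORMERS_CACHE"] = "/path/to/my/cache"  # must be set before importing transformers

from transformers import AutoModel
model = AutoModel.from_pretrained("bert-base-uncased")  # weights are now cached under /path/to/my/cache
```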
File_utils (and perhaps the rest of the library) might benefit from using pathlib more widely. In a future endeavour I might take this on. | 05-08-2020 08:13:10 | 05-08-2020 08:13:10 | |
transformers | 4,225 | closed | [Reformer/Longformer] Add cross-attention layers for Encoder-Decoder setting | # β Questions & Help
## Details
It seems that the Reformer model has been added to Transformers, but it's a standard auto-regressive language model. How can we use Reformer in encoder-decoder settings?
 | 05-08-2020 05:31:45 | 05-08-2020 05:31:45 | It's a good question. I originally thought it would be quite easy to add, but it's not going to be that trivial.
For the encoder-decoder setting, we need an LSH cross-attention layer that receives different embeddings for queries and keys, which means the usual LSH hashing trick does not work.
It will probably take a while until this is implemented since as far as I know the authors of Reformer themselves are still working on it.
<|||||>This question could also be a good opportunity to brainstorm a bit on how a LSH Cross Attention layer could look like :-) <|||||>Are you referring to https://github.com/google/trax/blob/25479c577ef28780118a4f0169e1c7f5a028a7a3/trax/models/reformer/reformer.py#L846-L847 ?
I assume the approach taken for the encoder-decoder in the [machine translation notebook](https://github.com/google/trax/blob/25479c577ef28780118a4f0169e1c7f5a028a7a3/trax/models/reformer/machine_translation.ipynb) wouldn't work here?<|||||>Yeah, for now, we could do the same as is done there and simply add normal "non-chunked" attention for encoder-decoder settings. I'll think about it.<|||||>> Yeah, for now, we could do the same as is done there and simply add normal "non-chunked" attention for encoder-decoder settings. I'll think about it.
Hi Patrick, any update on this?
Thanks :)<|||||>I will take a while until this is implemented. Not sure if @patil-suraj already has an idea of how he would implement it :-) <|||||>@patrickvonplaten there's been an update on https://github.com/google/trax/issues/539#issuecomment-647980326<|||||>Awesome thanks for noting that!
I try to take a look at this as soon as possible.
Longformer and Reformer Encoder-Decoder models definitely are of high priority :-) <|||||>@patrickvonplaten how is the Reformer Encoder Decoder coming along? I would like to compare this model to Longformer Encoder Decoder for long document summarisation so I will probably begin working on this soon if it still looks to be a while away.<|||||>@alexgaskell10, if you want to implement it yourself, you can use a regular encoder-decoder model with the LSH selfattention in the encoder only. The other two attention blocks in the decoder (crossattention and final selfattention) can still use the regular full attention. This works when the output length is not very long, which is typically the case. You won't get the memory savings of the reversible transformer, but maybe you don't need it and gradient checkpointing is enough.
I am curious if you already have numbers for LongformerEncoderDecoder that you can share?
<|||||>@ibeltagy thanks for this. Yes, that is my plan- begin with implementing the self-attention in the encoder and then add the other memory-saving features if necessary. This will probably be fine for my use-case as the encoder is the main memory bottleneck.
I have run lots of experiments for input lengths < 1024 tokens. I am in the process of running more with input len > 1024 but the early results have been strange so I need to do some digging and these aren't ready to share. I'll post them here when they are ready. In the meantime, are you interested in experiments with < 1024 tokens?<|||||>I want to give a short update on EncoderDecoder models for Longformer / Reformer from my side.
Given that the Reformer Encoder / Decoder code is still very researchy in the original `trax` code-base and thus prone to still change, we will probably wait a bit until we implement Reformer Encoder/Decoder logic for `transformers`.
From my experience, few people would take the Reformer Encoder/Decoder framework to do massive pre-training from scratch. Most likely, we will wait until the Reformer authors publish weights or at least a paper.
At the moment, I work on fine-tuning a Bert2Bert model via the Encoder/Decoder framework (use two pretrained `bert-base-uncased` models => `EncoderDecoder.from_pretrained("bert-base-cased", "bert-base-cased")` and fine-tune the model. This means especially the decoder weights have to be adapted a lot, since in the EncoderDecoder framework the model has a causal mask and the cross attention layers are to be trained from scratch. The results so far are quite promising in that it might not be too difficult to fine-tune two pretrained "encoder-only" models into an encoder-decoder model.
Since Reformer also does not yet have a massively pretrained bi-directional encoder-only model, the focus will most likely shift to the Longformer encoder-decoder framework afterwards. Longformer has more traction, more pre-trained models and is more similar to Bert2Bert fine-tuning.
That's mostly my personal opinion - I would be very interested in your opinions about it!<|||||>hi @patrickvonplaten , I agree with you here. I also wanted to implement Reformer Encoder/Decoder but as no previous experiment are available from original authors and no pre-trained weight are provided I decided to go with `LongBart ` i.e replacing BART's attention with `Longformars` sliding window attention. @alexgaskell10 has already experimented with it and it seems to be doing good enough. @ibeltagy has also added `LongformerEncoderDeocder` using BART weights.
As pre-trained weights are already available for `Longformer` and as the sliding window attention can be used as drop-in replacement for self attention prioritizing Longformer encoder-decoder it makes more sense to me than Reformer <|||||>Thanks for the update @patrickvonplaten and I think that sounds sensible. Having looked into them both Longformer ED is definitely more straightforward to implement than Reformer ED and in my experiments so far it has performed pretty well so makes sense to focus efforts here.
I had started working on the Reformer ED before your update and I have a working implementation (just the bare bones- replacing Bart's Encoder's self attention layers with the local/lsh layers from Reformer). I am running some early experiments on it at the moment but I don't expect it to perform well as the self-attention layers are trained from scratch. I'll share an update if anything interesting arises.
> I work on fine-tuning a Bert2Bert model via the Encoder/Decoder framework
Sounds cool, look forward to trying it out!<|||||>Oh I cannot wait for this to be ready. π
I have a seq2seq problem with documents long enough that I need this to be ready before I can hope to solve my task. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,224 | closed | Bart now enforces maximum sequence length in Summarization Pipeline | # π Bug
## Information
Model I am using (Bert, XLNet ...): Bart (bart-large-cnn)
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Based on example code in docs, though.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load default summarization pipeline
2. Try to use model to summarize text that has > 1024 tokens
Example code:
```
from transformers import pipeline
summarizer = pipeline('summarization')
text = '=' * 102570 # Happened to be the length of the file I was testing, my actual file produced 25,257 tokens
print(summarizer(text))
```
Output:
```
Token indices sequence length is longer than the specified maximum sequence length for this model (1605 > 1024). Running this sequence through the model will result in indexing errors
Traceback (most recent call last):
File "ex.py", line 4, in <module>
print(summarizer(text, max_length=250))
File ".../lib/python3.7/site-packages/transformers/pipelines.py", line 1330, in __call__
inputs["input_ids"], attention_mask=inputs["attention_mask"], **generate_kwargs,
File ".../lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
return func(*args, **kwargs)
File ".../lib/python3.7/site-packages/transformers/modeling_utils.py", line 1047, in generate
encoder_outputs: tuple = encoder(input_ids, attention_mask=attention_mask)
File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File ".../lib/python3.7/site-packages/transformers/modeling_bart.py", line 292, in forward
embed_pos = self.embed_positions(input_ids)
File ".../lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File ".../lib/python3.7/site-packages/transformers/modeling_bart.py", line 763, in forward
return super().forward(positions)
File ".../lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 114, in forward
self.norm_type, self.scale_grad_by_freq, self.sparse)
File ".../lib/python3.7/site-packages/torch/nn/functional.py", line 1724, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
```
## Expected behavior
As of last week (week of 4/26/2020) this caused no issue. Today (5/7/2020) I tried to run the exact same code, a new model was downloaded (no change in transformers module, just the model itself), and now it enforces a token limit.
Expected behavior is to summarize document regardless of size.
## Environment info
- `transformers` version: 2.8.0 (also occurs in 2.9.0)
- Platform: Both macOS 10.15.4 and Windows 10
- Python version: 3.7.5 (Mac) and 3.6.3/Anaconda (Windows)
- PyTorch version (GPU?): 1.5.0, no GPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
- Model (from associated JSON file downloaded): `{"url": "https://s3.amazonaws.com/models.huggingface.co/bert/facebook/bart-large-cnn/pytorch_model.bin", "etag": "\"6eeacfe81d9304a6c5015424912f8df8\""}`
- Model config:
```
{
"_num_labels": 3,
"activation_dropout": 0.0,
"activation_function": "gelu",
"add_final_layer_norm": false,
"attention_dropout": 0.0,
"bos_token_id": 0,
"classif_dropout": 0.0,
"d_model": 1024,
"decoder_attention_heads": 16,
"decoder_ffn_dim": 4096,
"decoder_layerdrop": 0.0,
"decoder_layers": 12,
"decoder_start_token_id": 2,
"dropout": 0.1,
"early_stopping": true,
"encoder_attention_heads": 16,
"encoder_ffn_dim": 4096,
"encoder_layerdrop": 0.0,
"encoder_layers": 12,
"eos_token_id": 2,
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2"
},
"init_std": 0.02,
"is_encoder_decoder": true,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2
},
"length_penalty": 2.0,
"max_length": 142,
"max_position_embeddings": 1024,
"min_length": 56,
"model_type": "bart",
"no_repeat_ngram_size": 3,
"normalize_before": false,
"num_beams": 4,
"num_hidden_layers": 12,
"output_past": true,
"pad_token_id": 1,
"prefix": " ",
"scale_embedding": false,
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 142,
"min_length": 56,
"no_repeat_ngram_size": 3,
"num_beams": 4
}
},
"vocab_size": 50264
}
```
EDIT: Tagging @sshleifer as recommended by docs
| 05-08-2020 04:24:15 | 05-08-2020 04:24:15 | #3857 might also be a culprit<|||||>@pwschaedler This is a change in pipelines that we may or may not undo. Previously, the tokenizer truncated your long documents to their beginnings
In the meantime, you can use this code on the latest transformers:
```python
from transformers import BartForConditionalGeneration, BartTokenizer
from typing import List
def old_summarization_pipeline(text: List[str]) -> List[str]:
tokenizer = BartTokenizer.from_pretrained('bart-large-cnn')
model = BartForConditionalGeneration.from_pretrained('bart-large-cnn')
input_ids = tokenizer.batch_encode_plus(text, return_tensors='pt', max_length=1024)['input_ids']
summary_ids = model.generate(input_ids)
summaries = [tokenizer.decode(s) for s in summary_ids]
return summaries
```
```python
text = '=' * 10257
old_summarization_pipeline(text)
```<|||||>Great, thanks for the replacement code. The token limit (whether it's enforced or implied) might be worth mentioning on the pipeline docs.<|||||>Agreed! Would you be interested in sending a PR? The SummarizationPipeline docs live in `docs/source/main_classes/pipelines.rst` I believe.<|||||>Issue still exists when using summarisation pipeline. `WARNING:transformers.tokenization_utils:Token indices sequence length is longer than the specified maximum sequence length for this model (2817 > 1024). Running this sequence through the model will result in indexing errors
IndexError: index out of range in self
`
I saw the above work-around but when can we expect this to be fixed in summarization pipeline as well?<|||||>I am curious why the token limit in the summarization pipeline stops the process for the default model and for BART but not for the T-5 model? When running "t5-large" in the pipeline it will say "Token indices sequence length is longer than the specified maximum sequence length for this model (1069 > 512)" but it will still produce a summary. With the default model or "facebook/bart-large-cnn" models it will give a similar message "Token indices sequence length is longer than the specified maximum sequence length for this model (1034 > 1024)." but then fail to produce a summary (and give the following "index out of range in self"). Thanks!<|||||>Great Q, (prolly belongs on [discuss.huggingface.co](https://discuss.huggingface.co/) in the future :))
T5 uses a technique called relative position bucketing, whereas bart stores 1024 positional embeddings and then looks up each position in them.
Note that T5 will likely perform best with sequences <= 512 tokens, but you are correct that it won't error until OOM.
[relevant t5 code] (https://github.com/huggingface/transformers/blob/c67d1a0259cbb3aef31952b4f37d4fee0e36f134/src/transformers/modeling_t5.py#L242)<|||||>@sshleifer what's the typical recommendation for summarization on larger documents? Chunk them and generate summaries or any other tips?
EDIT: Cross-posted __[here](https://discuss.huggingface.co/t/summarization-on-long-documents/920/5)__, I think this is a much better place for this.
This is what I use currently but open to better recommendations.
```python
# generate chunks of text \ sentences <= 1024 tokens
def nest_sentences(document):
nested = []
sent = []
length = 0
for sentence in nltk.sent_tokenize(document):
length += len(sentence)
if length < 1024:
sent.append(sentence)
else:
nested.append(sent)
sent = []
length = 0
if sent:
nested.append(sent)
return nested
# generate summary on text with <= 1024 tokens
def generate_summary(nested_sentences):
device = 'cuda'
summaries = []
for nested in nested_sentences:
input_tokenized = bart_tokenizer.encode(' '.join(nested), truncation=True, return_tensors='pt')
input_tokenized = input_tokenized.to(device)
summary_ids = bart_model.to(device).generate(input_tokenized,
length_penalty=3.0,
min_length=30,
max_length=100)
output = [bart_tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids]
summaries.append(output)
summaries = [sentence for sublist in summaries for sentence in sublist]
return summaries
```<|||||>Hi!
nest_sentences() has a bug. Whenever a chunk is ready to be saved in 'nested' the current sentence is ignored.<|||||>Yes my bad one sentence is skipped, can be fixed as follows. Effects of implementing it in late hours ;)
Good catch @echatzikyriakidis thanks!
```python
# generate chunks of text \ sentences <= 1024 tokens
def nest_sentences(document):
nested = []
sent = []
length = 0
for sentence in nltk.sent_tokenize(document):
length += len(sentence)
if length < 1024:
sent.append(sentence)
else:
nested.append(sent)
sent = [sentence]
length = len(sentence)
if sent:
nested.append(sent)
return nested
```<|||||>Hi @dipanjanS !
Thank you! This is exactly the way I did it also.
I think there is another catch.
What if a sentence is > 512 in case of T5 models or > 1024 in case of BART (rare scenario).
I think there will be no problem because of truncation=True, right? Or is going to fail? Maybe we need to skip it or split it in half.<|||||>Great. I think in those cases 1024 is a hard coded magic number which can be configurable and replaced with the max length allowed by that specific model maybe as a function parameter<|||||>Hi @dipanjanS,
This is the way I have done it.
But again, what if a sentence is greater than the model's max input length?
What will happen then?<|||||>I think if we enforce the truncation parameter it should take care of it.
By default it was being done in previous releases of transformers I think, but now we might have to set it ourselves. But do check it out once.
<|||||>Hi @dipanjanS,
Exactly, I have tested it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi @sshleifer first of all thanks for creating and maintaining this repo!
I'm exploring the pipelines and sadly the [replacement code](https://github.com/huggingface/transformers/issues/4224#issuecomment-626401256) you shared no longer works.
I added `truncation=True` to the `tokenizer.batch_encode_plus` method but another error happened: `ValueError: expected sequence of length 2 at dim 1 (got 3)` in `tokenization_utils_base.py`
I saw in above discussion you were considering undoing this hard limit on the pipelines, perhaps the limit can be exposed in a configuration file or as a parameter?
Could you please suggest how to overcome the hard limit?
This is my current config:
```
[tool.poetry.dependencies]
python = "^3.8"
transformers = "^4.2.2"
torch = "^1.7.1"
```
- No GPU
- OS is Linux
- Model: "sshleifer/distilbart-cnn-12-6"
Thanks!<|||||>Hi @ig-perez ,
I realize this reply comes a little late to your question, but maybe it can still help you or someone else out. Here is the code from @sshleifer with some modifications to make it work for the current version.
```py
def old_summarization_pipeline(text: List[str]) -> List[str]:
tokenizer = BartTokenizer.from_pretrained('facebook/bart-large-cnn')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-large-cnn')
input_ids = tokenizer.batch_encode_plus(text, truncation=True, padding=True, return_tensors='pt', max_length=1024)['input_ids']
summary_ids = model.generate(input_ids)
summaries = [tokenizer.decode(s, skip_special_tokens=True, clean_up_tokenization_spaces=False) for s in summary_ids]
return summaries
print(old_summarization_pipeline([ARTICLE_TO_SUMMARIZE, ARTICLE_TO_SUMMARIZE_2, ARTICLE_TO_SUMMARIZE2*400]))
```
I tried it with:
- transformers=4.4.2
- pytorch=1.8.0=py3.8_cuda10.2_cudnn7.6.5_0<|||||>Unfortunately, this problem also manifests when deploying BART on SageMaker via `sagemaker.huggingface.HuggingFaceModel`. When a request with > 1024 tokens is sent, the SageMaker endpoint crashes with an out-of-range CUDA error (we're using GPU instances). What's worse, subsequent requests with smaller inputs fail with the same CUDA error. The only fix is to redeploy the endpoint.
For now, we're using an encode-truncate-decode workaround like below, but there clearly has to be a better way:
```
# Inputs longer than 1024 tokens cause irrecoverable CUDA errors on
# SageMaker. Make sure that each text is at most 1024 tokens.
inputs = self.tokenizer(texts, max_length=1024, padding="longest",
truncation=True)
truncated_texts = [self.tokenizer.decode(i, skip_special_tokens=True, clean_up_tokenization_spaces=False)
for i in inputs["input_ids"]]
output = predictor.predict({"inputs": truncated_texts, "parameters": parameters})
summaries = [summary["summary_text"] for summary in output]
```<|||||>@dipanjanS can you write a full code because it is missing a lot of parts
- `nltk` import missing
- `bart_tokenizer` missing
- `bart_model` missing
<|||||>@dipanjanS Thanks for sharing your take on how to chunk large texts for summarization. I follow up on @FurkanGozukara's request: could you possibly provide the parts that are missing?
Thanks in advance for your help. |
transformers | 4,223 | closed | [TPU] Doc, fix xla_spawn.py, only preprocess dataset once | - Fixed the `xla_spawn.py` launcher script which I broke earlier today when moving files around. It should be reasonably more robust now.
- Started to add a little documentation
- Changed the dataset logic so that multiple workers don't concurrently preprocess the dataset.
| 05-08-2020 01:02:45 | 05-08-2020 01:02:45 | Ok, merging this! |
transformers | 4,222 | closed | TPU Trainer's PerDeviceLoader has no len() | # π Bug
The new TPU Trainer fails when running on a single TPU core in a colab notebook since the XLA ParallelLoader's PerDeviceLoader does not implement a length method.
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I used a custom script, but I would imagine it's not hard to reproduce using one of the example scripts. The error trace is below, with the relevant line marked:
`Exception in device=TPU:0: object of type 'PerDeviceLoader' has no len()
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 119, in _start_fn
fn(gindex, *args)
File "<ipython-input-6-3aa4c6105066>", line 264, in main
trainer.train()
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 286, in train
    t_total = int(len(train_dataloader) // self.args.gradient_accumulation_steps * self.args.num_train_epochs)   <-- relevant line
TypeError: object of type 'PerDeviceLoader' has no len()
An exception has occurred, use %tb to see the full traceback.`
## Expected behavior
Based on the conversation in the issue linked below, the PerDeviceLoader does not implement length, so the Trainer would need to be aware of that and get the length another way.
https://github.com/pytorch/xla/issues/1191
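One possible way around it (just a sketch of the idea, not a tested patch): compute the total number of steps from the underlying dataset, which still has a length, instead of from the wrapped loader:
```python
import math

def total_training_steps(dataset_size: int, batch_size: int,
                         grad_accum_steps: int, num_epochs: float) -> int:
    """Derive t_total from the dataset length instead of len(train_dataloader)."""
    steps_per_epoch = math.ceil(dataset_size / batch_size) // max(grad_accum_steps, 1)
    return int(steps_per_epoch * num_epochs)

# Hypothetical numbers, just to show the call:
print(total_training_steps(dataset_size=10_000, batch_size=8, grad_accum_steps=1, num_epochs=3))
```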
## Environment info
- `transformers` version: 2.9.0
- Platform: Linux-4.19.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0a0+ab660ae (False)
- Tensorflow version (GPU?): 2.2.0-rc4 (False)
- Using GPU in script?: Using TPU
- Using distributed or parallel set-up in script?: XLA is used by the Trainer, but it's a single core TPU in a colab notebook
| 05-08-2020 00:05:42 | 05-08-2020 00:05:42 | @jysohn23 (and @dlibenzi) might want to chime in on this but I think this works in xla's nightly from https://github.com/pytorch/xla/pull/1991
(We might still want to improve things but if this is blocking you, please try using the nightly build of xla)<|||||>Yeah it's recently been added, so if you could select a newer version in the first cell of your colab notebook (`env-setup.py` script), it should do.<|||||>Thanks for the quick response. Confirmed that using the "nightly" build of XLA works, so closing this issue. |
transformers | 4,221 | closed | feat(trainer): log epoch and final metrics | This PR adds new logging capabilities to `Trainer`:
* logs epoch -> this lets us compare runs with different batch sizes, using "epoch" as the x-axis instead of "global_step"
* logs final metrics -> metrics are also logged by `Trainer.evaluate` at the end of the script (or at the start if we just use `--do_eval` and not train).
In below example, we see `epoch` has been used for x-axis and `eval_loss` has one extra point corresponding to the end of the script.
 | 05-07-2020 23:37:53 | 05-07-2020 23:37:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4221?src=pr&el=h1) Report
> Merging [#4221](https://codecov.io/gh/huggingface/transformers/pull/4221?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c99fe0386be118bceaab1c85cdb8309eb8cb8208&el=desc) will **increase** coverage by `0.02%`.
> The diff coverage is `39.39%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4221?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4221 +/- ##
==========================================
+ Coverage 78.31% 78.34% +0.02%
==========================================
Files 120 120
Lines 19854 19864 +10
==========================================
+ Hits 15549 15562 +13
+ Misses 4305 4302 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4221?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `42.73% <39.39%> (+2.55%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4221/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (ΓΈ)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4221?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4221?src=pr&el=footer). Last update [c99fe03...f220d8f](https://codecov.io/gh/huggingface/transformers/pull/4221?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Already merged through PR #4324 |
transformers | 4,220 | closed | Improvements to the wandb integration | Hey guys,
I'm one of the founders of W&B and have been working with @borisdayma on the wandb integration. I made a few changes to address a few cases we thought through:
1. If wandb is installed but the user hasn't logged in we now print a warning instead of prompting for an api_key
2. Made messaging more clear on how to install wandb and linked to more documentation
3. Added an environment variable for disabling wandb. This is important for users using fastai or lightning with their own wandb callback.
4. Added an environment variable for disabling wandb.watch. Because we're pulling gradients and parameters during training there is a small performance impact on training. We wanted to make it easy for users to disable this feature.
Happy to iterate on any feedback you guys have. Really excited about this integration! | 05-07-2020 23:26:01 | 05-07-2020 23:26:01 | Hey guys, any updates on this?<|||||>@vanpelt Can't push to your fork so will tweak things in #4324 and combine with @borisdayma's #4221 |
transformers | 4,219 | closed | Need correction in GPT-2 perplexity equation based on computation | # β Questions & Help
I would like to formulate the GPT-2 perplexity [calculation](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L266) in terms of an equation, as follows:


I am confused about which equation best describes the process.
The notation is as follows:
1. N is the original word length
2. M is the BPE token length after tokenization
3. w represents the `n_positions` size, which is 1024 for the pretrained GPT-2 model.
4. i runs from 0 to M
I tried to write the first equation because GPT-2 cannot calculate perplexity in one go if M is greater than w; it has to take the mean of the cross-entropy loss over windows.
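To make this concrete, what I have in mind looks roughly like this (my own notation, assuming non-overlapping windows of size w and averaging the cross-entropy over all M BPE tokens):
```latex
\mathrm{PPL} = \exp\!\left( \frac{1}{M} \sum_{j=0}^{\lceil M/w \rceil - 1} \;\; \sum_{i = jw}^{\min\{(j+1)w,\, M\} - 1} -\log p_\theta\!\left(x_i \mid x_{jw}, \ldots, x_{i-1}\right) \right)
```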
Can you please share your opinion on how to formulate this equation?
| 05-07-2020 23:19:43 | 05-07-2020 23:19:43 | Off-topic, but it's high time that Github supports KaTeX =)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,218 | closed | feat(Trainer): log epoch and final metrics | This PR adds new logging capabilities to `Trainer`:
* logs epoch -> this lets us compare runs with different batch sizes, using "epoch" as the x-axis instead of "global_step"
* logs final metrics -> metrics are also logged by `Trainer.evaluate` at the end of the script (or at the start if we just use `--do_eval` and not train).
In below example, we see `epoch` has been used for x-axis and `eval_loss` has one extra point corresponding to the end of the script.
 | 05-07-2020 22:37:47 | 05-07-2020 22:37:47 | ^^ I borked your PR unfortunately (π©). Sorry about that. Care to open a new one?
(same for #4211)<|||||>Sure, no worry! |
transformers | 4,217 | closed | [Pipeline, Generation] tf generation pipeline bug | Generation can currently not be used for TF due to a small bug. This PR fixes it and adds tests.
I will update the usage docs and pipeline notebook tomorrow. | 05-07-2020 22:01:33 | 05-07-2020 22:01:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4217?src=pr&el=h1) Report
> Merging [#4217](https://codecov.io/gh/huggingface/transformers/pull/4217?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8bf7312654d40cea1a399e86f9fe8e39e1ea3a1e&el=desc) will **increase** coverage by `0.00%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4217?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4217 +/- ##
=======================================
Coverage 78.38% 78.39%
=======================================
Files 120 120
Lines 19916 19918 +2
=======================================
+ Hits 15611 15614 +3
+ Misses 4305 4304 -1
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4217?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/4217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `76.36% <100.00%> (+0.09%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <0.00%> (-0.13%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.93% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4217/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.38% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4217?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4217?src=pr&el=footer). Last update [8bf7312...0e6bf4e](https://codecov.io/gh/huggingface/transformers/pull/4217?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>^^ I borked your PR unfortunately, @patrickvonplaten (me = π©). Sorry about that. Care to open a new one?<|||||>(or commit your change directly as others have approved)<|||||>No worries, I used my bullet proof manual rebase of copy-paste files :D.
This looks good to me. Can you check quickly and merge? @julien-c |
transformers | 4,216 | closed | [examples/summarization] run_train_tiny.sh: remove unused kwarg | 05-07-2020 21:25:05 | 05-07-2020 21:25:05 | I borked your PR unfortunately (π©). Sorry about that. Care to open a new one? |
|
transformers | 4,215 | closed | Examples readme.md | 05-07-2020 19:00:00 | 05-07-2020 19:00:00 | ||
transformers | 4,214 | closed | [examples] text_classification/run_pl.sh error | ```bash
cd examples/text-classification
./run_pl.sh
```
trains for 1 epoch, tests 12/13 steps, then fails with the following traceback
```python
/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/utilities/warnings.py:18: UserWarning: The dataloader, test dataloader 0, does not have many workers which may be a bottleneck. Consider increasing the value of the `num_workers` argument` in the `DataLoader` init to improve performance.
warnings.warn(*args, **kwargs)
Testing: 92%|ββββββββββββββββββββββββββββββββββββ | 12/13 [00:00<00:00, 14.67it/s]Traceback (most recent call last):
File "run_pl_glue.py", line 195, in <module>
trainer.test(model)
File "/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 894, in test
self.fit(model)
File "/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 704, in fit
self.single_gpu_train(model)
File "/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/distrib_parts.py", line 477, in single_gpu_train
self.run_pretrain_routine(model)
File "/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 819, in run_pretrain_routine
self.run_evaluation(test_mode=True)
File "/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/evaluation_loop.py", line 374, in run_evaluation
eval_results)
File "/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/logging.py", line 107, in process_output
for k, v in output.items():
AttributeError: 'NoneType' object has no attribute 'items'
File "/home/shleifer/.conda/envs/nb/lib/python3.7/site-packages/pytorch_lightning/trainer/logging.py", line 107, in process_output
for k, v in output.items():
AttributeError: 'NoneType' object has no attribute 'items'
```
| 05-07-2020 18:20:14 | 05-07-2020 18:20:14 | I encountered this bug<|||||>Removing the `test_end` method in `BaseTransformer` (file: https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py) solved the issue for me.
From my understanding, what is happening here is that at the end of testing, `test_end` is being called which is just returning `None` and that's why we are getting the error.
`test_end` is actually deprecated now in favor of `test_epoch_end`. In `run_pl_glue.py` we are using `test_epoch_end` whereas in `lightning_base.py`, we are using `test_end`. So if we have both of them, then I think only `test_end` will be called. So if we just remove it, we will have a call to `test_epoch_end` which will give us the correct result.<|||||>@divkakwani Thank you! Perfect!<|||||>@divkakwani nice catch! Will remove `test_end` from `BaseTransformer` when I submit PR for #4494.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
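A minimal sketch of the resulting setup (illustrative, not the repo's exact code): with the deprecated `test_end` hook deleted from `BaseTransformer`, Lightning falls through to the subclass's `test_epoch_end`, which returns the aggregated metrics instead of `None`.
```python
import torch
import pytorch_lightning as pl

class GLUETransformer(pl.LightningModule):
    # model definition, training_step and test_step omitted for brevity;
    # crucially, no test_end is defined anywhere in the hierarchy.
    def test_epoch_end(self, outputs):
        avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
        return {"test_loss": avg_loss, "log": {"test_loss": avg_loss}}
```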
<|||||>It looks like this particular issue has already been fixed (`test_end` is no more in master), not in a PR suggested in https://github.com/huggingface/transformers/issues/4214#issuecomment-632988310, but elsewhere, so this issue can be closed. |
transformers | 4,213 | closed | BIG Reorganize examples | 05-07-2020 17:31:55 | 05-07-2020 17:31:55 | Really cool! |
|
transformers | 4,212 | closed | GPT2Tokenizer.decode() slow for individual tokens | `GPT2Tokenizer.decode()` is considerably slow when invoked for each token individually rather than for a set of tokens at once... I need to pass in a set of tokens and get back a set of strings, one per token. I tried convert_ids_to_tokens(), but I'm getting results with the Ġ character. Should I simply remove that character from the results?
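For context, a small illustrative sketch of the two calls (the `Ġ` marker produced by `convert_ids_to_tokens` stands for a leading space in byte-level BPE, and `convert_tokens_to_string` maps it back to a space rather than requiring it to be stripped manually):
```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
ids = tokenizer.encode("Hello world, nice to meet you")
tokens = tokenizer.convert_ids_to_tokens(ids)                       # fast, keeps the Ġ marker
pieces = [tokenizer.convert_tokens_to_string([t]) for t in tokens]  # per-token strings
print(tokens)   # e.g. ['Hello', 'Ġworld', ',', 'Ġnice', ...]
print(pieces)   # e.g. ['Hello', ' world', ',', ' nice', ...]
```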
I also tried `GPT2TokenizerFast`, but it failed with `"encoder() takes from 2 to 3 positional arguments but 6 were given"` when I was trying to use batch_encode_plus() to pad my tokens with the following arguments: tokens (that were already encoded), pad_to_max_length=True and return_tensors='pt' . transformers ver 2.6.0 - is this a bug? | 05-07-2020 17:23:18 | 05-07-2020 17:23:18 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,211 | closed | W&B integration improvements | Hey guys,
I'm one of the founders of W&B and have been working with @borisdayma on the wandb integration. I made a few changes to address a few cases we thought through:
1. If wandb is installed but the user hasn't logged in we now print a warning instead of prompting for an api_key
2. Made messaging more clear on how to install wandb and linked to more documentation
3. Added an environment variable for disabling wandb. This is important for users using fastai or lightning with their own wandb callback.
4. Added an environment variable for disabling wandb.watch. Because we're pulling gradients and parameters during training there is a small performance impact on training. We wanted to make it easy for users to disable this feature.
Happy to iterate on any feedback you guys have. Really excited about this integration! | 05-07-2020 16:54:24 | 05-07-2020 16:54:24 | ^^ I borked your PR unfortunately (π©). Sorry about that. Care to open a new one? |
transformers | 4,210 | closed | Perplexity in T5/BART | Hi,
Is there a way to compute perplexity for T5 or BART outputs? | 05-07-2020 16:52:04 | 05-07-2020 16:52:04 | T5 uses CrossEntropy loss so I think you can use `torch.exp(loss)` to calculate perplexity.<|||||>Thanks!<|||||>@patil-suraj Sorry to bother you again, I thought my problem has been solved, but I don't know how to deal with the loss. It's a (1,1,vocab_size) tensor. I don't even have an intuition about what that means. Can you help me?<|||||>It seems that you are not passing `lm_labels`. When you pass `lm_labels` to T5/BART's forward methods then the first element of the output tuple is the loss, otherwise its `prediction_scores` tensor which you are receiving.<|||||>hi @patil-suraj bart doesnt have lm_labels has argument<|||||>`lm_labels` is renamed to `labels`<|||||>Thanks for the prompt response, @patil-suraj , previously before trying Bart, I trained it with t5,the valid loss gets to almost zero, but still found it difficult generating correct translation. I would love to know reasons for that. Thanks<|||||>I'm not sure which problem you are talking about, hard to answer without knowing the context.<|||||>Hi patil,
I am working on a translation task/competition, so as a text2text framework I used T5, just for knowledge's sake, even though it is not pretrained on my source language.
The notebook is here:
https://colab.research.google.com/drive/1w0brh9KULV4zDmgMme4csXeCpfAR960N?usp=sharing
The test and validation loss were declining towards zero, yet the generated predictions weren't good, so I switched to BART, with no improvement. I have no idea why and would love to know the reason.
I found MarianMT just yesterday, which was pretrained for my task (yo-en), and I have started working with it. I suspect the same might happen with MarianMT.
Thank you.
|
transformers | 4,209 | closed | Fix for IndexError when Roberta Tokenizer is called on empty text | Check if `text` is set to avoid IndexError
Fix for https://github.com/huggingface/transformers/issues/3809 | 05-07-2020 16:12:45 | 05-07-2020 16:12:45 | |
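A minimal sketch of the kind of guard involved (illustrative only, not the exact diff of this PR):
```python
def maybe_add_prefix_space(text: str, add_prefix_space: bool) -> str:
    # Only inspect text[0] when the string is non-empty, so "" no longer raises IndexError.
    if add_prefix_space and len(text) > 0 and not text[0].isspace():
        return " " + text
    return text

assert maybe_add_prefix_space("", True) == ""
assert maybe_add_prefix_space("hello", True) == " hello"
assert maybe_add_prefix_space(" hello", True) == " hello"
```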
transformers | 4,208 | closed | Remove comma that turns int into tuple | cf @stefan-it's error on Slack | 05-07-2020 15:42:17 | 05-07-2020 15:42:17 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=h1) Report
> Merging [#4208](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/cafa6a9e29f3e99c67a1028f8ca779d439bc0689&el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4208 +/- ##
==========================================
- Coverage 78.32% 78.31% -0.02%
==========================================
Files 120 120
Lines 19854 19854
==========================================
- Hits 15551 15548 -3
- Misses 4303 4306 +3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4208/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `39.88% <ΓΈ> (-0.30%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4208/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `73.96% <0.00%> (-0.42%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4208/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `90.96% <0.00%> (-0.13%)` | :arrow_down: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=footer). Last update [cafa6a9...bfbed71](https://codecov.io/gh/huggingface/transformers/pull/4208?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Superseded and fixed by #4213 |
transformers | 4,207 | closed | BertForTokenClassification, logits on sequence tagging task | # β Questions & Help
## Details
I'm trying to use BertForTokenClassification to do a sequence tagging task. During training, everything seems fine and loss is decreasing per epoch, but when I put the model in evaluation mode and feed it inputs, the logits for each token are all the same.
My code looks like this:
```
class PretrainedBert(nn.Module):
def __init__(self,
device,
pretrained_path='bert-base-multilingual-cased',
output_dim=5,
learning_rate=0.01,
eps=1e-8,
max_grad_norm=1.0,
model_name='bert_multilingual',
cased=True
):
super(PretrainedBert, self).__init__()
self.device = device
self.output_dim = output_dim
self.learning_rate = learning_rate
self.max_grad_norm = max_grad_norm
self.model_name = model_name
self.cased = cased
# Load pretrained bert
self.bert = BertForTokenClassification.from_pretrained(pretrained_path, num_labels=output_dim, output_attentions = False, output_hidden_states = False)
self.tokenizer = BertTokenizer.from_pretrained(pretrained_path, do_lower_case=not cased)
param_optimizer = list(self.bert.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}]
self.optimizer = AdamW(optimizer_grouped_parameters, lr=learning_rate, eps=eps)
def forward(self, input_ids, attention_mask, labels=None):
output = self.bert(input_ids, attention_mask=attention_mask, labels=labels)
return output
def fit(self, train_loader, dev_loader, epochs=10):
"""
:param test_loader: torch.utils.data.DataLoader object
:param test_loader: torch.utils.data.DataLoader object
:param epochs: int
"""
scheduler = get_linear_schedule_with_warmup(self.optimizer, num_warmup_steps=0,num_training_steps=len(train_loader)*epochs)
for epoch in range(epochs):
# Iterate over training data
epoch_time = time.time()
self.train()
epoch_loss = 0
num_batches = 0
for raw_text, x, y, idx in tqdm(train_loader):
self.zero_grad()
# Get gold labels and extend them to cover the wordpieces created by bert
y = self._extend_labels(raw_text, y)
y, _ = pad_packed_sequence(y, batch_first=True)
batches_len, seq_length = y.size()
y.to(self.device)
# Run input through bert and get logits
bert_tokenized = self.tokenizer.batch_encode_plus([' '.join(
text) for text in raw_text], max_length=seq_length, pad_to_max_length=True)
input_ids, token_type_ids, attention_masks = bert_tokenized[
'input_ids'], bert_tokenized['token_type_ids'], bert_tokenized['attention_mask']
loss, logits = self.forward(torch.LongTensor(
input_ids), torch.LongTensor(attention_masks), labels=y)
# Help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(parameters=self.bert.parameters(), max_norm=self.max_grad_norm)
# update parameters and learning rate
loss.backward()
self.optimizer.step()
scheduler.step()
epoch_loss += loss.item()
num_batches += 1
epoch_time = time.time() - epoch_time
print()
print("Epoch {0} loss: {1:.3f}".format(epoch + 1,
epoch_loss / num_batches))
print("Dev")
binary_f1, propor_f1 = self.evaluate(dev_loader)
def predict(self, test_loader):
"""
:param test_loader: torch.utils.data.DataLoader object with
batch_size=1
"""
self.eval()
predictions, golds, sents = [], [], []
for raw_text, x, y, idx in tqdm(test_loader):
# Run input through bert and get logits
bert_tokenized = self.tokenizer.encode_plus(' '.join(raw_text[0]))
input_ids, token_type_ids, attention_masks = bert_tokenized[
'input_ids'], bert_tokenized['token_type_ids'], bert_tokenized['attention_mask']
with torch.no_grad():
logits = self.forward(torch.LongTensor(
input_ids).unsqueeze(0), torch.LongTensor(attention_masks).unsqueeze(0))
# remove batch dim and [CLS]+[SEP] tokens from logits
logits = logits[0].squeeze(0)[1:-1:]
# mean pool wordpiece rows of logits and argmax to get predictions
preds = self._logits_to_preds(logits, y[0], raw_text[0])
predictions.append(preds)
golds.append(y[0])
sents.append(raw_text[0])
return predictions, golds, sents
def evaluate(self, test_loader):
"""
Returns the binary and proportional F1 scores of the model on the examples passed via test_loader.
:param test_loader: torch.utils.data.DataLoader object with
batch_size=1
"""
preds, golds, sents = self.predict(test_loader)
flat_preds = [int(i) for l in preds for i in l]
flat_golds = [int(i) for l in golds for i in l]
analysis = get_analysis(sents, preds, golds)
binary_f1 = binary_analysis(analysis)
propor_f1 = proportional_analysis(flat_golds, flat_preds)
return binary_f1, propor_f1
def _extend_labels(self, text, labels):
extended_labels = []
for idx, (text_seq, label_seq) in enumerate(zip(text, labels)):
extended_seq_labels = []
for word, label in zip(text_seq, label_seq):
n_subwords = len(self.tokenizer.tokenize(word))
extended_seq_labels.extend([label.item()] * n_subwords)
extended_labels.append(torch.LongTensor(extended_seq_labels))
extended_labels = pack_sequence(extended_labels, enforce_sorted=False)
return extended_labels
def _logits_to_preds(self, logits, y, raw_text):
preds = torch.zeros(len(y))
head = 0
for idx, word in enumerate(raw_text):
n_subwords = len(self.tokenizer.tokenize(word))
if n_subwords > 1:
preds[idx] = torch.argmax(torch.mean(
logits[head:head+n_subwords, :], dim=0))
else:
preds[idx] = torch.argmax(logits[head, :])
head = head + n_subwords
return preds
```
Example input:
```
raw_text = ['Donkey', 'Kong', 'Country', ':']
y = [tensor([0, 0, 0, 0])]
```
Produces these results:
```
tokenized_words = ['Don', '##key', 'Kong', 'Country', ':']
logits = tensor([[-3.2811, 1.1715, 1.2381, 0.5201, 1.0921],
[-3.2813, 1.1715, 1.2382, 0.5201, 1.0922],
[-3.2815, 1.1716, 1.2383, 0.5202, 1.0923],
[-3.2814, 1.1716, 1.2383, 0.5202, 1.0922],
[-3.2811, 1.1715, 1.2381, 0.5201, 1.0921]])
preds = tensor([2., 2., 2., 2.])
```
Any ideas for what I'm doing wrong? Thanks for any help in advance.
| 05-07-2020 15:33:50 | 05-07-2020 15:33:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,206 | closed | How to get output from BERTNextSentencePrediction by passing One Sentence and Next Sentence getting predicted | # β Questions & Help
## Details
How can I get output from BertForNextSentencePrediction by passing one sentence and having the next sentence predicted?
I got the prediction score, but how and where should I pass the first sentence so that the next sentence gets predicted?
Code:
```python
from torch.nn.functional import softmax
from transformers import BertForNextSentencePrediction, BertTokenizer
seq_A = 'I like cookies !'
seq_B = 'Do you like them ?'
# load pretrained model and a pretrained tokenizer
model = BertForNextSentencePrediction.from_pretrained('bert-base-cased')
tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
# encode the two sequences. Particularly, make clear that they must be
# encoded as "one" input to the model by using 'seq_B' as the 'text_pair'
encoded = tokenizer.encode_plus(seq_A, text_pair=seq_B, return_tensors='pt')
print(encoded)
# {'input_ids': tensor([[ 101, 146, 1176, 18621, 106, 102, 2091, 1128, 1176, 1172, 136, 102]]),
# 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]]),
# 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
# NOTE how the token_type_ids are 0 for all tokens in seq_A and 1 for seq_B,
# this way the model knows which token belongs to which sequence
# a model's output is a tuple, we only need the output tensor containing
# the relationships which is the first item in the tuple
seq_relationship_logits = model(**encoded)[0]
# we still need softmax to convert the logits into probabilities
# index 0: sequence B is a continuation of sequence A
# index 1: sequence B is a random sequence
probs = softmax(seq_relationship_logits, dim=1)
print(seq_relationship_logits)
print(probs)
# tensor([[9.9993e-01, 6.7607e-05]], grad_fn=<SoftmaxBackward>)
# very high value for index 0: high probability of seq_B being a continuation of seq_A
```
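Note that `BertForNextSentencePrediction` only scores whether `seq_B` follows `seq_A`; it does not generate the next sentence itself. To actually produce a continuation of the first sentence, a causal language model can be used instead. A minimal sketch (illustrative only):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

gen_tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
gen_model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = gen_tokenizer.encode("I like cookies !", return_tensors="pt")
output = gen_model.generate(prompt, max_length=30, do_sample=True, top_k=50)
print(gen_tokenizer.decode(output[0], skip_special_tokens=True))
```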
| 05-07-2020 15:13:15 | 05-07-2020 15:13:15 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,205 | closed | Issues with RobertaTokenizer and unicode characters | # π Bug
## Information
Model I am using (Bert, XLNet ...):
Roberta
Language I am using the model on (English, Chinese ...):
English
## To reproduce
Steps to reproduce the behavior:
1. Insert a jumble of some weird unicode characters into the vocabulary file. These unicode characters must exist on their own somewhere else in the vocab. (Note: I didn't actually do this manual vocab editing when I encountered the bug. Since I'm using a custom vocabulary I'm simply detailing the steps to reproduce the behavior)
```
# vocab.txt
...
Δ OHΓΒ’
...
```
(Let's say this was inserted on line 1001)
(Don't forget to account for this extra word with a change in vocab size in your config file)
2. Load an instance of `RobertaTokenizer` with this vocab file.
3. Call
```python3
input_ids = tokenizer.encode_plus('OHΓΒ’',
add_special_tokens=True,
add_prefix_space=True,
max_length=max_seq_length,
return_token_type_ids=True)['input_ids']
print(input_ids)
```
## Expected behavior
The special start-of-word character `Δ ` will be added by the tokenizer, so the input ids we'd expect would be `[0, 1000, 2]` (given that `0` and `2` correspond to your start/end tokens, and `1000` corresponds to the index where we inserted the unicode jumble `Δ OHΓΒ’`). Instead, I'm getting some seemingly unrelated input ids corresponding to other weird unicode characters not included in the inserted unicode jumble.
## Environment info
- `transformers` version: 2.8
- Platform: Linux and MacOS
- Python version: 3.7
- PyTorch version (GPU?): torch 1.4.
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-07-2020 15:08:02 | 05-07-2020 15:08:02 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,204 | closed | KeyError with a fine-tuned model | I've fine-tuned AllenAI's SciBERT to do some NER using the `run_ner.py` script, but I keep bumping into a KeyError. My code is the following:
```
m = AutoModel.from_pretrained("models/sciBERT_ner_e50/")
tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
nlp = pipeline("ner", model=m, tokenizer=tok)
```
where models/scibert_ner_e50 contains my model and the files it needs to run properly.
When I run something like `nlp(abstract)`, the code crashes and sends me the following error:
```
KeyError Traceback (most recent call last)
<ipython-input-86-793f89d86633> in <module>
3 if(len(row.abstract.split())<512):
4 abst = row.abstract
----> 5 print(nlp(abst))
6
7 except ValueError as ve:
c:\users\tbabi\appdata\local\programs\python\python37\lib\site-packages\transformers\pipelines.py in __call__(self, *texts, **kwargs)
792 answer = []
793 for idx, label_idx in enumerate(labels_idx):
--> 794 if self.model.config.id2label[label_idx] not in self.ignore_labels:
795 answer += [
796 {
KeyError: 422
```
Has anyone else run into this error? I've been struggling with this for quite some time now, and any advice/thoughts/insights on what's going on would be greatly appreciated :)
Thank you ! | 05-07-2020 14:48:55 | 05-07-2020 14:48:55 | How many labels/entities are there in your model ? Can you check if the key 422 exists in the models config's id2label by printing it or by opening the config file. You can print config using `model.config` <|||||>Hello
It seems I had simply been misusing my model : I was instantiating an `AutoModel`, when I should have been using the `BertForTokenClassification` class.
Thanks anyway ! :) |
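A minimal sketch of the working setup described above (the model path is reused from the report; it assumes the fine-tuned directory contains the token-classification head and label config):
```python
from transformers import BertForTokenClassification, AutoTokenizer, pipeline

model = BertForTokenClassification.from_pretrained("models/sciBERT_ner_e50/")
tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
nlp = pipeline("ner", model=model, tokenizer=tokenizer)
print(nlp("Transformer models improve named entity recognition."))
```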
transformers | 4,203 | closed | Use with_suffix to change the extension of the path | As per https://github.com/huggingface/transformers/pull/3934#discussion_r421307659. Note that this does not rename the file, it simply changes the path 'string' to another one with a different suffix. | 05-07-2020 14:37:24 | 05-07-2020 14:37:24 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=h1) Report
> Merging [#4203](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/026097b9ee7862905ec3f3b0e729c5dbc95a0fd9&el=desc) will **increase** coverage by `0.01%`.
> The diff coverage is `0.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4203 +/- ##
==========================================
+ Coverage 78.43% 78.44% +0.01%
==========================================
Files 120 120
Lines 19800 19797 -3
==========================================
Hits 15530 15530
+ Misses 4270 4267 -3
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=tree) | Coverage Ξ | |
|---|---|---|
| [src/transformers/hf\_argparser.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9oZl9hcmdwYXJzZXIucHk=) | `61.97% <0.00%> (+1.16%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `41.83% <0.00%> (-0.35%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `92.77% <0.00%> (+0.16%)` | :arrow_up: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/4203/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `74.38% <0.00%> (+0.41%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=footer). Last update [026097b...a73d487](https://codecov.io/gh/huggingface/transformers/pull/4203?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks! |
transformers | 4,202 | closed | Create README.md | 05-07-2020 14:06:04 | 05-07-2020 14:06:04 | ||
transformers | 4,201 | closed | Ensure fast tokenizer can construct single-element tensor without pad token | 05-07-2020 14:02:00 | 05-07-2020 14:02:00 | ||
transformers | 4,200 | closed | [Proposal] Small HfAPI Cleanup | - Implement `__dict__` and `__str__` so that one need not guess the class attributes of `ModelInfo` and `S3Object` (a minimal sketch follows after this list)
- Delete redundant `S3Obj` class. (`S3Object` seems to be a superset).
- Rename to `HFApi`?
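A minimal sketch of the first point (illustrative only; the attribute names and constructor signatures are hypothetical, not the actual `hf_api.py` code):
```python
class ReprMixin:
    """Readable __repr__/__str__ built from the instance's __dict__."""
    def __repr__(self) -> str:
        attrs = ", ".join(f"{k}={v!r}" for k, v in self.__dict__.items())
        return f"{self.__class__.__name__}({attrs})"

class ModelInfo(ReprMixin):
    def __init__(self, modelId: str = None, **kwargs):
        self.modelId = modelId          # hypothetical field
        self.__dict__.update(kwargs)    # keep any extra fields the API returns

class S3Object(ReprMixin):
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)

print(ModelInfo(modelId="bert-base-uncased", downloads=123))
```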
Thoughts @julien-c @LysandreJik ? | 05-07-2020 13:57:13 | 05-07-2020 13:57:13 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,199 | closed | [README] Corrected some grammatical mistakes | Corrected some grammatical mistakes, like added required punctuation marks, corrected singular-plural mistakes, etc. | 05-07-2020 13:29:08 | 05-07-2020 13:29:08 | Thanks for your contribution @girishponkiya , merging! |
transformers | 4,198 | closed | [Benchmark] Memory benchmark utils | ## Description
Currently, we have a benchmarks file in examples, `examples/benchmarks.py`, which measures time and speed for inference, and a `src/transformers/benchmark_utils.py` file which includes the memory tracing from @thomwolf.
This PR generalizes the benchmarks much more:
- We have `benchmark_args_utils`, `benchmark_args` and `benchmark_tf_args`, similar to `trainer_args`. Because there is such a big overlap between TF and torch, I decided to have a `benchmark_args_utils`.
- We have a `benchmark_utils` including the previous code of `benchmark_utils` and a new abstract `Benchmark` class. `PyTorchBenchmark` and `TensorflowBenchmark` inherit from this class and implement the abstract methods `inference` and `train`.
- Instead of being able to just measure our "default" models identified by their `model_name`, we can create a special config and measure the memory and time usage of the corresponding model before starting to train
- In addition to inference we can also measure train
- **IMPORTANT**: The line-by-line tracing implemented by @thomwolf is super useful to optimize one's code and see exactly which line allocates most of the memory, but it does not give the peak memory usage as one can see in `nvidia-smi`, which is what is relevant to decide how much RAM is needed for the model. I decided to not use the line-by-line tracing to plot the overall memory usage on GPU. It should rather only be used when one wants to optimize his/her code instead of benchmarking the model. For benchmarks the PyTorch method `torch.cuda.max_memory_reserved` is a better fit - it's much faster and gives the correct value (a minimal sketch of this measurement follows after this list).
- Also I added a `run_script` and a `plot_script` to the examples.
- @julien-c - I think it would be super nice to have such plots on our website somewhere. If people can quickly see how much more efficient model x is over model y for a given batch size and sequence length, I think it would be super useful.
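A minimal sketch of the peak-memory measurement mentioned above (illustrative; assumes a CUDA device and a small custom config):
```python
import torch
from transformers import BertConfig, BertModel

config = BertConfig(hidden_size=256, num_hidden_layers=4, num_attention_heads=4)
model = BertModel(config).cuda().eval()
input_ids = torch.randint(0, config.vocab_size, (8, 128)).cuda()  # batch_size=8, seq_len=128

torch.cuda.reset_peak_memory_stats()
with torch.no_grad():
    model(input_ids)
torch.cuda.synchronize()
print(f"peak reserved GPU memory: {torch.cuda.max_memory_reserved() / 1024 ** 2:.0f} MB")
```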
## TODO:
- [x] Write tests
- [x] Make pretty (more type hints + smaller refactoring)
## Review
@thomwolf, @julien-c - It would be awesome if you could take a look here and check if you are ok with the general design choice. To get a better feeling how the Benchmark can be used, you can checkout this notebook: https://colab.research.google.com/drive/1GM3WPXmZ5wZHwhdPyf7RykJS3Y12iceJ?usp=sharing
## UPDATE
I think the PR is ready for merge now. The notebook: https://colab.research.google.com/drive/1GM3WPXmZ5wZHwhdPyf7RykJS3Y12iceJ?usp=sharing is updated and shows what environment and version infos are shown @thomwolf . Tests are included. The next PR should take care of pytorch jit tracing and TPU and TF @LysandreJik @julien-c
## Future PR:
- [ ] Support Pytorch Tracing (discuss with Lysandre a bit before)
- [ ] Support TPU (discuss with Lysandre a bit before)
- [ ] TF version
| 05-07-2020 13:17:10 | 05-07-2020 13:17:10 | This google colab shows nicely how the new benchmarking utils can be used: https://colab.research.google.com/drive/1GM3WPXmZ5wZHwhdPyf7RykJS3Y12iceJ?usp=sharing<|||||>A plotter file is included in `examples/benchmarking` which can give the following results:
### Memory

### Time

<|||||>Can `csv_memory_filename_inference` be called `save_csv_path`?<|||||>> Can `csv_memory_filename_inference` be called `save_csv_path`?
Yeah about the exact naming I think we can decide a bit later. Wanna see first if you guys are ok with the general design. But we have up to four csv files that are saved - so `save_csv_path` is too general.<|||||>> This is really cool, I'm looking forward to using it.
>
> It seems you've implemented TPU support as well, did you get a chance to try it?
I didn't play around with TPU very much yet. I'd like to merge this PR which works for GPU / CPU for PyTorch now and then maybe clean things for TPU in a second PR? <|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=h1) Report
> Merging [#4198](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4bd7cfc30952922b6329ec2d7637f13729317014&el=desc) will **not change** coverage.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #4198 +/- ##
=======================================
Coverage 77.39% 77.39%
=======================================
Files 128 128
Lines 20939 20939
=======================================
Hits 16205 16205
Misses 4734 4734
```
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Ξ = absolute <relative> (impact)`, `ΓΈ = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=footer). Last update [4bd7cfc...4bd7cfc](https://codecov.io/gh/huggingface/transformers/pull/4198?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 4,197 | closed | AutoTokenizer not able to load saved Roberta Tokenizer | # π Bug
## Information
Model I am using (Bert, XLNet ...): Roberta
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
I'm trying to run `run_language_modelling.py` with my own tokenizer
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
!pip install transformers --upgrade
from transformers import AutoTokenizer
my_tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
!mkdir my_tokenizer
my_tokenizer.save_pretrained("my_tokenizer")
my_tokenizer2 = AutoTokenizer.from_pretrained("./my_tokenizer")
```
### Stack Trace:
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
246 resume_download=resume_download,
--> 247 local_files_only=local_files_only,
248 )
4 frames
/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)
260 # File, but it doesn't exist.
--> 261 raise EnvironmentError("file {} not found".format(url_or_filename))
262 else:
OSError: file ./my_tokenizer/config.json not found
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-16-b699c4abfe8e> in <module>()
3 get_ipython().system('mkdir my_tokenizer')
4 my_tokenizer.save_pretrained("my_tokenizer")
----> 5 my_tokenizer2 = AutoTokenizer.from_pretrained("./my_tokenizer")
/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
184 config = kwargs.pop("config", None)
185 if not isinstance(config, PretrainedConfig):
--> 186 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
187
188 if "bert-base-japanese" in pretrained_model_name_or_path:
/usr/local/lib/python3.6/dist-packages/transformers/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
185 """
186 config_dict, _ = PretrainedConfig.get_config_dict(
--> 187 pretrained_model_name_or_path, pretrained_config_archive_map=ALL_PRETRAINED_CONFIG_ARCHIVE_MAP, **kwargs
188 )
189
/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, pretrained_config_archive_map, **kwargs)
268 )
269 )
--> 270 raise EnvironmentError(msg)
271
272 except json.JSONDecodeError:
OSError: Can't load './my_tokenizer'. Make sure that:
- './my_tokenizer' is a correct model identifier listed on 'https://huggingface.co/models'
- or './my_tokenizer' is the correct path to a directory containing a 'config.json' file
```
## Expected behavior
AutoTokenizer should be able to load the tokenizer from the file.
Possible duplicate of #1063 #3838
## Environment info
- `transformers` version:
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): N/A
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| 05-07-2020 13:09:16 | 05-07-2020 13:09:16 | It looks like when you load a tokenizer from a dir it's also looking for files to load it's related model config via `AutoConfig.from_pretrained`. It does this because it's using the information from the config to to determine which model class the tokenizer belongs to (BERT, XLNet, etc ...) since there is no way of knowing that with the saved tokenizer files themselves. This is deceptive, as intuitively it would seem you should be able to load a tokenizer without needing it's model config. Plus in the documentation of `AutoTokenizer` under examples it states:
```
# If vocabulary files are in a directory (e.g. tokenizer was saved using `save_pretrained('./test/saved_model/')`)
tokenizer = AutoTokenizer.from_pretrained('./test/bert_saved_model/')
```
Either:
1. The documentation needs an update to clarify that `config` must be a kwarg of `from_pretrained` or the dir must also have the files for `AutoConfig` (if loading from a dir)
2. `AutoTokenizer` should be changed to be independent from `AutoConfig` (maybe the model base class name should be stored along with "tokenizer_config.json")<|||||>I had the same issue loading a saved tokenizer. Separately loading and saving the config file to the tokenizer directory worked.
```
config_file = AutoConfig.from_pretrained('distilgpt2')
config_file.save_pretrained(saved_tokenizer_directory)
```<|||||>+1 to @jaymody. It seems as though that `AutoTokenizer` is coupled to the model that uses it, and I don't see why it should be impossible to load a tokenizer without an explicit model configuration.
For those that happen to be loading a tokenizer and model using the `Auto` classes, I was able to use the following workaround where my tokenizer relies on the configuration in the model directory:
```
tokenizer = AutoTokenizer.from_pretrained(<tokenizer-dir>, config=AutoConfig.from_pretrained(<model-dir>))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,196 | closed | Model card for spanish electra small | 05-07-2020 12:23:44 | 05-07-2020 12:23:44 | ||
transformers | 4,195 | closed | add XLMRobertaForQuestionAnswering | 1. add XLMRobertaForQuestionAnswering in modeling_xlm_roberta.py
2. add XLMRobertaForQuestionAnswering in modeling_auto.py | 05-07-2020 11:34:57 | 05-07-2020 11:34:57 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Closing since already implemented |
transformers | 4,194 | closed | Optimize PyTorch's GPT2 computation graph | Brings various optimizations to make the computation graph more efficient. | 05-07-2020 11:08:38 | 05-07-2020 11:08:38 | GPT2 Attention has `merge_heads` method which takes an input as `(batch, heads, seq, repr)` and outputs the flattened _(merged)_ representation of it `(batch, seq, heads * repr)`.
```python
def merge_heads(self, x):
    x = x.transpose(2, 1).contiguous()
    new_x_shape = x.size()[:-2] + (x.size(-2) * x.size(-1),)
    return x.view(*new_x_shape)  # in Tensorflow implem: fct merge_states
```
This can be simplified with `flatten()` to avoid constructing a tuple and accessing shape information.
```python
def merge_heads(self, x):
    x = x.transpose(2, 1).contiguous()
    return x.flatten(-2)  # in Tensorflow implem: fct merge_states
```<|||||>GPT2 Attention has a `Conv1D` projection which is nothing else than a linear projection of the input.
The current implementation uses the BLAS acceleration function `addmm` (Matrix Multiplication + Add) to handle the affine projection `w * x.T + b`.
This is currently implemented with the non-batched BLAS call and requires flattening the sequence over the batch axis and unflattening after the call:
```python
def forward(self, x):
    size_out = x.size()[:-1] + (self.nf,)
    x = torch.addmm(self.bias, x.view(-1, x.size(-1)), self.weight)
    return x.view(*size_out)
```
We can optimize this call to remove all the reshape operations by leveraging the batched BLAS call (`baddbmm`: Batched Matrix Multiply + Batched Add), which takes 3D inputs of shape `(b, m, n)` and thus avoids the reshaping:
```python
def forward(self, x):
    return torch.baddbmm(self.bias[None, None], x, self.weight[None])
```
<|||||>Use tuples instead of list for returning objects from layers (refer to [this SO post](https://stackoverflow.com/a/22140115))<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 4,193 | closed | Language model training with NSP using run_language_modeling.py | When using "run_language_modeling.py" for fine-tuning BERT is only the MLM task used or also the NSP task as described in the paper? | 05-07-2020 08:58:02 | 05-07-2020 08:58:02 | This is currently not supported. There used to be some community maintained examples IIRC. You can have a look here: https://github.com/huggingface/transformers/tree/1.1.0/examples/lm_finetuning
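For reference, a rough sketch of computing the joint MLM + NSP loss manually with `BertForPreTraining` (illustrative only; the MLM label argument is named `labels` or `masked_lm_labels` depending on the transformers version, and real pretraining also needs token masking and negative sentence pairs):
```python
import torch
from transformers import BertTokenizer, BertForPreTraining

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForPreTraining.from_pretrained("bert-base-uncased")

enc = tokenizer.encode_plus("The cat sat on the mat.", "It then fell asleep.", return_tensors="pt")
mlm_labels = enc["input_ids"].clone()   # real MLM training would mask tokens first
nsp_label = torch.tensor([0])           # 0 = sentence B really follows sentence A

outputs = model(**enc, labels=mlm_labels, next_sentence_label=nsp_label)
print(outputs[0])                       # combined MLM + NSP loss
```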
|